Thanks for your quick response.
I tried to run LAM only between the two Alpha nodes and the same error
appeared. Do you have any advice?
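In case it helps: the bogus numbers look like 32-bit values being read back as
64-bit integers. 4294967297 = 2^32 + 1 and 4294967298 = 2^32 + 2, so their low
32 bits are exactly the expected ranks 1 and 2, and 2199023255555 = 2^41 + 3
carries the expected size 3 in its low 32 bits. That would fit your idea of a
default INTEGER size mismatch between the application and the Fortran layer
LAM was built with.

To compare the default INTEGER width on the P4 and on the Alphas, I plan to
compile a small check with the same mpif77 wrappers that built mpi_test (just
a sketch; bit_size is a standard Fortran 90 intrinsic):

program check_int_size
  implicit none
  integer :: i
  ! Prints 32 or 64, depending on the default INTEGER width the compiler uses.
  print *, 'default INTEGER bits: ', bit_size(i)
end program check_int_size

I will also run mpif77 -showme on each machine to make sure the wrapper on the
Alphas really invokes g95 and not some other compiler.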
thanks
Eugen
On Mon, 2005-08-22 at 18:13 -0400, Jeff Squyres wrote:
> LAM should work if the data sizes are the same -- it will take care of
> the different endian representations for you.
>
> However, your app looks right -- there's no reason that you're getting
> bogus rank and size values on 2 out of 3 processes (I assume those are
> the alpha processors).
>
> I'm guessing that if you run exclusively on the Alphas, you also get
> bogus answers (because the MPI_COMM_SIZE|RANK functions really don't do
> any communication; it's just reading the information determined during
> MPI_INIT). Can you verify?
>
> Also, can you verify that you compiled your application with the
> "right" mpif77 on the Alphas (i.e., the one that was configured with
> g95)? One possible explanation is that you're using a LAM that was
> configured with a different fortran compiler, and it has different data
> sizes, and therefore didn't interoperate properly in the fortran
> translation layer...?
>
>
> On Aug 22, 2005, at 3:36 AM, Eugen Wintersberger wrote:
>
> > Hi there (sorry I forgot the testing code in my last mail)
> > I use LAM MPI (7.1.1) on a Debian Sarge 3.1 System. Since I need the
> > Fortran interface I recompiled the library instead of using the Debian
> > packages. I use a single Intel P4 PC and two Alpha workstations as a
> > (very) small cluster for testing purposes. On all machines Debian 3.1
> > is used.
> > For the PC I used the following configure command:
> >
> > ./configure --prefix=/mypath/ --with-fc=ifort --with-rsh="ssh -x"
> >
> > and on the Alpha workstations
> >
> > ./configure --prefix=/mypath/ --with-fc=g95 --with-rsh="ssh -x"
> >
> > On all machines, make produces no errors. For testing purposes I wrote a
> > simple Fortran program (mpi_test.f90) that you will find attached to
> > this mail. However, after compiling it with mpif77 I get the following
> > output:
> >
> > eugen_at_hubbard:~/src/mpi_test$ mpirun -np 3 mpi_test
> > Processor 0 of 3
> > Processor 4294967298 of 2199023255555
> > Processor 4294967297 of 2199023255555
> > eugen_at_hubbard:~/src/mpi_test$
> >
> > Obviously the rank and size of the MPI environment are not correct on the
> > Alpha workstations. Is this a known problem? Did I do something wrong
> > (maybe with g95)?
> >
> > best regards
> >
> > Eugen Wintersberger
> > --
> > --------------------------------------------
> > | |
> > | Dipl.- Ing. Eugen Wintersberger |
> > | |
> > | Department of semiconductor physics |
> > | Johannes Kepler University |
> > | Altenbergerstr. 69 |
> > | A-4040 Linz, Austria |
> > | |
> > | Tel.: +43 732 2468 9605 |
> > | Mobil: +43 664 311 286 1 |
> > | |
> > | mail: eugen.wintersberger_at_[hidden] |
> > | eugen.wintersberger_at_[hidden] |
> > | |
> > --------------------------------------------
> > <mpi_test.f90>
> > _______________________________________________
> > This list is archived at http://www.lam-mpi.org/MailArchives/lam/
>