On Jun 17, 2006, at 10:50 PM, Carlos Cruz wrote:
> I am using a Fortran 90 CFD code on a Linux cluster (AMD64 K8 proc.).
> The Fortran compiler is PathScale and we also use LAM/MPI. We compile
> with pathf90 and link against all the MPI libraries.
> I am experiencing problems with MPI_Allreduce calls. The compilation
> is successful, but the run stops with the following error message:
>
> MPI_Allreduce: invalid operation: Invalid argument (rank 0,
> MPI_COMM_WORLD)
> Rank (0, MPI_COMM_WORLD): Call stack within LAM:
> Rank (0, MPI_COMM_WORLD): - MPI_Allreduce()
> Rank (0, MPI_COMM_WORLD): - main()
>
> There are two types of MPI_Allreduce calls, typically:
>
> call MPI_Allreduce(tstep_l, tstep_g, 1, MPI_2DOUBLE_PRECISION, MPI_MINLOC, gcomm, ierr)
> call MPI_Allreduce(rkerr_l, rkerr_g, 1, MPI_REAL8, MPI_MIN, gcomm, ierr)
>
> These datatypes are listed in the LAM/MPI docs, so how can it not
> recognize them?
>
> I have also seen MPI_Allreduce calls with 8 arguments instead of 7,
> the extra argument being a '0' (integer) before 'gcomm', i.e.
> call MPI_Allreduce(rkerr_l, rkerr_g, 1, MPI_REAL8, MPI_MIN, 0, gcomm, ierr)
> Please see the example at the following link:
> http://www.mpi-forum.org/docs/mpi-11-html/node82.html#Node82
>
> Thanks a lot for any clarification on, or suggestion to solve, this
> MPI_Allreduce error.
The MPI standard does not require an MPI implementation to support
the combination of MPI_REAL8 and MPI_MIN in calls to MPI_Reduce /
MPI_Allreduce. However, since it has come up a couple of times
recently, I added support for that combination to our release
branch today. Updating to the recently posted 7.1.3a1 pre-release
should solve your problem:
http://www.lam-mpi.org/beta/
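In the meantime, a portable workaround is to reduce with
MPI_DOUBLE_PRECISION, which the standard requires every implementation
to support with the predefined operations such as MPI_MIN.
(Incidentally, the 8-argument form in the page you linked is
MPI_Reduce, not MPI_Allreduce; the extra '0' there is the root rank.)
A minimal sketch of the workaround, assuming rkerr_l and rkerr_g are
declared double precision (variable names taken from your post):

```fortran
! Sketch only: replaces MPI_REAL8 with the standard Fortran datatype
! MPI_DOUBLE_PRECISION, which all implementations must accept with
! predefined reduction ops like MPI_MIN.
double precision :: rkerr_l, rkerr_g
integer :: gcomm, ierr

call MPI_Allreduce(rkerr_l, rkerr_g, 1, MPI_DOUBLE_PRECISION, &
                   MPI_MIN, gcomm, ierr)
```

On most compilers REAL*8 and DOUBLE PRECISION are the same type, so
this should be a drop-in change until you can upgrade.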
Hope this helps,
Brian
--
Brian Barrett
LAM/MPI developer and all around nice guy
Have a LAM/MPI day: http://www.lam-mpi.org/