LAM 7.1.2 doesn't work either.
laminfo output:
LAM/MPI: 7.1.2
Prefix: /ll/wlp/laminstall
Architecture: x86_64-unknown-linux-gnu
Configured by: wlp
Configured on: Mon Jul 3 08:25:58 EDT 2006
Configure host: adam
Memory manager: ptmalloc2
C bindings: yes
C++ bindings: yes
Fortran bindings: yes
C compiler: icc
C++ compiler: icpc
Fortran compiler: g77
Fortran symbols: double_underscore
C profiling: yes
C++ profiling: yes
Fortran profiling: yes
C++ exceptions: no
Thread support: yes
ROMIO support: yes
IMPI support: no
Debug support: no
Purify clean: no
SSI boot: globus (API v1.1, Module v0.6)
SSI boot: rsh (API v1.1, Module v1.1)
SSI boot: slurm (API v1.1, Module v1.0)
SSI coll: lam_basic (API v1.1, Module v7.1)
SSI coll: shmem (API v1.1, Module v1.0)
SSI coll: smp (API v1.1, Module v1.2)
SSI rpi: crtcp (API v1.1, Module v1.1)
SSI rpi: lamd (API v1.0, Module v7.1)
SSI rpi: sysv (API v1.0, Module v7.1)
SSI rpi: tcp (API v1.0, Module v7.1)
SSI rpi: usysv (API v1.0, Module v7.1)
SSI cr: self (API v1.0, Module v1.0)
mpirun output:
MPI_Comm_rank: invalid communicator: Invalid argument (rank 0,
MPI_COMM_WORLD)
Rank (0, MPI_COMM_WORLD): Call stack within LAM:
Rank (0, MPI_COMM_WORLD): - MPI_Comm_rank()
Rank (0, MPI_COMM_WORLD): - main()
-----------------------------------------------------------------------------
One of the processes started by mpirun has exited with a nonzero exit
code. This typically indicates that the process finished in error.
If your process did not finish in error, be sure to include a "return
0" or "exit(0)" in your C code before exiting the application.
PID 30043 failed on node n0 (127.0.0.1) with exit status 22.
-----------------------------------------------------------------------------
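For reference, this is the kind of minimal test case I'd expect to trigger the error (a sketch of my own, not the exact program from the original post), exercising MPI_Comm_rank on MPI_COMM_WORLD; if even this fails with the invalid-communicator message, the problem is in the LAM build rather than the application code:

```c
/* Minimal MPI_Comm_rank test -- hypothetical reproducer, since the
 * original program from the thread is not shown here.
 * Build/run (assuming a LAM install):  mpicc test.c -o test && mpirun -np 2 test
 */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);

    /* This is the call that fails above with "invalid communicator". */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();
    return 0;   /* explicit clean exit so mpirun doesn't report an error */
}
```

Note this uses the plain C bindings; since the 7.1.1 -> 7.1.2 fixes Brian mentioned were in the C++ bindings, comparing the C and C++ versions of the same test might help isolate where the breakage is.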
Brian Barrett wrote:
> On Jun 28, 2006, at 2:10 PM, William Pughe wrote:
>
>> Hello all. I'm having problems running a simple program. I built LAM
>> 7.1.1 with the Intel 9.1 compiler. When running the following
>> program:
>
> Could you try upgrading to LAM 7.1.2? We fixed a couple of issues in
> the C++ bindings between 7.1.1 and 7.1.2, and I think we've resolved
> this issue already.
>
> Thanks,
>
> Brian
>