Greetings,
I compiled my code on a machine with a single CPU. Then, to check whether it would run on a cluster, I first ran it with mpirun -np 4 on that single-CPU machine.
I use lamboot with no options to start the LAM run-time environment. The result is that four processes start, but they all report rank 0, and there is no communication between the processes.
Then I tried an actual cluster, but the result was the same: four processes, all with rank 0.
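In case my boot setup is part of the problem: for the cluster run, my understanding is that lamboot needs a boot schema listing the nodes, roughly like this (the hostnames here are placeholders, not my real nodes):

# bhost: one node per line, optionally with a CPU count
node1.example.com cpu=2
node2.example.com cpu=2

lamboot -v bhost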
It seems that some setting is incorrect.
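One thing I do not know how to rule out: whether the executable is really linked against this LAM installation. (My understanding is that if the binary was built against a different MPI than the mpirun launching it, each process can come up as an independent rank-0 singleton, which would match what I see.) The check I would try, using the paths from my run below:

ldd /home/gregory/linux/test-mpi.exe
which mpirun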
Partial output from the run is below:
4776 /home/gregory/linux/test-mpi.exe running on n0 (o)
4777 /home/gregory/linux/test-mpi.exe running on n0 (o)
4778 /home/gregory/linux/test-mpi.exe running on n0 (o)
4779 /home/gregory/linux/test-mpi.exe running on n0 (o)
In addition, I am including the laminfo output and the test command file (test.com) below.
There is also a problem with setting environment variables. I have tried the following two lines in test.com, with no effect:
mpirun -x INBOUND,INMESH,INFLOW,INTURB
mpirun -x OUTBOUND,OUTMESH,OUTFLOW,OUTURB,OUTSUMM,OUTCONV
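From the mpirun man page, my guess is that -x only exports variables for the program launched by that same mpirun invocation, so a standalone mpirun -x line would do nothing. Should the exports instead be folded into the launch command itself, something like this (a sketch using the same variables as in test.com)?

mpirun -np 4 -v \
  -x INBOUND,INMESH,INFLOW,INTURB,OUTBOUND,OUTMESH,OUTFLOW,OUTURB,OUTSUMM,OUTCONV \
  $TEST/$TARGETSYSTEM/test-mpi.exe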
laminfo output:
LAM/MPI: 7.1.1
Prefix: /usr
Architecture: i386-redhat-linux-gnu
Configured by: bhcompile
Configured on: Tue Mar 8 16:47:54 EST 2005
Configure host: tweety.build.redhat.com
Memory manager: ptmalloc2
C bindings: yes
C++ bindings: yes
Fortran bindings: yes
C compiler: i386-redhat-linux-gcc
C++ compiler: i386-redhat-linux-g++
Fortran compiler: f95
Fortran symbols: double_underscore
C profiling: yes
C++ profiling: yes
Fortran profiling: yes
C++ exceptions: no
Thread support: yes
ROMIO support: yes
IMPI support: no
Debug support: no
Purify clean: no
SSI boot: globus (API v1.1, Module v0.6)
SSI boot: rsh (API v1.1, Module v1.1)
SSI boot: slurm (API v1.1, Module v1.0)
SSI coll: lam_basic (API v1.1, Module v7.1)
SSI coll: shmem (API v1.1, Module v1.0)
SSI coll: smp (API v1.1, Module v1.2)
SSI rpi: crtcp (API v1.1, Module v1.1)
SSI rpi: lamd (API v1.0, Module v7.1)
SSI rpi: sysv (API v1.0, Module v7.1)
SSI rpi: tcp (API v1.0, Module v7.1)
SSI rpi: usysv (API v1.0, Module v7.1)
SSI cr: self (API v1.0, Module v1.0)
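Since several rpi modules are listed above, I also wonder whether forcing a specific transport at run time would change anything. My understanding of the SSI syntax for that (a sketch):

mpirun -ssi rpi tcp -np 4 $TEST/$TARGETSYSTEM/test-mpi.exe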
Test command file (test.com):
#!/bin/tcsh
# *********Input files************************
setenv INBOUND ~/tnsample/010607/stg/4msh/stg.711u
setenv INMESH ~/tnsample/010607/stg/4msh/stg.722
setenv INFLOW ~/tnsample/010607/stg/4msh/stg.732
setenv INTURB ~/tnsample/010607/stg/4msh/stg.t02
# *********Output files***********************
setenv OUTBOUND ~/tnsample/010607/stg/4msh/stg-cur.712
setenv OUTMESH ~/tnsample/010607/stg/4msh/stg-mpi.722u
setenv OUTSUMM ~/tnsample/010607/stg/4msh/test-mpi.761u
setenv OUTCONV ~/tnsample/010607/stg/4msh/test-mpi.771u
setenv OUTURB ~/tnsample/010607/stg/4msh/test-mpi.t01u
# *********Executable**************************
mpirun -x INBOUND,INMESH,INFLOW,INTURB
mpirun -x OUTBOUND,OUTMESH,OUTFLOW,OUTURB,OUTSUMM,OUTCONV
mpirun -np 4 -v $TEST/$TARGETSYSTEM/test-mpi.exe
I would appreciate any comments.
Greg