
LAM/MPI General User's Mailing List Archives


From: Alex Krouglov (alexkrouglov_at_[hidden])
Date: 2003-08-18 14:32:33


Hello,

I migrated my application to LAM 7.0, which I installed as an RPM package
for Red Hat 9, and got the following output.

Could you please clarify what the problem is?

Thanks a lot in advance.

Best regards,

Alex Krouglov

================================================================
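For context, the two files referenced in the session below are LAM schema files whose exact contents are not shown in this post. A typical layout, sketched from the hostnames and process names that appear in the output (the placement lines are an assumption, not the actual files):

```
# server-client1-client2 -- LAM boot schema: one hostname per line
Alextest
Alextest1
Alextest2

# uir.schema -- LAM application schema: node range, then the program to run
n0 uir_mpi
n1-2 uir_pu
```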

[alex_at_alextest work]$ lamboot -v server-client1-client2

LAM 7.0/MPI 2 C++/ROMIO - Indiana University

n0<6833> ssi:boot:base:linear: booting n0 (Alextest)
n0<6833> ssi:boot:base:linear: booting n1 (Alextest1)
n0<6833> ssi:boot:base:linear: booting n2 (Alextest2)
n0<6833> ssi:boot:base:linear: finished
[alex_at_alextest work]$ mpirun -v uir.schema
Incorrectly built binary which accesses errno, h_errno or _res directly.
Needs to be fixed.
-----------------------------------------------------------------------------

It seems that there is no lamd running on the host.

This indicates that the LAM/MPI runtime environment is not operating.
The LAM/MPI runtime environment is necessary for MPI programs to run
(the MPI program tried to invoke the "MPI_Init" function).

Please run the "lamboot" command to start the LAM/MPI runtime
environment. See the LAM/MPI documentation for how to invoke
"lamboot" across multiple machines.
-----------------------------------------------------------------------------
6842 uir_mpi running on n0 (o)
3806 uir_pu running on n1
4094 uir_pu running on n2
-----------------------------------------------------------------------------

It seems that there is no lamd running on the host.

This indicates that the LAM/MPI runtime environment is not operating.
The LAM/MPI runtime environment is necessary for MPI programs to run
(the MPI program tried to invoke the "MPI_Init" function).

Please run the "lamboot" command to start the LAM/MPI runtime
environment. See the LAM/MPI documentation for how to invoke
"lamboot" across multiple machines.
-----------------------------------------------------------------------------
Incorrectly built binary which accesses errno, h_errno or _res directly.
Needs to be fixed.
-----------------------------------------------------------------------------

It seems that there is no lamd running on the host.

This indicates that the LAM/MPI runtime environment is not operating.
The LAM/MPI runtime environment is necessary for MPI programs to run
(the MPI program tried to invoke the "MPI_Init" function).

Please run the "lamboot" command to start the LAM/MPI runtime
environment. See the LAM/MPI documentation for how to invoke
"lamboot" across multiple machines.
-----------------------------------------------------------------------------
Incorrectly built binary which accesses errno, h_errno or _res directly.
Needs to be fixed.
-----------------------------------------------------------------------------
It seems that [at least] one of the processes that was started with
mpirun did not invoke MPI_INIT before quitting (it is possible that
more than one process did not invoke MPI_INIT -- mpirun was only
notified of the first one, which was on node n0).

mpirun can *only* be used with MPI programs (i.e., programs that
invoke MPI_INIT and MPI_FINALIZE). You can use the "lamexec" program
to run non-MPI programs over the lambooted nodes.
-----------------------------------------------------------------------------
[alex_at_alextest work]$ tkill
[alex_at_alextest work]$ lamboot -v server-client1-client2

LAM 7.0/MPI 2 C++/ROMIO - Indiana University

n0<6844> ssi:boot:base:linear: booting n0 (Alextest)
n0<6844> ssi:boot:base:linear: booting n1 (Alextest1)
n0<6844> ssi:boot:base:linear: booting n2 (Alextest2)
n0<6844> ssi:boot:base:linear: finished
[alex_at_alextest work]$ mpirun -v uir.schema
Incorrectly built binary which accesses errno, h_errno or _res directly.
Needs to be fixed.
-----------------------------------------------------------------------------

It seems that there is no lamd running on the host.

This indicates that the LAM/MPI runtime environment is not operating.
The LAM/MPI runtime environment is necessary for MPI programs to run
(the MPI program tried to invoke the "MPI_Init" function).

Please run the "lamboot" command to start the LAM/MPI runtime
environment. See the LAM/MPI documentation for how to invoke
"lamboot" across multiple machines.
-----------------------------------------------------------------------------
6853 uir_mpi running on n0 (o)
3963 uir_pu running on n1
-----------------------------------------------------------------------------

It seems that there is no lamd running on the host.

This indicates that the LAM/MPI runtime environment is not operating.
The LAM/MPI runtime environment is necessary for MPI programs to run
(the MPI program tried to invoke the "MPI_Init" function).

Please run the "lamboot" command to start the LAM/MPI runtime
environment. See the LAM/MPI documentation for how to invoke
"lamboot" across multiple machines.
-----------------------------------------------------------------------------
Incorrectly built binary which accesses errno, h_errno or _res directly.
Needs to be fixed.
4251 uir_pu running on n2
-----------------------------------------------------------------------------

It seems that there is no lamd running on the host.

This indicates that the LAM/MPI runtime environment is not operating.
The LAM/MPI runtime environment is necessary for MPI programs to run
(the MPI program tried to invoke the "MPI_Init" function).

Please run the "lamboot" command to start the LAM/MPI runtime
environment. See the LAM/MPI documentation for how to invoke
"lamboot" across multiple machines.
-----------------------------------------------------------------------------
Incorrectly built binary which accesses errno, h_errno or _res directly.
Needs to be fixed.
-----------------------------------------------------------------------------
It seems that [at least] one of the processes that was started with
mpirun did not invoke MPI_INIT before quitting (it is possible that
more than one process did not invoke MPI_INIT -- mpirun was only
notified of the first one, which was on node n0).

mpirun can *only* be used with MPI programs (i.e., programs that
invoke MPI_INIT and MPI_FINALIZE). You can use the "lamexec" program
to run non-MPI programs over the lambooted nodes.
-----------------------------------------------------------------------------
[alex_at_alextest work]$