
LAM/MPI General User's Mailing List Archives


From: Jeff Squyres (jsquyres_at_[hidden])
Date: 2005-12-13 20:24:40


What is happening here is that LAM is informing you that one of your
processes died due to a seg fault -- it was MPI_COMM_WORLD rank 1.

You might want to run your application through a memory-checking
debugger such as Valgrind. See the LAM/MPI FAQ for helpful debugging
tips, especially when using Valgrind.
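
For what it's worth, a classic way to trigger exactly this failure is a
receive buffer that is smaller than the count passed to MPI_Recv. The
sketch below is purely hypothetical (the buffer sizes are invented; it
is not your code): rank 1 tells MPI_Recv it may write 1024 ints into a
16-int buffer, and the overrun can kill the process with signal 11:

    #include <stdlib.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int i, rank;
        MPI_Status status;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            int data[1024];
            for (i = 0; i < 1024; ++i)
                data[i] = i;
            /* Send 1024 ints to rank 1 */
            MPI_Send(data, 1024, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            /* BUG: buf holds only 16 ints, but MPI_Recv is told it
               may write up to 1024.  The overrun can raise SIGSEGV
               (signal 11), and LAM then reports the dead process to
               its peers. */
            int *buf = (int *) malloc(16 * sizeof(int));
            MPI_Recv(buf, 1024, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     &status);
            free(buf);
        }

        MPI_Finalize();
        return 0;
    }

Valgrind catches this kind of invalid write immediately; launching the
application through it -- typically something like "mpirun -np 2
valgrind /users/cs/grad/dilani/Research_2005/out", assuming valgrind is
installed on every node -- will point at the offending line.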

On Dec 13, 2005, at 8:44 AM, Dilani Perera wrote:

>
> Hi, what could be the reason for the following error?
>
> When I give the full path, it gives the error below. When I do not
> give the full path, it works with up to 12 processors.
>
> ---------------------------------------------------------------------------
> nuthatch% mpirun -np 2 /users/cs/grad/dilani/Research_2005/out
> MPI_Recv: process in local group is dead (rank 1, MPI_COMM_WORLD)
> Rank (1, MPI_COMM_WORLD): Call stack within LAM:
> Rank (1, MPI_COMM_WORLD): - MPI_Recv()
> Rank (1, MPI_COMM_WORLD): - main()
> ---------------------------------------------------------------------------
> One of the processes started by mpirun has exited with a nonzero exit
> code. This typically indicates that the process finished in error.
> If your process did not finish in error, be sure to include a "return
> 0" or "exit(0)" in your C code before exiting the application.
>
> PID 4508 failed on node n0 (134.153.50.235) due to signal 11.
> ---------------------------------------------------------------------------
> nuthatch%
>
>
> Thanks.
>
> Dilani Perera.
> (MSC Candidate for Computational Sciences)
> Department of Computer Science,
> St. John's, NL
> Canada, A1B 3X5
> Tel: 709-737-6142 (office)
>
> email : dilani_at_[hidden]
> Visit me at : www.cs.mun.ca/~dilani
>
> _______________________________________________
> This list is archived at http://www.lam-mpi.org/MailArchives/lam/

--
{+} Jeff Squyres
{+} The Open MPI Project
{+} http://www.open-mpi.org/