Hi Carsten,
I have written a minimal MPI program that does nothing except report the communicator size and its own rank:
#include "mpi.h"
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
int main(int argc, char* argv[]) {
int my_id, other_id, size;
int length = 1, tag = 1;
int myvalue, othervalue;
MPI_Status satus;
MPI_Init(&argc, &argv);
MPI_Comm_size(MPI_COMM_WORLD, &size);
MPI_Comm_rank(MPI_COMM_WORLD, &my_id);
printf("\n[size is %d and rank is %d]\n", size, my_id);
MPI_Finalize();
return 0;
}
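The unused variables above are placeholders: once the ranks come up correctly, I want to grow this into a point-to-point exchange, roughly like the sketch below (untested, and just one possible shape: rank 0 collects one integer from every other rank with MPI_Send/MPI_Recv):

#include "mpi.h"
#include <stdio.h>

int main(int argc, char* argv[]) {
    int my_id, size, i;
    int tag = 1;
    int myvalue, othervalue;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Comm_rank(MPI_COMM_WORLD, &my_id);

    myvalue = my_id * 10;  /* something rank-specific to send */

    if (my_id == 0) {
        /* rank 0 receives one value from every other rank, in rank order */
        for (i = 1; i < size; i++) {
            MPI_Recv(&othervalue, 1, MPI_INT, i, tag, MPI_COMM_WORLD, &status);
            printf("rank 0 got %d from rank %d\n", othervalue, i);
        }
    } else {
        /* every other rank sends its value to rank 0 */
        MPI_Send(&myvalue, 1, MPI_INT, 0, tag, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}

For now, though, I am trying to get just the minimal version above running.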
I compile it like this:
mpicc -Wall -ansi -o mpiTest mpiTest.c
Then I run it with:
mpirun -v -np 5 mpiTest
But it prints:
[size is 1 and rank is 0]
21168 mpiTest running on n0 (o)
[size is 1 and rank is 0]
21169 mpiTest running on n0 (o)
[size is 1 and rank is 0]
21170 mpiTest running on n0 (o)
[size is 1 and rank is 0]
21171 mpiTest running on n0 (o)
[size is 1 and rank is 0]
21172 mpiTest running on n0 (o)
followed by:
-----------------------------------------------------------------------------
It seems that [at least] one of the processes that was started with
mpirun did not invoke MPI_INIT before quitting (it is possible that
more than one process did not invoke MPI_INIT -- mpirun was only
notified of the first one, which was on node n0).
mpirun can *only* be used with MPI programs (i.e., programs that
invoke MPI_INIT and MPI_FINALIZE). You can use the "lamexec" program
to run non-MPI programs over the lambooted nodes.
-----------------------------------------------------------------------------
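One guess on my part: could mpiTest have been compiled against a different MPI installation than the mpirun that launches it? If I understand correctly, something like

mpicc -showme
which mpicc
which mpirun

should reveal which installation each command belongs to (-showme is a LAM wrapper-compiler option, if I remember right), but I am not sure how to interpret what I see there.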
Any ideas what the problem might be?
Regards,
Al
On 10/10/07, Carsten Kutzner <ckutzne_at_[hidden]> wrote:
>
> Altu 59 wrote:
> > Hi,
> >
> > I want to write an MPI program that comprises 5 nodes.
> > Can I test this with LAM on one machine?
> >
> > How can I create 5 nodes on one machine? Is it even possible?
> Hi Al,
>
> just boot LAM without arguments ('lamboot') and let mpirun know the
> number of processes you want, e.g.
>
> mpirun -np 5 myprog.x
>
> That should do exactly what you want.
> Carsten
> _______________________________________________
> This list is archived at http://www.lam-mpi.org/MailArchives/lam/
>