
LAM/MPI General User's Mailing List Archives


From: Gabriel Antoine Louis Paillard (gap_at_[hidden])
Date: 2004-12-07 04:07:39


Thank you again for your attention and for your answer.

>Note from my first mail on this thread:

>"As per MPI-2 p84, the command, argv, maxprocs, and info arguments to
>MPI_COMM_SPAWN are only significant in the root process."

>Specifically: MPI says that it doesn't matter what you put in argv on
>any process other than the root. The argv from the root is the only
>argv that matters.
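
Just to check that I follow, a minimal sketch of that rule might look like
this (not my real program; the executable name "worker" and its argument
are placeholders): every process makes the same collective call, but only
what rank 0 puts in spawn_argv is ever used.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    char *spawn_argv[2] = { "42", NULL };   /* placeholder argument */
    MPI_Comm children;
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Collective call over MPI_COMM_WORLD with root = 0: the command,
       argv, maxprocs and info arguments are only read on rank 0, so
       whatever the other ranks pass here is simply ignored. */
    MPI_Comm_spawn("worker", spawn_argv, 1, MPI_INFO_NULL,
                   0, MPI_COMM_WORLD, &children, MPI_ERRCODES_IGNORE);

    printf("rank %d took part in the collective spawn\n", rank);

    MPI_Finalize();
    return 0;
}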

Now I really understand the role of the root. But the problem with
MPI_Comm_spawn, using my program below, is that the work for rank
(mon_rang) 0 gets executed 8 times: once for the process with rank 0
and seven more times for the other 7 processes, instead of just once
per process. How can I remedy that?

>By changing your application to use MPI_COMM_SELF, this effectively
>makes every process the root, and therefore it launches 8 different
>processes (each with their own MPI_COMM_WORLD). That's why you see 8
>"rank 0"s. Yes, they're all rank 0, but from different
>MPI_COMM_WORLD's. Additionally, none of those processes can communicate
>with each other -- they can only communicate with their parent, which,
>in this case, is one process from the original MPI application that
>you ran (probably via mpirun).

Yes, I'm using mpirun.
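
For what it is worth, this is how I picture the child side of that
explanation (only a sketch of my understanding, assuming the spawned
executable is itself an MPI program): each child has its own
MPI_COMM_WORLD and the only way back to its parent is the
intercommunicator returned by MPI_Comm_get_parent.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank;
    MPI_Comm parent;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* rank in the child's own world */

    MPI_Comm_get_parent(&parent);
    if (parent == MPI_COMM_NULL) {
        /* started directly (e.g. with mpirun), not spawned */
        printf("no parent\n");
    } else {
        /* "parent" is an intercommunicator back to the process(es) that
           called MPI_Comm_spawn; it is the child's only link outside
           its own MPI_COMM_WORLD */
        printf("spawned child, rank %d in its own MPI_COMM_WORLD\n", rank);
    }

    MPI_Finalize();
    return 0;
}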

>Also note from my previous mail:

>"If you want to launch N processes, each with different arguments, then
>you need to use either multiple calls to MPI_COMM_SPAWN (i.e., each
>one with a different argv), or MPI_COMM_SPAWN_MULTIPLE, where you can
>specify an array of argv."
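
If I follow, the first of those options would look roughly like this
(only a sketch, launched from rank 0 over MPI_COMM_WORLD; the argument
values are just placeholders): one MPI_Comm_spawn call per distinct argv.

#include <stdio.h>
#include <mpi.h>

#define NCHILDREN 8

int main(int argc, char *argv[])
{
    /* one distinct argument per child; the values are placeholders */
    char *args[NCHILDREN] = { "1", "7", "11", "13", "17", "19", "23", "29" };
    char *spawn_argv[2];
    MPI_Comm child[NCHILDREN];
    int i;

    MPI_Init(&argc, &argv);

    /* N separate collective spawns, each with its own argv; each call
       returns its own intercommunicator, and the children do not share
       an MPI_COMM_WORLD with one another */
    for (i = 0; i < NCHILDREN; i++) {
        spawn_argv[0] = args[i];
        spawn_argv[1] = NULL;
        MPI_Comm_spawn("Program", spawn_argv, 1, MPI_INFO_NULL,
                       0, MPI_COMM_WORLD, &child[i], MPI_ERRCODES_IGNORE);
    }

    MPI_Finalize();
    return 0;
}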

But what I want is to launch a new process from every existing process,
and then try to merge all the worlds afterwards. For that I can't use
MPI_Comm_spawn_multiple, because if I start the program with 8
processes, only the root process would launch the new processes, and
that isn't the purpose of the program.
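
What I had in mind on each process is roughly the following (only a
sketch, assuming the child "Program" calls MPI_Comm_get_parent and does
the matching MPI_Intercomm_merge on its side): spawn a single child over
MPI_COMM_SELF and merge the resulting intercommunicator. As far as I can
tell, though, each merge only joins one parent with its own child, so
with 8 original processes I get 8 separate two-process communicators
rather than one common world.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    char *spawn_argv[2] = { "argument-for-this-child", NULL };  /* placeholder */
    MPI_Comm inter, merged;

    MPI_Init(&argc, &argv);

    /* every process is the root of its own spawn (MPI_COMM_SELF), and
       maxprocs = 1 launches exactly one child per caller */
    MPI_Comm_spawn("Program", spawn_argv, 1, MPI_INFO_NULL,
                   0, MPI_COMM_SELF, &inter, MPI_ERRCODES_IGNORE);

    /* merge parent and child into one intracommunicator; this only
       joins THIS parent with ITS child, so it does not produce a
       single communicator covering all 8 parent/child pairs */
    MPI_Intercomm_merge(inter, 0, &merged);

    MPI_Finalize();
    return 0;
}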

>If you want these 8 processes to share a common MPI_COMM_WORLD, and you
>only want one launch command, then you really need to use
>MPI_COMM_SPAWN_MULTIPLE.
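
So if I understand the suggestion, the parent side would be something
like this (only a sketch; the executable name and the arguments are
placeholders): a single collective MPI_Comm_spawn_multiple call from
rank 0 launches all 8 children into one shared MPI_COMM_WORLD, each
child with its own argv.

#include <stdio.h>
#include <mpi.h>

#define NCHILDREN 8

int main(int argc, char *argv[])
{
    char *commands[NCHILDREN];
    char *args[NCHILDREN] = { "1", "7", "11", "13", "17", "19", "23", "29" };
    char *argvs[NCHILDREN][2];
    char **array_of_argv[NCHILDREN];
    int maxprocs[NCHILDREN];
    MPI_Info infos[NCHILDREN];
    MPI_Comm children;
    int i;

    MPI_Init(&argc, &argv);

    for (i = 0; i < NCHILDREN; i++) {
        commands[i] = "Program";        /* same binary, different argv */
        argvs[i][0] = args[i];
        argvs[i][1] = NULL;
        array_of_argv[i] = argvs[i];
        maxprocs[i] = 1;
        infos[i] = MPI_INFO_NULL;
    }

    /* one collective call, root 0: the 8 children all start in one new,
       shared MPI_COMM_WORLD, and "children" is the intercommunicator
       from the parents to that group */
    MPI_Comm_spawn_multiple(NCHILDREN, commands, array_of_argv, maxprocs,
                            infos, 0, MPI_COMM_WORLD, &children,
                            MPI_ERRCODES_IGNORE);

    MPI_Finalize();
    return 0;
}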

Thank you again,

Gabriel Paillard

On Dec 6, 2004, at 11:32 AM, Gabriel Antoine Louis Paillard wrote:

        Now, after some modifications to the same code, I finally get
        the desired output (all processes received their argument), but
        rank 0 executed the same thing 8 times.
        
        
        #include <stdio.h>
        #include <stdlib.h>
        #include <mpi.h>

        int slave (unsigned long int argument);


        int main(int argc, char *argv[])
        {
            int myrank;
            double starttime, endtime;

            MPI_Init(&argc, &argv);
            starttime = MPI_Wtime();
            MPI_Comm_rank(MPI_COMM_WORLD, &myrank);

            /* each rank hands a different argument to its spawned child */
            switch (myrank) {
            case 0: slave(1);  break;
            case 1: slave(7);  break;
            case 2: slave(11); break;
            case 3: slave(13); break;
            case 4: slave(17); break;
            case 5: slave(19); break;
            case 6: slave(23); break;
            case 7: slave(29); break;
            }
            endtime = MPI_Wtime();
            printf("time: %1.15f\n", endtime - starttime);

            MPI_Finalize();
            return 0;
        }

        int slave (unsigned long int argument)
        {
            char command[] = "Program";
            char **argv;
            int err, mon_rang;
            int errcodes[8];
            MPI_Comm nouveau_monde;

            /* MPI_Comm_rank returns an error code; the rank itself is
               written into mon_rang through the pointer */
            MPI_Comm_rank(MPI_COMM_WORLD, &mon_rang);

            argv = (char **) malloc(2 * sizeof(char *));
            argv[0] = (char *) malloc(100 * sizeof(char));
            sprintf(argv[0], "%lu", argument);
            argv[1] = NULL;

            MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);

            /* maxprocs = 8 asks every caller for 8 copies of "Program";
               over MPI_COMM_SELF the only valid root is 0 */
            err = MPI_Comm_spawn(command, argv, 8, MPI_INFO_NULL, 0,
                                 MPI_COMM_SELF, &nouveau_monde, errcodes);

            return 0;
        }
        
        Thanks again,
        
        Gabriel Paillard
        
        _______________________________________________
        This list is archived at
        http://www.lam-mpi.org/MailArchives/lam/
        

--
{+} Jeff Squyres
{+} jsquyres_at_xxxxxxxxxxx
{+} http://www.lam-mpi.org/