
LAM/MPI General User's Mailing List Archives


From: Jim Lasc (jimlasc_at_[hidden])
Date: 2005-08-18 17:13:39


Thanks a lot.
Your code works fine.

Jim.

On 8/18/05, Jeff Squyres <jsquyres_at_[hidden]> wrote:
>
> For the general case, you can probably make your code a lot simpler
> with MPI_COMM_SPLIT. Perhaps something like the following (off the
> top of my head -- be sure to check this for correctness!):
>
> // easy case -- even number of procs
> if (size % 2 == 0) {
>     MPI_Comm_split(MPI_COMM_WORLD, rank / 2, 0, &comm1);
>     MPI_Comm_split(MPI_COMM_WORLD, ((rank + 1) % size) / 2, 0, &comm2);
>     if (rank % 2 == 0) {
>         comm_right = comm1;
>         comm_left = comm2;
>     } else {
>         comm_right = comm2;
>         comm_left = comm1;
>     }
> }
> // odd number of procs
> else {
>     // first exclude the last proc
>     color = (rank == size - 1) ? MPI_UNDEFINED : rank / 2;
>     MPI_Comm_split(MPI_COMM_WORLD, color, 0, &comm1);
>     // now exclude the first proc
>     color = (rank == 0) ? MPI_UNDEFINED : (rank + 1) / 2;
>     MPI_Comm_split(MPI_COMM_WORLD, color, 0, &comm2);
>     if (rank % 2 == 0) {
>         comm_right = comm1;
>         comm_left = comm2;
>     } else {
>         comm_right = comm2;
>         comm_left = comm1;
>     }
>
>     // now make the comm between 0 and the last proc
>     color = (rank == 0 || rank == size - 1) ? 0 : MPI_UNDEFINED;
>     MPI_Comm_split(MPI_COMM_WORLD, color, 0, &comm1);
>     if (rank == 0) {
>         comm_left = comm1;
>     } else if (rank == size - 1) {
>         comm_right = comm1;
>     }
> }
>
> You could probably reduce this logic down a bit, but you get the idea.
>
>
>
>
> On Aug 18, 2005, at 8:41 AM, Jim Lasc wrote:
>
> > Hi,
> > I tried to do the following:
> > make a ring where there is a communicator between every two nodes
> > (processes) of the ring.
> > The code I made is added below. Every MPI_Comm_create always
> > returns MPI_SUCCESS,
> > but I skipped all the error checking to make the code more readable.
> > The "problem" part is for when you have an odd number of nodes.
> > 0 -com1- 1 -com2- 2 -com1- 3 -com2- 4 -COM3- 0
> > (otherwise 0 would have two com1's, but it will be clear in the code...)
> >
> > I know that I could also use the following technique to set up the
> > communicators:
> > * comm 0-1
> > * 1 sends msg to 2
> > * comm 1-2
> > * 2 sends msg to 3
> > * ...
> > But I prefer to use my method.
> >
> > Now the problem is:
> > There's always one link that doesn't work (even though
> > MPI_Comm_create didn't give an error).
> > It's always either the link 0-n (n is the highest MPI rank in
> > MPI_COMM_WORLD), 0-1, or n-(n-1).
> > Which link it is depends on the order of the three "blocks" (FIRST
> > - SECOND - PROBLEM),
> > which can be swapped; strangely enough, it's the block in the middle
> > that determines the bad link.
> >
> > My question:
> > Does anyone know what is wrong with the code below?
> > Are there other methods to achieve the same thing (other than the
> > msg technique I described above)?
> >
> > Thanks in advance for any ideas and suggestions.
> >
> > Jim.
> >
> > #include <iostream>
> > #include <windows.h>
> > #include "mpi.h"
> >
> > #pragma comment(lib,"mpi.lib")
> >
> > static MPI_Comm* pCommL=NULL;
> > static MPI_Comm* pCommR=NULL;
> > int* pIntL=(int*)malloc(sizeof(int));
> > int* pIntR=(int*)malloc(sizeof(int));
> >
> > void SendLeft(void *buf, int count, MPI_Datatype datatype, int tag);
> >
> > int main(int argc, char **argv)
> > {
> >     int size, rank, newrank;
> >     MPI_Init(&argc, &argv);
> >     MPI_Comm_size(MPI_COMM_WORLD, &size);
> >     MPI_Comm_rank(MPI_COMM_WORLD, &rank);
> >
> >     MPI_Comm c_buur1, c_buur2, c_buur3;
> >     MPI_Group g_buren, world;
> >
> >     int tempgroup[2];
> >     bool problem=(rank==size-1 && rank%2==0);
> >     bool problem2=(rank==0 && size%2==1);
> >
> >     /***************FIRST***************/
> >     if(!problem){
> >         if(rank%2==0){
> >             tempgroup[0]=(rank+1+size)%size;
> >             tempgroup[1]=rank;
> >         }
> >         else{
> >             tempgroup[0]=rank;
> >             tempgroup[1]=(rank-1+size)%size;
> >         }
> >
> >         MPI_Comm_group(MPI_COMM_WORLD, &world);
> >         MPI_Group_incl(world, 2, tempgroup, &g_buren);
> >         MPI_Comm_create(MPI_COMM_WORLD, g_buren, &c_buur1);
> >         MPI_Group_free(&g_buren);
> >
> >         if(rank%2==0){
> >             pCommL=&c_buur1;
> >             *pIntL=0;
> >         }
> >         else{
> >             pCommR=&c_buur1;
> >             *pIntR=1;
> >         }
> >     }
> >     /***************SECOND***************/
> >     if(!problem2){
> >         if(rank%2==0){
> >             tempgroup[0]=(rank-1+size)%size;
> >             tempgroup[1]=rank;
> >         }
> >         else{
> >             tempgroup[0]=rank;
> >             tempgroup[1]=(rank+1+size)%size;
> >         }
> >
> >         MPI_Comm_group(MPI_COMM_WORLD, &world);
> >         MPI_Group_incl(world, 2, tempgroup, &g_buren);
> >         MPI_Comm_create(MPI_COMM_WORLD, g_buren, &c_buur2);
> >         MPI_Group_free(&g_buren);
> >
> >         if(rank%2==0){
> >             pCommR=&c_buur2;
> >             *pIntR=0;
> >         }
> >         else{
> >             pCommL=&c_buur2;
> >             *pIntL=1;
> >         }
> >     }
> >     /***************PROBLEMS...***************/
> >     if(problem || problem2){
> >         tempgroup[0]=0;
> >         tempgroup[1]=size-1;
> >         MPI_Comm_group(MPI_COMM_WORLD, &world);
> >         MPI_Group_incl(world, 2, tempgroup, &g_buren);
> >         MPI_Comm_create(MPI_COMM_WORLD, g_buren, &c_buur3);
> >         MPI_Group_free(&g_buren);
> >         if(rank==0){
> >             pCommR=&c_buur3;
> >             *pIntR=1;
> >         }
> >         else{
> >             pCommL=&c_buur3;
> >             *pIntL=0;
> >         }
> >     }
> >     /**********************************************************************/
> >     int aa=1;
> >     MPI_Request request;
> >     MPI_Irecv(&aa, 1, MPI_INT, MPI_ANY_SOURCE, 22, *pCommR, &request);
> >     if(rank==0) SendLeft(&aa, 1, MPI_INT, 22);
> >
> >     int ontvangen=0;
> >
> >     while(!ontvangen){
> >         MPI_Status status;
> >         MPI_Test(&request, &ontvangen, &status);
> >         if(ontvangen) SendLeft(&aa, 1, MPI_INT, 22);
> >     }
> >     printf("\n%d RECEIVED", rank); fflush(stdout);
> >
> >     MPI_Finalize();
> >     return 0;
> > }
> >
> > void SendLeft(void *buf, int count, MPI_Datatype datatype, int tag){
> >     MPI_Send(buf, count, datatype, *pIntL, tag, *pCommL);
> > }
> > _______________________________________________
> > This list is archived at http://www.lam-mpi.org/MailArchives/lam/
>