
LAM/MPI General User's Mailing List Archives


From: Sriram Rallabhandi (sriramr_at_[hidden])
Date: 2004-10-08 11:07:00


Hi all,

I'm trying to create a new MPI datatype for the structure defined below. Am I
doing it right?

// These define the maximum sizes of the arrays in the structure
#define max1 1000
#define max2 80
#define max3 10
#define max4 50

typedef struct
{
   int g1[max1],   // g1 array not to exceed max1 items; actual size is entered by user: psizec
       rank,
       flag;
   float x1[max2], // Actual size: psizev
         x2[max2]; // Actual size: psizev2
   float f[max3],  // Actual size: psizef
         c[max4],  // Actual size: psizeco
         clen,
         error;
} individual;

MPI_Datatype Individ,oldtypes[2];
int blockcounts[2];
MPI_Aint offsets[2],extent;

offsets[0]=0;
oldtypes[0] = MPI_INT;
blockcounts[0] = 2+psizec;
MPI_Type_extent(MPI_INT,&extent);
offsets[1] = (2+psizec)*extent;
oldtypes[1] = MPI_FLOAT;
blockcounts[1] = psizev+psizev2+psizef+psizeco+2;

MPI_Type_struct(2,blockcounts,offsets,oldtypes,&Individ);
MPI_Type_commit(&Individ);
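
For comparison, here is a rough sketch (not necessarily what you want) of how the
two blocks could instead be described from the declared struct layout with
offsetof() from <stddef.h>, under the assumption that the full fixed-size arrays
are transferred rather than only the first psizec/psizev/... elements; Individ2,
counts, offs and types are just illustrative names, not part of the code above:

#include <stddef.h>   /* for offsetof */

MPI_Datatype Individ2, types[2];
int counts[2];
MPI_Aint offs[2];

types[0]  = MPI_INT;
types[1]  = MPI_FLOAT;
counts[0] = max1 + 2;                    /* g1[max1], rank, flag */
counts[1] = 2*max2 + max3 + max4 + 2;    /* x1, x2, f, c, clen, error */
offs[0]   = offsetof(individual, g1);    /* 0 */
offs[1]   = offsetof(individual, x1);    /* byte offset of the first float member */

MPI_Type_struct(2, counts, offs, types, &Individ2);
MPI_Type_commit(&Individ2);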

Further, I have to do an MPI_Alltoall operation exchanging "individuals"
between nodes. I do the following:

MPI_Comm_rank(MPI_COMM_WORLD,&rank);
MPI_Comm_size(MPI_COMM_WORLD,&psize);

individual *sendbuf, *recvbuf;
int *sendcnt, *recvcnt, recv;
int *sdisp, *rdisp;

// Stop all processes at this point so that all nodes proceed from here
// simultaneously
MPI_Barrier(MPI_COMM_WORLD);

// ppsize is the number of individuals on each node.
ktmp1 = (floor) (ppsize/psize);
kslct = (psize-1)*ktmp1;

indx1 = (int *) calloc(ppsize,sizeof(int));

// Randomly shuffle the order of the individuals and place them in the send buffer.
// Memory has to be allocated for the send buffer and recv buffer.
shuffle_index(&indx1,ppsize);

// Create a send buffer of a group of individuals
sendbuf = (individual *) malloc(ppsize*sizeof(individual));

// Put data into the sendbuf array
for (pp=0;pp<ppsize;pp++) {
   sendbuf[pp] = oldpop.ind[indx1[pp]];
}

recv = ppsize-((psize-1)*ktmp1);
sendcnt = (int *) calloc(psize,sizeof(int));
recvcnt = (int *) calloc(psize,sizeof(int));

for (pp=0;pp<psize;pp++) {
   if (rank==pp) {
      sendcnt[pp] = recv;
   }
   else {
      sendcnt[pp] = ktmp1;
   }
}

// I think this call would tell the nodes how much info is coming,
// so every node gets its recvcnt filled in
MPI_Alltoall(sendcnt,1,MPI_INT,recvcnt,1,MPI_INT,MPI_COMM_WORLD);
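
To make the count exchange concrete, here is a small worked example, assuming
psize = 3 and ppsize = 10 for illustration, so ktmp1 = 3 and recv = 10 - 2*3 = 4:

rank 0 posts sendcnt = {4, 3, 3}
rank 1 posts sendcnt = {3, 4, 3}
rank 2 posts sendcnt = {3, 3, 4}

After the MPI_Alltoall each rank's recvcnt holds one count per sender, e.g. rank 1
gets recvcnt = {3, 4, 3}. On every rank the entries of recvcnt sum to
recv + (psize-1)*ktmp1 = ppsize, which matches the ppsize individuals allocated
for recvbuf below.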

// sdisp and rdisp were only declared above; they need to be allocated
// before being filled in
sdisp = (int *) calloc(psize,sizeof(int));
rdisp = (int *) calloc(psize,sizeof(int));

sdisp[0]=0;
for (pp=1;pp<psize;pp++) {
   sdisp[pp] = sendcnt[pp-1]+sdisp[pp-1];
}
rdisp[0]=0;
for (pp=1;pp<psize;pp++) {
   rdisp[pp] = recvcnt[pp-1]+rdisp[pp-1];
}

  recvbuf = (individual *) malloc(ppsize*sizeof(individual));
// Each node scatters some of its individuals to other nodes including itself
MPI_Alltoallv(sendbuf,sendcnt,sdisp,Individ,recvbuf,recvcnt,rdisp,Individ,MPI_COMM_WORLD);
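
As a sketch of a sanity check that could go right after the MPI_Alltoallv (nrecv
is a new, illustrative variable and <assert.h> would be needed): the total number
of individuals received should not exceed the ppsize that recvbuf was allocated
for, and it is also the natural bound for the copy loop below.

int nrecv = 0;
for (pp=0;pp<psize;pp++) {
   nrecv += recvcnt[pp];     // total individuals received from all ranks
}
assert(nrecv <= ppsize);     // recvbuf holds at most ppsize individuals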

// Now use recvbuf
for (pp=0;pp<popsize;pp++) {
   oldpop.ind[pp] = recvbuf[pp];
}

Is there anything wrong with the above code?

Sorry for a slightly lengthy email.

Thanks
Sriram

-------------------------------------------------------------------------------
Sriram K. Rallabhandi
Graduate Research Assistant Work: 404 385 2789
Aerospace Engineering Res: 404 603 9160
Georgia Inst. of Technology
-------------------------------------------------------------------------------