
LAM/MPI General User's Mailing List Archives


From: Sriram Rallabhandi (sriramr_at_[hidden])
Date: 2004-10-14 15:15:25


Hi,

Thanks, Jeff and David, for replying. After I sent the email to the
group, I was trying to figure out whether the structure was being passed
as it should be.

I had to use the maximum sizes for all the arrays, rather than the actual
sizes, when defining the new MPI datatype. Furthermore, since the
structure has nine members, I used nine blockcounts, offsets, and oldtypes
instead of just the two I had written earlier.

Earlier, I was getting some mysterious errors that I attributed to MPI
not passing the new datatype. It turns out that was not the only cause:
I had accidentally commented out a line, so the variable it was supposed
to set was left with an arbitrary value.

Anyway, I think I have figured out what to do for now. My program seems to
be doing what it's supposed to do.

Thanks
Sriram

At 08:04 AM 10/13/2004 -0600, you wrote:
>On Oct 8, 2004, at 10:07 AM, Sriram Rallabhandi wrote:
>
>>// These define the maximum sizes of the arrays in the structure
>> #define max1 1000
>> #define max2 80
>> #define max3 10
>> #define max4 50
>>
>> typedef struct
>> {
>>   int g1[max1],   // g1 array not to exceed max1 items; actual size is entered by user: psizec
>>       rank,
>>       flag;
>>   float x1[max2], // actual size: psizev
>>         x2[max2]; // actual size: psizev2
>>   float f[max3],  // actual size: psizef
>>         c[max4],  // actual size: psizeco
>>         clen,
>>         error;
>> } individual;
>>
>> MPI_Datatype Individ,oldtypes[2];
>> int blockcounts[2];
>> MPI_Aint offsets[2],extent;
>>
>> offsets[0]=0;
>> oldtypes[0] = MPI_INT;
>> blockcounts[0] = 2+psizec;
>> MPI_Type_extent(MPI_INT,&extent);
>> offsets[1] = (2+psizec)*extent;
>> oldtypes[1] = MPI_FLOAT;
>> blockcounts[1] = psizev+psizev2+psizef+psizeco+2;
>
>A minor quibble: you can't know for sure that the compiler will lay out
>the individual struct in memory with all the ints (contiguously) first
>and all the floats (contiguously) second. It *probably* will, but
>there is no guarantee of that. If it doesn't, your datatype will not
>match it.
>
>> MPI_Type_struct(2,blockcounts,offsets,oldtypes,&Individ);
>> MPI_Type_commit(&Individ);
>
>Have you tried instantiating an individual, filling it with data, and
>doing a simple send/recv with it to test whether your datatype works?
>
>>Further, I have to do an MPI_Alltoall operation exchanging
>>"individuals" between nodes. I do the following:
>>
>>MPI_Comm_rank(MPI_COMM_WORLD,&rank);
>> MPI_Comm_size(MPI_COMM_WORLD,&psize);
>>
>> individual *sendbuf, *recvbuf;
>> int *sendcnt, *recvcnt, recv;
>> int *sdisp, *rdisp;
>>
>> // Stop all processes at this point so that all nodes proceed from here simultaneously
>> MPI_Barrier(MPI_COMM_WORLD);
>>
>> // ppsize is the number of individuals with each node.
>> ktmp1 = (floor) (ppsize/psize);
>> kslct = (psize-1)*ktmp1;
>>
>> indx1 = (int *) calloc(ppsize,sizeof(int));
>>
>> // Randomly shuffle the location of individuals and place them in the send buffer
>> // Have to allocate memory for send buffer and Recv buffer
>> shuffle_index(&indx1,ppsize);
>>
>> // Create a send buffer of a group of individuals
>> sendbuf = (individual *) malloc(ppsize*sizeof(individual));
>>
>> // Put data into the sendbuf array
>> for (pp=0;pp<ppsize;pp++) {
>> sendbuf[pp] = oldpop.ind[indx1[pp]];
>> }
>>
>> recv = ppsize-((psize-1)*ktmp1);
>> sendcnt = (int *) calloc(psize,sizeof(int));
>> recvcnt = (int *) calloc(psize,sizeof(int));
>>
>> for (pp=0;pp<psize;pp++) {
>> if (rank==pp) {
>> sendcnt[pp] = recv;
>> }
>> else {
>> sendcnt[pp] = ktmp1;
>> }
>> }
>>
>> // I think this call would tell the nodes how much info is coming,
>> // so every node gets its recvcnt
>> MPI_Alltoall(sendcnt,1,MPI_INT,recvcnt,1,MPI_INT,MPI_COMM_WORLD);
>>
>> sdisp[0]=0;
>> for (pp=1;pp<psize;pp++) {
>> sdisp[pp] = sendcnt[pp-1]+sdisp[pp-1];
>> }
>> rdisp[0]=0;
>> for (pp=1;pp<psize;pp++) {
>> rdisp[pp] = recvcnt[pp-1]+rdisp[pp-1];
>> }
>>
>> recvbuf = (individual *) malloc(ppsize*sizeof(individual));
>> // Each node scatters some of its individuals to other nodes, including itself
>>
>> MPI_Alltoallv(sendbuf, sendcnt, sdisp, Individ,
>>               recvbuf, recvcnt, rdisp, Individ, MPI_COMM_WORLD);
>>
>> // Now use recvbuf
>> for (pp=0;pp<ppsize;pp++) {
>> oldpop.ind[pp] = recvbuf[pp];
>> }
>>
>>Is there anything wrong with the above code?
>
>A cursory look didn't reveal any problems (admittedly I didn't look
>closely). Can you be specific about why you are asking? For example,
>are you seeing problems or running into errors?
>
>--
>{+} Jeff Squyres
>{+} jsquyres_at_[hidden]
>{+} http://www.lam-mpi.org/
>
>
>_______________________________________________
>This list is archived at http://www.lam-mpi.org/MailArchives/lam/

-------------------------------------------------------------------------------
Sriram K. Rallabhandi
Graduate Research Assistant Work: 404 385 2789
Aerospace Engineering Res: 404 603 9160
Georgia Inst. of Technology
-------------------------------------------------------------------------------