LAM/MPI General User's Mailing List Archives

From: Cole, Derek E (derek.e.cole_at_[hidden])
Date: 2010-03-04 14:33:51


Hi all,

This is my first post to the mailing list, and I am a relatively new MPI user. Forgive me; I am unable to post the source code for this particular problem, but perhaps someone can help anyway.

I have a code running a master/slave red-black algorithm for a Gauss-Seidel solver. In the simple case, the matrix is split into 4 evenly sized computational pieces. Images 1-3 perform their part of the computation and send buffers with their results back to image 0. The problem is this:

I have malloc'd an array large enough to hold all of the pieces so that I can map the individual results back into a single grid. The problem seems to be that on image 0, after the MPI_Recv call, that process is no longer aware that the grid was malloc'd to hold the whole thing. I get an error any time I try to put something into that grid. The only workaround I have found is to perform the malloc on all processes, and then malloc again on process zero right before the MPI_Recv.

Any ideas why it is seemingly losing the reference to that previously allocated memory?

In pseudocode:

Malloc whole[][]                  <-- have to have this allocated
Malloc partial[]
Perform compute on whole[]

If (image != 0)
    MPI_Send(whole[])
Else (image == 0)
    Malloc whole[][] again!       <-- and this allocated, otherwise the problem happens
    Loop over other images
        MPI_Recv(partial[])
        Put partial[] into whole[][]   <-- here is where the problem occurs
Endif
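
For illustration only, since I cannot post the actual source: below is a minimal C sketch of the gather pattern I am describing. All names and sizes (whole, partial, NROWS, NCOLS) are placeholders, not the real code.

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

#define NROWS 64   /* rows per image    (placeholder size) */
#define NCOLS 64   /* columns per image (placeholder size) */

int main(int argc, char **argv)
{
    int rank, size, i, src;
    double *partial, *whole;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Every image owns a block of the grid. */
    partial = malloc(NROWS * NCOLS * sizeof(double));
    for (i = 0; i < NROWS * NCOLS; i++)
        partial[i] = rank;               /* stand-in for the real computation */

    if (rank != 0) {
        /* Workers send their block back to image 0. */
        MPI_Send(partial, NROWS * NCOLS, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);
    } else {
        /* Image 0 allocates the whole grid once, large enough for every
           image's block, and maps each result into it. */
        whole = malloc((size_t)size * NROWS * NCOLS * sizeof(double));

        /* Image 0's own block. */
        for (i = 0; i < NROWS * NCOLS; i++)
            whole[i] = partial[i];

        /* Receive each worker's block directly into its slot in whole[]. */
        for (src = 1; src < size; src++)
            MPI_Recv(whole + (size_t)src * NROWS * NCOLS,
                     NROWS * NCOLS, MPI_DOUBLE, src, 0,
                     MPI_COMM_WORLD, &status);

        printf("first = %f  last = %f\n",
               whole[0], whole[(size_t)size * NROWS * NCOLS - 1]);
        free(whole);
    }

    free(partial);
    MPI_Finalize();
    return 0;
}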

Thanks for the help in advance