
LAM/MPI General User's Mailing List Archives


From: Brian Barrett (brbarret_at_[hidden])
Date: 2005-05-10 09:25:15


On May 10, 2005, at 9:14 AM, Steve Lowder wrote:

> Thank you for your replies, they filled in the gaps for me. I'm going
> to cache my last datatype and free the previous two. I would have
> liked to use MPI_Gather or MPI_Gatherv, but my problem dictated that
> the root processor not be a data contributor to the gathered data,
> which I think is required in collective communication, but I may be
> wrong. The compute tasks (100s) are sending their piece to a
> non-compute task for postprocessing, I/O, conversions, etc. I could
> have used one of the compute tasks, but in this case it makes one
> task's process size grow a lot and pushes the task into swapping too
> much.

You might want to take another look at MPI_Gatherv. All processes in
the communicator (the non-compute task as well) have to participate
in the collective, but not all members have to send the same amount
of data - that's why the recvcounts and displs arguments are arrays of
integers. So you could specify that the root process (the non-compute
process) sends a count of 0 and expects to receive a count of N from
everyone else.
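
A minimal sketch of that approach (not from your code - it assumes
rank 0 is the non-compute root and that each compute task contributes
N doubles) could look something like this:

/* Sketch only: rank 0 is assumed to be the non-compute root and
 * contributes nothing; every other rank sends N doubles. */
#include <mpi.h>
#include <stdlib.h>

#define N 1024  /* elements per compute task (assumed for illustration) */

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double dummy = 0.0;
    double *sendbuf = &dummy;     /* root sends 0 elements; buffer unused */
    int sendcount = 0;
    if (rank != 0) {              /* compute tasks fill their piece */
        sendbuf = malloc(N * sizeof(double));
        for (int i = 0; i < N; ++i)
            sendbuf[i] = rank + i * 0.001;
        sendcount = N;
    }

    double *recvbuf = NULL;
    int *recvcounts = NULL, *displs = NULL;
    if (rank == 0) {              /* root expects N from everyone else */
        recvcounts = malloc(size * sizeof(int));
        displs     = malloc(size * sizeof(int));
        int offset = 0;
        for (int r = 0; r < size; ++r) {
            recvcounts[r] = (r == 0) ? 0 : N;   /* root contributes 0 */
            displs[r]     = offset;
            offset       += recvcounts[r];
        }
        recvbuf = malloc(offset * sizeof(double));
    }

    /* Every rank in the communicator participates in the collective. */
    MPI_Gatherv(sendbuf, sendcount, MPI_DOUBLE,
                recvbuf, recvcounts, displs, MPI_DOUBLE,
                0, MPI_COMM_WORLD);

    /* ... rank 0 can postprocess / write out recvbuf here ... */

    if (rank != 0) free(sendbuf);
    free(recvbuf); free(recvcounts); free(displs);
    MPI_Finalize();
    return 0;
}

The recvcounts and displs arrays are only significant at the root, so
the other ranks can simply pass NULL for them.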

Just an idea, of course. But it should accomplish what you want.

Brian