
LAM/MPI General User's Mailing List Archives


From: Jeff Squyres (jsquyres_at_[hidden])
Date: 2004-09-10 02:59:47


On Sep 10, 2004, at 3:19 AM, Jaroslaw Zola wrote:

> | Is there a reason you have to use blocking communication and probe?
> | Probe is actually fairly evil. Can you use non-blocking communication
> | instead? Note that a blocking MPI_SEND does not necessarily imply that
> | the message has been received when it returns (in fact, it doesn't
> | indicate anything at all about the state of the receive when MPI_SEND
> | returns).
>
> Well, in fact the problem is more complicated. I use a library which
> provides several types of communication layers. One of them is MPI
> based. During initialization it duplicates the MPI communicator and
> uses it to perform communication. Since the library provides only
> blocking communication primitives, I am a little bit tied. My
> application, which is also MPI based, works with blocking receives as
> well...

Gotcha.

> And what about an implementation with the non-blocking receive
> starting first? Is it different from the previous version?
>
> mtMPI_Send(...) {
>     LOCK_MTX
>     MPI_Irecv(...);
>     UNLOCK_MTX
>
>     while (!flag) {
>         LOCK_MTX
>         MPI_Test(..., &flag, ...);
>         UNLOCK_MTX
>     }
> }

Yes, this is definitely better -- you greatly reduce the possibility of
extra buffer copies because of the pre-posted receive. You may want to
throw a usleep() in the while() loop, though -- give that thread a
chance to swap out and let others run (particularly if you have more
threads than CPUs). That may or may not help your performance; I
mention it because it seems like you're already resigned to letting a
thread spin while waiting for receives.
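Something along these lines (an untested sketch -- the global mutex here
stands in for whatever lock you already use to serialize MPI calls, and
the function name and sleep interval are just placeholders):

#include <mpi.h>
#include <pthread.h>
#include <unistd.h>   /* usleep() */

/* Hypothetical global lock serializing all MPI calls, since the
   underlying MPI library is not thread safe. */
static pthread_mutex_t mpi_mtx = PTHREAD_MUTEX_INITIALIZER;

/* Poll a pre-posted request, sleeping briefly between polls so the
   waiting thread can be scheduled out and other threads can run. */
static void wait_for_request(MPI_Request *req, MPI_Status *status)
{
    int flag = 0;
    while (!flag) {
        pthread_mutex_lock(&mpi_mtx);
        MPI_Test(req, &flag, status);
        pthread_mutex_unlock(&mpi_mtx);
        if (!flag)
            usleep(100);   /* ~0.1 ms; tune to taste */
    }
}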

I should have mentioned this yesterday, but you may also wish to try
LA-MPI -- they have a thread-safe implementation that may work for you
(i.e., no need to do this spinning stuff). Check it out:
http://public.lanl.gov/lampi/
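
For what it's worth, under the MPI-2 API the way to ask for full thread
support is MPI_Init_thread() with MPI_THREAD_MULTIPLE, and then to check
the level you actually got back -- whether LA-MPI exposes its thread
safety exactly this way is something to verify in their docs. A rough
sketch:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided;

    /* Ask for full multi-threaded support; the implementation reports
       the level it can actually provide. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

    if (provided < MPI_THREAD_MULTIPLE)
        fprintf(stderr, "warning: only thread level %d provided\n",
                provided);

    /* ... with MPI_THREAD_MULTIPLE, multiple threads may make MPI
       calls concurrently, so the external mutex isn't needed ... */

    MPI_Finalize();
    return 0;
}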

-- 
{+} Jeff Squyres
{+} jsquyres_at_[hidden]
{+} http://www.lam-mpi.org/