
LAM/MPI General User's Mailing List Archives


From: Lei_at_[hidden]
Date: 2004-03-10 14:57:01


Hi Prashanth,

Thanks a lot for your help.

By 'picking up computation' I mean the processor, p2 or p3,
should know that it is its turn to perform computations
using the data A[] transferred from p1, and it should
also know when to start its computation (preferably right
after A[] is transferred).

Having the rest of the world (in this case, both p2 and p3)
check whether A[] has arrived in its local window is not a
scalable solution; it is similar to a global sync or a
broadcast. And even if this approach is acceptable on the
assumption that a local check is much cheaper than a global
sync or broadcast, there still needs to be a mechanism to
interrupt the processes after p1 finishes putting A[], so
that the whole world does not sit in a busy wait. If this
interruption is one (p1) to many (p2 and p3), it is no
different from a broadcast. So the ideal solution seems to
be that p1 interrupts only the target processor, which in
turn computes using its now-local copy of A[].
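
To make this concrete, the per-rank check I am objecting to
would look roughly like the following (just a sketch on my
part; the flag layout and the name flag_win are made up for
illustration, and p1 would have to put a nonzero flag right
after it puts A[]):

#include <mpi.h>

/* The busy waiting I want to avoid: every rank other than p1
 * spins on a flag exposed in its own window until p1 sets it. */
int wait_for_my_turn(int my_rank, MPI_Win flag_win, int *flag)
{
    for (;;) {
        int arrived;

        /* Synchronize access to the local window before reading. */
        MPI_Win_lock(MPI_LOCK_SHARED, my_rank, 0, flag_win);
        arrived = *flag;
        MPI_Win_unlock(my_rank, flag_win);

        if (arrived)
            return 1;   /* A[] is here; this rank starts computing */

        /* Otherwise keep spinning -- CPU cycles wasted on a rank
         * that may never receive A[] at all. */
    }
}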

Is there a mechanism in MPI-2 that provides such a
solution? Could you continue your pseudocode and show
me what p2 or p3 will do after step 3 is done, please?
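
For reference, here is how I currently read steps 1-3 from
your message (quoted below) as actual C calls on p1's side.
This is a rough sketch on my end; the rank numbers 1/2/3 for
p1/p2/p3, the MPI_DOUBLE datatype, and the MPI_Alloc_mem
buffer handling are my own assumptions:

#include <mpi.h>

#define N 10000   /* size of A[], as in my original example */

/* Steps 1-3 as I understand them, called collectively on a
 * communicator that contains at least ranks 1, 2, and 3. */
void one_sided_put(double *A, double x, MPI_Comm comm)
{
    MPI_Win win;
    double *buf = NULL;
    int rank, is_target;

    MPI_Comm_rank(comm, &rank);
    is_target = (rank == 2 || rank == 3);

    /* Step 1: create the window collectively; only the targets
     * (p2 and p3) expose an N-element buffer. */
    if (is_target)
        MPI_Alloc_mem(N * sizeof(double), MPI_INFO_NULL, &buf);
    MPI_Win_create(buf, is_target ? N * sizeof(double) : 0,
                   sizeof(double), MPI_INFO_NULL, comm, &win);

    /* Step 2: p1 locks the chosen target's window, puts A[],
     * and unlocks.  Only p1 knows X, so only p1 acts here. */
    if (rank == 1) {
        int target = (x < 0.5) ? 2 : 3;
        MPI_Win_lock(MPI_LOCK_EXCLUSIVE, target, 0, win);
        MPI_Put(A, N, MPI_DOUBLE, target, 0, N, MPI_DOUBLE, win);
        MPI_Win_unlock(target, win);
    }

    /* Step 3: free the window (collective).  What I am missing
     * is what p2 and p3 do between steps 2 and 3 to learn,
     * without busy waiting, that A[] has landed locally. */
    MPI_Win_free(&win);
    if (buf) MPI_Free_mem(buf);
}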

Thanks,

-Lei mailto:pan_at_[hidden]

Wednesday, March 10, 2004, 6:54:12 AM, you wrote:

P> Hello,

P> Passive synchronization is the one you might want to use because, although
P> the target process does not take part in either synchronization or
P> communication, it does receive the data sent by the sender. I am not sure
P> what you mean by 'pick up computation'.

P> You could try something like:

P> 1. Create windows on target processes 2 and 3

P> 2. if (x < 0.5)
P>        /* Lock the window on process 2, put the data, and unlock the window */
P>        MPI_Win_lock()
P>        MPI_Put()
P>        MPI_Win_unlock()
P>    else
P>        /* Similarly for process 3 */
P>    endif

P> 3. Free the windows on processes 2 and 3.

P> Hope this helps.

P> Prashanth Charapalli,
P> LAM/MPI Team.

P> Thus spake Lei_at_ICS in the message sent on Tue, 9 Mar 2004

->>Hi Prashanth,
->>
->>Thanks a lot for your help.
->>
->>Let us take a look at all possible synchronization
->>mechanisms.
->>
->>1). Passive: The target process takes no part in either
->>    the synchronization or the communication. This is
->>    not what I wanted, since I would like the target
->>    to pick up the computation for efficient access to
->>    the array A[].
->>
->>2). Active: There are two sub-classes:
->> a). Collective synchronization: Obviously this one uses
->>    MPI_WIN_FENCE(), which is a global synchronization.
->> b). Pair-wise synchronization: This is probably the closest.
->> But the target would need to know that it needs to
->> call MPI_WIN_POST() and MPI_WIN_WAIT(). Notice that
->> in my example, I assume the decision factor X is
->> only available on p1. So the actual target, p2 or p3
->> depending on the value of X, would not know that
->> it is supposed to call MPI_WIN_POST() and MPI_WIN_WAIT(),
->> unless a broadcast of X is made.
->>
->>Maybe I am missing something in the above. Could you show
->>me with simple pseudocode how this can be done, please?
->>
->>Thanks a lot,
->>-Lei mailto:pan_at_[hidden]
->>
->>Tuesday, March 9, 2004, 6:51:05 PM, you wrote:
->>
->>
->>P> Hello,
->>
->>P> MPI_Put probably provides the functionality you are looking for. MPI_Put,
->>P> though, requires that the receiving process have a buffer large enough to
->>P> hold the data that is being 'put'.
->>
->>P> The following URL seems to give very good information on MPI_Put and the
->>P> subsequent synchronization needed (not necessarily global).
->>
->>P> http://www.epcc.ed.ac.uk/overview/publications/training_material/tech_watch/98_tw/techwatch-mpi2/MPI2-3.html
->>
->>P> MPI_Put does not require global synchronization or global broadcasting.
->>
->>P> Hope this helps.
->>
->>P> Prashanth Charapalli,
->>P> LAM/MPI Team.
->>
->>
->>P> Thus spake Lei_at_ICS in the message sent on Tue, 9 Mar 2004
->>
->>->>Hi all,
->>->>
->>->>The following simple example illustrates
->>->>what I wanted to do with one-sided communication.
->>->>
->>->>I have three processors p1, p2, and p3.
->>->>On p1 there is a large array A[10000]
->>->>that needs to be sent to p2 or p3 depending
->>->>on a parameter X which is computed on
->>->>p1. X is not available initially on p2 or p3.
->>->>
->>->>if(X < 0.5)
->>->> put A[10000] to p2;
->>->>else
->>->> put A[10000] to p3;
->>->>
->>->>Now after this I want whoever owns A[]
->>->>to compute using it. In other words,
->>->>p2 or p3, but not p1, will compute using A[].
->>->>This is for efficiency.
->>->>
->>->>I would like to have p2 or p3 be interrupted
->>->>or awakened by p1 only when the data A[]
->>->>is coming to the processor. In other words,
->>->>the one who does not get the data should
->>->>not waste a single CPU cycle doing busy
->>->>waiting. A solution with a global synchronization
->>->>or broadcasting is not desirable.
->>->>
->>->>Is there a way to do it with MPI-2 one-sided
->>->>communication?
->>->>
->>->>Thanks a lot for your help in advance,
->>->>-Lei mailto:pan_at_[hidden]
->>->>