On May 2, 2005, at 4:03 PM, David Cronk wrote:
> atarpley wrote:
>
>> In MPI terms, each task might be created through a separate call
>> to MPI_Comm_spawn. Each task has one (and only one) MPI process
>> associated with it. Since these are independent tasks, they
>> establish communications with each other through MPI_Comm_connect/
>> accept (with the exception of the connection to the master, as
>> that was established through the spawn). Any one task may have
>> many incoming intercomms and many outgoing intercomms. Simply
>> stated, I am trying to build a very dynamic network of tasks with
>> MPI as the comm protocol.
>> I looked into MPI_Waitany and if more than one operation is
>> completed, the returned one is chosen arbitrarily. I'd like to
>> maintain some ordering. Additionally, I have no desire to
>> complete more than one receive at a time (as is offered by some
>> of the other wait commands).
>
> Chosen arbitrarily in theory. I would be interested to hear from
> the LAM/MPI developers how a request is selected. If there are
> multiple requests that can be completed, I suspect the first one in
> the request array will typically be selected. I could be wrong.
> Still, in terms of portability, there is no way to ensure this.
>
> Still, I am not sure why you care what order this happens in. MPI
> makes no ordering guarantees when you do a blocking receive with
> MPI_ANY_SOURCE. That is, if slaves 2, 4, 8, 11, and 15 have all
> sent a message, you have no way of knowing which will be returned
> by a call to MPI_Recv with source=MPI_ANY_SOURCE. Ordering is only
> guaranteed between 2 processes and only when there is a receive
> that can match multiple msgs.
In general, LAM will return the first request that it determines is
finished. This may or may not be the first message that actually
completed, but it should be relatively close. Doing better than that
would require more overhead than you would want to pay.
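To make the semantics concrete, here is a rough sketch (not from the
original mail; NPEERS and the service loop are hypothetical) of how
MPI_Waitany completes exactly one request per call, with the index
choice left to the implementation when several are ready:

```c
/* Sketch: one nonblocking receive per incoming intercomm, serviced
 * with MPI_Waitany, which completes exactly one request per call. */
#include <mpi.h>
#include <stdio.h>

#define NPEERS 4  /* hypothetical number of incoming intercomms */

void service_loop(MPI_Comm peers[NPEERS])
{
    MPI_Request reqs[NPEERS];
    int bufs[NPEERS];

    /* Post one nonblocking receive per peer intercomm. */
    for (int i = 0; i < NPEERS; i++)
        MPI_Irecv(&bufs[i], 1, MPI_INT, 0, MPI_ANY_TAG,
                  peers[i], &reqs[i]);

    for (;;) {
        int idx;
        MPI_Status status;
        /* Blocks until exactly one request completes; idx says which.
         * If several are ready, the choice is arbitrary per the
         * standard (LAM tends to return the first one it finds
         * finished, as noted above). */
        MPI_Waitany(NPEERS, reqs, &idx, &status);
        printf("got %d from peer %d (tag %d)\n",
               bufs[idx], idx, status.MPI_TAG);
        /* Re-post the receive on that intercomm. */
        MPI_Irecv(&bufs[idx], 1, MPI_INT, 0, MPI_ANY_TAG,
                  peers[idx], &reqs[idx]);
    }
}
```

The key point is that only reqs[idx] is completed; the other posted
receives stay pending, so you never drain more than one message per
call.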
>> I'm not sure that merging intercomms is the right solution based
>> on the dynamic nature of this system (new tasks can join/exit at
>> any time). So I think there should probably be separate intercomms.
>
> This is surely correct. I agree that having separate intercomms
> between process pairs is the right thing to do.
Yes, definitely the way to go.
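For reference, the per-pair setup could look something like the
following sketch (the service name "taskA" and the helper functions
are hypothetical; the port name could also be exchanged through any
external registry instead of MPI_Publish_name):

```c
/* Sketch: one intercomm per task pair via connect/accept, rather
 * than merging everything into a single communicator. */
#include <mpi.h>

/* Task A: open a port, publish it, and accept one peer connection. */
MPI_Comm accept_peer(void)
{
    char port[MPI_MAX_PORT_NAME];
    MPI_Comm peer;

    MPI_Open_port(MPI_INFO_NULL, port);
    /* Make the port name discoverable; an out-of-band mechanism
     * (file, socket, registry) would work just as well. */
    MPI_Publish_name("taskA", MPI_INFO_NULL, port);
    MPI_Comm_accept(port, MPI_INFO_NULL, 0, MPI_COMM_SELF, &peer);
    return peer;  /* intercomm to exactly one other task */
}

/* Task B: look up the published name and connect to that task. */
MPI_Comm connect_peer(const char *name)
{
    char port[MPI_MAX_PORT_NAME];
    MPI_Comm peer;

    MPI_Lookup_name(name, MPI_INFO_NULL, port);
    MPI_Comm_connect(port, MPI_INFO_NULL, 0, MPI_COMM_SELF, &peer);
    return peer;
}
```

Because each intercomm joins exactly one pair of tasks, a task that
exits only invalidates its own intercomms; nothing global has to be
rebuilt when the membership changes.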
>> Based on the description above, does MPI sound like a viable
>> solution to my system? I just need a generic message passing
>> protocol -- a basic way to transfer bytes from one independent
>> task to another task.
>
> If you can convince yourself that ordering is not important, then I
> think MPI should work fine. Keep in mind that there will likely be
> some implicit ordering maintained by the processes themselves.
> I.e. slave x will never send the master a message before it has
> completed its previous task. This implicit ordering may be good
> enough. If you really need to control the order in some irregular
> manner, I suspect any message passing mechanism is going to lead to
> very complicated coding.
>
> I am not sure if I am helping much, but I hope I am.
I would tend to agree with David on that statement.
Brian
--
Brian Barrett
LAM/MPI developer and all around nice guy
Have a LAM/MPI day: http://www.lam-mpi.org/