atarpley wrote:
> Thank you for your response.
>
> Please allow me to clarify my intentions. My system implements a "pipe and
> filter" architecture. For example, there are many independent tasks (separate
> executables) that are connected via a "pipe". These independent tasks have 1+
> inputs and 1+ outputs. The task inter-dependencies are determined at runtime
> through a config script.
Got it. I have done some dynamic process stuff with similar software
architectures.
>
> In MPI terms, each task might be created through a separate call to
> MPI_Comm_spawn. Each task has one (and only one) MPI process associated with
> it. Since these are independent tasks, they establish communications with
> each other through MPI_Comm_connect/accept (with the exception of the
> connection to the master, as that was established through the spawn). Any one
> task may have many incoming intercomms and many outgoing intercomms. Simply
> stated, I am trying to build a very dynamic network of tasks with MPI as the
> comm protocol.
>
> I looked into MPI_Waitany and if more than one operation is completed, the
> returned one is chosen arbitrarily. I'd like to maintain some ordering.
> Additionally, I have no desire to complete more than one receive at a time (as
> is offered by some of the other wait commands).
Chosen arbitrarily in theory. I would be interested to hear from the
LAM/MPI developers how a request is selected. If there are multiple
requests that can be completed, I suspect the first one in the request
array will typically be selected, but I could be wrong. Either way,
portable code cannot rely on any particular choice.
Still, I am not sure why you care what order this happens in. MPI makes
no ordering guarantees when you do a blocking receive with
MPI_ANY_SOURCE. That is, if slaves 2, 4, 8, 11, and 15 have all sent a
message, you have no way of knowing which will be returned by a call to
MPI_Recv with source=MPI_ANY_SOURCE. Ordering is only guaranteed
between a given pair of processes, and only among messages that the
same receive could match.
>
> I'm not sure that merging intercomms is the right solution based on the
> dynamic nature of this system (new tasks can join/exit at any time). So I
> think there should probably be separate intercomms.
This is surely correct. I agree that having separate intercomms between
process pairs is the right thing to do.
>
> Based on the description above, does MPI sound like a viable solution to my
> system? I just need a generic message passing protocol -- a basic way to
> transfer bytes from one independent task to another task.
If you can convince yourself that ordering is not important, then I
think MPI should work fine. Keep in mind that there will likely be some
implicit ordering maintained by the processes themselves. I.e. slave x
will never send the master a message before it has completed its
previous task. This implicit ordering may be good enough. If you
really need to control the order in some irregular manner, I suspect any
message passing mechanism is going to lead to very complicated coding.
I am not sure if I am helping much, but I hope I am.
Dave.
>
> _______________________________________________
> This list is archived at http://www.lam-mpi.org/MailArchives/lam/
>
--
Dr. David Cronk, Ph.D. phone: (865) 974-3735
Research Leader fax: (865) 974-8296
Innovative Computing Lab http://www.cs.utk.edu/~cronk
University of Tennessee, Knoxville