LAM/MPI General User's Mailing List Archives

From: Jeff Squyres (jsquyres_at_[hidden])
Date: 2004-11-26 09:31:24


On Nov 26, 2004, at 8:57 AM, Tomek wrote:

> OK, but MPI-2 does not seem to be fully implemented yet.

The functions that I mentioned are fully implemented in LAM/MPI.
Indeed, LAM has had a full implementation of the MPI-2 dynamic chapter
for many years (since 1996 or so? I don't remember the exact history
offhand).

> Here is precisely what I would like to do:
> Say: a library X exposes the following interface:
> X_Init() - spawns worker processes and sets up communication with them

I didn't get this part from your prior post, but MPI_COMM_SPAWN is also
fully implemented in LAM/MPI.

> X_Do() - does the job exploiting workers
> X_Terminate() - terminates workers and makes the cleanup
>
> I want to have the same interface for both the serial and the
> parallel version, and have all the internals of parallelization
> hidden. I know this can be done with PVM, and I will use it
> eventually, but I would prefer to use MPI rather than PVM.
> So - is it possible to implement such a library with MPI?

Yes. Looks like a pretty straightforward use of MPI_COMM_SPAWN.
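
For what it's worth, here's a rough (and untested) sketch of what a
spawn-based X_Init() / X_Terminate() could look like. The worker
executable name ("x_worker") and the lack of error handling are just
placeholders for illustration:

#include <mpi.h>

/* Intercommunicator to the spawned workers. */
static MPI_Comm workers = MPI_COMM_NULL;

int X_Init(int nworkers)
{
    int flag;

    /* Initialize MPI if the caller hasn't already done so. */
    MPI_Initialized(&flag);
    if (!flag) {
        MPI_Init(NULL, NULL);
    }

    /* Spawn nworkers copies of the worker executable; "workers"
       becomes an intercommunicator between this process and the
       newly-spawned group. */
    return MPI_Comm_spawn("x_worker", MPI_ARGV_NULL, nworkers,
                          MPI_INFO_NULL, 0, MPI_COMM_SELF,
                          &workers, MPI_ERRCODES_IGNORE);
}

int X_Terminate(void)
{
    /* Tell the workers to shut down (whatever protocol you choose),
       then free the intercommunicator and finalize. */
    MPI_Comm_free(&workers);
    return MPI_Finalize();
}

X_Do() would then use the "workers" intercommunicator to send work to
and receive results from the spawned processes.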

However, before reaching for the MPI_COMM_SPAWN interface, ask
yourself whether there is much of a difference between these two
scenarios:

1. Launch ./foo, and foo spawns its workers, uses them, and then they
all die together.

2. Launch "mpirun -np X foo" (or, if you need MPMD, perhaps "mpiexec -n
1 master : -n X slave"). They all launch together, do work, and then
die together.

If you know you're going to be parallel from the beginning, you can
save yourself some programming effort by using mpirun instead of
MPI_COMM_SPAWN. There really isn't much of a difference between the
two, and if you're looking to save debugging time, mpirun (or mpiexec)
is certainly an easier way to go. The PVM mindset of having a
singleton "./foo" spawn off workers is frequently unnecessary with MPI
and adds extra programming complexity.
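
To make the comparison concrete, here's an equally rough sketch of
scenario 2: everything is started with "mpirun -np X foo" and rank 0
plays the master. The "work" (doubling an integer) is obviously just a
stand-in:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {
        /* Master: send one unit of work to each worker, then
           collect the replies. */
        int i, work, result;
        for (i = 1; i < size; ++i) {
            work = i;
            MPI_Send(&work, 1, MPI_INT, i, 0, MPI_COMM_WORLD);
        }
        for (i = 1; i < size; ++i) {
            MPI_Recv(&result, 1, MPI_INT, i, 0, MPI_COMM_WORLD, &status);
            printf("worker %d returned %d\n", i, result);
        }
    } else {
        /* Worker: receive work, compute, send the result back. */
        int work, result;
        MPI_Recv(&work, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
        result = 2 * work;
        MPI_Send(&result, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}

Hide the rank check inside your X_Init() and you get the same
X_Init() / X_Do() / X_Terminate() interface with no dynamic process
management at all.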

-- 
{+} Jeff Squyres
{+} jsquyres_at_[hidden]
{+} http://www.lam-mpi.org/