
LAM/MPI General User's Mailing List Archives


From: Jeff Squyres (jsquyres_at_[hidden])
Date: 2003-06-24 20:29:31


On Tue, 24 Jun 2003, Jim Procter wrote:

> > I always find that after giving mpirun, the printf
> > statement never appears on the screen, the cursor
> > waits and once I enter a number, it goes on with the
> > execution. Ironically at the end of the WHOLE thing, a
> > small "Enter the number of terms" message appears...IS
> > this non-interactiveness preventible...Sorry if the
> > question appears too trivial...Coming to the main
> > issue

> This is an I/O buffering issue. Try putting 'fflush(stdout);' before the
> scanf. This might not work - but someone might be able to enlighten us
> with the appropriate ROMIO function that should be called (or flag that
> can be set).

This is not an MPI issue, nor a ROMIO issue -- it's a Unix line-buffered
output issue. Unix typically does not write data to stdout until either a
buffer block is full or a complete line has been sent (i.e., terminated by
a newline). As mentioned above, you can manually force the Unix I/O
buffers to flush by invoking fflush(stdout). So you really have two main
(easy) choices:

1. printf(...);
   fflush(stdout);

2. printf("put a newline at the end\n");

> > 2) Can I pass data from proc 0 like this
> >
> > " for(k=0;k<9;k++)
> >
> > MPI_Send(&array[k][0][0],10,MPI_FLOAT,1,11,MPI_COMM_WORLD)"
> >
> > And the corresponding array being accepted in
> > processor 1 in a similar loop???

> You could - though I think you need two loops - because the next
> dimension of array needs to be scanned through (unless you only want to
> send [k][0][0..9] rather than [k][0..9][0..9]). An alternative is to send
> the whole array at once - if you declare array as :
>
> float array[1000];
>
> Then you only need to pass one message at the beginning, and the
> pointer-mathematics involved in [k][j][i] is about the same as
> array[k*100+j*10+i] (but you write it explicitly instead).

This is true. However, there are other answers as well. :-)

In general, MPI will perform best when it deals with contiguous data. So
if you can arrange for all the data that you want to send to be
contiguous, you'll get nicely performing code in a single send/receive
pair.
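
For example, here's a rough sketch of that single-message pattern with a
flat array (the 9x10x10 shape is taken from your loop; the function
wrapper is just illustrative):

  #include <mpi.h>

  #define NELEMS (9 * 10 * 10)  /* one flat, contiguous block */

  /* array[k][j][i] lives at flat[k*100 + j*10 + i] */
  void exchange(float *flat, int rank)
  {
      if (rank == 0) {
          /* one message instead of a loop of sends */
          MPI_Send(flat, NELEMS, MPI_FLOAT, 1, 11, MPI_COMM_WORLD);
      } else if (rank == 1) {
          MPI_Status status;
          MPI_Recv(flat, NELEMS, MPI_FLOAT, 0, 11, MPI_COMM_WORLD,
                   &status);
      }
  }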

You can make the data contiguous by doing the float array[1000] trick (as
described above), or you can play the pointers game and set up your own 2D
or 3D array to point to contiguous data. Setting up 2D contiguous data is
pretty straightforward; setting up contiguous 3D data is somewhat tricky,
and how you do it depends on which plane you want to be contiguous. For
example, in your sample above, it looks like you want the [k][...][...]
planes to be contiguous (more specifically, the plane that you want to
send must be contiguous in the last 2 dimensions in C, so for
array[x][y][z], the y and z dimensions must be contiguous). So you could
malloc (MAXX * MAXY * MAXZ * sizeof(datatype)) space and set up pointers
so that array[x][y][z] works properly in C; then &array[x][0][0] gives
you the address of the first byte in the y/z plane, and you can send all
of that data in a single MPI_SEND.
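
A rough sketch of that allocation (MAXX/MAXY/MAXZ and the float type are
assumptions; error checking omitted):

  #include <stdlib.h>

  #define MAXX 9
  #define MAXY 10
  #define MAXZ 10

  /* Allocate one contiguous block of data, then build pointer tables
     so that array[x][y][z] indexes into it. */
  float ***alloc3d(void)
  {
      float *data = malloc(MAXX * MAXY * MAXZ * sizeof(float));
      float ***array = malloc(MAXX * sizeof(float **));
      int x, y;

      for (x = 0; x < MAXX; ++x) {
          array[x] = malloc(MAXY * sizeof(float *));
          for (y = 0; y < MAXY; ++y)
              array[x][y] = data + (x * MAXY + y) * MAXZ;
      }
      return array;
  }

With this layout, &array[x][0][0] is the start of a contiguous
MAXY * MAXZ plane, so MPI_Send(&array[x][0][0], MAXY * MAXZ, MPI_FLOAT,
1, 11, MPI_COMM_WORLD) moves the whole plane in one message.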

Make sense?

The other option is to use MPI datatypes. In this manner, you set up a
datatype that describes the data. MPI_TYPE_VECTOR is probably the one
that you would want here. See the MPI-1 description of this function to
see how it works.
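
For instance, here's a rough sketch of how MPI_TYPE_VECTOR could describe
the strided data from your original loop in a single message (the array
shape and function wrapper are assumptions):

  #include <mpi.h>

  /* Send the 9 blocks from the original loop -- [k][0][0..9] for
     k = 0..8 -- as one message.  Each 10-float block starts 100
     floats (one 10x10 plane) after the previous one. */
  void send_strided(float array[9][10][10])
  {
      MPI_Datatype vtype;

      MPI_Type_vector(9, 10, 100, MPI_FLOAT, &vtype);
      MPI_Type_commit(&vtype);
      MPI_Send(&array[0][0][0], 1, vtype, 1, 11, MPI_COMM_WORLD);
      MPI_Type_free(&vtype);
  }

The receiver can post a matching receive with the same datatype, or
receive the same 90 floats into a contiguous buffer.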

Hope that helps.

-- 
{+} Jeff Squyres
{+} jsquyres_at_[hidden]
{+} http://www.lam-mpi.org/