Hi,
Thanks for the replies, guys!
#1: The fflush(stdout) works, and so does terminating the
printf with a newline (\n).
#2: The array that I'm passing is of the type
array[0..9][0..9][0..9].
I'm already using this flat array[1000] logic for another array,
which is basically a logical array (it denotes whether a
particular lattice point is an obstacle or a pore). But since a
lot of manipulation needs to be done on these 3D arrays, I would
prefer to retain their 3D structure. Would that cost much in
computation time or buffer space?
And as to sending the y,z planes contiguously, I think I am
already doing that. Could you check whether this is what you
meant, Jeff? For array[0..8][0..9][0..9], I want to send
subarray[0..8][0..2][0..9] to each process, i.e. two y-rows for
each of five processes.
Code part (example given for processor 1 only):

for (i = 0; i < 9; i++)
{
  if (nid == 0)
  {
    /* two contiguous y-rows (2 x 10 floats) of x-plane i */
    MPI_Send(&array[i][0][0], 20, MPI_FLOAT, 1, tag, MPI_COMM_WORLD);
  }
}
This is the kind of logic I used, and I've seen that the "20"
count works, i.e. the two rows do get sent in a contiguous
manner. The real problem was with the for loop, and now that
that's clarified, I'm OK.
But as luck would have it, new problems are always catching
up ;-)
Here's the latest problem:
Inside each process, I'm creating an array called "node" at
runtime to accept the values sent by a for loop of the type
above. (This is in C++, so I am just declaring "float
node[][][]" of the appropriate dimensions.)
Does MPI treat the array node as a pointer-type array whose
values are accessed by *node[][][]? Because after working with
this "node" and printing out the results, hexadecimal gibberish
filled the screen... I'm presuming that the addresses were being
printed out. Does this mean that all changes must be made like

*node[0][i][j] = *node[0][i][j] + 1;

and printed with cout << *node[][][];?
Please do clarify.
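For reference, here is roughly the logic I intended on the
receive side (a simplified sketch, not my exact code):

float node[9][2][10];   /* two y-rows per x-plane */
MPI_Status status;
for (i = 0; i < 9; i++)
{
  MPI_Recv(&node[i][0][0], 20, MPI_FLOAT, 0, tag, MPI_COMM_WORLD,
           &status);
}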
Once again, thanks for all the interest!
GOOD DAY!!
Sarath Devarajan
--- Jeff Squyres <jsquyres_at_[hidden]> wrote:
> On Tue, 24 Jun 2003, Jim Procter wrote:
>
> > > I always find that after giving mpirun, the printf
> > > statement never appears on the screen, the cursor waits,
> > > and once I enter a number, it goes on with the execution.
> > > Ironically, at the end of the WHOLE thing, a small "Enter
> > > the number of terms" message appears... Is this
> > > non-interactiveness preventable? Sorry if the question
> > > appears too trivial... Coming to the main issue
>
> > This is an I/O buffering issue. Try putting "fflush(stdout)"
> > before the scanf. This might not work - but someone might be
> > able to enlighten us with the appropriate ROMIO function that
> > should be called (or a flag that can be set).
>
> This is not an MPI issue, nor a ROMIO issue -- it's a Unix
> line-buffered output issue. Unix typically does not output data
> to stdout until either a block is full or a line has been fully
> sent (e.g., terminated by a newline or line feed). As mentioned
> above, you can manually cause the Unix I/O buffers to flush by
> invoking fflush(stdout). So you really have two main (easy)
> choices:
>
> 1. printf(...);
>    fflush(stdout);
>
> 2. printf("put a newline at the end\n");
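>
> For example (a minimal sketch):
>
>   #include <stdio.h>
>
>   int main(void)
>   {
>     int n;
>     printf("Enter the number of terms: ");
>     fflush(stdout);   /* push the prompt out before scanf blocks */
>     scanf("%d", &n);
>     return 0;
>   }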
>
> > > 2) Can I pass data from proc 0 like this
> > >
> > >   for(k=0;k<9;k++)
> > >     MPI_Send(&array[k][0][0],10,MPI_FLOAT,1,11,MPI_COMM_WORLD);
> > >
> > > And the corresponding array being accepted in
> > > processor 1 in a similar loop???
>
> > You could - though I think you need two loops - because the
> > next dimension of the array needs to be scanned through
> > (unless you only want to send [k][0][0..9] rather than
> > [k][0..9][0..9]). An alternative is to send the whole array
> > at once - if you declare array as:
> >
> > float array[1000];
> >
> > Then you only need to pass one message at the beginning, and
> > the pointer mathematics involved in [k][j][i] is about the
> > same as array[k*100 + j*10 + i] (but you write it explicitly
> > instead).
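> >
> > For instance (a rough, untested sketch):
> >
> >   float array[1000];                 /* 10 x 10 x 10, flattened */
> >   /* "array[k][j][i]" becomes array[k*100 + j*10 + i]: */
> >   array[2*100 + 3*10 + 4] = 1.0f;    /* i.e. "array[2][3][4]" */
> >   /* one message moves the whole thing: */
> >   MPI_Send(array, 1000, MPI_FLOAT, 1, 11, MPI_COMM_WORLD);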
>
> This is true. However, there are other answers as well. :-)
>
> In general, MPI will perform best when it deals with contiguous
> data. So if you can arrange for all the data that you want to
> send to be contiguous, you'll get nicely performing code in a
> single send/receive pair.
>
> You can make the data contiguous by doing the float array[1000]
> trick (as described above), or you can play the pointers game
> and set up your own 2D or 3D array to point to contiguous data.
> Setting up 2D contiguous data is pretty straightforward;
> setting up contiguous 3D data is somewhat tricky, and how you
> do it depends on what plane you want to be contiguous. For
> example, in your sample above, it looks like you want
> [k][...][...] planes to be contiguous (more specifically, the
> plane that you want to send must be contiguous in the last 2
> dimensions in C, so for array[x][y][z], the y and z dimensions
> must be contiguous). So you could malloc (MAXX * MAXY * MAXZ *
> sizeof(datatype)) space and set up pointers so that
> array[x][y][z] works properly in C, and when you use
> &array[x][0][0], you get the address of the first byte in the
> y/z plane, and you can send all of that data in a single
> MPI_Send.
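>
> Something like this (an untested sketch; MAXX, MAXY, and MAXZ
> stand in for your real dimensions):
>
>   /* one contiguous block holding all the data */
>   float *data = malloc(MAXX * MAXY * MAXZ * sizeof(float));
>   /* pointer tables so that array[x][y][z] works in C */
>   float ***array = malloc(MAXX * sizeof(float **));
>   int x, y;
>   for (x = 0; x < MAXX; x++) {
>     array[x] = malloc(MAXY * sizeof(float *));
>     for (y = 0; y < MAXY; y++)
>       array[x][y] = data + (x * MAXY + y) * MAXZ;
>   }
>   /* each y/z plane is contiguous, so one send moves it all;
>      e.g., send the whole plane at x = 2 */
>   MPI_Send(&array[2][0][0], MAXY * MAXZ, MPI_FLOAT, 1, 11,
>            MPI_COMM_WORLD);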
>
> Make sense?
>
> The other option is to use MPI datatypes. In this manner, you
> set up a datatype to describe the data. MPI_TYPE_VECTOR is
> probably the one that you would want here. See the MPI-1
> description of this function to see how it works.
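>
> A rough sketch for your 9x10x10 case (sending two y-rows out of
> every x-plane in a single message):
>
>   MPI_Datatype slice;
>   /* 9 blocks of 20 floats, one block every 100 floats */
>   MPI_Type_vector(9, 20, 100, MPI_FLOAT, &slice);
>   MPI_Type_commit(&slice);
>   MPI_Send(&array[0][0][0], 1, slice, 1, 11, MPI_COMM_WORLD);
>   MPI_Type_free(&slice);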
>
> Hope that helps.
>
> --
> {+} Jeff Squyres
> {+} jsquyres_at_[hidden]
> {+} http://www.lam-mpi.org/
> _______________________________________________
> This list is archived at http://www.lam-mpi.org/MailArchives/lam/