Hi!
The printing of the addresses was my mistake. As you had rightly guessed, I
had forgotten one dimension of the array. Sorry for the confusion with
array[][][]; I meant to write array[i][j][k].
The code is OK now!
Thanks again.
And I forgot: thanks, Dr. Procter, for your earlier help.
-Sarath Devarajan
--- Jeff Squyres <jsquyres_at_[hidden]> wrote:
> On Wed, 25 Jun 2003, Sarath Devarajan wrote:
>
> > #2 : The array that I'm passing, say, is of the type
> > array[0..9][0..9][0..9].
> >
> > I'm already using this array[1000] logic for another array which is
> > basically a logical array (denotes whether a particular lattice point
> > is an obstacle or a pore), but since a lot of manipulation needs to be
> > done on these 3D arrays, I prefer to retain its structure. Would that
> > be a heavy loss in computational time? Or buffer space?
>
> No.
>
> > And as to the contiguous sending of the y,z, I think I am doing the
> > same. Can you check if this is what you meant, Jeff?
> >
> > For array[0..8][0..9][0..9], I want to send subarray[0..8][0..2][0..9]
> > to each process, i.e., two rows for each of five processes.
>
> You can do this in a single send if you use an MPI datatype, or, as you
> have in your code below, multiple sends spanning the last 20 elements in
> each plane (i.e., the last 2 rows of the plane).
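
For reference, a minimal sketch of the single-send datatype approach (assuming
a contiguous float array[9][10][10] on rank 0, rank 1 as the receiver, and
MPI_COMM_WORLD; the name "rowpair" is illustrative only, and the sketch follows
the "last 20 elements of each plane" description -- the starting offset would
change if the first two rows are meant instead):

  // One block of 20 floats (the last two rows) per 10x10 plane,
  // with consecutive blocks 100 floats apart; 9 planes in total.
  MPI_Datatype rowpair;
  MPI_Type_vector(9, 20, 100, MPI_FLOAT, &rowpair);
  MPI_Type_commit(&rowpair);

  // A single send now covers the last two rows of all nine planes.
  MPI_Send(&array[0][8][0], 1, rowpair, 1, tag, MPI_COMM_WORLD);
  MPI_Type_free(&rowpair);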
>
> > code part:
> >
> >   for (i = 0; i < 9; i++)
> >   {
> >     if (nid == 0)
> >     {
> >       MPI_Send(&array[i][0][0], 20, MPI_FLOAT, 1, tag, MPI_COMM);
> >     }
> >   }
>
> From what you described, this should be fine.
>
> > Inside each processor, I'm creating an array called "node" (at run
> > time) to accept the values sent by the "for" loop of the type above.
> > (This is in C++, so I am just declaring "float node[][][]" of
> > appropriate dimensions.)
> >
> > Does MPI treat the array node as a pointer-type array whose values are
> > accessed by *node[][][]?
>
> It's not MPI that treats it this way, it's C; i.e., it's the language of
> C itself, not any special treatment that MPI uses.
>
> So yes -- node is a pointer. See below.
>
> > Because after working with this "node" and printing out the results,
> > out came hexadecimal gibberish to fill the screen. I'm presuming that
> > the addresses were printed out. Does this mean that all changes must
> > be made like
> >
> >   *node[0][i][j] = *node[0][i][j] + 1;
> >   and cout << *node[][][];
>
> Keep in mind that "float *node[][][]" is different than "float
> node[][][]". The former is effectively a 4D array, and the latter is
> effectively a 3D array (I'm not 100% sure of the C++ syntax; I believe
> the latter is not quite right because you would need to specify
> dimensions in that case...?). So I'm not quite sure what you're trying
> to show here; I'm guessing that you meant to put indices in the "cout"
> statement that you showed...? Otherwise you'll just be showing some
> kind of pointer value, but since you've got "*node", I think you're
> using an extra dimension that you're not intending to use.
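
As a small illustration of why full indexing matters when printing (a sketch
assuming a plain fixed-size 3D array; the same reasoning applies to
pointer-based layouts):

  #include <iostream>

  int main() {
    float node[9][10][10] = {};           // zero-initialized for the example
    std::cout << node          << "\n";   // array name decays to a pointer: prints an address
    std::cout << node[0]       << "\n";   // still one index short: prints an address
    std::cout << node[0][1][2] << "\n";   // fully indexed: prints the float value (0 here)
    return 0;
  }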
>
> So it all depends on how you set up the "node" array; from your
> description, it *sounds* like you really only want 3 dimensions. So you
> might be able to do something as simple as (typing off the top of my
> head -- pardon any errors...):
>
>   // Receive the 3 dimensions from the master
>   int dims[3];
>   MPI_Recv(dims, 3, MPI_INT, master, tag, comm, MPI_STATUS_IGNORE);
>   // Make an array of the right size
>   float ***node = new float[dims[0]][dims[1]][dims[2]];
>   // Fill the lower 20 values in several of the planes
>   for (i = 0; i < 9; ++i)
>     MPI_Recv(&array[i][0][0], 20, MPI_FLOAT, master, tag,
>              comm, MPI_STATUS_IGNORE);
>
> Something along those lines.
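
Since "new float[dims[0]][dims[1]][dims[2]]" will not compile when the
dimensions are only known at run time, a working variant of the same idea is a
single contiguous block indexed by hand (again only a sketch; master, tag, and
comm are assumed to be set up as in the fragment above, and the receives
presumably target node rather than array):

  // Receive the 3 dimensions from the master
  int dims[3];
  MPI_Recv(dims, 3, MPI_INT, master, tag, comm, MPI_STATUS_IGNORE);

  // One contiguous block; element (i,j,k) lives at
  // node[(i * dims[1] + j) * dims[2] + k]
  float *node = new float[dims[0] * dims[1] * dims[2]];

  // Fill the first 20 values of each of the 9 planes,
  // mirroring the send loop shown earlier
  for (int i = 0; i < 9; ++i)
    MPI_Recv(&node[i * dims[1] * dims[2]], 20, MPI_FLOAT, master, tag,
             comm, MPI_STATUS_IGNORE);

  // ... work on node ...
  delete[] node;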
>
> --
> {+} Jeff Squyres
> {+} jsquyres_at_[hidden]
> {+} http://www.lam-mpi.org/
> _______________________________________________
> This list is archived at http://www.lam-mpi.org/MailArchives/lam/