
LAM/MPI General User's Mailing List Archives


From: Kumar, Ravi Ranjan (rrkuma0_at_[hidden])
Date: 2005-03-09 12:41:21


Thank you for the reply! I'll apply non-blocking send/recv to check whether the
problem still exists.
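
Something like the following is what I have in mind (a minimal sketch, assuming
just two ranks exchanging one x-y plane of doubles; the buffer size and tag are
placeholders):

#include <mpi.h>

int main(int argc, char** argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int N = 101 * 101;            /* one x-y plane of the larger case */
    static double sendbuf[101 * 101];
    static double recvbuf[101 * 101];

    if (size >= 2 && rank < 2) {
        for (int i = 0; i < N; ++i)
            sendbuf[i] = rank;          /* dummy payload */

        int peer = 1 - rank;            /* exchange with the other rank */
        MPI_Request reqs[2];

        /* Post the receive and the send before waiting on either, so the
           exchange completes even when the message is too large for
           MPI_Send's internal buffering (where blocking sends can hang). */
        MPI_Irecv(recvbuf, N, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &reqs[0]);
        MPI_Isend(sendbuf, N, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &reqs[1]);
        MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
    }

    MPI_Finalize();
    return 0;
}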

I have another question, on sending contiguous data. I want to know which of the
three indices (z, x and y) in T[Nz][Nx][Ny] is traversed first. My code is
written in C++ and MPI. Suppose I want to send Nx*Ny elements; I simply pass
T[2][0][0] as the send buffer (to send the 3rd of the Nz planes of data), and on
another processor I receive the data at, say, T[3][0][0]. My doubt is about the
x-y coordinates of the data: will they be received in the same order they were
sent? Do I need to check which index (x or y) is traversed first? Please explain.
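
For concreteness, here is a minimal sketch of the pattern I described, with
hypothetical dimensions Nz, Nx and Ny. My understanding is that a C/C++ array
T[Nz][Nx][Ny] is stored row-major, so the last index (y) varies fastest and one
z-plane of Nx*Ny elements is contiguous:

#include <mpi.h>

const int Nz = 4, Nx = 5, Ny = 6;       /* placeholder sizes */

int main(int argc, char** argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double T[Nz][Nx][Ny];

    if (rank == 0) {
        /* Fill plane z=2; T[2][x][y] sits at offset 2*Nx*Ny + x*Ny + y,
           so y is the fastest-varying index. */
        for (int x = 0; x < Nx; ++x)
            for (int y = 0; y < Ny; ++y)
                T[2][x][y] = 100.0 * x + y;

        /* Send the whole z=2 plane: Nx*Ny contiguous doubles. */
        MPI_Send(&T[2][0][0], Nx * Ny, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* Receive into the z=3 plane; the (x outer, y inner) order is
           preserved because both buffers have the same row-major layout. */
        MPI_Recv(&T[3][0][0], Nx * Ny, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
    }

    MPI_Finalize();
    return 0;
}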

Thanks a lot for your help

Ravi R. Kumar

Quoting Jeff Squyres <jsquyres_at_[hidden]>:

> Yes. MPI_SEND is allowed to block, but may not always (usually
> depending on the size of the message).
>
> There have been a lot of discussions about this topic on this list --
> you might want to search the archives for related threads.
>
>
>
> On Mar 8, 2005, at 11:52 AM, Kumar, Ravi Ranjan wrote:
>
> > Hello,
> >
> > I am using blocking MPI_Send/MPI_Recv in my code. It runs perfectly for a
> > smaller array size ( T[51][51][51] ), but it starts giving problems for a
> > larger array size ( T[101][101][101] ). Sometimes the code gives the
> > correct result for the larger array size and sometimes it hangs in the
> > middle of its execution. What can be the reason for this? Can anyone
> > please explain? Can this be due to *blocking* send/recv?
> >
> > Thanks,
> >
> > Ravi R. Kumar
> >
> > _______________________________________________
> > This list is archived at http://www.lam-mpi.org/MailArchives/lam/
> >
>
> --
> {+} Jeff Squyres
> {+} jsquyres_at_[hidden]
> {+} http://www.lam-mpi.org/
>
> _______________________________________________
> This list is archived at http://www.lam-mpi.org/MailArchives/lam/
>