It's hard to guess what is wrong when you don't indicate what the
problem is (you just say "it seems that there is some mistake"). What
is going wrong? Are you seg faulting? Are you getting incorrect data?
...?
Two questions:
1. Are your 2D planes guaranteed to be contiguous in memory? Since
you're sending Nx*Ny MPI_DOUBLEs, MPI assumes that they're contiguous
in memory. If they're not, this won't work (you'll need to either
make them contiguous or make an MPI datatype that maps out the gaps
between them -- if it were me, I'd make them contiguous; see the
first sketch after this list).
2. Is there ever a case where you're sending and receiving into the
same buffer? It is unclear from your code. For example, can Nz ever
equal 0? You must always send and receive from *different* buffers (it
doesn't make sense otherwise); see the second sketch below.
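For #1, here's a minimal sketch of one way to get contiguous planes.
The names (alloc_T, Tdata, the T() macro) are mine, not from your
code; I'm assuming Nx, Ny, and Nz are already set:

  #include <stdlib.h>

  /* Sketch only: allocate Nz+2 planes of Ny*Nx doubles in a single
     block -- interior planes 1..Nz plus ghost planes 0 and Nz+1. */
  static double *alloc_T(int Nx, int Ny, int Nz)
  {
      return malloc((size_t)(Nz + 2) * Ny * Nx * sizeof(double));
  }

  /* Index Tdata as T(k,j,i).  Plane k starts at &T(k,0,0) and its
     Nx*Ny doubles are contiguous, so that address is safe to pass
     to MPI with a count of Nx*Ny. */
  #define T(k, j, i) Tdata[(((size_t)(k) * Ny) + (j)) * Nx + (i)]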
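For #2 (and the end-rank conditionals), the usual idiom for this kind
of shift is MPI_PROC_NULL: every rank makes both MPI_Sendrecv calls
unconditionally, and the send and receive buffers are always distinct
planes.  This is only a sketch -- it assumes the globals from your
code (rank, num_processes, Nx, Ny, Nz) and the T() macro from the
sketch above:

  #include <mpi.h>

  void interface_data_exchange(void)
  {
      MPI_Status status;

      /* A send or receive involving MPI_PROC_NULL is a no-op, so
         the end ranks need no special-casing. */
      int up   = (rank == num_processes - 1) ? MPI_PROC_NULL : rank + 1;
      int down = (rank == 0)                 ? MPI_PROC_NULL : rank - 1;

      /* Shift up: send my top interior plane to 'up' while
         receiving my bottom ghost plane from 'down'. */
      MPI_Sendrecv(&T(Nz, 0, 0), Nx * Ny, MPI_DOUBLE, up,   0,
                   &T(0,  0, 0), Nx * Ny, MPI_DOUBLE, down, 0,
                   MPI_COMM_WORLD, &status);

      /* Shift down: send my bottom interior plane to 'down' while
         receiving my top ghost plane from 'up'. */
      MPI_Sendrecv(&T(1,      0, 0), Nx * Ny, MPI_DOUBLE, down, 1,
                   &T(Nz + 1, 0, 0), Nx * Ny, MPI_DOUBLE, up,   1,
                   MPI_COMM_WORLD, &status);
  }

Note that even here, if Nz could ever be 0, &T(Nz,0,0) and &T(0,0,0)
in the first call would be the *same* buffer -- which is why I asked.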
On Mar 2, 2005, at 6:26 PM, Kumar, Ravi Ranjan wrote:
> Hello,
>
> Thanks for the reply to my previous message. I found the error and
> fixed it. Now I have a small problem in using MPI_Sendrecv. I want
> to exchange data between neighbouring ranks only. I am using 3
> processes (rank 0, rank 1 & rank 2). All of them have a 3D array
> defined (T[1...Nz][Ny][Nx]). The 3D array can be considered as a
> stack of Nz planes (each plane contains Nx*Ny data). The interfacial
> planes (Nx*Ny data each) have to be exchanged. T[0][...][...] &
> T[Nz+1][...][...] can be assumed to be storage buffers for each rank
> (except the end ranks). I have written code for this, but it seems
> there is some mistake. Please point out my mistakes:
>
> void interface_data_exchange()
> {
>     MPI_Status status;
>
>     if (rank != num_processes - 1)
>     {
>         MPI_Sendrecv(&T[Nz][0][0], Nx*Ny, MPI_DOUBLE, rank+1, rank,
>                      &T[0][0][0],  Nx*Ny, MPI_DOUBLE, rank-1, rank-1,
>                      MPI_COMM_WORLD, &status);
>     }
>
>     if (rank != 0)
>     {
>         MPI_Sendrecv(&T[1][0][0],    Nx*Ny, MPI_DOUBLE, rank-1, rank,
>                      &T[Nz+1][0][0], Nx*Ny, MPI_DOUBLE, rank+1, rank+1,
>                      MPI_COMM_WORLD, &status);
>     }
> }
>
>
> Thanks for your help,
> Ravi R. Kumar
>
> _______________________________________________
> This list is archived at http://www.lam-mpi.org/MailArchives/lam/
>
--
{+} Jeff Squyres
{+} jsquyres_at_[hidden]
{+} http://www.lam-mpi.org/