This is because you're effectively receiving the data as 25 consecutive
MPI_INTEGERs: your "stride" type uses a stride (5) equal to its
blocklength (5), so it describes 25 contiguous integers, and MPI fills
them in consecutively. You need to receive into a datatype that places
the elements in the right order: since recvbuf has a leading dimension
of 10, the receive type's stride should be 10, not 5, and the
displacements in mpi_gatherv are measured in units of the receive
type's extent, so that extent has to be resized down to one block
column (5 integers) for displ=(/ 0,1 /) to land the blocks side by
side.
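To make that concrete, here is a small NumPy sketch (plain Python, no MPI; the two ranks, the 5x5 block size, and the offsets are assumptions matching your snippet) showing where the elements land under a contiguous receive versus a receive type with blocklength 5, stride 10, and extent resized to 5 integers:

```python
import numpy as np

# Two ranks each contribute a 5x5 block filled with their rank id,
# gathered into a Fortran-ordered recvbuf(10,5) on the root.
nblk = 5
ld = 10                                  # leading dimension of recvbuf
recv = np.zeros((ld, nblk), dtype=np.int32, order='F')
flat = recv.ravel(order='F')             # the raw memory MPI writes into

blocks = [np.full(nblk * nblk, r, dtype=np.int32) for r in (0, 1)]

# (a) A contiguous receive type: MPI fills 50 consecutive words,
#     which reproduces the interleaved output you are seeing.
contiguous = np.concatenate(blocks)

# (b) A strided receive type (blocklength 5, stride 10, extent resized
#     to 5 integers): column j of rank r's block lands at memory
#     offset r*nblk + j*ld inside recvbuf.
for r, blk in enumerate(blocks):
    for j in range(nblk):
        off = r * nblk + j * ld
        flat[off:off + nblk] = blk[j * nblk:(j + 1) * nblk]

print(recv)   # first five rows all 0, last five rows all 1
```

In the actual Fortran, layout (b) corresponds to building the receive type on the root with mpi_type_vector(5, 5, 10, mpi_integer, ...) and then shrinking its extent to 5 integers with the MPI-2 call mpi_type_create_resized, so that displ=(/ 0,1 /) places rank 1's block starting at recvbuf(6,1).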
Make sense?
On Apr 16, 2005, at 11:08 AM, Khaled Al-Salem wrote:
> Thanks Jeff for the quick response.
>
> My knowledge of MPI datatypes is actually limited to mpi_type_vector
> and mpi_type_hvector. I'm sure there is more to datatypes than just
> that. I'll have to look into it; I appreciate the advice.
>
> Here is a snippet of a test code that illustrates where I'm stuck
> right now
>
> integer recvbuf(10,5), sendbuf(5,5), rcounts(2), displ(2)
> include "mpif.h"
> displ=(/ 0,1 /)
> rcounts=1
> ...
> ...
> sendbuf=rank
>
> call mpi_type_vector(5,5,5,mpi_integer,stride,err)
> call mpi_type_commit(stride,err)
> call mpi_gatherv(sendbuf,25,mpi_integer,&
> recvbuf, &
> rcounts,displ,stride, &
> 0,mpi_comm_world,err)
>
> and here is the output
>
> 0 0 0 0 0 0 0 0 0 0
> 0 0 0 0 0 0 0 0 0 0
> 0 0 0 0 0 1 1 1 1 1
> 1 1 1 1 1 1 1 1 1 1
> 1 1 1 1 1 1 1 1 1 1
>
> any idea?
>
> ----- Original Message ----- From: "Jeff Squyres"
> <jsquyres_at_[hidden]>
> To: "General LAM/MPI mailing list" <lam_at_[hidden]>
> Sent: Friday, April 15, 2005 8:47 PM
> Subject: Re: LAM: MPI_gather for 3D topology
>
>
>> MPI_GATHER can be used to put data in this order, but you need to
>> think about your data layout in memory and structure your use of
>> datatypes and MPI_GATHER accordingly.
>>
>> Keep in mind that the datatype that you gather *from* does not need
>> to be the same datatype that you gather *to*. Specifically, the
>> datatypes provided on the root and non-root processes do not need to
>> be the same. They must be equivalent -- essentially meaning that the
>> resulting number of bytes and basic data elements are the same, but
>> the layout in memory could be different.
>>
>> I realize that I'm not giving you much of an answer :-(, but
>> datatypes are a rather complex issue; I would strongly suggest
>> spending a little time with an MPI book or a tutorial to get at least
>> the basics of complex datatypes. They can give much more insight and
>> detail than I can in an e-mail.
>>
>> Hope that helps!
>>
>>
>> On Apr 15, 2005, at 7:05 PM, Khaled Al Salem wrote:
>>
>>> Hello,
>>> I wrote an f90 MPI code to solve a 3D problem. The topology is
>>> obtained using mpi_cart_create. At the end of the calculations I
>>> wish to gather the results from all processors into one big
>>> 3-dimensional array that can be visualized directly without
>>> post-processing. The problem is that mpi_gather doesn't gather the
>>> results from the different processors as blocks.
>>> For example, in 2D, if proc. 1 has the following
>>>
>>> 1 1 1
>>> 1 1 1
>>> 1 1 1
>>>
>>> and proc. 2
>>>
>>> 2 2 2
>>> 2 2 2
>>> 2 2 2
>>>
>>> then mpi_gather results in the following
>>>
>>> 1 1 1 1 1 1
>>> 1 1 1 2 2 2
>>> 2 2 2 2 2 2
>>>
>>> the results that I'm after would be
>>>
>>> 1 1 1 2 2 2
>>> 1 1 1 2 2 2
>>> 1 1 1 2 2 2
>>>
>>> is there an easy way to do this?
>>> _______________________________________________
>>> This list is archived at http://www.lam-mpi.org/MailArchives/lam/
>>>
>>
>> --
>> {+} Jeff Squyres
>> {+} jsquyres_at_[hidden]
>> {+} http://www.lam-mpi.org/
>>
>>
>
>
--
{+} Jeff Squyres
{+} jsquyres_at_[hidden]
{+} http://www.lam-mpi.org/