LAM/MPI General User's Mailing List Archives

From: David Cronk (cronk_at_[hidden])
Date: 2005-05-10 09:31:21


Steve Lowder wrote:
> David and Brian,
> Thank you for your replies,

You are certainly welcome.

> they filled in the gaps for me. I'm going to
> cache my last datatype and free the previous two. I would have liked to use
> MPI_Gather or MPI_Gatherv but my problem dictated that the root processor
> not be a data contributor to the gathered data

This is why I suggested Gatherv. It allows you to specify how much
data each process contributes, and that count can be 0 for one or
more processes. For your purposes, the root would use sendcount = 0 and
recvcounts[0] = 0. You just need to make sure you get the displacements
set up correctly.
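
For example, a rough, untested sketch (the chunk size, buffer names, and
the 64-process limit below are just placeholders for your real setup)
could look like this:

----------------------- untested Gatherv sketch begin -----------------------

      INCLUDE 'mpif.h'

      INTEGER maxprocs, chunk, root
      PARAMETER (maxprocs = 64, chunk = 1000, root = 0)
      REAL sendbuf(chunk), recvbuf(chunk*maxprocs)
      INTEGER recvcounts(maxprocs), displs(maxprocs)
      INTEGER nprocs, myrank, sendcount, ierr, i

      CALL MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierr)
      CALL MPI_COMM_RANK(MPI_COMM_WORLD, myrank, ierr)

C     The root contributes nothing; every other rank sends "chunk" reals.
      sendcount = chunk
      IF (myrank .EQ. root) sendcount = 0

C     recvcounts/displs are only significant at the root.  Rank 0 (the
C     root) gets count 0; the data from rank r (r >= 1) lands at offset
C     (r-1)*chunk in recvbuf.
      recvcounts(1) = 0
      displs(1)     = 0
      DO i = 2, nprocs
         recvcounts(i) = chunk
         displs(i)     = (i-2)*chunk
      END DO

      CALL MPI_GATHERV(sendbuf, sendcount, MPI_REAL,
     &                 recvbuf, recvcounts, displs, MPI_REAL,
     &                 root, MPI_COMM_WORLD, ierr)

----------------------- untested Gatherv sketch end -----------------------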

Dave.

> which I think is required on
> collective communication but I may be wrong. The compute tasks (100s) are
> sending their piece to a noncompute task for postprocessing, I/O,
> conversions, etc. I could have used one of the compute tasks, but in this
> case it makes one task's process size grow a lot and pushes the task into
> swapping too much.
>
> Thanks again,
> Steve Lowder
>
>
> -----Original Message-----
> From: lam-bounces_at_[hidden] [mailto:lam-bounces_at_[hidden]] On Behalf Of
> David Cronk
> Sent: Monday, May 09, 2005 10:04 AM
> To: General LAM/MPI mailing list
> Subject: Re: LAM: Sending 3D arrays and opaque objects
>
>
>
> Steve Lowder wrote:
>
>>Hello,
>>
>> I have an MPI application where I need to "gather" a 3D array into one
>>process that does not provide data to the array.
>>
>> For example, I have a distributed 3D array across 20 processors and I
>>want to gather it to a 21st processor. The only way I know to do this
>>easily is with SEND/RECV and derived datatypes. There are a number of
>>instances of this software posted on the web.
>
>
> You could also experiment with MPI_Gatherv.
>
>
>>
>>
>> Here is a typical example; I want to put this into a subroutine to
>>cache some datatypes.
>>
>>
>>
>> ------------------------------------- sample code begin -----------------------------
>
>>
>>
>> REAL a(100,100,100), e(9,9,9)
>>
>> INTEGER oneslice, twoslice, threeslice, sizeofreal, myrank, ierr
>>
>> INTEGER status(MPI_STATUS_SIZE)
>>
>> CALL MPI_COMM_RANK(MPI_COMM_WORLD, myrank, ierr)
>>
>> CALL MPI_TYPE_EXTENT( MPI_REAL, sizeofreal, ierr)
>>
>>C create datatype for a 1D section
>>
>> CALL MPI_TYPE_VECTOR( 9, 1, 2, MPI_REAL, oneslice, ierr)
>>
>>C create datatype for a 2D section
>>
>> CALL MPI_TYPE_HVECTOR(9, 1, 100*sizeofreal, oneslice, twoslice, ierr)
>
>>C create datatype for the entire section
>>
>> CALL MPI_TYPE_HVECTOR( 9, 1, 100*100*sizeofreal, twoslice, threeslice, ierr)
>>
>> CALL MPI_TYPE_COMMIT( threeslice, ierr)
>>
>> CALL MPI_SENDRECV(a(1,3,2), 1, threeslice, myrank, 0, e, 9*9*9,
>>MPI_REAL, myrank, 0, MPI_COMM_WORLD, status, ierr)
>>
>>------------ sample code end ---------------------------------------------------------
>>
>>
>>
>>I'm trying to understand some of the memory allocation issues about this
>>code.
>>
>>
>>
>> 1. When the first two derived types are created, I assume that memory
>> is allocated for opaque objects and the handle is stored in the
>> variables oneslice and twoslice. If this was inside a subroutine
>> and called many times, I would assume I need to explicitly free
>>     these objects inside the subroutine prior to exit; otherwise I have
>>     a small memory leak. Is this correct?
>
>
> Yes, you should free a derived datatype when you are done with it.
> However, why not create these outside the subroutine, so you are only
> creating them once?
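
To make the "create them once" idea concrete, here is a rough, untested
sketch (the subroutine name and its arguments are made up for
illustration). It builds and commits the type on the first call, keeps
the handle in a SAVEd variable, and just reuses it on later calls:

----------------------- untested caching sketch begin -----------------------

      SUBROUTINE send_slice(a, dest, tag, comm)
      INCLUDE 'mpif.h'
      REAL a(100,100,100)
      INTEGER dest, tag, comm, ierr
      INTEGER oneslice, twoslice, threeslice, sizeofreal
      LOGICAL built
      SAVE threeslice, built
      DATA built /.FALSE./

C     Build and commit the datatype only on the first call.
      IF (.NOT. built) THEN
         CALL MPI_TYPE_EXTENT(MPI_REAL, sizeofreal, ierr)
         CALL MPI_TYPE_VECTOR(9, 1, 2, MPI_REAL, oneslice, ierr)
         CALL MPI_TYPE_HVECTOR(9, 1, 100*sizeofreal, oneslice,
     &                         twoslice, ierr)
         CALL MPI_TYPE_HVECTOR(9, 1, 100*100*sizeofreal, twoslice,
     &                         threeslice, ierr)
         CALL MPI_TYPE_COMMIT(threeslice, ierr)
C        The building blocks are no longer needed once threeslice has
C        been constructed, so free them right away.
         CALL MPI_TYPE_FREE(oneslice, ierr)
         CALL MPI_TYPE_FREE(twoslice, ierr)
         built = .TRUE.
      END IF

      CALL MPI_SEND(a(1,3,2), 1, threeslice, dest, tag, comm, ierr)
      END

----------------------- untested caching sketch end -----------------------

The cached threeslice handle itself then only needs to be freed once,
e.g. shortly before MPI_FINALIZE.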
>
>
>> 2. When the third type is created (threeslice), does it copy the info
>> from the previous type (twoslice) so I could free twoslice or does
>> it just keep a reference implying that I can not free twoslice? I
>>     know that freeing threeslice does not affect twoslice, but what
>> would freeing twoslice do to threeslice?
>
>
> Once threeslice has been created you can do anything you want with the
> building blocks. It is safe to free twoslice.
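
In other words, an ordering like the following (untested) is fine;
threeslice keeps working after its building blocks are freed:

      CALL MPI_TYPE_COMMIT(threeslice, ierr)
C     Safe: threeslice no longer depends on these handles.
      CALL MPI_TYPE_FREE(oneslice, ierr)
      CALL MPI_TYPE_FREE(twoslice, ierr)
C     ... use threeslice in MPI_SENDRECV etc. ...
C     Free threeslice itself only when you are completely done with it.
      CALL MPI_TYPE_FREE(threeslice, ierr)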
>
>
>> 3. Some versions of this code have a MPI_TYPE_FREE of threeslice
>> which I think leads people to believe that the free is necessary
>> only because of the MPI_TYPE_COMMIT. I don't know if the COMMIT
>>     creates more objects, but I'm guessing the FREE is necessary in
>>     the first place because of the initial object creation of threeslice.
>>     Is this correct?
>
> Yes, all the types should be freed when they are no longer needed.
>
>
>>
>>
>>I've read some of the MPI standard, and what I understand from it is,
>>if you create it (any derived type), you free it, period. Is this
>>correct? I have heard people comment that if this code is inside a
>>subroutine then the local variables (handles) will automatically mark
>>their respective objects for deallocation on exit. I doubt this is true.
>>Yes? No?
>
>
> While I doubt this is true, the real issue is not the Fortran handle
> variables anyway; it is MPI itself. An MPI implementation maintains a
> finite number of handles, so if you never free them through MPI, it
> eventually runs out of available handles.
>
> Dave.
>
>
>>
>>
>>Thank you,
>>
>>Steve Lowder
>>
>>NRL Monterey
>>
>>
>>
>>

-- 
Dr. David Cronk, Ph.D.                      phone: (865) 974-3735
Research Leader                             fax: (865) 974-8296
Innovative Computing Lab                    http://www.cs.utk.edu/~cronk
University of Tennessee, Knoxville