Keep in mind that you can have lots of messages "on the wire" at any
given time -- they just can't be coming from the same buffer when MPI
thinks that the message has not completed sending yet (which makes
total sense -- if you start overwriting your buffer before the message
has been fully sent [i.e., before MPI_Test* says that it has
completed], you can't guarantee what is actually sent out).
So you don't have to limit yourself to one MPI_Isend/MPI_Test at a time
-- but you may have to have a pool of buffers to send from (perhaps
returning them to the "free" pool when MPI_Test* says that a given
request has completed).
On May 5, 2005, at 8:32 AM, Guanhua Yan wrote:
> Thank you a lot for your help, Dave. In my old code, I used MPI_Isend to
> push out messages aggressively but observed that some messages got lost
> strangely. So I used a flow control mechanism in which, at any time,
> there is only one message on the fly. MPI_Test is used to test whether
> the current MPI_Isend has finished; if it's done, the next message is
> sent out. However, this approach seems to slow down the throughput
> significantly. I will roll back to the old version to see whether some
> bugs caused the message losses.
>
> - Guanhua
>
> On Thursday 05 May 2005 06:38, David Cronk wrote:
>> There are 2 issues you need to be aware of. First, unless you know a
>> message has been copied out of the send buffer, you cannot touch the
>> send buffer. Touching the buffer before the data has been copied out
>> (for either reading or writing) is strictly prohibited by the standard.
>> There are only 2 ways to know the message has been copied out of the
>> send buffer. One is by completing the Isend (with a wait or test
>> operation). The other is for the receiver to report that the message
>> has been received.
>>
>> The other issue is the request handle returned from the Isend routine.
>> There is a finite number of these handles available. If you do not
>> free these handles (by completing a wait, a test that returns true, or
>> explicitly with MPI_Request_free), you will eventually run out of
>> available handles and your app will die on the Isend. You may not run
>> into this for short runs, but the limit exists and you should free
>> requests.
>>
>> To answer your real question, there should be (as long as the
>> implementation is standard compliant) no worry about buffer overflow.
>> What may happen is, if the system runs out of system resources, your
>> send data will remain in the send buffer until enough system resources
>> have been freed or the message is matched to a receive. See point 1
>> above for the consequences of this.
>>
>> I hope this helps.
>>
>> Dave.
>>
>> Guanhua Yan wrote:
>>> Hi all,
>>>
>>> Sorry if this is off topic. I used MPI_Isend (to the same destination)
>>> many times without checking whether each sending operation had
>>> finished. Is it possible that this will run into a buffer overflow
>>> problem at the receiver side? Should I do message control by myself,
>>> or does the MPI kernel handle all of this? Or is it MPI implementation
>>> dependent?
>>>
>>> Thanks,
>>> Guanhua
>>> _______________________________________________
>>> This list is archived at http://www.lam-mpi.org/MailArchives/lam/
>
--
{+} Jeff Squyres
{+} jsquyres_at_[hidden]
{+} http://www.lam-mpi.org/