On Mar 14, 2005, at 9:16 PM, John Korah wrote:
> Thanks for replying... quick question. If I introduce
> sleep() in the profiled MPI_Isend() and have two
> consecutive MPI_Isend() calls, will the program thread
> suspend itself in the first MPI_Isend() (due to
> sleep()) before moving on to the next MPI_Isend()? I
> suppose it will. That kind of defeats the purpose as
> far as my application is concerned...
Yes. So if you want to simulate variable latency on non-blocking
communication, you're going to have to do something trickier. In
LAM, the time to call MPI_Isend() should be fairly constant, regardless
of message size (assuming contiguous data - non-contiguous data can
lead to packing, which takes time proportional to the data size, etc.).
If you want to simulate time to completion, just play with the results
of test / wait. So let's say I had something like this (assuming one
thread and only one MPI_Isend() pending at a time; you could use locks
to handle multiple threads and a table keyed by request to lift the
one-pending limitation):
#include <sys/time.h>
#include <unistd.h>

#define MIN_TIME 1   /* simulated latency, in seconds - pick your own */

static struct timeval start;

int
MPI_Isend(void *buf, int count, MPI_Datatype dtype, int dest, int tag,
          MPI_Comm comm, MPI_Request *req)
{
    gettimeofday(&start, NULL);
    return PMPI_Isend(buf, count, dtype, dest, tag, comm, req);
}

int
MPI_Test(MPI_Request *req, int *flag, MPI_Status *status)
{
    struct timeval now;

    gettimeofday(&now, NULL);
    /* sleep off whatever portion of the simulated latency
       hasn't already elapsed since the MPI_Isend() */
    if (now.tv_sec - start.tv_sec < MIN_TIME) {
        sleep(MIN_TIME - (now.tv_sec - start.tv_sec));
    }
    return PMPI_Test(req, flag, status);
}
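For the multiple-pending-sends case, the table could be as simple as a
small fixed-size array mapping each request handle to its start time.
This is just a sketch of that idea; MAX_PENDING and the two helper
functions are made-up names for illustration (the handle is a void *
here so the sketch stands alone, but in the real wrapper it would be
the MPI_Request, and in a threaded program you'd guard the table with
a lock):

```c
#include <stddef.h>
#include <sys/time.h>

#define MAX_PENDING 16  /* made-up limit on outstanding sends */

/* One slot per outstanding request; req is the opaque handle
 * (the MPI_Request in the real wrapper).  NULL means free. */
struct pending {
    void          *req;
    struct timeval start;
};

static struct pending table[MAX_PENDING];

/* Record the start time for a newly posted request. */
static int record_start(void *req)
{
    for (int i = 0; i < MAX_PENDING; ++i) {
        if (table[i].req == NULL) {
            table[i].req = req;
            gettimeofday(&table[i].start, NULL);
            return 0;
        }
    }
    return -1;  /* table full */
}

/* Look up (and free) the start time when the request completes. */
static int claim_start(void *req, struct timeval *start)
{
    for (int i = 0; i < MAX_PENDING; ++i) {
        if (table[i].req == req) {
            *start = table[i].start;
            table[i].req = NULL;
            return 0;
        }
    }
    return -1;  /* unknown request */
}
```

MPI_Isend() would call record_start() on the new request, and
MPI_Test() / MPI_Wait() would call claim_start() and do the remaining-
latency sleep against the returned start time instead of a single
global.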
You would need to do the same for MPI_Wait, MPI_Testany, MPI_Waitany,
etc. (there are a bunch of them). But that might be enough to simulate
what you need. If not, there might be a way to do it at the lowest
layers of LAM, but it would be completely non-portable between MPI
implementations and would mean some serious code diving. If you are
interested, contact me off the list and we can talk some more.
Hope this helps...
Brian
> --- Brian Barrett <brbarret_at_[hidden]> wrote:
>> On Mar 13, 2005, at 8:42 PM, John Korah wrote:
>>
>>> I am trying to simulate variable latency links on a
>>> homogeneous cluster, meaning the latency of links
>>> between certain pairs of computers is larger than
>>> that of other links.
>>>
>>> Is there any way I can use MPI communication
>>> primitives to do it? I am using the sleep function
>>> to do it now....
>>
>> Are you trying to simulate variable latency between
>> MPI applications using only MPI primitives? LAM
>> doesn't have a built-in feature for doing this (or
>> for bandwidth limiting or anything like that).
>> However, you may be able to do what you want with
>> the profiling layer and intercept MPI calls. For
>> really simple simulations, sleeps in the profiling
>> layer should fake latency differences. Since you
>> know everything the MPI layer knows about the
>> message being sent, you could do really complex
>> things that I can't even imagine.
>>
>> For a detailed explanation of the MPI profiling
>> layer, have a look at the MPI standard. But as a
>> quick example, if you wanted to make MPI_Send take
>> 1 second longer than usual, you could create an MPI
>> profiling function like:
>>
>> int
>> MPI_Send(void *buf, int count, MPI_Datatype dtd,
>>          int rank, int tag, MPI_Comm comm)
>> {
>>     sleep(1);
>>     return PMPI_Send(buf, count, dtd, rank, tag, comm);
>> }
>>
>> Then just compile like normal - your version of
>> MPI_Send will be called and will in turn call
>> PMPI_Send, which is another entry point into
>> LAM/MPI.
>>
>> Hopefully, you can come up with something more
>> creative than I can :).
>>
>> Brian
>>
>> --
>> Brian Barrett
>> LAM/MPI developer and all around nice guy
>> Have an LAM/MPI day: http://www.lam-mpi.org/
>>
>> _______________________________________________
>> This list is archived at
>> http://www.lam-mpi.org/MailArchives/lam/
>>
>
>
--
Brian Barrett
LAM/MPI developer and all around nice guy
Have an LAM/MPI day: http://www.lam-mpi.org/