> I'm using MPI_Isend and MPI_Recv in a loop like in the code I attached
> to this mail. In each iteration the code becomes slower and slower.
> What is the reason for this behaviour? How can I prevent it?
> Using MPI_Send instead of MPI_Isend I have no problem, but as soon as
> the data is larger than 8192 bytes, MPI_Send hangs.
>
. . .
> for (i = 0; ; i++)
> {
>     if (rank == 0)
>     {
>         MPI_Isend (send, size, MPI_DOUBLE, 1, 1, MPI_COMM_WORLD, &request);
>         MPI_Recv (recv, size, MPI_DOUBLE, 1, 2, MPI_COMM_WORLD, &status);
>     }
>     if (rank == 1)
>     {
>         MPI_Isend (send, size, MPI_DOUBLE, 0, 2, MPI_COMM_WORLD, &request);
>         MPI_Recv (recv, size, MPI_DOUBLE, 0, 1, MPI_COMM_WORLD, &status);
>     }
>     if ((rank == 0) && (i > 0) && (i % 1000 == 0)) {
>         printf("iteration = %d, time = %ld\n", i, (long)(time(NULL) - oldtime));
>         oldtime = time(NULL);
>     }
> }
Yeah, I remember it took me about a day to figure that out; it wasn't that hard,
though. You have to complete or free the request objects returned by non-blocking
MPI operations, e.g. with MPI_Wait()/MPI_Test() or MPI_Request_free(). If you
don't, the requests seem to be kept somewhere inside LAM (I didn't have time to
dig in and confirm that this is really where the time goes), but performance
jumps up once you release them. The MPI standard does expect every non-blocking
operation to be completed eventually, or its request explicitly freed, so this
is not something you can skip.
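For example, here is an untested sketch of your two-rank exchange loop with the
send request completed every iteration (I use MPI_Wait; MPI_Request_free also
releases the request, but then you must not reuse the send buffer until you know
the transfer has finished):

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define SIZE 4096                    /* doubles per message, just an example */

int main(int argc, char **argv)
{
    int rank, i;
    double *send, *recv;
    MPI_Request request;
    MPI_Status status;
    time_t oldtime;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* run with exactly two processes */

    send = calloc(SIZE, sizeof(double));
    recv = calloc(SIZE, sizeof(double));
    oldtime = time(NULL);

    for (i = 0; ; i++)
    {
        int peer    = 1 - rank;              /* 0 <-> 1 */
        int sendtag = (rank == 0) ? 1 : 2;
        int recvtag = (rank == 0) ? 2 : 1;

        MPI_Isend(send, SIZE, MPI_DOUBLE, peer, sendtag, MPI_COMM_WORLD, &request);
        MPI_Recv (recv, SIZE, MPI_DOUBLE, peer, recvtag, MPI_COMM_WORLD, &status);

        /* Complete the send: this releases the request object so it does
         * not pile up inside the MPI library iteration after iteration. */
        MPI_Wait(&request, &status);

        if ((rank == 0) && (i > 0) && (i % 1000 == 0))
        {
            printf("iteration = %d, time = %ld\n", i, (long)(time(NULL) - oldtime));
            oldtime = time(NULL);
        }
    }

    /* not reached; the loop above never terminates */
    free(send);
    free(recv);
    MPI_Finalize();
    return 0;
}

Compile with mpicc and run on two processes (mpirun -np 2); the time printed
every 1000 iterations should then stay roughly constant instead of growing.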
You should NOT rely on MPI_Send here either. As the MPI standard says, a correct
application must still work when every MPI_Send is replaced by MPI_Ssend; that
substitution is the usual test for deadlock-freedom. In your loop both ranks
send before they receive, so once the message no longer fits into the
implementation's internal buffer (above 8192 bytes in your case) both MPI_Send
calls block waiting for a receive that is never posted, and the program hangs.
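If you want to see why, here is a deliberately hanging sketch of that
substitution test (again assuming exactly two processes; the count of 8192
doubles is just a value I picked that is well above typical internal buffer
limits):

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, peer;
    int count = 8192;                        /* doubles, well above the eager limit */
    double *send, *recv;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    peer = 1 - rank;                         /* run with exactly two processes */

    send = calloc(count, sizeof(double));
    recv = calloc(count, sizeof(double));

    /* Both ranks send first and receive second.  With MPI_Ssend this hangs
     * for ANY message size, because a synchronous send cannot complete until
     * the matching receive is posted, and neither rank ever reaches MPI_Recv.
     * MPI_Send only appears to work while the message fits in the library's
     * internal buffer; above that threshold it blocks in exactly the same
     * way, which matches the hang you see past 8192 bytes. */
    MPI_Ssend(send, count, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD);
    MPI_Recv (recv, count, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &status);

    printf("rank %d: never reached\n", rank);

    free(send);
    free(recv);
    MPI_Finalize();
    return 0;
}

The Isend/Recv/Wait loop above does not depend on buffering at all, so it passes
this test; MPI_Sendrecv is another standard way to do such an exchange without
the ordering problem.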
--
Andriy Fedorov
Department of Computer Science,
College of William & Mary
P.O. Box 8795
Williamsburg, VA 23185-8795, USA