Thanks a lot, Jeff!
I was going to use tags until I realized that the tag
space (the number of usable values) is too small for
my application.
A message is identified uniquely by an integer pair
(i, j), where both i and j range from 0 to N-1. So the
tag I use is tag = i*N + j, which maps the two
dimensional (i, j) space uniquely onto the one
dimensional tag space. Unfortunately, this limits N
to sqrt(max_tag). Even if max_tag = MAX_INT, that's
still not enough for me: N ~= 50k, so N^2 ~= 2.5e9,
which already exceeds MAX_INT (~2.1e9).
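For what it's worth, the real ceiling can be checked at
runtime: the MPI standard only guarantees a tag upper
bound of at least 32767, and exposes the implementation's
actual value through the predefined MPI_TAG_UB attribute.
A minimal sketch:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int *tag_ub, flag;

    MPI_Init(&argc, &argv);
    /* MPI_TAG_UB is a predefined attribute on MPI_COMM_WORLD;
       the standard guarantees it is at least 32767. */
    MPI_Comm_get_attr(MPI_COMM_WORLD, MPI_TAG_UB, &tag_ub, &flag);
    if (flag)
        printf("largest usable tag: %d\n", *tag_ub);
    MPI_Finalize();
    return 0;
}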
One of the following could solve my problem:
1) A user-adjustable max_tag, so I could set it as
high as MAX_INT^2;
2) A more efficient 2D -> 1D tag map, so I don't
waste so much of the 1D tag space.
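A third option, which spends a few bytes of payload
instead of tag bits: carry the (i, j) pair in the message
itself and use one fixed tag, demultiplexing on the
receiver. A hypothetical sketch (PAIR_TAG, PAYLOAD_INTS,
and the header-in-buffer layout are my assumptions, not
anything MPI provides):

#include <mpi.h>
#include <string.h>

#define PAIR_TAG 0        /* one fixed tag for all (i, j) messages */
#define PAYLOAD_INTS 1024 /* assumed payload size */

void send_pair(int dest, int i, int j, const int *payload)
{
    int buf[2 + PAYLOAD_INTS];
    buf[0] = i;   /* header: the pair that used to be the tag */
    buf[1] = j;
    memcpy(&buf[2], payload, PAYLOAD_INTS * sizeof(int));
    MPI_Send(buf, 2 + PAYLOAD_INTS, MPI_INT, dest, PAIR_TAG,
             MPI_COMM_WORLD);
}

void recv_pair(int src, int *i, int *j, int *payload)
{
    int buf[2 + PAYLOAD_INTS];
    MPI_Status status;

    MPI_Recv(buf, 2 + PAYLOAD_INTS, MPI_INT, src, PAIR_TAG,
             MPI_COMM_WORLD, &status);
    *i = buf[0];  /* recover the identifiers from the header */
    *j = buf[1];
    memcpy(payload, &buf[2], PAYLOAD_INTS * sizeof(int));
}

This works because MPI guarantees that messages between a
given pair of processes on the same communicator and tag
are non-overtaking, so the header always arrives with its
own payload.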
Any suggestions?
Thanks again
-Lei
On 6/27/04 11:45 AM, "Jeff Squyres" <jsquyres_at_[hidden]> wrote:
>> It seems that this has to be true, but I just want to make sure that MPI
>> has such a scheduling policy (FIFO?) between two processors. The reason
>> for knowing that for sure is so I can avoid using message tags.
>
> You may want to reconsider that. Message tags are your friends -- they
> can really help in terms of development, debugging, and ensuring that you
> don't have difficult-to-diagnose race conditions later. It doesn't cost
> much to post a few non-blocking receives on multiple tags and then use
> MPI_Testany/MPI_Waitany to see if any of them have completed.
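
For concreteness, a minimal sketch of the pattern Jeff
describes; NUM_TAGS, the source rank, and the single-int
buffers are illustrative assumptions:

#include <mpi.h>

#define NUM_TAGS 4

void listen_on_tags(int src)
{
    int bufs[NUM_TAGS];
    MPI_Request reqs[NUM_TAGS];
    MPI_Status status;
    int which, tag;

    /* One pending non-blocking receive per tag of interest. */
    for (tag = 0; tag < NUM_TAGS; tag++)
        MPI_Irecv(&bufs[tag], 1, MPI_INT, src, tag,
                  MPI_COMM_WORLD, &reqs[tag]);

    /* Block until any one completes; 'which' indexes the
       completed request, and status.MPI_TAG says which tag
       matched. MPI_Testany is the non-blocking analogue. */
    MPI_Waitany(NUM_TAGS, reqs, &which, &status);

    /* ... handle bufs[which], then optionally re-post its
       MPI_Irecv to keep listening ... */
}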