I think the disconnect here is that Anthony's method translates the
current process's rank from some arbitrary communicator to MPI_COMM_WORLD,
while Kyle is asking how to translate an arbitrary rank in an arbitrary
communicator to the corresponding rank in MPI_COMM_WORLD (right, Kyle?),
or to find out that such a process is not in MPI_COMM_WORLD.
If that's right, then Kyle's way is probably the easiest way. All the
function calls listed here are local (i.e., they're simply data lookups),
so there's no huge penalty for any of them. MPI_GROUP_TRANSLATE_RANKS in
LAM is a simple O(n) lookup for each rank in the array. MPI_COMM_GROUP
and MPI_GROUP_FREE are increments and decrements of reference counts
(i.e., calling COMM_GROUP and then GROUP_FREE on that group will not
cause a malloc/copy or free). So the whole set of operations is linear,
but pretty cheap.
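To make that concrete, here is a minimal sketch of the sequence (the
wrapper name and return convention are just illustrative, not from
Kyle's actual code):

#include <mpi.h>

/* Translate "rank" in "comm" to its rank in MPI_COMM_WORLD.
 * Returns the world rank, or MPI_UNDEFINED if that process is not
 * in MPI_COMM_WORLD.  Error checking omitted for brevity. */
int rank_in_world(MPI_Comm comm, int rank)
{
    MPI_Group cur, world;
    int world_rank;

    /* Local operations -- in LAM these just bump reference counts */
    PMPI_Comm_group(comm, &cur);
    PMPI_Comm_group(MPI_COMM_WORLD, &world);

    /* O(n) lookup per rank in the array (n = 1 here); yields
       MPI_UNDEFINED if the process is not in MPI_COMM_WORLD */
    PMPI_Group_translate_ranks(cur, 1, &rank, world, &world_rank);

    /* Local decrements of the reference counts -- no free/copy */
    PMPI_Group_free(&cur);
    PMPI_Group_free(&world);

    return world_rank;
}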
I think Anthony's point is that his approach simply involves fewer
function calls (but it may not do what you're looking for).
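And if you end up translating ranks on every profiled call, the
translation table idea further down in the thread is where
Group_translate_ranks' batching pays off: translate the whole
communicator once and just index into the result afterward. A rough
sketch (the function name and memory management are illustrative):

#include <mpi.h>
#include <stdlib.h>

/* Build a table mapping every rank in "comm" to its rank in
 * MPI_COMM_WORLD.  A real profiling layer would cache one of these
 * per communicator (e.g., via attribute caching); this just shows
 * building the table.  Caller frees the returned array. */
int *build_world_rank_table(MPI_Comm comm)
{
    MPI_Group cur, world;
    int size, i;
    int *local_ranks, *world_ranks;

    PMPI_Comm_size(comm, &size);
    PMPI_Comm_group(comm, &cur);
    PMPI_Comm_group(MPI_COMM_WORLD, &world);

    local_ranks = malloc(size * sizeof(int));
    world_ranks = malloc(size * sizeof(int));
    for (i = 0; i < size; ++i) {
        local_ranks[i] = i;
    }

    /* One O(n) translation for the whole group instead of one call
       per MPI_Send destination */
    PMPI_Group_translate_ranks(cur, size, local_ranks, world, world_ranks);

    PMPI_Group_free(&cur);
    PMPI_Group_free(&world);
    free(local_ranks);

    /* world_ranks[dest] is dest's rank in MPI_COMM_WORLD, or
       MPI_UNDEFINED if it isn't there */
    return world_ranks;
}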
On 8/3/06 6:25 PM, "Anthony Chan" <chan_at_[hidden]> wrote:
>
>
> On Thu, 3 Aug 2006, Kyle Wheeler wrote:
>
>>>> MPI_Group world, cur;
>>>> int globalrank;
>>>> PMPI_Comm_group(comm, &cur);
>>>> PMPI_Comm_group(MPI_COMM_WORLD, &world);
>>>> PMPI_Group_translate_ranks(cur, 1, &rank, world, &globalrank);
>>>
>>> The simplest way is to call MPI_Comm_rank twice.
>>>
>>> PMPI_Comm_rank( MPI_COMM_WORLD, &world_rank );
>>> PMPI_Comm_rank( comm, &comm_rank );
>>>
>>> PMPI_Group_translate_ranks() becomes more efficient if you have a group
>>> of ranks to be translated (i.e. save you a Comm_group() call).
>>
>> Really? Umm... I'm not sure that does what I want (or maybe I'm just
>> not understanding what PMPI_Comm_rank() does). I *have* an arbitrary
>> rank (the destination argument of an MPI_Send, for example) and I want
>> to, inside the profiling layer, convert that destination rank into a
>> rank in MPI_COMM_WORLD. I don't see where it's taking my arbitrary
>> rank as input somewhere.
>
> My 2 comm_rank calls relate the rank of the local process in 2 different
> communicators. For translating a remote process's rank, you probably need
> Group_translate_ranks.
>
>>
>> If PMPI_Group_translate_ranks() is inefficient... perhaps I should
>> build up my own translation tables?
>
> I would think that Group_translate_ranks is less efficient than
> Comm_rank because it involves a "group" translation (that statement
> depends on the MPI implementation of communicators and groups). If you
> need to do rank translation often in profiling, building your own
> translation table may not be a bad idea.
>
> A.Chan
> _______________________________________________
> This list is archived at http://www.lam-mpi.org/MailArchives/lam/
--
Jeff Squyres
Server Virtualization Business Unit
Cisco Systems