Hi,
MPI_INIT seems to complete more or less instantly in my case (on both
master and slave). I have found that the following will work:
bpsh 0 mpirun myapp
If I set schedule=yes in my nodes file (after applying the patch from
CVS), the head node will also take part in the calculations. So it
seems my problem only exists when the master process is executed on the
head node.
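For reference, the two workarounds amount to something like the following
(node names and the exact boot-schema syntax are illustrative; they depend
on your LAM version and bproc setup):

```
# Workaround 1: launch mpirun itself on compute node 0 via bpsh,
# so the master process does not run on the head node:
bpsh 0 mpirun myapp

# Workaround 2: in the nodes file (boot schema), mark the head node
# as schedulable so it participates in the computation
# (requires the patch from CVS; entry format is illustrative):
#   master  schedule=yes
#   n0
#   n1
```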
Mike
Jeff Squyres wrote:
>On Fri, 27 Jun 2003, Michael Madore wrote:
>
>
>>My apologies. It turns out I was running the copy of LAM that I had
>>patched to not set the NT_WASTE flag for bproc clusters (my own hack,
>>not your fix from CVS). If I remove that hack, the cpi example runs
>>correctly.
>>
>
>Excellent.
>
>
>>The Mandelbrot example, however, still gets stuck with the slave process
>>running on the compute nodes. If I modify the schema to only run the
>>slave process on node 0 (master) then the program runs successfully.
>>
>
>Weird. I see that when I run an MPI job that spans the head node and the
>client nodes, MPI_INIT seems to take a *loooong* time (a minute or more).
>It eventually completes and then runs to completion. Does yours do this?
>
>As a first guess: perhaps this has something to do with the connectivity
>between the head node and the compute nodes...?
>
>