
LAM/MPI General User's Mailing List Archives


From: Robin Humble (rjh_at_[hidden])
Date: 2005-07-15 09:17:13


On Fri, Jul 15, 2005 at 08:49:41AM +0200, Sebastian wrote:
>we have a cluster with 5 nodes; each node has dual Opterons on board.
>Currently we use --with-rpi=sysv as the RPI module, but because we are
>interested in using SGE we have to change the RPI module to --with-rpi=tcp,
>because otherwise the shared memory segments and the semaphore arrays don't
>get cleaned up with the "tight integration" method.

can't you run an 'epilogue' script (PBS terminology, I'm afraid) to clean
up after every job? It'd just be a few ipcrm commands - easy if it's 1
job per node, harder if MPI jobs share nodes. Something like the sketch
below.
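
(untested sketch - assumes Linux ipcs/ipcrm output format, and $JOB_OWNER
is just a stand-in for however SGE/PBS hands the job's username to the
epilogue; older ipcrm's want "ipcrm shm <id>" rather than "-m")

  #!/bin/sh
  # remove SysV shared memory segments and semaphore arrays still
  # owned by the job's user when the job finishes
  u=${JOB_OWNER:-$USER}   # $JOB_OWNER is hypothetical - see above
  for id in `ipcs -m | awk -v u="$u" '$3 == u { print $2 }'`; do
      ipcrm -m "$id"
  done
  for id in `ipcs -s | awk -v u="$u" '$3 == u { print $2 }'`; do
      ipcrm -s "$id"
  done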

>Now I'd like to know whether the speed of these two modules differs, and
>by how much?

google NetPIPE, compile the MPI version against a recent LAM, then run with
  mpirun h -c 2 -ssi rpi sysv NPmpi -o np.sysv
  mpirun h -c 2 -ssi rpi usysv NPmpi -o np.usysv
  mpirun h -c 2 -ssi rpi tcp NPmpi -o np.tcp
plot them up, see what it looks like :-) (a gnuplot one-liner is below)
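
if it helps, something like this puts all three on one graph (assuming
the usual NetPIPE output format - message size in column 1, throughput
in Mbps in column 2):

  echo 'set logscale x; plot "np.sysv" u 1:2 w lp t "sysv", "np.usysv" u 1:2 w lp t "usysv", "np.tcp" u 1:2 w lp t "tcp"' | gnuplot -persist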

>Does the tcp RPI module add much more overhead than the sysv module?

probably. run 'top' at the same time as the above and compare the CPU
usage between the rpi's - e.g. log it while a test runs:
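
(minimal sketch, assuming a procps top with batch mode)

  top -b -d 1 > top.tcp.log &    # sample CPU usage every second
  toppid=$!
  mpirun h -c 2 -ssi rpi tcp NPmpi -o np.tcp
  kill $toppid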

cheers,
robin