LAM/MPI General User's Mailing List Archives

From: Neil Storer (Neil.Storer_at_[hidden])
Date: 2004-03-30 03:43:24


Hi,

Have you tried using:

       mpirun -ssi rpi sysv ...
or
       mpirun -ssi rpi usysv ...

i.e.
       use "shared memory" rather than the default TCP/IP.
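
For example, assuming a LAM/MPI installation where the sysv/usysv RPI
modules were built, the full invocations might look like the sketch
below (the program name "my_app" and the process count are placeholders):

```shell
# Select the shared-memory RPI at run time via the SSI run-time
# parameter system. The "sysv" module uses System V shared memory
# with SysV semaphores for same-node messages (TCP between nodes):
mpirun -ssi rpi sysv -np 4 ./my_app

# The "usysv" module also uses SysV shared memory but synchronizes
# with spin locks instead of semaphores, which can be faster when
# processes do not oversubscribe the CPUs:
mpirun -ssi rpi usysv -np 4 ./my_app
```

With several ranks on one host, either module should cut the
MPI_Reduce/MPI_Sendrecv cost noticeably compared with the TCP RPI.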

Regards
        Neil

Ming Wu wrote:
> Hi,
>
> I assigned multiple processes to one machine instead of spreading them across several machines. I expected this to reduce the communication cost. However, the result is strange: both the computation cost and the communication cost increase sharply, especially for MPI_Reduce and MPI_Sendrecv. It seems that the underlying LAM/MPI implementation does not favor multiple processes on the same host.
>
> Your help will be greatly appreciated.
>
> thanks
>
> _______________________________________________
> This list is archived at http://www.lam-mpi.org/MailArchives/lam/

-- 
+-----------------+---------------------------------+------------------+
| Neil Storer     |    Head: Systems S/W Section    | Operations Dept. |
+-----------------+---------------------------------+------------------+
| ECMWF,          | email: neil.storer_at_[hidden]    |    //=\\  //=\\  |
| Shinfield Park, | Tel:   (+44 118) 9499353        |   //   \\//   \\ |
| Reading,        |        (+44 118) 9499000 x 2353 | ECMWF            |
| Berkshire,      | Fax:   (+44 118) 9869450        | ECMWF            |
| RG2 9AX,        |                                 |   \\   //\\   // |
| UK              | URL:   http://www.ecmwf.int/    |    \\=//  \\=//  |
+--+--------------+---------------------------------+----------------+-+
    | ECMWF is the European Centre for Medium-Range Weather Forecasts |
    +-----------------------------------------------------------------+