Hi,
The idea is to parallelize the program across multiple processors, not just
multiple processes. When you assign several processes to one machine, they
end up executing one after another, so the computation time grows to roughly
N times the single-process time, where N is the number of processes sharing a
processor. In addition, when the value being communicated depends on all
processes (as in a reduction), the communication cannot finish until the last
process reaches it.
The idea is to match the number of processes to the number of processors.
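To make the point concrete, here is a minimal, self-contained MPI timing
sketch (not from the original exchange; the loop size and variable names are
just placeholders). It times a local compute phase and an MPI_Reduce
separately, so you can see that the reduction only completes once the slowest
rank arrives, and that oversubscribing a node stretches the compute phase of
every rank.

/* Minimal sketch: why MPI_Reduce looks slow on an oversubscribed node.
 * Every rank must reach the reduction before it can complete, so the
 * time-sliced (slowest) process sets the pace for all of them. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Local computation: with more processes than processors, these
     * loops are time-sliced on the same CPU, so each rank's compute
     * phase stretches out. */
    double t0 = MPI_Wtime();
    double local = 0.0;
    for (long i = 0; i < 50000000L; i++)
        local += (double)(i % 7);
    double t_compute = MPI_Wtime() - t0;

    /* Global reduction: finishes only after the last rank arrives, so
     * any delay in one rank shows up here as "communication" time. */
    t0 = MPI_Wtime();
    double global = 0.0;
    MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    double t_reduce = MPI_Wtime() - t0;

    printf("rank %d of %d: compute %.3f s, reduce %.3f s\n",
           rank, size, t_compute, t_reduce);
    if (rank == 0)
        printf("global sum = %.0f\n", global);

    MPI_Finalize();
    return 0;
}

If you run this with as many ranks as processors and then again with several
times more ranks on the same node, the compute time should grow roughly with
the oversubscription factor, and the faster ranks will report a larger reduce
time because they sit waiting at the reduction. (The exact mpirun invocation
depends on your LAM boot schema.)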
> Hi,
>
> I assigned multiple processes on one machine instead of several
> machines. In this way, I expect the communication cost will be reduced
> compared with assigning them to several machines. However, the result is
> weird: both computation cost and communication cost increase
> sharply, especially for MPI_Reduce and MPI_Sendrecv. It seems that the
> underlying implementation of lam_mpi doesn't favor multiple processes on
> the same host.
>
> Your help will be greatly appreciated.
>
> thanks
>