Hi,
According to the FAQ entry at
http://www.lam-mpi.org/faq/category7.php3#question6
LAM does not provide asynchronous message-passing progress unless
special hardware (communication co-processors) is present.
I have written a small program that tests the asynchronous progress
of an MPI_Issend. I am not aware of any special hardware in my
system (an IBM X345 with a Gigabit NIC and dual Xeons), so I expected
no asynchronous message-passing progress at all. In fact, however,
I do observe asynchronous progress for messages that are not too
large when using rpi=tcp: in the program below, "message received"
is printed before "sender calls MPI_Wait", so the transfer must have
completed while the sender was sleeping, i.e. outside of any MPI call.
Is there an explanation for this? Is the FAQ entry still correct?

Here is the program (hopefully self-explanatory):
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <mpi.h>

#define BUFSIZE 100

int main (int argc, char *argv[]) {
    int myrank, nprocs;
    char *buf;
    MPI_Request req;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);

    buf = (char*) malloc(BUFSIZE * sizeof(char));

    MPI_Barrier(MPI_COMM_WORLD);

    if (myrank == 0) {
        /* Start a synchronous send, then sleep without calling into MPI.
           If the transfer completes during the sleep, progress must have
           happened asynchronously. */
        MPI_Issend(buf, BUFSIZE, MPI_CHAR, nprocs-1, 99, MPI_COMM_WORLD, &req);
        sleep(5);
        printf("sender calls MPI_Wait\n");
        MPI_Wait(&req, &status);
    }
    else if (myrank == nprocs-1) {
        /* The receive can only return once the rendezvous with the
           sender has completed. */
        MPI_Recv(buf, BUFSIZE, MPI_CHAR, 0, 99, MPI_COMM_WORLD, &status);
        printf("message received\n");
    }

    free(buf);
    MPI_Finalize();
    return EXIT_SUCCESS;
}
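
For comparison, here is a variant of the same test that I would expect
to complete even without asynchronous progress (just an untested sketch;
the 100 ms polling interval and loop bound are arbitrary): instead of
sleeping blindly, the sender polls MPI_Test, and each call into the MPI
library gives it a chance to advance the rendezvous.

/* Sketch only: sender polls MPI_Test instead of sleeping for 5 s,
 * so progress can be made inside the MPI calls themselves. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <mpi.h>

#define BUFSIZE 100

int main (int argc, char *argv[]) {
    int myrank, nprocs, flag = 0, i;
    char *buf;
    MPI_Request req;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);

    buf = (char*) malloc(BUFSIZE * sizeof(char));
    MPI_Barrier(MPI_COMM_WORLD);

    if (myrank == 0) {
        MPI_Issend(buf, BUFSIZE, MPI_CHAR, nprocs-1, 99, MPI_COMM_WORLD, &req);
        /* Poll for up to ~5 seconds, driving progress with MPI_Test. */
        for (i = 0; i < 50 && !flag; i++) {
            MPI_Test(&req, &flag, &status);
            usleep(100000);            /* 100 ms between polls */
        }
        printf("send %s during the polling loop\n",
               flag ? "completed" : "did not complete");
        if (!flag)
            MPI_Wait(&req, &status);
    }
    else if (myrank == nprocs-1) {
        MPI_Recv(buf, BUFSIZE, MPI_CHAR, 0, 99, MPI_COMM_WORLD, &status);
        printf("message received\n");
    }

    free(buf);
    MPI_Finalize();
    return EXIT_SUCCESS;
}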
leonardo:~/projects/mpi/src$ laminfo
LAM/MPI: 7.1.1
Prefix: /usr
Architecture: i686-pc-linux-gnu
Configured by: root
Configured on: Wed Apr 13 17:29:57 CEST 2005
Configure host: hal
Memory manager: ptmalloc2
C bindings: yes
C++ bindings: yes
Fortran bindings: yes
C compiler: gcc
C++ compiler: g++
Fortran compiler: g77
Fortran symbols: double_underscore
C profiling: yes
C++ profiling: yes
Fortran profiling: yes
C++ exceptions: no
Thread support: yes
ROMIO support: yes
IMPI support: no
Debug support: no
Purify clean: no
SSI boot: globus (API v1.1, Module v0.6)
SSI boot: rsh (API v1.1, Module v1.1)
SSI boot: slurm (API v1.1, Module v1.0)
SSI coll: lam_basic (API v1.1, Module v7.1)
SSI coll: shmem (API v1.1, Module v1.0)
SSI coll: smp (API v1.1, Module v1.2)
SSI rpi: crtcp (API v1.1, Module v1.1)
SSI rpi: lamd (API v1.0, Module v7.1)
SSI rpi: sysv (API v1.0, Module v7.1)
SSI rpi: tcp (API v1.0, Module v7.1)
SSI rpi: usysv (API v1.0, Module v7.1)
SSI cr: self (API v1.0, Module v1.0)
--
Stephan Mertens @ http://www.uni-magdeburg.de/mertens
Supercomputing in Magdeburg @ http://tina.nat.uni-magdeburg.de