This really doesn't seem like a LAM-specific situation, so I am
reluctant to continue in this forum. The Usenet group comp.parallel.mpi
is a good source for general MPI questions.
The only other thing I notice right now is that I don't see where TAG is
defined. Make sure TAG has the same value for both the sender and the receiver.
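For illustration, a minimal sketch with both fixes from this thread applied
(TAG defined once so sender and receiver agree, and the message addressed to
rank 1, which is the rank that posts the receive). The tag value 99 and the
assumption of at least two processes are illustrative, not from the original post:

#include <stdio.h>
#include <mpi.h>

#define TAG 99   /* arbitrary value; must be the same on both sides */

int main(int argc, char* argv[]) {
    MPI_Request req;
    MPI_Status status;
    int rank, num = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        num = 123;
        printf("Sending %d\n", num);
        /* send to rank 1, not rank 0 */
        MPI_Send(&num, 1, MPI_INT, 1, TAG, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Irecv(&num, 1, MPI_INT, MPI_ANY_SOURCE, TAG, MPI_COMM_WORLD, &req);
        MPI_Wait(&req, &status);   /* blocks here until the message arrives */
        printf("Received: %d\n", num);
    }

    MPI_Finalize();
    return 0;
}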
Dave.
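Regarding the "preemptive" check asked about in the quoted message below:
MPI_Iprobe is the usual way to test whether a matching message has arrived
without actually receiving it. A minimal sketch, assuming TAG is defined as
in the snippet above (the helper name try_receive is just illustrative):

#include <stdio.h>
#include <mpi.h>

#define TAG 99   /* must match the tag the sender uses */

/* Returns 1 and fills *num if a matching message was waiting, 0 otherwise. */
static int try_receive(int *num)
{
    int flag = 0;
    MPI_Status status;

    /* MPI_Iprobe only checks for a pending message; it does not receive it. */
    MPI_Iprobe(MPI_ANY_SOURCE, TAG, MPI_COMM_WORLD, &flag, &status);
    if (!flag)
        return 0;   /* nothing has arrived yet; probe again later */

    /* A matching message is pending, so this MPI_Recv completes immediately. */
    MPI_Recv(num, 1, MPI_INT, status.MPI_SOURCE, status.MPI_TAG,
             MPI_COMM_WORLD, &status);
    return 1;
}

The MPI_Irecv/MPI_Test polling that the quoted code attempts should also
complete once the message is actually sent to rank 1 with a matching tag.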
Marcelo Fukushima wrote:
>ok, thanks, I didn't realise that... but it still isn't working...
>On 7/4/05, David Cronk <cronk_at_[hidden]> wrote:
>
>
>>You are sending from rank 0 to rank 0. You need to send to rank 1.
>>
>>Dave.
>>
>>Marcelo Fukushima wrote:
>>
>>
>>
>>>hello guys!!! another noobish question...
>>>
>>>I'm trying the simplest of all non-blocking routines and it simply locks up...
>>>
>>>int main(int argc, char* argv[]) {
>>> MPI_Request req;
>>> MPI_Status status;
>>> int size, rank;
>>> int num, flag;
>>> int i;
>>> MPI_Init(&argc, &argv);
>>> MPI_Comm_rank(MPI_COMM_WORLD, &rank);
>>> MPI_Comm_size(MPI_COMM_WORLD, &size);
>>>
>>> printf ("starting waiting....\n");
>>>
>>> if (rank == 1) {
>>> //MPI_Irecv (&num, 1, MPI_INT, MPI_ANY_SOURCE, TAG,MPI_COMM_WORLD, &req);
>>> //flag = 0;
>>> //while (!flag){
>>> MPI_Irecv (&num, 1, MPI_INT, MPI_ANY_SOURCE, TAG,MPI_COMM_WORLD, &req);
>>> MPI_Wait (&req, &status);
>>> printf ("Received: %d\n", num);
>>> }
>>> else if (rank == 0) {
>>> num= 123;
>>> scanf ("%d", &num);
>>> printf ("Sending %d\n", num);
>>> MPI_Send (&num, 1, MPI_INT, 0,TAG, MPI_COMM_WORLD);
>>>
>>> }
>>> printf ("%d is saying bye...\n", rank);
>>> MPI_Finalize();
>>> return 0;
>>>}
>>>---------------------------
>>>Bottom line: I'm posting the receive before the sender has sent the
>>>message and it gets stuck... I also tried polling the posted receive with
>>>MPI_Test and that locks up as well (the request never becomes a completed
>>>one)... So, in more general words, what I want to do is a "preemptive"
>>>check for whether any message has been sent to this node... is there a
>>>way? I didn't find one in the tutorials.... Thanks in advance
>>>
>>_______________________________________________
>>This list is archived at http://www.lam-mpi.org/MailArchives/lam/