Hi
Sorry, I should have looked at this more carefully before replying.
Hope I get it right this time :-)
The problem is in this loop (below), where you fill the columns[][] array.
columns[j][k] does not (always) start at [0][0]: k starts at offset, which,
for the second task, equals rows (2). I think you want columns to always
start at [j][0] but B to start at [j][k]. As it stands, columns[0][0] and
columns[0][1] are left uninitialized, and for the second task (elements = 4,
rows = 2, offset = 2) the loop writes columns[j][2] and columns[j][3], so you
might (will?) even be going out of bounds of the columns array with the last
two elements.
for (j = 0; j < elements; j++)
    for (k = offset; k < offset + rows; k++)
        columns[j][k] = B[j][k];
MPI_Send(&columns, count, MPI_INT, i, 0, MPI_COMM_WORLD);
// For the next task the data to be sent will start where this one ended
offset = offset + rows;
My thoughts:

for (j = 0; j < elements; j++)
    for (k = offset; k < offset + rows; k++)
        columns[j][k - offset] = B[j][k];
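To make that concrete, here is a minimal sketch of the copy-and-send step as I
would write it. The helper send_columns, its signature and the use of C99
variable-length arrays are just my illustration (the variable names mirror
your snippet), not your exact code:

#include <mpi.h>

/* Sketch only: copy columns [offset, offset + rows) of B into a local
 * buffer that is indexed from 0, then send that buffer to task dest. */
static void send_columns(int elements, int rows, int offset, int dest,
                         int B[elements][elements])
{
    int columns[elements][rows];       /* elements x rows, indexed from 0 */
    int count = rows * elements;
    int j, k;

    for (j = 0; j < elements; j++)
        for (k = offset; k < offset + rows; k++)
            columns[j][k - offset] = B[j][k];  /* shift k back into 0..rows-1 */

    /* The data starts at columns[0][0]; no extra offset is needed here
     * because the shift was already applied in the copy above. */
    MPI_Send(&columns[0][0], count, MPI_INT, dest, 0, MPI_COMM_WORLD);
}

The matching receive you already have (an elements x rows buffer and MPI_Recv
with the same count) should then line up element for element.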
Regards,
Lee
On 7/7/06, Raúl González <revenant81_at_[hidden]> wrote:
>
> Thanks for your answer Elie
>
> That would make sense if I were sending B by itself, because it's an 8x8
> array. But I'm copying its values into a new array of 4x2 called columns,
> beginning from 0,0, so I think what I want to send is a pointer to the initial
> address of columns and not columns + offset, because that would cause even
> more garbage, wouldn't it?
>
> Regards
>
>
>
> -----------------------------------------------
> > Date: Fri, 7 Jul 2006 13:59:10 +0300
> > From: elie.choueiri_at_[hidden]
> > To: lam_at_[hidden]
> > Subject: Re: LAM: MPI_Send, MPI_Recv and arrays
> >
> > Hi
> > I think the problem is the way you build the columns [][] array and
> send..
> > When offset starts at 0 -> you call MPI_Send (& columns, count, ...)
> > But when offset is not 0 you want to MPI_Send (& (columns+offset),
> count, ... )
> > That would explain why you are receiving crap -> always sending columns
> [0] ... columns [count - 1] and they're not initialized.
> > Regards,
> > Lee
> > for (j = 0; j < elements; j++)
> >     for (k = offset; k < offset + rows; k++)
> >         columns[j][k] = B[j][k];
> > MPI_Send(&columns, count, MPI_INT, i, 0, MPI_COMM_WORLD);
> > // For the next task the data to be sent will start where this one ended
> > offset = offset + rows;
> > On 7/7/06, Raúl González <revenant81_at_[hidden] > wrote:
> > I'm trying to do some simple programs with MPI and C to get familiarized with them.
> > I was trying to multiply a couple of arrays (A and B), sending part of the arrays to
> > a couple of tasks (half of the rows of A and half of the columns of B to one task and
> > the rest to the other), but for some reason I get some weird values for the columns
> > of B I sent to the second task.
> > I'm using MPI_Send and MPI_Recv and all seems to be alright. This is the relevant
> > code from the master node:
> > printf("\nSending %d rows from A and %d colums from B to node %d\n",
> rows, rows, i);
> > printf("Sending the offset (%d) to task %d\n", offset, i);
> > MPI_Send(&offset, 1, MPI_INT, i, 0,
> MPI_COMM_WORLD);
> > printf("Sending the number of rows (%d) to task
> %d\n", rows, i);
> > MPI_Send(&rows, 1, MPI_INT, i, 0,
> MPI_COMM_WORLD);
> > // We send the corresponding rows of A
> > count = rows * elements;
> > printf("Sending the rows from A to task %d\n",
> i);
> > MPI_Send(&A[offset][0], count, MPI_INT, i, 0,
> MPI_COMM_WORLD);
> > // and columns from B
> > printf("Sending the columns from B to task
> %d\n", i);
> > int columns[elements][rows];
> > for (j = 0; j < elements; j++)
> > for (k = offset; k < offset + rows; k++)
> > columns[j][k] = B[j][k];
> > MPI_Send(&columns, count, MPI_INT, i, 0, MPI_COMM_WORLD);
> > // For the next task the data to be sent will start where this one ended
> > offset = offset + rows;
> > When I print the content of columns here, before sending it, all seems to
> > be ok, so I guess I get the fake values because I don't use MPI_Send or
> > MPI_Recv as they are supposed to be used, or because of something I did wrong
> > when I received it. This is the code for the worker nodes:
> > // Receive the info
> > MPI_Recv(&offset, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
> > printf("Task %d received %d as offset\n", id, offset);
> > MPI_Recv(&rows, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
> > printf("Task %d received %d as rows to be sent\n", id, rows);
> > printf("Task %d got %d as number of elements per row with broadcasting\n", id, elements);
> > count = rows * elements;
> > MPI_Recv(&A, count, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
> > printf("Task %d received %d elements from matrix A:\n", id, count);
> > for (i = 0; i < rows; i++) {
> >     for (j = 0; j < elements; j++)
> >         printf("%d ", A[i][j]);
> >     printf("\n");
> > }
> > int columns[elements][rows];
> > MPI_Recv(&columns, count, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
> > printf("Task %d received %d elements from matrix B:\n", id, count);
> > I am not really a regular C programmer, so there could be something really
> > wrong with my code, but I don't understand why I am getting crap on my
> > MPI_Recv calls.
> > This is an example of the execution of the program:
> > A:
> > 62 27 88 57
> > 21 94 60 17
> > 91 28 14 3
> > 81 50 53 71
> > B:
> > 34 34 76 43
> > 13 0 17 57
> > 96 36 23 58
> > 26 87 17 15
> > The first worker node got two rows from A:
> > 62 27 88 57
> > 21 94 60 17
> > and two columns from B:
> > 34 34
> > 13 0
> > 96 36
> > 26 87
> > And the second one got the other two rows from A:
> > 91 28 14 3
> > 81 50 53 71
> > and two columns from B, but with a couple of fake values which are the
> > ones that ruin it all:
> > 134516401 134519392
> > 76 43
> > 17 57
> > 23 58
> > If someone could throw some light upon this, I'd be grateful.
>
> _______________________________________________
> This list is archived at http://www.lam-mpi.org/MailArchives/lam/
>