It is not always possible to allocate that much memory on the stack --
which means that your application is crashing before it even starts.
For example, build a non-MPI application with the same declarations (all
those double arrays with large sizes) and run it. You'll see that as the
sizes increase, you'll eventually start segfaulting before main() is ever
reached. To be clear: this is an operating system issue (the stack has a
fixed, fairly small size limit), not an MPI issue. MPI is simply reporting
that your application has crashed.
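For example, a tiny non-MPI test along these lines (the sizes here are
just illustrative) will typically die before it can print anything once
the local arrays exceed your stack limit, which is often only around 8 MB
by default:

    #include <stdio.h>

    #define IGG 903
    #define JGG 1233

    int main(void)
    {
        /* ~8.5 MB of automatic (stack) storage -- likely over the limit */
        double z[IGG][JGG];

        z[0][0] = 0.0;
        printf("you will never see this if the stack limit is exceeded\n");
        return 0;
    }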
You should probably convert all of those stack allocations to calls to
malloc() -- you can allocate arbitrarily large amounts of memory off the
heap without a problem (until your system runs out of memory, of course
;-) ). Be aware that malloc'ing 2D arrays is a little tricky -- you need
to be sure to set up the pointers correctly.
This exact question has come up on this list before; you might want to
search through the archives and/or do some googling around the web to see
how to properly malloc 2D arrays in C.
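As a rough sketch, here is one common approach: allocate a single
contiguous block for the data plus an array of row pointers, so the
familiar z[i][j] indexing keeps working. (The helper names alloc2d and
free2d are just made up for illustration.)

    #include <stdio.h>
    #include <stdlib.h>

    /* Allocate a rows x cols array of double on the heap. */
    double **alloc2d(size_t rows, size_t cols)
    {
        size_t i;
        double *data   = malloc(rows * cols * sizeof(double));
        double **array = malloc(rows * sizeof(double *));

        if (data == NULL || array == NULL) {
            free(data);
            free(array);
            return NULL;
        }
        for (i = 0; i < rows; ++i) {
            array[i] = data + i * cols;   /* point each row into the block */
        }
        return array;
    }

    void free2d(double **array)
    {
        if (array != NULL) {
            free(array[0]);   /* the contiguous data block */
            free(array);      /* the row pointers */
        }
    }

    int main(void)
    {
        double **z = alloc2d(903, 1233);  /* igg x jgg */

        if (z == NULL) {
            fprintf(stderr, "out of memory\n");
            return 1;
        }
        z[0][0] = 1.0;        /* use exactly like the old stack array */
        free2d(z);
        return 0;
    }

A nice side effect of keeping the data in one contiguous block is that you
can still pass &z[0][0] (or z[0]) with a count of rows*cols to MPI send and
receive calls, just as you could with the original declarations.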
> -----Original Message-----
> From: lam-bounces_at_[hidden]
> [mailto:lam-bounces_at_[hidden]] On Behalf Of K Ganeshamoorthy
> Sent: Friday, April 28, 2006 8:18 AM
> To: lam_at_[hidden]
> Subject: LAM: LAM 7.1.1 MPI 2
>
> Dear Sir/Madam,
> I am a research student in Grid Computing. I have written a parallel
> code using LAM 7.1.1 MPI 2 for the Tsunami model: absorbing boundary
> conditions, fudge of problem with shallow sea, using long wave theory.
> It is basically based on a partial differential equation. I have been
> using some constants (ig=303, jg=313, igg=903 and jgg=1233) to declare
> some arrays in my code, as follows:
> double z[igg][jgg];
> double m[ig][jg], n[ig][jg];
> double h[ig][jg], r1[ig][jg], r2[ig][jg], r3[ig][jg], r4[ig][jg],
>        r5[ig][jg];
> double r6[jg], c1[ig], c2[jg], c3[ig], c4[jg], v1[jg], v2[jg], v3[jg],
>        v4[jg], v5[jg];
>
> I have run the code successfully, with correct output, when the values
> for those constants are small. If I assign the above-mentioned values,
> I have a problem!
>
> #define ig 303
> #define jg 313
> #define igg 603 //903
> #define jgg 900 //1233
> OR
> #define ig 150 //303
> #define jg 150 //313
> #define igg 903
> #define jgg 1233
> For both of the above situations, the code runs successfully with
> correct output.
>
> But when I tried to run with the following required values:
> #define ig 303
> #define jg 313
> #define igg 903
> #define jgg 1233
> I got the following error message:
>
> "It seems that [at least] one of the processes that was started with
> mpirun did not invoke MPI_INIT before quitting (it is possible that
> more than one process did not invoke MPI_INIT -- mpirun was only
> notified of the first one, which was on node n0)."
>
> "mpirun can *only* be used with MPI programs (i.e., programs that
> invoke MPI_INIT and MPI_FINALIZE). You can use the "lamexec" program
> to run non-MPI programs over the lambooted nodes."
>
> Could you please give me some idea of how to solve this problem? I hope
> that I will be able to correct my problem through your guidance.
>
> Thanking you.
> Yours truly,
> Ganeshamoorthy
>
> _______________________________________________
> This list is archived at http://www.lam-mpi.org/MailArchives/lam/
>