LAM/MPI General User's Mailing List Archives


From: etienne gondet (etienne.gondet_at_[hidden])
Date: 2004-02-02 10:53:54


Jeff Squyres wrote:

>On Thu, 29 Jan 2004, etienne gondet wrote:
>
>
>
>> I compile with pgi and link with lam a small fortran program with
>>a basic local array in a subroutine bosse. When I use the pgi flag -mp
>>the subroutine bosse either fails or never finishes.
>>
>>
>
>What does the "-mp" flag do?
>
>

    It enables the interpretation of OpenMP directives; it also changes the
allocation mode for local variables.
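
To make that concrete, here is a minimal sketch (the routine name and sizes
are made up, and exactly where the compiler places the array is
implementation-dependent): under -mp, directives such as !$OMP PARALLEL DO
are compiled, and a local array listed as PRIVATE gets one copy per thread,
usually on that thread's stack, which is where a stack limit will bite.

subroutine stack_demo()
implicit none
integer, parameter :: N = 1024*1024   ! ~4 MB of 4-byte reals per thread copy
real, dimension(N) :: work            ! fixed-size local array
integer :: i

! Under -mp the directive below is honored; each thread works on its own
! PRIVATE copy of work, typically allocated on that thread's stack.
!$OMP PARALLEL DO PRIVATE(work)
do i = 1, N
   work(i) = real(i)
enddo
!$OMP END PARALLEL DO
END SUBROUTINE stack_demo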

>>Are there any stack limitations with LAM?
>>
>>
>
>I'm not sure what you mean -- LAM should not be corrupting your stack in
>any way. However, many compilers and/or OS's have inherent limits as to
>how much data you can have on the stack before they will fail. An easy
>way to test this is to try to run a comparable serial program (i.e.,
>without MPI_INIT/MPI_FINALIZE) and try to have the same sized array on the
>stack as your parallel program. If it seg faults upon execution, it's
>possible that your array is too large to be on the stack.
>

   I tried with a serial program without MPI_INIT and MPI_FINALIZE, and
with pgf90 -mp there is a limitation between 250 and 260 megabytes.

px-107:/home/egondet/f90/memoire $ pgf90 -mp test_local.f90

With mpif77 -mp on the MPI program I already sent, even a 4 MB
allocation on the stack is not possible.

> What I don't understand is why, if I use an automatic array
>(a local array whose dimension is passed through an argument) in subroutine
>bosse, there aren't any problems.
>
>
>
>I'm not much of a Fortran programmer, but if I had to guess, I'd say that
>the compiler makes the array get allocated on the heap instead of the
>stack.
>
>
You are probably right; that is probably why, when I ask for too much memory
for automatic arrays, I get this message:

0: ALLOCATE: 1610612 bytes requested; not enough memory
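
If that guess is right, the two failure modes come from two different kinds
of local array. A minimal sketch (the routine names are made up, and whether
an automatic array really goes on the heap is compiler-dependent):

program array_demo
implicit none
call bosse_fixed()
call bosse_auto(250*1024*1024/4)
END program array_demo

SUBROUTINE bosse_fixed()
implicit none
integer, parameter :: NMAX = 250*1024*1024/4
real, dimension(NMAX) :: C    ! fixed-size local array: candidate for the stack
C(1) = 1.0
print *, 'fixed array ok:', C(1)
END SUBROUTINE bosse_fixed

SUBROUTINE bosse_auto(n)
implicit none
integer, intent(in) :: n
real, dimension(n) :: C       ! automatic array: many compilers take it from the
                              ! heap, hence an "ALLOCATE: ... not enough memory"
                              ! message instead of a stack fault when n is huge
C(1) = 1.0
print *, 'automatic array ok:', C(1)
END SUBROUTINE bosse_auto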

>> Note the problem is solved by not using -mp, but this would be a
>>severe limitation for hybrid parallel algorithms with both MPI and OpenMP
>>using LAM.
>>
>>
>
>If -mp changes the compile-time or run-time characteristics of your
>application, you might need to compile LAM with that flag as well.
>However, if this is the flag that enables OpenMP, then I'm not sure what
>the Right course of action is here (I've never played with mixing MPI and
>OpenMP). LAM won't have any OpenMP compile-time directives, of course, so
>theoretically there shouldn't be any harm in compiling LAM with -mp. But
>this should only be necessary if there is bootstrapping that the compiler
>adds that all compilation units need in order for OpenMP to work.
>
>
>
I am not sure I get the point; I will try with LAM compiled with -mp.
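
As a reference point for that test, the hybrid pattern under discussion looks
roughly like this. This is only a minimal sketch: the program and variable
names are made up, and the classic mpif.h Fortran interface is assumed.

program hybrid_demo
implicit none
include 'mpif.h'
integer, parameter :: N = 1024*1024
integer :: ierr, rank, i
real, dimension(N) :: work      ! local array shared by the threads of one rank

call MPI_INIT(ierr)
call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)

! With -mp the loop below is split across the OpenMP threads of this rank,
! while MPI handles communication between ranks.
!$OMP PARALLEL DO
do i = 1, N
   work(i) = real(i + rank)
enddo
!$OMP END PARALLEL DO

print *, 'rank', rank, 'done, work(N) =', work(N)
call MPI_FINALIZE(ierr)
END program hybrid_demo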

    Thanks for the advice.

! Serial test program: subroutine loc declares a fixed-size local array C of
! about 250 MB; with pgf90 -mp this array ends up on the stack, so the run
! fails once the stack limit is exceeded.
program vidememoire
implicit none
call loc()
END program vidememoire

SUBROUTINE loc()
implicit none
integer,parameter :: NMAX=250*1024*1024/4   ! ~250 MB of 4-byte reals
!integer,parameter :: NMAX=1*1024*256
integer :: i
real :: sum, R, VAL_CONVERGENCE
real,dimension(NMAX) :: C                   ! fixed-size local array

R = NMAX
VAL_CONVERGENCE=(R*(R+1))/2                 ! expected value of 1+2+...+NMAX

call sleep(1)

do i = 1, NMAX
   C( i ) = i
enddo
sum = 0.0
do i = 1, NMAX
   sum = sum + C( i )
enddo
if( sum .ne. VAL_CONVERGENCE ) then
   print *, "error in summation",NMAX, sum,VAL_CONVERGENCE
else
   print *, "ok in summation",NMAX, sum,VAL_CONVERGENCE
endif

END SUBROUTINE loc