LAM/MPI General User's Mailing List Archives

From: Lei_at_[hidden]
Date: 2005-08-18 17:42:43


Hi Jeff:

Thanks a lot for your help! I reinstalled LAM and the MPI singleton
works now.
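
In case it helps anyone else embedding MPI in a MEX file, the
initialization now looks roughly like this (simplified; apart from the
standard mexFunction gateway, everything else is just my placeholder):

#include <mpi.h>
#include "mex.h"

/* MEX entry point. There is no real argc/argv here, so pass NULL to
 * MPI_Init, and only initialize once per Matlab session. */
void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[])
{
    int initialized = 0;

    MPI_Initialized(&initialized);
    if (!initialized)
        MPI_Init(NULL, NULL);

    /* ... MPI work goes here ... */
}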

Now I have another question: is it possible to start a heterogeneous
LAM universe and use published names to link an MPI singleton
with some real MPI processes? We need to run the MPI singleton
on a Windows PC via MEX from a Windows version of Matlab,
and have the singleton either spawn the real MPI processes on
a Linux cluster or talk to them by looking up published names.
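
To make the second option concrete, here is a minimal sketch of the
MPI-2 name-publishing pattern I have in mind (the service name
"matlab_link" and the two helper functions are just placeholders of
mine; both sides assume MPI_Init has already been called):

#include <mpi.h>

/* Server side: the real MPI job on the cluster opens a port,
 * publishes it under a service name, and waits for the singleton. */
void serve(void)
{
    char port[MPI_MAX_PORT_NAME];
    MPI_Comm client;

    MPI_Open_port(MPI_INFO_NULL, port);
    MPI_Publish_name("matlab_link", MPI_INFO_NULL, port);
    MPI_Comm_accept(port, MPI_INFO_NULL, 0, MPI_COMM_WORLD, &client);
    /* ... exchange data with the singleton over 'client' ... */
    MPI_Comm_disconnect(&client);
    MPI_Unpublish_name("matlab_link", MPI_INFO_NULL, port);
    MPI_Close_port(port);
}

/* Client side: the MPI singleton started from Matlab/MEX looks up
 * the published name and connects to the server job. */
void attach(void)
{
    char port[MPI_MAX_PORT_NAME];
    MPI_Comm server;

    MPI_Lookup_name("matlab_link", MPI_INFO_NULL, port);
    MPI_Comm_connect(port, MPI_INFO_NULL, 0, MPI_COMM_SELF, &server);
    /* ... exchange data with the cluster job over 'server' ... */
    MPI_Comm_disconnect(&server);
}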

I know LAM/MPI runs on heterogeneous nodes, but I just wanted
to hear a "yes" from you now that you understand our need
to do parallel computation in the context of Matlab. I am also
hoping that, with LAM, all the MPI code I develop assuming a
homogeneous environment will automatically work on heterogeneous
nodes.
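
Concretely, the kind of code I mean is ordinary message passing with
the buffers described by MPI datatypes (the function names here are
just placeholders):

#include <mpi.h>

/* Describing the buffer as MPI_DOUBLE (rather than raw MPI_BYTE)
 * lets the library convert data representations between
 * heterogeneous nodes when necessary. */
void send_results(double *buf, int n, int dest, MPI_Comm comm)
{
    MPI_Send(buf, n, MPI_DOUBLE, dest, 0, comm);
}

void recv_results(double *buf, int n, int src, MPI_Comm comm)
{
    MPI_Status status;
    MPI_Recv(buf, n, MPI_DOUBLE, src, 0, comm, &status);
}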

If not, we may have to fall back to low-level socket code, which I
would like to avoid if possible.

Thanks again!

-Lei

Jeff Squyres wrote:

>On Aug 11, 2005, at 10:49 PM, Lei_at_ICS wrote:
>
>>But a more serious problem arises. MPI_Init() crashes Matlab.
>>This is the C code:
>>int error=0, mp_init=0;
>>int argc=2, num_proc, my_rank;
>>char **argv;
>>
>>MPI_Initialized(&mp_init);
>>if(mp_init == 1)
>>    printf("***** MPI_Init called.\n");
>>else {
>>    printf("***** MPI_Init NOT called. Calling ...\n");
>>    MPI_Init(&argc, &argv);
>>    printf("***** Now MPI_Init called.\n");
>>}
>>
>>"***** MPI_Init NOT called. Calling ..." is printed out
>>before crashing.
>>
>>Is it because I faked argc and argv (this C code is called
>>from another C code, which is called by Matlab via MEX,
>>so I don't have any real argc and argv)?
>>
>>
>
>Don't try to fake them out -- the value of argv is not initialized, for
>example. Just go ahead and use MPI_Init(NULL, NULL) -- that should
>work fine (LAM doesn't use those values, anyway).
>
>>Or is it because of the following (quoted from this group):
>>
>>-----
>>3.4.3 Dynamic/Embedded Environments
>>
>>In LAM/MPI version 7.1.1, some RPI modules may utilize an additional
>>
>>
>
>Yes, it is likely because of this.
>
>Try compiling LAM/MPI without thread support and without a memory
>manager:
>
>./configure --without-threads --with-memory-manager=none ...
>
>This should allow you to use the wrapper compiler, too (just slightly
>neater / more transparent to you, rather than using -showme and
>manually stripping out -pthread).