
LAM/MPI General User's Mailing List Archives


From: Axel Bellivier (abel.fds_at_[hidden])
Date: 2009-04-06 10:54:54


I solved the problem.

There was a conflict between an old, improperly removed installation of MPICH2 and lamboot.

I rebooted the computer, reinstalled lamboot, and now everything is OK.

2009/4/3 Brian W. Barrett <brbarret_at_[hidden]>

> Can you run a simple hello, world application (like the ones that ship with
> LAM)? This looks like either a problem with your test code or a
> misunderstanding with how you are supposed to run the executable -
> unfortunately both of those issues are out of my area of expertise.
>
> Brian
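
A minimal hello-world test of the kind Brian suggests, sketched here in C; it is not the exact example shipped with LAM, but it uses only standard MPI-1 calls that LAM/MPI 7.1.4 provides. Compiled with LAM's mpicc and launched with mpirun -np 4, it should print four lines, each with a distinct rank out of 4:

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, size, len;
        char host[MPI_MAX_PROCESSOR_NAME];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */
        MPI_Get_processor_name(host, &len);     /* host this process runs on */

        printf("Hello, world: process %d of %d on %s\n", rank, size, host);

        MPI_Finalize();
        return 0;
    }

If every process instead reports itself as process 0 of 1, the mpirun used to launch the job and the MPI library the executable was linked against are not the same implementation, which is consistent with the MPICH2/LAM conflict Axel describes above.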
>
>
> On Fri, 27 Mar 2009, Axel Bellivier wrote:
>
>> All MPI runs give this same message.
>> Single-process runs don't have any problems.
>>
>> Here is more information:
>>
>> My command: (lamd is running)
>> mpirun -np 4 fds5_mpi_intel test1-Parallel.fds < /dev/null > log.run 2>&1 &
>>
>> My system:
>> Dell R900 with 4 quad core Intel(R) Xeon(R) CPU X7350 @ 2.93GHz
>> Red Hat Enterprise Linux Server release 5.2 (Tikanga) (64 bit)
>> Linux version 2.6.18-92.1.18.el5 (brewbuilder_at_[hidden]) (gcc version 4.1.2 20071124 (Red Hat 4.1.2-42)) #1 SMP Wed Nov 5 09:00:19 EST 2008
>>
>>
>> LAM/MPI: 7.1.4
>> Prefix: /usr/local
>> Architecture: x86_64-unknown-linux-gnu
>> Configured by: root
>> Configured on: Mon Jan 12 14:31:06 CET 2009
>> Configure host: srvcal-r900-1
>> Memory manager: ptmalloc2
>> C bindings: yes
>> C++ bindings: yes
>> Fortran bindings: yes
>> C compiler: icc
>> C++ compiler: icpc
>> Fortran compiler: ifort
>> Fortran symbols: underscore
>> C profiling: yes
>> C++ profiling: yes
>> Fortran profiling: yes
>> C++ exceptions: no
>> Thread support: yes
>> ROMIO support: yes
>> IMPI support: no
>> Debug support: no
>> Purify clean: no
>> SSI boot: globus (API v1.1, Module v0.6)
>> SSI boot: rsh (API v1.1, Module v1.1)
>> SSI boot: slurm (API v1.1, Module v1.0)
>> SSI coll: lam_basic (API v1.1, Module v7.1)
>> SSI coll: shmem (API v1.1, Module v1.0)
>> SSI coll: smp (API v1.1, Module v1.2)
>> SSI rpi: crtcp (API v1.1, Module v1.1)
>> SSI rpi: lamd (API v1.0, Module v7.1)
>> SSI rpi: sysv (API v1.0, Module v7.1)
>> SSI rpi: tcp (API v1.0, Module v7.1)
>> SSI rpi: usysv (API v1.0, Module v7.1)
>> SSI cr: self (API v1.0, Module v1.0)
>>
>>
>> 2009/3/26 Brian W. Barrett <brbarret_at_[hidden]>
>> On Thu, 26 Mar 2009, Axel Bellivier wrote:
>>
>> I am posting because I realized that my MPI jobs take just as
>> long as single-process ones!
>>
>> The only strange thing I found in the logs is:
>>
>> Process 0 of 0 is running on srvcal-r900-1
>> Mesh 1 is assigned to Process 0
>> Mesh 2 is assigned to Process 0
>> Mesh 3 is assigned to Process 0
>> Mesh 4 is assigned to Process 0
>> Process 0 of 0 is running on srvcal-r900-1
>>
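
To illustrate why those lines look wrong, here is a toy round-robin mesh assignment sketched in C; it is only hypothetical, not FDS's actual decomposition code, and the value NMESH = 4 is assumed from the four meshes in the log. With a working 4-process launch, each mesh would be reported by a different process; if each process instead initializes as a singleton (world size 1), every mesh is reported as assigned to Process 0, exactly as in the log above:

    #include <stdio.h>
    #include <mpi.h>

    #define NMESH 4   /* assumed: the test case appears to have four meshes */

    int main(int argc, char **argv)
    {
        int rank, size, mesh;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* toy round-robin mapping: mesh m belongs to rank (m - 1) % size */
        for (mesh = 1; mesh <= NMESH; mesh++) {
            if ((mesh - 1) % size == rank)
                printf("Mesh %d is assigned to Process %d\n", mesh, rank);
        }

        MPI_Finalize();
        return 0;
    }
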
>>
>> From your output, it's hard to tell what's supposed to happen and
>> what actually happened. If you run a simple hello world example,
>> does it run properly? Is your log from many runs, or a single run?
>> How did you start your parallel application?
>>
>> Brian
>>
>> --
>> Brian Barrett
>> LAM/MPI Developer
>> Make today a LAM/MPI day!
>> _______________________________________________
>> This list is archived at http://www.lam-mpi.org/MailArchives/lam/
>>
> --
> Brian Barrett
> LAM/MPI Developer
> Make today a LAM/MPI day!
>
> _______________________________________________
> This list is archived at http://www.lam-mpi.org/MailArchives/lam/
>