
LAM/MPI General User's Mailing List Archives


From: Christian F. Vélez Witrofsky (cfvelez_at_[hidden])
Date: 2005-02-17 19:27:00


Javier & Nelson,

Thanks a lot for the info, I really appreciate it. We will be using
MPITB & Octave on our project in the coming weeks and I'll be sure to
let you know how it went (assuming it all goes well).

Thanks again!
Christian F. Velez Witrofsky
Computer Science, University of Puerto Rico

> ---------- Forwarded message ----------
> From: Javier Fernández Baldomero <javier_at_[hidden]>
> To: lam_at_[hidden]
> Date: Thu, 17 Feb 2005 09:20:12 +0100
> Subject: Re: LAM: Octave and MPITB
>
> Hi Christian, Nelson,
>
> Of course Nelson is right, and all three options in his reply can be exercised using MPITB,
> but here I will concentrate on the "recommended" MPITB setup. MPITB users are
> expected to use the easiest possible filesystem configuration, to avoid getting trapped
> into complex MPI_Comm_spawn_multiple() arglists and config files/info keys.
>
> > Subject: LAM: Octave and MPITB
> > ...
> > Do I need to install Octave on every node in a cluster to use Octave
> > in a parallel program?
>
> The recommended MPITB setup is a "shared $OCTAVE_HOME", see the
> final section in MPITB web page http://atc.ugr.es/javier-bin/mpitb
> _____________________________
> Installing:
> ...
> - A shared OCTAVEHOME is strongly advised. Make sure your octave executable
> has been compiled with DLD support.
> ...
> If not, make sure "octave" is in your search path on each node, as well as LAM libraries in your LD_LIBRARY_PATH.
> ...
> If neither condition is satisfied, you may need to play with your .cshrc, .bashrc,
> run command scripts, or MPI_Comm_spawn_multiple.
> ...
> _____________________________
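The PATH/LD_LIBRARY_PATH conditions the excerpt describes can be sketched as a couple of lines in each node's .bashrc. This is only an illustration with hypothetical install prefixes ($HOME/octave and /usr/local/lam are assumptions, not paths from the original message); substitute your actual Octave and LAM locations:

```shell
# Hypothetical prefixes -- adjust to where octave and LAM actually live on your nodes.
export PATH="$HOME/octave/bin:$PATH"
export LD_LIBRARY_PATH="/usr/local/lam/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"

# Quick sanity check: does a login shell on this node resolve octave?
command -v octave || echo "octave not in PATH on this node"
```

Because remotely spawned processes get a non-interactive shell, these exports must live where that shell actually reads them (.bashrc rather than .bash_profile for bash).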
>
> That is, say your home (but not /usr) is NFS-exported to all nodes in the cluster.
> Then I would recommend that you install octave locally in your account (make sure
> you recompile it with DLD support enabled).
>
> If /usr is also exported and an octave with DLD support is already there, there you go.
> Since you are asking, I guess either each of your cluster nodes has a separate /usr,
> or you have no write access to the common, shared, octave-missing /usr.
> You can still install octave locally in your account.
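Whichever layout you end up with, it is worth verifying that every node resolves octave before trying MPI_Comm_spawn. A hedged sketch using lamexec, which ships with LAM/MPI and runs a non-MPI command on the booted nodes (the node specifier N addresses all nodes; exact option syntax may vary across LAM versions, so check lamexec(1)):

```shell
# Assumes a LAM universe has already been booted with lamboot.
# Print the octave each node resolves, or flag the nodes missing it.
lamexec N sh -c 'command -v octave || echo "octave missing on $(hostname)"'
```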
>
> If your home is not NFS-shared either, or you have no room for octave there,
> you need help from your sysadmin to make octave available from all cluster nodes.
> She might (and probably will) have tools to replicate /usr to the other nodes easily.
> Installing octave on /tmp after each reboot is not an option: it is cumbersome, and
> there are known problems (I think somebody wrote here about that some time ago).
>
> The more elaborate the workaround, the more elaborate the .bashrc setup needed
> to make octave spawn-able when you later use MPI_Comm_spawn from the
> head-node octave session.
>
> Hope you can solve your problem.
>
> -javier
>
>
> ---------- Forwarded message ----------
> From: Nelson Brito <ntbrito_at_[hidden]>
> To: General LAM/MPI mailing list <lam_at_[hidden]>
> Date: Thu, 17 Feb 2005 09:41:33 +0000
> Subject: Re: LAM: Octave and MPITB
> I am sorry that I misunderstood the question and gave a general answer
> about the use of LAM/MPI.
> In fact I don't know either Octave or MPITB, and there's nothing like a
> good explanation... The programs I use here always live on a shared
> filesystem. There are several advantages to doing that, the most
> evident being that when you update the program you only need to do it once.
>
> Regards,
> nelson
>
> Nelson de Brito
> http://www.fc.up.pt/pessoas/ntbrito