
User Setup for Using LAM/MPI


2. PRELIMINARY SETUP

    2.1 Environment setup

    A script has been provided that sets up your environment for you. It sets certain environment variables and resets your path (removing any other LAM directories from it) to match the architecture and compiler that you are using. Place the following line in your $HOME/.cshrc file:
      source /afs/nd.edu/user37/ccse/mpi/lam_cshrc
    
    If you are accessing this site from outside of Notre Dame, an annotated version of this script is available that you should be able to use as a basis for making a cshrc-like script for your own site.

    Once you have sourced this script, you can use any of the LAM commands. See the section on "Compiler", below.
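
    To confirm that the script has taken effect in your current shell, you can check that the LAM commands are now on your path. For example, lamboot and mpirun (two standard LAM commands) should resolve to paths under the LAM installation:
      which lamboot
      which mpirun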

    NOTE: The source line shown above must be added before the line in your .cshrc that reads:

      if ($?USER == 0 || $?prompt == 0) exit
    
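    In other words, the relevant portion of your $HOME/.cshrc should look something like this (a minimal sketch; the rest of your .cshrc is omitted). The order matters: the shells that LAM starts on remote hosts are non-interactive, so if the guard line came first, those shells would exit before the LAM environment was set up.
      # Set up the LAM/MPI environment; must come before the early exit below
      source /afs/nd.edu/user37/ccse/mpi/lam_cshrc

      # Exit early for non-interactive (e.g., remote rsh) shells
      if ($?USER == 0 || $?prompt == 0) exit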

    2.2 Compiler

    Because different compilers tend to generate different linkage symbols for the same routines/variables (particularly C++ compilers), we have LAM compiled for several different compilers (both C and C++) on every architecture.

    As such, the lam_cshrc script will examine the CC and CXX environment variables to determine which compiler you are using. If the variables do not exist, lam_cshrc defaults to the native compiler for that architecture.

    NOTE: If you switch compilers for a given program, you must set the CC and CXX environment variables, and re-source the lam_cshrc script so that your environment can be re-set.
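
    For example, to switch a program to the GNU compilers (gcc and g++ here are illustrative; substitute whichever compilers your site provides):
      setenv CC gcc                                  # C compiler to use
      setenv CXX g++                                 # C++ compiler to use
      source /afs/nd.edu/user37/ccse/mpi/lam_cshrc   # re-set the environment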

    2.3 Hostfile

    In your working directory (where your MPI binaries will reside), create a hostfile that lists the machines to be included in an MPI session. Here is an example of a hostfile:
      node1.cluster.example.com
      node2.cluster.example.com
      node3.cluster.example.com
      node4.cluster.example.com
      node5.cluster.example.com
    
    IMPORTANT: Make sure that each host listed in the hostfile has a corresponding entry in your .rhosts file (where username below is your login name on that host). For example, for the above hostfile, your .rhosts file should look something like this:
      node1.cluster.example.com username
      node2.cluster.example.com username
      node3.cluster.example.com username
      node4.cluster.example.com username
      node5.cluster.example.com username
    

    NOTE: Some implementations of rsh are very picky about the format of the information in your .rhosts file. In particular, ensure that there is no leading white space before the machine name on each line in the file.
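
    Before booting LAM, it is worth verifying that each host in the hostfile accepts rsh connections without prompting for a password. A quick sanity check (a sketch; it assumes your hostfile is named hostfile and is in the current directory):
      # Each host should print its name back without a password prompt
      foreach h (`cat hostfile`)
        rsh $h hostname
      end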

    NOTE: The setup described above assumes a homogeneous environment; that is, all of the hosts are SPARC machines running Solaris.

