The Portable Extensible Toolkit for Scientific Computation (PETSc) package is
a suite of data structures and routines for the
scalable (parallel) solution of scientific applications modeled
by partial differential equations. It is available from:
http://www.mcs.anl.gov/petsc/
At the time of this writing, the version of PETSc available is 2.0.24,
patch level 9 (be sure to download and install the patchfile).
The installation instructions (available from that web site)
describe how to build PETSc, so we won't repeat them here (especially
since they might change in future versions). These instructions
pertain only to making LAM work correctly with PETSc.
- You need to manually edit the file
bmake/${PETSC_ARCH}/base.site and set the following
values:
# In terms of LAM options, both MPI_LIB and
# MPI_INCLUDE can be blank; LAM's wrapper
# compilers take care of all of this for
# you.  (LAM 6.2b users who use "hcc"
# instead of "mpicc" will need to put
# "-lmpi" in the "*LINKER" macros, below.)
# PETSc itself, however, can use the -D
# flag shown below.
MPI_LIB =
# For PETSc version 2.0.24 and below
MPI_INCLUDE = -DHAVE_MPI_COMM_F2C
# For PETSc versions above 2.0.24
MPI_INCLUDE = -DPETSC_HAVE_MPI_COMM_F2C
# Use the "mpirun.lam" that is provided with
# PETSc. It is a short shell script that
# invokes LAM's mpirun, but does some argument
# parsing first.
MPIRUN = ${PETSC_DIR}/bin/mpirun.lam
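As a sanity check that MPI_LIB really can stay blank, LAM's wrapper compilers accept a -showme option that prints the underlying compiler command (including the MPI include and library flags the wrapper adds) without actually compiling anything. This is an illustrative session fragment; the exact output depends on your installation:

```shell
# Print the command line the wrapper would run, without compiling.
# The flags shown in the output are exactly what the wrapper adds
# for you, which is why MPI_LIB can be left empty above.
mpicc -showme
```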
- Additionally, you need to manually edit the file
bmake/${PETSC_ARCH}/base_variables
and set the following values:
# You can also use "hcc" here, but LAM 6.2
# users will need to specify "-lmpi" in the
# "MPI_LIBS" macro.  You may need to
# specify the full path to mpicc if it isn't
# already in your path.  The same goes for
# "hf77" and "hcp".  Also adjust the linker
# flags as necessary for your underlying compiler.
C_CC = mpicc
C_FC = mpif77
C_CLINKER = mpicc ${COPTFLAGS} ..whatever flags you need..
C_FLINKER = mpif77 ${COPTFLAGS} ..whatever flags you need..
C_CCV = mpicc ..whatever flags you need..
CXX_CC = mpiCC
CXX_FC = mpif77
CXX_CLINKER = mpiCC ${COPTFLAGS} ..whatever flags you need..
CXX_FLINKER = mpiCC ${COPTFLAGS} ..whatever flags you need..
CXX_CCV = mpiCC ..whatever flags you need..
- Finally, you will need to edit
bin/mpirun.lam. You
may need to set the full path to mpirun in the last line
of this script. You can probably leave all the other options as they
are. (The -D option is somewhat useless when paired with
the -s option, and it will not work as expected if the
-s option is removed and the PETSc executables are not in
your $PATH, so it has been removed from the example below.)
If you have a common filesystem between your nodes, you can remove
the "-s n0" - it will save a few seconds on each program
invocation. We also recommend that you add a "lamclean"
after the mpirun line, to ensure that no messages are
left around on the network after each example application finishes.
# Include the full path to mpirun and lamclean
# if they are not already in your $PATH.
/path/to/mpirun -w -c $np -s n0 $proname -- $options
/path/to/lamclean
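To make the $np, $proname, and $options variables in the fragment above concrete, here is a hypothetical sketch of the kind of argument handling such a wrapper might do. The real mpirun.lam ships with PETSc; the assumption here is that it accepts an MPICH-style "-np N" flag and translates it into LAM's "-c N" form. The logic is wrapped in a function only so it is easy to exercise:

```shell
#!/bin/sh
# Hypothetical sketch of an mpirun.lam-style wrapper (assumption:
# the real PETSc script parses MPICH-style "-np N" arguments; the
# variable names $np, $proname, and $options match the fragment
# shown above).
run_lam() {
    np=1
    proname=
    options=
    while [ $# -gt 0 ]; do
        case "$1" in
            -np) np="$2"; shift 2 ;;          # number of processes
            *)   if [ -z "$proname" ]; then
                     proname="$1"             # first bare word: the program
                 else
                     options="$options $1"    # the rest: program options
                 fi
                 shift ;;
        esac
    done
    # -w: wait for all processes to exit; -c $np: run $np copies;
    # -s n0: load the executable from node n0's filesystem.
    mpirun -w -c "$np" -s n0 "$proname" -- $options
    lamclean
}
```

A call such as `run_lam -np 4 ./ex1 -ksp_monitor` would then expand to `mpirun -w -c 4 -s n0 ./ex1 -- -ksp_monitor` followed by `lamclean`.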
The remaining values are fairly obvious and irrelevant to LAM
(i.e., if a value is not mentioned here, LAM requires nothing specific
for it); you can set whatever optimization level you want, etc.
If you follow the rest of the instructions for building, PETSc
will build correctly and use LAM as its MPI communication layer.
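Once everything is configured and built, an end-to-end check might look like the session below. This is only a sketch: the hostfile name "lamhosts", the BOPT setting, the example directory, and the -np style of invoking mpirun.lam are all assumptions to adjust for your PETSc version and site.

```shell
# Illustrative session -- adjust paths, hostfile, and example
# names for your installation.  "lamhosts" is an assumed hostfile.
lamboot -v lamhosts                     # boot LAM on the listed nodes
cd $PETSC_DIR/src/sles/examples/tutorials
make BOPT=g ex1                         # build a tutorial example (debug build)
$PETSC_DIR/bin/mpirun.lam -np 2 ex1     # PETSc's wrapper invokes LAM's mpirun
wipe -v lamhosts                        # take down the LAM daemons when done
```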