On Aug 26, 2004, at 2:38 AM, Michael Gauckler wrote:
> 1) The main feature of MPI_File_{open, write, close} is to control
> access to a file, so that several processes can read/write efficiently
> from/to the same file.
That is one of the rationales behind the MPI_File interface, yes.
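For example (just a sketch -- the filename "shared.dat" and the
4-ints-per-rank layout are made up for illustration), all processes can
open the same file collectively and each rank can write its own
disjoint region of it:

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_File fh;
        int rank, i;
        int buf[4];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Fill a small per-rank buffer with recognizable values */
        for (i = 0; i < 4; ++i)
            buf[i] = rank * 4 + i;

        /* All processes in MPI_COMM_WORLD open the same file */
        MPI_File_open(MPI_COMM_WORLD, "shared.dat",
                      MPI_MODE_CREATE | MPI_MODE_WRONLY,
                      MPI_INFO_NULL, &fh);

        /* Each rank writes to its own disjoint offset in the file */
        MPI_File_write_at(fh, (MPI_Offset)(rank * 4 * sizeof(int)),
                          buf, 4, MPI_INT, MPI_STATUS_IGNORE);

        MPI_File_close(&fh);
        MPI_Finalize();
        return 0;
    }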
> 2) When reading from/writing to a file from multiple processes,
> MPI_File_{open, write, close} needs a file system (NFS or similar)
> which is mounted on all nodes involved in writing/reading. This means
> that MPI_File_{open, write, close} does not do any transportation of
> the data from one node to another (the transportation is done
> implicitly by the mounted file system).
That decision is up to the implementation. LAM uses the ROMIO
implementation of the MPI_File interface, and it does support this mode
of operation (NFS available to all the processes). I *believe* that it
also supports (or is capable of supporting) systems where the
filesystem is not available to all processes -- and therefore it has to
transport data to other processes/nodes before it can be read/written.
I think that this is merely a function of the back-end implementation
of the I/O device layer in ROMIO. But I'd have to check the code to be
sure.
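If memory serves, ROMIO also lets you steer which back-end I/O device
it uses by prefixing the filename (e.g., "ufs:" or "nfs:"); something
like the following, where the path itself is purely illustrative:

    MPI_File fh;

    /* The "nfs:" prefix is a ROMIO convention for explicitly selecting
       its NFS back-end driver instead of auto-detecting one */
    MPI_File_open(MPI_COMM_WORLD, "nfs:/home/shared/output.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY,
                  MPI_INFO_NULL, &fh);
    MPI_File_close(&fh);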
> 3) If I write to the local file system, the files written are valid
> and can be read again, but only by a process running on the node
> whose local file system I wrote to.
I believe that the stipulation is that if you write it with the
MPI_File interface, you have to read it with the MPI_File interface.
This is to allow MPI I/O implementations to do funky / optimal things
when writing in parallel -- things that may not be obvious to "cat" or
"more" (or other serial unix commands), for example. That being said,
I'm not sure offhand how ROMIO stores its files -- it *may* simply end
up as a linear file that can easily be read by entities outside of MPI.
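As a sketch of what "read it back with the MPI_File interface" looks
like (same made-up filename and per-rank layout as the write example
above):

    MPI_File fh;
    int buf[4];
    int rank;

    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Re-open the file and have each rank read back the region
       that it wrote earlier */
    MPI_File_open(MPI_COMM_WORLD, "shared.dat", MPI_MODE_RDONLY,
                  MPI_INFO_NULL, &fh);
    MPI_File_read_at(fh, (MPI_Offset)(rank * 4 * sizeof(int)),
                     buf, 4, MPI_INT, MPI_STATUS_IGNORE);
    MPI_File_close(&fh);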
--
{+} Jeff Squyres
{+} jsquyres_at_[hidden]
{+} http://www.lam-mpi.org/