
LAM/MPI General User's Mailing List Archives


From: Jeff Squyres (jsquyres_at_[hidden])
Date: 2006-01-28 08:38:24


Rajeev is exactly right -- we're using a third-party package, ROMIO,
for the MPI-2 IO functionality. It has its own functions, types, and
header files. Hence, if anything is going to define MPI_OFFSET_KIND,
it will be ROMIO.

As for the mpiof.h issue: in LAM, as you saw, mpi.h includes mpio.h,
so C users don't have to do anything extra. In mpif.h, we did *not*
include mpiof.h because some Fortran compilers don't accept the
"include" statement. So we left it up to Fortran users to manually
include both mpif.h and mpiof.h (if they want to use ROMIO).
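
Concretely, a Fortran program using MPI-2 IO under LAM would start
along these lines (a minimal fixed-form sketch; the program body is
just a placeholder):

```fortran
      program io_example
      implicit none
      include 'mpif.h'
C     LAM only: ROMIO's Fortran IO bindings must be pulled in by hand
      include 'mpiof.h'
      integer ierr
      call MPI_INIT(ierr)
C     ... MPI_FILE_OPEN / MPI_FILE_READ / etc. go here ...
      call MPI_FINALIZE(ierr)
      end
```

On implementations that fold everything into mpif.h, the second
include has to be dropped or you get redundant declarations.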

Rajeev is also correct that this technically violates the standard --
the standard says that all you should have to do is include mpif.h,
and that should give you everything. But for similar integration
reasons, ROMIO uses different types and function names (the
MPIO_Request type in C, and MPIO_Test/MPIO_Wait in C,
MPIO_TEST/MPIO_WAIT in Fortran). Hence, users wanting MPI-2 IO
functionality already have to step outside the standard by using
these non-standard names.
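
For example, nonblocking ROMIO IO under LAM looks roughly like this
(a sketch, assuming a LAM/ROMIO installation; read_chunk is a
hypothetical helper):

```c
#include <mpi.h>   /* in LAM, mpi.h pulls in mpio.h for you */

void read_chunk(MPI_File fh, void *buf, int count)
{
    MPIO_Request req;   /* ROMIO's own request type, not MPI_Request */
    MPI_Status status;

    MPI_File_iread(fh, buf, count, MPI_BYTE, &req);
    /* ... overlap computation with the read ... */
    MPIO_Wait(&req, &status);   /* MPIO_Wait, not MPI_Wait */
}
```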

We have a measly section on ROMIO in the User's Guide -- I should
really expand it to include the above explanation, as well as the
fact that you need to include mpiof.h in Fortran.

All this being said, we fully integrated ROMIO in Open MPI -- you
never need to include mpio.h or mpiof.h, you don't need the
MPIO_Request type, and you don't have to use MPIO_Test or MPIO_Wait.
You use regular MPI_Requests in C, and can mix I/O, point-to-point,
and generalized requests in the array versions of MPI_TEST and
MPI_WAIT (in both C and Fortran) -- there is no need for MPIO_TEST
or MPIO_WAIT at all.
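
For example (a sketch, assuming Open MPI; overlap_io is a
hypothetical helper):

```c
#include <mpi.h>

void overlap_io(MPI_File fh, void *iobuf, void *msgbuf, int n, int src)
{
    MPI_Request reqs[2];   /* plain MPI_Requests for both kinds */
    MPI_Status  stats[2];

    MPI_File_iread(fh, iobuf, n, MPI_BYTE, &reqs[0]);
    MPI_Irecv(msgbuf, n, MPI_BYTE, src, 0, MPI_COMM_WORLD, &reqs[1]);

    /* one MPI_Waitall completes both the file read and the receive */
    MPI_Waitall(2, reqs, stats);
}
```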

I'm somewhat resistant to fixing this (so that you don't have to
include mpiof.h) in LAM, for a few reasons:

1. We are putting 99% of our effort into Open MPI these days, and the
ROMIO integration is Much Better there (see above).

2. The ROMIO integration that we did with Open MPI is at a much
deeper/more fundamental level than we did with LAM; it is not
possible to port that integration back to LAM.

3. Even if we fix this one thing (no need to include mpiof.h) for our
Last Big LAM Release (7.1.2), there will still be lots of old
installations of LAM out there where you do need to include mpiof.h.

On Jan 27, 2006, at 4:53 PM, David Cronk wrote:

> I am hoping one of the LAM developers will provide some input.
> That is
> why I waited to respond. I looked through the header files for LAM.
> MPI_Offset is defined in the C header file. However, I could not find
> MPI_OFFSET_KIND in any of the Fortran header files.
>
> I have used LAM for MPI I/O in C programs in the past and it has
> worked.
> I have never tried it with Fortran programs, though the fact that I
> don't see MPI_OFFSET_KIND being defined suggests to me that it will
> not
> work with Fortran.
>
> As for what to include: as was mentioned earlier, it SHOULD not be
> necessary to include both. LAM's mpi.h includes mpio.h for you, so
> with C programs including just mpi.h is sufficient. This does not
> appear to hold for the Fortran headers, though someone from the LAM
> development team may correct me there.
>
> So, it looks like the question for the LAM development team is,
> where is
> MPI_OFFSET_KIND defined?
>
> Dave.
>
> Barry A. Croker wrote:
>> David, others,
>>
>> This might be an indication of a larger misunderstanding, that
>> maybe you
>> can help with. The documentation that I have is not entirely clear
>> about some of the included header files needed for MPI I/O.
>>
>> For "normal" MPI programs I "INCLUDE mpif.h". When I started writing
>> the MPI I/O routines on my local machine (using LAM-MPI) I found
>> that I
>> also had to "INCLUDE mpiof.h" for everything to be recognized.
>> However,
>> even with both of these included I still get "Entity
>> mpi_offset_kind has undefined type", so I didn't use it (bad call
>> I guess). When I
>> ported our code to other platforms, I had to modify these include
>> statements. For example, the MPICH-2 distribution gave me redundant
>> declaration errors when I included both, but no complaints when I
>> only
>> "INCLUDE mpif.h". The IBM SP4 showed similar behavior.
>>
>> Is there something incorrect with my installation of MPI? Should
>> I need
>> both headers, or only one? The odd thing is that the only time
>> MPI I/O appears to work correctly is on my local machine, when I
>> include both files.
>>
>> Thanks for everyone's help and patience with this issue.
>>
>> David Cronk wrote:
>>
>>> Barry,
>>
>>> try INTEGER(KIND=MPI_OFFSET_KIND) offset, disp
>>
>>> I am guessing your INTEGER is 32 bits while MPI requires 64 bits for
>>> offset and displacement (files are often larger than 2 gig).
>>
>>> Dave.
>>
>>
>>
>>
>>
>> _______________________________________________
>> This list is archived at http://www.lam-mpi.org/MailArchives/lam/
>
> --
> Dr. David Cronk, Ph.D.        phone: (865) 974-3735
> Research Director             fax: (865) 974-8296
> Innovative Computing Lab      http://www.cs.utk.edu/~cronk
> University of Tennessee, Knoxville
> _______________________________________________
> This list is archived at http://www.lam-mpi.org/MailArchives/lam/

-- 
{+} Jeff Squyres
{+} The Open MPI Project
{+} http://www.open-mpi.org/