Some more information:
I've also tried the code on a an IBM SP4 using GPFS and it also fails.
We've tried to make sure that our NFS volumes are setup correctly for
ROMIO (i.e. attribute caching off, etc).
I've written a small test code to see if it helps, but I still have
similar behavior. I've included the code below. If anyone has time,
please try to run it and see what you get. Also, if you see that I
am doing something incorrect, let me know! On my workstation, I get
a file called "test.dat" which is np*4bytes in size, where np is the
number of processes. On the NFS filesystem it creates files of the
order of 400 GB. I also made a small c-routine to parse the large
data files and it appears that it's mostly filled with zeros.
Thanks again for everyone's help. I would like to figure this out.
BARRY A. CROKER, CAPT., USAF
Research Aerospace Engineer
Air Vehicles Directorate (AFRL/VAAC)
Air Force Research Laboratory - Wright-Patterson AFB
(937) 255-7876
PROGRAM test
IMPLICIT NONE
INCLUDE 'mpif.h'
INCLUDE 'mpiof.h'
INTEGER :: i, ierror, offset, disp
INTEGER :: fmode,finfo,fhandle,fsize
INTEGER :: mpi_rank, mpi_size, mpi_status(MPI_STATUS_SIZE)
CALL MPI_INIT(ierror)
CALL MPI_COMM_RANK(MPI_COMM_WORLD, mpi_rank, ierror)
CALL MPI_COMM_SIZE(MPI_COMM_WORLD, mpi_size, ierror)
i = 100 + mpi_rank
PRINT*, 'process ',mpi_rank, ' i is', i
! Read-write file access mode
fmode = IOR(MPI_MODE_RDWR,MPI_MODE_CREATE)
CALL MPI_INFO_CREATE(finfo, ierror)
CALL MPI_FILE_OPEN
(MPI_COMM_WORLD,'test.dat',fmode,finfo,fhandle,ierror)
disp = 0
offset = mpi_rank
CALL MPI_FILE_SET_VIEW(fhandle, disp,
MPI_INTEGER,MPI_INTEGER,'native',finfo,ierror)
CALL MPI_FILE_WRITE_AT(fhandle,offset,i,
1,MPI_INTEGER,mpi_status,ierror)
CALL MPI_BARRIER(MPI_COMM_WORLD,ierror)
CALL MPI_FILE_GET_SIZE(fhandle,fsize,ierror)
IF(mpi_rank == 0) PRINT*, 'File is ', fsize, 'bytes';
CALL MPI_FILE_CLOSE(fhandle, ierror)
CALL MPI_FINALIZE(ierror)
END PROGRAM test
|