The problem may be that LAM defines type_darray and type_subarray as
first-class datatypes (and rightly so), whereas the ROMIO code assumes they
are constructed out of MPI-1 datatypes. (If this is not true, then I don't
know what the problem is.) This is a deficiency in ROMIO, which will
hopefully be fixed sometime in the future. In the meantime, your code
should work with MPICH.
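
One possible workaround in the meantime, if you want to stay on LAM, is to
avoid the subarray constructor entirely and build equivalent datatypes from
MPI-1 constructors, which the flattening code in ROMIO does understand. A
rough, untested sketch using the variables from your program: describe the
local block with MPI_Type_vector, pass its starting offset as the
displacement argument of MPI_File_set_view rather than encoding it in the
filetype, and handle the ghosted memory layout by offsetting the buffer
pointer:

  MPI_Datatype filevec, memvec;
  MPI_Offset disp;

  /* file side: lsizes[0] rows of lsizes[1] floats, stride of one global row */
  MPI_Type_vector(lsizes[0], lsizes[1], gsizes[1], MPI_FLOAT, &filevec);
  MPI_Type_commit(&filevec);
  /* byte offset of this process's first element in the file */
  disp = (MPI_Offset)(coords[0]*lsizes[0]*gsizes[1] + coords[1]*lsizes[1])
         * sizeof(float);
  MPI_File_set_view(fh, disp, MPI_FLOAT, filevec, "native", MPI_INFO_NULL);

  /* memory side: same block, stride of one allocated (ghosted) row */
  MPI_Type_vector(lsizes[0], lsizes[1], memsizes[1], MPI_FLOAT, &memvec);
  MPI_Type_commit(&memvec);
  /* start from the first non-ghost element, i.e. element [1][1] */
  MPI_File_write_all(fh, local_array + memsizes[1] + 1, 1, memvec, &status);
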
Rajeev
> -----Original Message-----
> From: jeremy archuleta [mailto:archuleta_at_[hidden]]
> Sent: Thursday, July 24, 2003 6:02 PM
> To: lam_at_[hidden]
> Cc: romio-maint_at_[hidden]
> Subject: Problem writing subarray
>
>
>
> I am trying to run the attached code, which can also be found on
> page 77 of
> "Using MPI-2: Advanced Features of the Message Passing Interface."
>
> Essentially, it is a parallel write using subarrays and ghost cells. The
> problem seems to be with using user-defined datatypes in MPI_File_set_view
> and MPI_File_write_all.
>
> Here is the error I get with LAM 6.5.6 (Red Hat PC), 7.0 (SunOS UltraSparc),
> and 7.0 (MacOS PowerBook Ti):
>
> Error: Unsupported datatype passed to ADIOI_Count_contiguous_blocks
>
> Because I believe this is associated with I/O, this email is also being sent
> to the ROMIO maintainers.
>
> Thanks in advance for your help.
>
> -j
>
>
> Source Code (it's short):
>
> #include "mpi.h"
> int main( int argc, char *argv[] )
> {
> int i,j,k;
> int gsizes[2], distribs[2], dargs[2], psizes[2], rank, size, m, n;
> int lsizes[2], dims[2], periods[2], coords[2], start_indices[2];
> int memsizes[2];
> MPI_Datatype filetype, memtype;
> MPI_Comm comm;
> int local_array_size, num_local_rows, num_local_cols;
> int row_procs, col_procs, row_rank, col_rank;
> MPI_File fh;
> float *local_array;
> MPI_Status status;
>
> MPI_Init( &argc, &argv );
>
> /* ... */
> /* Jeremy Added */
> m = 8; n = 9;
> /* end Jeremy Added */
>
> /* This code is particular to a 2 x 3 process decomposition */
> MPI_Comm_size( MPI_COMM_WORLD, &size );
> if (size != 6) {
> printf( "Communicator size must be 6\n" );
> MPI_Abort( MPI_COMM_WORLD, 1 );
> }
>
> /* See comments on block distribution */
> row_procs = 2;
> col_procs = 3;
> /* position of this process in the 2 x 3 grid; ranks are laid out
> row-major, matching the Cartesian coordinates computed below */
> MPI_Comm_rank( MPI_COMM_WORLD, &rank );
> row_rank = rank / col_procs;
> col_rank = rank % col_procs;
> num_local_rows = (m + row_procs - 1) / row_procs;
> /* adjust for last row */
> if (row_rank == row_procs-1)
> num_local_rows = m - (row_procs-1) * num_local_rows;
> num_local_cols = (n + col_procs - 1) / col_procs;
> /* adjust for last column */
> if (col_rank == col_procs-1)
> num_local_cols = n - (col_procs-1) * num_local_cols;
>
> /* allocate the local array including a one-cell ghost border on each
> side, matching the memsizes layout described by memtype below */
> local_array = (float *)malloc( (num_local_rows + 2) * (num_local_cols + 2) *
> sizeof(float) );
>
> /* ... set elements of local_array ... */
>
> /* Jeremy Added */
> MPI_Comm_rank(MPI_COMM_WORLD,&k);
> for (i = 0; i < num_local_rows; i++) {
> for (j = 0; j < num_local_cols; j++) {
> /* fill only the interior (non-ghost) cells with rank+1 */
> local_array[(i+1)*(num_local_cols+2) + (j+1)] = k + 1.0;
> }
> }
> /* end Jeremy Added */
>
> gsizes[0] = m; gsizes[1] = n;
> /* no. of rows and columns in global array */
> psizes[0] = 2; psizes[1] = 3;
> /* no. of processes in vertical and horizontal dimensions of process grid */
> lsizes[0] = m/psizes[0]; /* no. of rows in local array */
> lsizes[1] = n/psizes[1]; /* no. of columns in local array */
> dims[0] = 2; dims[1] = 3;
> periods[0] = periods[1] = 1;
> MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, 0, &comm);
> MPI_Comm_rank(comm, &rank);
> MPI_Cart_coords(comm, rank, 2, coords);
> /* global indices of the first element of the local array */
> start_indices[0] = coords[0] * lsizes[0];
> start_indices[1] = coords[1] * lsizes[1];
> MPI_Type_create_subarray(2, gsizes, lsizes, start_indices,
> MPI_ORDER_C, MPI_FLOAT, &filetype);
> MPI_Type_commit(&filetype);
> MPI_File_open(MPI_COMM_WORLD, "datafile",
> MPI_MODE_CREATE | MPI_MODE_WRONLY,
> MPI_INFO_NULL, &fh);
> MPI_File_set_view(fh, 0, MPI_FLOAT, filetype, "native",
> MPI_INFO_NULL);
> /* create a derived datatype that describes the layout of the local
> array in the memory buffer that includes the ghost area. This is
> another subarray datatype! */
> memsizes[0] = lsizes[0] + 2; /* no. of rows in allocated array */
> memsizes[1] = lsizes[1] + 2; /* no. of columns in allocated array */
> start_indices[0] = start_indices[1] = 1;
> /* indices of the first element of the local array in the
> allocated array */
> MPI_Type_create_subarray(2, memsizes, lsizes, start_indices,
> MPI_ORDER_C, MPI_FLOAT, &memtype);
> MPI_Type_commit(&memtype);
> MPI_File_write_all(fh, local_array, 1, memtype, &status);
> MPI_File_close(&fh);
>
> /* ... */
> MPI_Finalize();
> return 0;
> }
>
>