LAM/MPI General User's Mailing List Archives

From: Wang Shuguang (tmswsg_at_[hidden])
Date: 2004-03-10 19:14:49


Hi,

I have a question about how LAM handles I/O on a dual-processor machine.

I wrote a small Fortran MPI program that reads and writes a file on an NFS disk to exchange information between two processes. I have two identical dual-processor machines and I ran two tests: first I ran the program on a single dual-processor machine with both processes on it, then I ran the same program across the two machines with one process on each. The run with both processes on a single machine takes longer than the run across two machines. Is this normal? If not, what could be the reason? I suspect it has something to do with I/O (see the timing sketch after the code below).

Below is the main part of the source code:
      ...
c     array exchanged between the two processes through the NFS file
      real, dimension(120000)::transferarray

      call MPI_INIT( ierr )
      call MPI_COMM_RANK( MPI_COMM_WORLD, myid, ierr )
      call MPI_COMM_SIZE( MPI_COMM_WORLD, numprocs, ierr )
      call MPI_GET_PROCESSOR_NAME(processor_name,namelen, ierr)

      do n=1,2000
c       rank 1 waits for the go-ahead message from rank 0, then
c       reads the array back from the shared file
        if (myid.ne.0) then
          call MPI_RECV(temp,1,MPI_REAL,myid-1,9,MPI_COMM_WORLD,
     & status,ierr)
          open(unit=myid+200,file='transfer',status='old',
     & recl=120000*4, form='UNFORMATTED',iostat=ierr)
          rewind(myid+200)
          read (myid+200) (transferarray(i),i=1,transfersize)
          close(myid+200)
          call MPI_BARRIER(MPI_COMM_WORLD,ierr)
        end if

c       rank 0 writes the array to the shared file, then sends a
c       short message to tell rank 1 that the data is ready
        if (myid.eq.0) then
          open(unit=myid+200,file='transfer',status='unknown',
     & recl=120000*4, form='UNFORMATTED',iostat=ierr)
          write (myid+200) (transferarray(i),i=1,transfersize)
          call flush(myid+200)
          close(myid+200)
          call MPI_SEND(1.0,1,MPI_REAL,myid+1,9,
     & MPI_COMM_WORLD,ierr)
          call MPI_BARRIER(MPI_COMM_WORLD,ierr)
        end if
      end do

      call MPI_BARRIER(MPI_COMM_WORLD,ierr)
      call MPI_FINALIZE(ierr)
      ...
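
To see whether the time really goes into the NFS file access rather than into the MPI calls, something like the rough standalone test below could be used. It repeats the same write/signal/read pattern with two processes and accumulates the time spent in the file operations and in the MPI calls separately using MPI_WTIME (standard MPI). The program name timeio, the unit numbers 201/202 and the variables t0, t_io, t_comm are only placeholders for this sketch, not part of the program above.

      program timeio
      implicit none
      include 'mpif.h'
      integer ierr, myid, numprocs, i, n, ios
      integer status(MPI_STATUS_SIZE)
      real temp
      real, dimension(120000) :: transferarray
      double precision t0, t_io, t_comm

      call MPI_INIT(ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, myid, ierr)
      call MPI_COMM_SIZE(MPI_COMM_WORLD, numprocs, ierr)

      if (numprocs .ne. 2) then
        print *, 'run this test with exactly two processes'
        call MPI_FINALIZE(ierr)
        stop
      end if

      transferarray = 1.0
      t_io = 0.0d0
      t_comm = 0.0d0

c     same exchange pattern as above, but with separate timers
      do n = 1, 2000
        if (myid .eq. 0) then
c         write side of the file exchange
          t0 = MPI_WTIME()
          open(unit=201, file='transfer', status='unknown',
     &         form='UNFORMATTED', iostat=ios)
          write (201) (transferarray(i), i=1,120000)
          close(201)
          t_io = t_io + (MPI_WTIME() - t0)

c         go-ahead message plus barrier
          t0 = MPI_WTIME()
          call MPI_SEND(1.0, 1, MPI_REAL, 1, 9, MPI_COMM_WORLD, ierr)
          call MPI_BARRIER(MPI_COMM_WORLD, ierr)
          t_comm = t_comm + (MPI_WTIME() - t0)
        else
c         wait for the go-ahead message
          t0 = MPI_WTIME()
          call MPI_RECV(temp, 1, MPI_REAL, 0, 9, MPI_COMM_WORLD,
     &                  status, ierr)
          t_comm = t_comm + (MPI_WTIME() - t0)

c         read side of the file exchange
          t0 = MPI_WTIME()
          open(unit=202, file='transfer', status='old',
     &         form='UNFORMATTED', iostat=ios)
          rewind(202)
          read (202) (transferarray(i), i=1,120000)
          close(202)
          t_io = t_io + (MPI_WTIME() - t0)

c         matching barrier
          t0 = MPI_WTIME()
          call MPI_BARRIER(MPI_COMM_WORLD, ierr)
          t_comm = t_comm + (MPI_WTIME() - t0)
        end if
      end do

      print *, 'rank', myid, ' file I/O s:', t_io, ' MPI s:', t_comm
      call MPI_FINALIZE(ierr)
      end

Comparing the printed totals between the single-machine run and the two-machine run should show whether the extra time comes from the file access or from the message passing.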

thanks
sg