LAM/MPI General User's Mailing List Archives

From: Jeremy Archuleta (archuleta_at_[hidden])
Date: 2003-09-26 09:33:10


What version of FFTW are you using? Version 3.0.1 doesn't have MPI
support built in yet.

"(We haven't yet added MPI parallel transforms to 3.0.1, so you need to
use 2.1.5 for these. Version 3.0.1 does include shared-memory/threads
parallel transforms, however.)"

If you are using 2.1.5 and you are not getting good results with FFTW,
something is likely wrong with your setup. I have used 2.1.5
extensively and have found it to scale almost linearly.
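
For reference, here is a minimal sketch of the 2.1.5 MPI calling
sequence as I recall it from the FFTW 2.1.5 manual (fftw_mpi.h); the
array names, the size, and the FFTW_MEASURE flag are just placeholder
choices:

#include <stdlib.h>
#include <mpi.h>
#include <fftw_mpi.h>

int main(int argc, char **argv)
{
    const int n = 4096; /* total transform length */
    int local_n, local_start, local_n_after, local_start_after,
        total_local_size;

    MPI_Init(&argc, &argv);

    /* The plan fixes the transform length; each process owns a
       contiguous slab of the length-n array. */
    fftw_mpi_plan plan =
        fftw_mpi_create_plan(MPI_COMM_WORLD, n, FFTW_FORWARD,
                             FFTW_MEASURE);

    /* Ask FFTW how much of the array lives on this process. */
    fftw_mpi_local_sizes(plan, &local_n, &local_start,
                         &local_n_after, &local_start_after,
                         &total_local_size);

    fftw_complex *data = malloc(total_local_size * sizeof(fftw_complex));
    fftw_complex *work = malloc(total_local_size * sizeof(fftw_complex));

    /* ... fill data[0 .. local_n-1] with this process's slab ... */

    /* The transform is in-place on data; work is scratch space. The
       second argument is n_fields (the number of interleaved
       transforms), which is 1 for a single 1-D FFT -- it is not the
       transform length. */
    fftw_mpi(plan, 1, data, work);

    fftw_mpi_destroy_plan(plan);
    free(data);
    free(work);
    MPI_Finalize();
    return 0;
}

Note also that, as far as I know, the output of fftw_mpi() stays
distributed across all of the processes, so a rank-0-only processing
step only ever sees rank 0's slab of the result.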

-J

On Friday, Sep 26, 2003, at 03:06 US/Pacific, Wa-Kun Lam wrote:

> hi all,
> I am having a problem using FFTW in LAM/MPI. I have written a
> program to test its efficiency.
>
> for (idx = 0; idx < 5000; idx++)
> {
>     fftw_mpi(forward_plan, 4096, mydata, outdata);
>     if (rank == 0)
>     {
>         // processing outdata here
>     }
>     fftw_mpi(backward_plan, 4096, outdata, mydata);
> }
>
> I found that the efficiency of the above code, running on an 8-node
> parallel network, is no better than running on a single machine!
>
> I wonder whether something is wrong with my code, or whether FFTW
> under MPI is simply not as good as we expect.
> _______________________________________________
> This list is archived at http://www.lam-mpi.org/MailArchives/lam/
>

