
LAM/MPI General User's Mailing List Archives


From: Jeff Squyres (jsquyres_at_[hidden])
Date: 2004-08-26 02:07:25


On Aug 25, 2004, at 2:53 PM, Jack C wrote:

> Basically, I figured that, for something simple, I'd hack up a prime
> finder. I managed to make one where the master gives out numbers and
> the slaves test each number they get, by dividing it by every integer
> up to the number's square root (that was a confusing sentence). A
> friend suggested a few improvements, such as a) giving each node a
> range of numbers, rather than just one, and b) keeping a list on all
> the nodes of the previously found primes, and only dividing by those.
> These sound good to me.
>
> The first problem I have is that my chunks don't chunk right.....
> Basically, I get multiple responses, etc. This is more of a technical
> question than one of algorithms, but I figured I'd give it a chance.

I'm not sure what you mean here...

> Second, I realized that not all my nodes are the same speed. At the
> moment, the code just says: for i = number of nodes, mpi_send (the
> number to test). But if node 3 finishes before node 2, it sits around
> and waits. Can anyone tell me a way to fix this?

You might want to investigate using a bit more of a client/server
approach -- change your logic so that the server sends out an initial
round of work to each client process, and then waits for anyone to
reply with their results. When it gets a set of results from someone,
it gives them the next round of work. In this way, naturally faster
machines will request work faster, etc.

You can even adaptively set the chunksize on the fly, based on how long
a process took to return results to the server, etc. You'll need a bit
more instrumentation (e.g., record the start and end time for each
chunk on each process), but it's certainly do-able.

> And finally, how can I keep the current list of primes updated on all
> of the nodes? What if a node is in the middle of checking? How can it
> receive?

This is a canonical problem. :-)

LAM is a single-threaded MPI implementation, meaning that you can only
have one thread inside the MPI library at a time. You really need a
shared data construct for this kind of problem, and there are many
different ways to effect that. The simplest might well be to send an
update every time a process asks for more work. You can make this
update be arbitrarily efficient -- ranging from (not efficient) sending
the entire list every time to sending only the new results that that
particular process has not seen yet.

> My code is as follows:
> ----------
> //NOTE: a regular unsigned int can only go to 4,294,967,295, so I used
> //type uint64_t, which goes to 18,446,744,073,709,551,615.
> //So if you're planning to find primes higher than that, you'll need
> //different variables... ;-)
>
> #include <stdint.h>
> #include <stdio.h>
> #include <math.h>
> #include <mpi.h>
>
> #define ITERS 5 //if negative, it goes and goes
> #define DEBUG 1    //1 means yes, at least to me
> #define STARTNUM 1244666627 //MUST BE ODD!!!!!
> #define CHUNKSIZE 1 //default size of chunks
> #define NUMNODES 4 //number of nodes, including master
>
> int isprime(uint64_t i); //prototype: the definition comes after main()
>
> int main(int argc, char *argv[]) {
>
>   int rank, size, numtimes, x;
>   uint64_t i=STARTNUM,t;
>   double startwtime = 0.0, endwtime;
>   int chunk[NUMNODES]; //one slot per rank (slot 0, the master, is unused)
>
>   for (x=0;x<NUMNODES;x++)
>     chunk[x]=CHUNKSIZE;
>
>  /*Set your custom chunk sizes here. This is in case you
>   *have nodes with different speeds, you can make them all
>   *work together by fixing the chunk size */
>   //chunk[1]=250; //make the chunk sizes for each respective
>   //chunk[2]=1100;//node. All nodes undefined will be equal to CHUNKSIZE
>
>   MPI_Init(&argc, &argv);
>   MPI_Comm_size(MPI_COMM_WORLD, &size);
>   MPI_Comm_rank(MPI_COMM_WORLD, &rank);
>
>     if (rank == 0) { //split to master
>       startwtime = MPI_Wtime();
>       for (numtimes=0;numtimes!=ITERS;numtimes++) {
>         for (x=1;x<size;x++) {
>           MPI_Send(&i, 1, MPI_UNSIGNED_LONG_LONG, x, 1, MPI_COMM_WORLD);
>           i+=chunk[x];//increment it by the size of that node's chunk
>         }
>       }
>     } else { //slaves
>       for (numtimes=0;numtimes!=ITERS;numtimes++) {
>         MPI_Recv(&i, 1, MPI_UNSIGNED_LONG_LONG, 0, 1, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
>         if (DEBUG==1)
>           printf("Node %d got an assignment: i=%llu.\n",rank,(unsigned long long)i);
>         for (t=i;(t-i)<chunk[rank];t+=2) {
>           printf("Node %d is testing %llu.\n",rank,(unsigned long long)t);
>           //sleep(1);
>           if (isprime(t)==1) {
>             printf("%d says %llu\n",rank,(unsigned long long)t); //just stick it on stdout
>             if (DEBUG==1)
>               printf("Node %d says %llu is prime!!!\n",rank,(unsigned long long)t);
>           } else {
>             if (DEBUG==1)
>               printf("Node %d says %llu is not prime.\n",rank,(unsigned long long)t);
>           }
>         }
>       }
>     }
>
>    if (rank == 0) {
>       endwtime = MPI_Wtime();
>       printf("wall clock time = %f\n", endwtime - startwtime);
>     }
>
>   MPI_Finalize();
>   return 0;
> }
>
> int isprime(uint64_t i) { //check if prime. Returns 1 if so, 0 if not.
>
>   uint64_t k;
>
>   if (i < 2)
>     return 0; //0 and 1 are not prime
>   for (k=2; k<=i/k; k++) { //k<=i/k avoids calling sqrt() every pass and
>     if ((i%k) == 0)        //any floating-point rounding on big uint64_t
>       return 0;
>   }
>   return 1;
> }
> --------
> Thanks A LOT!
>
> -Jack C
> jack {at} crepinc.com
> http://www.crepinc.com
>
> _______________________________________________
> This list is archived at http://www.lam-mpi.org/MailArchives/lam/
>

-- 
{+} Jeff Squyres
{+} jsquyres_at_[hidden]
{+} http://www.lam-mpi.org/