Hi Tim,
Thank you for your answer.
> You don't give enough information to answer, and perhaps I'm not
> guessing entirely what you are asking. You could, of course, go to each
> node while your job is running, to see how many processes are running
> there.
What do you mean by 'go to each node'? Assume I am using a single PC
with 8 dual-core processors. If I understand the MPI vocabulary
correctly, that is 1 node with 8 dual-core processors, i.e. 16 cores.
On this computer I am not able to run a job as: mpirun n0,1,2,3
./jobname. On the other hand, if I want to run: mpirun c0,1,2,3
./jobname, I first have to boot LAM with 'lamboot hostfile', where the
hostfile contains: localhost cpu=16. Is that the correct procedure?
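For clarity, here is the exact sequence I use (the hostfile name is
arbitrary; everything else is taken from the commands above):

    $ cat hostfile
    localhost cpu=16
    $ lamboot hostfile            # boot the LAM run-time on this node
    $ mpirun c0,1,2,3 ./jobname   # start 4 processes on CPUs c0..c3
    $ lamhalt                     # shut the LAM run-time down afterwards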
Assume that I started LAM with 'lamboot hostfile', where the hostfile
contains: localhost cpu=16.
If I run the job as: mpirun c0,1,2,3 ./jobname, does that mean I am
using 2 cores from processor 0 and 2 cores from processor 1?
If I run it as: mpirun c0,2,4,6 ./jobname, does that mean I am using 1
core from each of processors 0, 1, 2 and 3?
And finally, what happens in the case: mpirun c0,0,0,0 ./jobname?
In all of the above cases, when I check the CPU usage with 'top' I see
four CPUs working at 100%, and the computation time is more or less the
same in every case. However, in the case of mpirun c0,0,0,0 ./jobname I
would expect to see 4 processes at ~50% each, because I explicitly run
the job on 1 CPU with 2 cores. Am I correct? That is indeed what
happens when I run mpirun c0,0,0,0 ./jobname on a PC with a single
dual-core processor. Could you explain this to me?
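For what it is worth, this is how I check which core each process
actually lands on (assuming the standard Linux procps tools; the 'psr'
column is the processor the process last ran on):

    $ ps -eo pid,psr,pcpu,comm | grep jobname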
> OTOH, lam, with observance of
> the -O option or proper build options for shared memory messaging, will
> use shared memory effectively for message passing within each node.
I am using SUSE 10.2 with the Fortran compilers from Intel and
PathScale, and also gfortran. Do you know any "proper build options for
shared memory messaging" for one of those compilers?
Thank you once again, and thank you in advance for answering the
questions above.
Kind regards,
Artur.