LAM/MPI General User's Mailing List Archives

From: damien_at_[hidden]
Date: 2005-11-21 16:33:40


Hmmm. Exploring this isn't really a LAM issue, and we shouldn't spam the group
with off-topic stuff, so how about we discuss this part off-list in case
we go back and forth some more? I'll email you directly.

Damien

>> One thing on swap: if you're not peaked out on your actual RAM, the OS
>> might still decide to use some swap files. Even if it's not from the
>> solver process and it's one of the other low-level apps you mentioned,
>> if it swaps on the processor your solver is on, even a tiny bit, your
>> performance will decrease a lot.
>
> Good point!
>
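
A quick way to check whether the solver process itself ever touches swap is
to look at its major page-fault count; the rough sketch below (the function
name and where you call it are arbitrary) uses getrusage() for that, since
major faults are the ones serviced from disk:

#include <stdio.h>
#include <sys/resource.h>

/* Print the calling process's page-fault counts.  Major faults are the
   ones serviced from disk, i.e. the sign that this process hit swap.
   Call it before and after a solve and compare. */
void report_page_faults(const char *label)
{
    struct rusage ru;
    if (getrusage(RUSAGE_SELF, &ru) == 0)
        printf("%s: minor faults = %ld, major faults = %ld\n",
               label, ru.ru_minflt, ru.ru_majflt);
}
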
>> Also, if CG solver = Conjugate Gradient, you have a very fast algorithm
>> compared to, say, a direct solver, so you might be loading the processor
>> in shorter spikes which is harder for the kernel to schedule.
>
> Yes, CG = Conjugate Gradient. Sorry for forgetting to mention this in
> my first post.
> I'm not sure I get your idea. Do you mean that the program performs
> computations for a short time and then does something else that is not
> compute-intensive? Although this is not the case, if it were, then what?
>
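
One way to see whether the load really does come in short spikes is to time
each iteration with MPI_Wtime() and look at the spread. A rough sketch, with
a dummy loop standing in for the real CG iteration:

#include <stdio.h>
#include <mpi.h>

/* Time each pass of a compute loop with MPI_Wtime() to see whether the
   work arrives in short spikes or as a steady load.  The inner loop is
   dummy work standing in for one real CG iteration. */
int main(int argc, char **argv)
{
    int rank, i, j, niters = 20;
    double t0, t1, dummy = 0.0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    for (i = 0; i < niters; i++) {
        t0 = MPI_Wtime();
        for (j = 0; j < 5000000; j++)   /* stand-in for matvec etc. */
            dummy += 1.0 / (j + 1.0);
        t1 = MPI_Wtime();
        printf("rank %d, iter %d: %.6f s (dummy=%g)\n",
               rank, i, t1 - t0, dummy);
    }

    MPI_Finalize();
    return 0;
}
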
>> With a dual-processor, you could also be losing a bit because the
>> kernel has to schedule the parallel parts of the solver, so you might
>> be running part parallel and part sequential (while one waits for the
>> other to finish) in terms of execution sequence.
>
> A very, very good point!
>
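
If you want to put a number on that, you can accumulate the time spent inside
the collective calls (in a parallel CG that's mainly the dot-product
MPI_Allreduce) and compare it to the total loop time; that fraction is roughly
the waiting/sequential part. A rough sketch of the instrumentation only (the
local compute is left out, so the numbers from this exact program don't mean
much by themselves):

#include <stdio.h>
#include <mpi.h>

/* Accumulate time spent in MPI_Allreduce versus total loop time to
   estimate how much of the run a rank spends waiting on its partner.
   Put the real local computation where the comment says so. */
int main(int argc, char **argv)
{
    int rank, i, niters = 100;
    double local = 1.0, global, t0, t_comm = 0.0, t_start, t_total;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    t_start = MPI_Wtime();
    for (i = 0; i < niters; i++) {
        /* local part of the CG iteration (matvec, axpy) goes here */
        t0 = MPI_Wtime();
        MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM,
                      MPI_COMM_WORLD);
        t_comm += MPI_Wtime() - t0;
    }
    t_total = MPI_Wtime() - t_start;

    printf("rank %d: %.1f%% of loop time in Allreduce\n",
           rank, 100.0 * t_comm / t_total);

    MPI_Finalize();
    return 0;
}
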
>> 128KB to move on your larger problems is not very much, even on
>> 100Mbit, and it sounds like you have short bursts of activity rather
>> than a steady load.
>
> As I said above, this is not the case.
>
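
(For scale: 128KB is roughly a megabit, so on an otherwise idle 100Mbit link
it moves in about 10 ms plus latency; the transfer itself shouldn't be where
the time goes.)
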
>> If you can do some runs with nothing but the solver loaded, you'll get
>> a better idea.
>
> Well, if I could run in dedicated mode, then I wouldn't bother running
> in non-dedicated mode.
>
> Angel
>
> _______________________________________________
> This list is archived at http://www.lam-mpi.org/MailArchives/lam/
>