Beniamino Sciacca wrote:
> Previously I compiled gromacs with MPI enabled, and mdrun used both CPUs
> while starting only one process.
Okay, I'm not sure how you ever saw a speedup via MPI while running only
one process - LAM is not multithreaded at all. If you really were seeing
speedups with one process, some form of multithreading must have been in
use, without MPI doing any of the communication.
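
One quick way to see what is actually running (assuming your binary is
called mdrun; adjust the name if yours differs) is to look at processes
and threads while a simulation is going:

  $ ps -eLf | grep mdrun    # one line per thread (LWP column)
  $ top                     # press '1' to show per-CPU load

If you only ever see a single mdrun line and one busy CPU, nothing is
running in parallel, no matter how the binary was compiled.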
> I need one process because in my case, if I use two processes, mdrun
> returns an error (I use constraint forces).
I take it this means gromacs doesn't let you run in parallel with
constraint forces? Are you sure you are setting up parallel runs with
gromacs correctly?
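
For reference, a parallel gromacs run over LAM usually looks something
like this (the grompp -np step and the mdrun_mpi name are from my memory
of gromacs 3.x, and the file names are just placeholders - check the
gromacs documentation for your build):

  $ lamboot                            # start the LAM daemons
  $ grompp -np 2 ...                   # pre-partition the run for 2 processes (plus your usual inputs)
  $ mpirun -np 2 mdrun_mpi -s run.tpr  # launch 2 MPI processes
  $ lamhalt                            # shut LAM down when you're done

If the constraint-force error only appears with two processes, that is
really a gromacs question rather than a LAM one.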
> I'm not sure, but maybe this version doesn't support native multithreading...
I would recommend looking into that, as it might give you what you want.
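
For example, to check what the mdrun you are actually running supports
(assuming it is on your PATH; a statically linked build may not show the
libraries):

  $ which mdrun
  $ ldd `which mdrun` | grep -i -e lam -e mpi  # an MPI build should pull in LAM's libraries
  $ mdrun -h                                   # the help text lists the parallel-related options

If no MPI libraries show up, you have a serial build no matter how you
launch it.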
Andrew
> Beniamino Sciacca
>
> Andrew Friedley wrote:
>
>>Beniamino Sciacca wrote:
>>
>>
>>>I don't know what changed, but I have partially resolved the problem:
>>>it is not a hardware problem (after installing an SMP kernel it works).
>>>Now I have to use mpirun (before I didn't use it), and I can't manage
>>>to tell the system to use both processors for only one process.
>>>In fact, using "mpirun -c 2" starts two processes, one per CPU.
>>>I need to work with only one process, and before I was able to do this
>>>(with another dual-core computer I am able to do this).
>>>Thanks
>>>
>>
>>Apologies, I'm not sure I understand what you're saying here.
>>
>>You weren't using mpirun before? What were you using?
>>
>>I don't see why two processes would be a problem - this is expected
>>behavior, as LAM/MPI will start one process per processor. Why do you
>>need only one process?
>>
>>Another note: does gromacs support some sort of native multithreaded
>>mode? That might perform better for you than MPI if you're only using
>>the one system.
>>
>>Andrew
>>
>>
>>
>>>Andrew Friedley wrote:
>>>
>>>
>>>
>>>>Beniamino Sciacca wrote:
>>>>
>>>>
>>>>
>>>>
>>>>>Hi community!
>>>>>I have a big problem:
>>>>>two months ago I installed LAM on my Centrino Duo (Debian, kernel 2.6).
>>>>>Later I compiled gromacs with MPI enabled, and everything was working
>>>>>well until four days ago.
>>>>>Now the simulation time has increased as if there were only one
>>>>>processor (it takes twice as long as before).
>>>>>I don't understand why.
>>>>>I uninstalled and then reinstalled LAM and gromacs (LAM 7.1.2, which I
>>>>>compiled myself, plus gromacs 3.3.1).
>>>>>Nothing changed.
>>>>>I uninstalled linux and reinstalled it (Debian, kernel 2.6), and then
>>>>>LAM and gromacs again: I have the same problem, as if one of the two
>>>>>cores doesn't work.
>>>>>I also tried installing gromacs WITHOUT MPI enabled, and I get the same
>>>>>simulation time (compared to the MPI-enabled gromacs).
>>>>>In all these cases LAM seems to work (when I type lamboot, LAM starts
>>>>>without problems).
>>>>>
>>>>>What happened?
>>>>>How can I tell whether this is a software problem or a hardware one?
>>>>>Thank you very much for the reply!
>>>>>
>>>>>
>>>>
>>>>The first question I always ask is: what changed? You say it was
>>>>working properly up to some point, so something must have happened at
>>>>or before that point.
>>>>
>>>>It's hard to tell based on the information you've given, but the first
>>>>thing I would do is make sure linux has both CPUs enabled, and that any
>>>>power management settings aren't affecting your performance.
>>>>
>>>>Can you test any other applications, to see if this problem is specific
>>>>to LAM/gromacs?
>>>>
>>>>You might consider other support channels as well - I can't recommend a
>>>>good place for you off the top of my head, but LAM is not likely the
>>>>problem here.
>>>>
>>>>Andrew
>
> _______________________________________________
> This list is archived at http://www.lam-mpi.org/MailArchives/lam/