LAM/MPI General User's Mailing List Archives

From: Roberto Pasianot (pasianot_at_[hidden])
Date: 2003-10-06 14:29:21


 Hi,

 Probably, as Nihar suggested, channel bonding is the best way to go in
 your case. However, in LAM 7 you can use different IPs to communicate
 with different host names (lamboot parameter "-l"). So calling the other
 host by two names, and setting up the lamhosts file (and of course
 /etc/hosts) accordingly, will do the trick.
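
 For example, something along these lines (the names and addresses here
 are just made up for illustration):

   # /etc/hosts on node1 (one line per interface of node2)
   192.168.0.2   node2-a   # node2 reached via its eth0
   192.168.1.2   node2-b   # node2 reached via its eth1

   # lamhosts: node2 listed under both names
   node1
   node2-a
   node2-b

 LAM then sees node2-a and node2-b as two nodes and talks to each one
 through a different IP, hence a different interface, so you can direct
 one job at the first name and the other job at the second.

 If you go the channel bonding route instead, a rough sketch on a 2.4-era
 Linux kernel (assuming the bonding module and the ifenslave tool are
 installed) would be:

   # merge eth0 and eth1 into one logical interface, bond0
   modprobe bonding
   ifconfig bond0 192.168.0.1 netmask 255.255.255.0 up   # address illustrative
   ifenslave bond0 eth0 eth1

 and then point LAM at the single bond0 address as usual.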

 Cheers,

 Roberto

On Mon, 6 Oct 2003, Nihar Sanghvi wrote:

>
> On Mon, 6 Oct 2003, Dmitry Kovalsky wrote:
>
> - Hi there,
> -
> - I have two nodes and use gigabit Ethernet to run in parallel. However,
> - the CPUs are loaded to only ~70%. I want to set up another interface,
> - eth1, and let another job be computed there, so that the CPUs are
> - loaded to 100%. But it seems LAM (I'm using 6.5.9) can't run on the
> - eth0 and eth1 interfaces simultaneously. Or am I wrong?
> -
>
> This is basically not an issue related to LAM. LAM does not know about
> the number of underlying network cards; all it needs is an IP address
> to communicate. Two processes in the same LAM universe may not be able
> to use different network cards.
>
> You could do something like channel bonding, or an unusual network
> topology, at a lower level; that is beyond what LAM does.
>
> Hope this helps..
>
> Nihar
>
>
> - Sincerely yours,
> -
> - Ph.D. Student Dmytro Kovalskyy
> - Institute of Molecular Biology & Genetics
> - 150 Akad. Zabolotnogo Street,
> - Kiev-143, 03143
> - UKRAINE
> -
> - E-mail: dikov_at_[hidden]
> - Fax: +380 (44) 266-0759
> - Tel.: +380 (44) 266-5589
> -
> -
> -
>
>
> Powered by LAM/MPI...
> ---------------------------------------
> Nihar Sanghvi
> LAM/MPI Team
> Graduate Student (Indiana University)
> http://www.lam-mpi.org
> --------------------------------------
>