
LAM/MPI General User's Mailing List Archives


From: Glen Beane (beaneg_at_[hidden])
Date: 2004-11-19 13:18:12


ulimit -[HS]n increases the maximum number of open files (-Sn sets the
soft limit, -Hn the hard limit).

ulimit -[HS]u increases the maximum number of processes per user.

(These limits apply to the shell process that calls ulimit, and all
processes spawned by that shell inherit them, so anything started by
/etc/rc and /System/Library/StartupItems/IPServices/IPServices will
inherit the new limits.)

>> kern.maxfilesperproc=2048
sets a kernel limit of files per process

>> kern.maxprocperuid=2048
sets a kernel limit of processes per user

These kernel limits need to be raised for the ulimit calls shown above
to successfully increase the per-process limits to 2048.
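
If you want to apply the kernel settings without rebooting, something
along these lines should work (a sketch; run as root, and the values
only persist across reboots if they are also in /etc/sysctl.conf):

# read the current kernel limits
sysctl kern.maxfilesperproc
sysctl kern.maxprocperuid

# raise them on the running kernel
sysctl -w kern.maxfilesperproc=2048
sysctl -w kern.maxprocperuid=2048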

On Nov 19, 2004, at 12:58 PM, Don Kenzakowski wrote:

> Q: What do each of these parameters increase?
> Thanks
> Don
>
> Glen Beane wrote:
>
>> Hi Don, Hello from UMaine - long time no see
>>
>> The Mac has some limits that you'll want to increase, so give this a
>> shot and let me know how it works out.
>>
>> add this to /etc/rc
>>
>> ulimit -Hn 2048
>> ulimit -Sn 2048
>> ulimit -Hu 2048
>> ulimit -Su 2048
>>
>> add this to /System/Library/StartupItems/IPServices/IPServices *just
>> before* the "xinetd -inetd_compat -pidfile /var/run/xinetd.pid" line
>>
>> ulimit -Hn 2048
>> ulimit -Sn 2048
>> ulimit -Hu 2048
>> ulimit -Su 2048
>>
>>
>> add this to /etc/sysctl.conf
>>
>> kern.maxfilesperproc=2048
>> kern.maxprocperuid=2048
>>
>>
>>
>> On Nov 19, 2004, at 6:27 AM, Don Kenzakowski wrote:
>>
>>> I am using LAM v7.1.1 on Mac OS X with GigE as the interconnect.
>>> I have 128 nodes available, which I have successfully lambooted.
>>> Running lamnodes, everything is fine and as expected. Running a
>>> sample case that uses only up to 76 of the 128 available nodes in
>>> my LAM environment, everything is again fine: my executable starts
>>> up very quickly and completes. However, if I run a case on more
>>> than 76 nodes, my executable hangs without even getting past
>>> MPI_INIT. After several minutes, I eventually have to kill the job
>>> and do a lamhalt.
>>>
>>>
>>> Q: Is there anything I need to set for running lamboot / mpirun
>>> with a relatively large number of nodes?
>>>
>>> Sincerely,
>>> Don Kenzakowski
>>>
>>>
> _______________________________________________
> This list is archived at http://www.lam-mpi.org/MailArchives/lam/
>