
LAM/MPI General User's Mailing List Archives


From: Beth Kirschner (bkirschn_at_[hidden])
Date: 2005-07-01 11:55:53


Jeff Squyres wrote:

>On Jul 1, 2005, at 9:18 AM, Beth Kirschner wrote:
>
>
>
>> We have a cluster of Mac OSX G5s that are used during the daytime in
>>classrooms and labs, but are left idle in the evenings and weekends.
>>We are using PBS and Globus to submit jobs to run during these
>>off-hours, and we'd like to offer MPI jobs as well. We'd like to open
>>up a specified range of ports for use by LAM/MPI, without opening
>>access to all TCP ports (which the sysadmin would rightfully never
>>allow). SSH is used to communicate between nodes.
>>
>>
>
>This could be done in LAM, but there is no code currently to support
>this model (i.e., only use a specific range of TCP ports). Right now,
>when we open sockets (both TCP and UDP), we get whatever port the OS
>gives back to us. Remember that LAM is user-based, so we'll never be
>connecting to or from privileged ports. Those can certainly still be
>blocked (which is where most of your problems will come from, anyway).
>Allowing random, non-privileged connections from one machine, IMHO, is
>not much of a security risk -- it allows software that is already
>installed on two machines to communicate data.
>
>If your workstations are already behind a firewall or some other kind
>of port-blocking protection, this may be a suitable model for you...?
>
>Is there any chance that TCP/UDP ports could be opened only between
>those machines? For cluster usage, this is relatively common.
>Specifically, you have some measure of trust between your cluster
>machines, but not anywhere else.
>
>
>
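The per-machine port opening Jeff describes could be sketched with a few firewall rules. This is purely illustrative (the addresses and the use of ipfw, the Mac OS X firewall of that era, are assumptions, not anything from the thread): unprivileged ports are opened only to the other cluster machines and denied to everyone else.

```shell
# Sketch (hypothetical addresses): allow unprivileged TCP/UDP only
# from the other cluster machines; 10.0.0.0/24 stands in for the
# cluster's subnet.  ipfw is used here as an example -- adapt to
# whatever packet filter the nodes actually run.
ipfw add allow tcp from 10.0.0.0/24 to any 1024-65535
ipfw add allow udp from 10.0.0.0/24 to any 1024-65535
ipfw add deny tcp from any to any 1024-65535
ipfw add deny udp from any to any 1024-65535
```

With rules like these, the "some measure of trust between cluster machines, but not anywhere else" model applies even without a separate firewall box.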
Unfortunately these machines are not behind a firewall -- they are
public access workstations that we use for cycle-scavenging in the
off-hours. I'll see how the sysadmin feels about opening up just the
high user ports on these servers. Can you point me at the code which
handles the TCP/UDP socket communication, just in case?
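For context on what a "specified port range" change would involve, here is a minimal sketch in C. This is not LAM's actual code (the function name and range are made up): instead of letting the OS pick an ephemeral port, it tries to bind a TCP socket to each port in a configured range and keeps the first one that succeeds.

```c
/* Purely illustrative sketch -- not LAM's actual code.  Shows how a
 * "specified port range" model could work: try to bind a TCP socket
 * to each port in [lo, hi] and return the first that succeeds. */
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>

int bind_in_range(int lo, int hi)
{
    int sd = socket(AF_INET, SOCK_STREAM, 0);
    if (sd < 0)
        return -1;

    for (int port = lo; port <= hi; ++port) {
        struct sockaddr_in sa;
        memset(&sa, 0, sizeof(sa));
        sa.sin_family = AF_INET;
        sa.sin_addr.s_addr = htonl(INADDR_ANY);
        sa.sin_port = htons((unsigned short) port);
        if (bind(sd, (struct sockaddr *) &sa, sizeof(sa)) == 0)
            return sd;      /* bound: caller owns the socket */
    }

    close(sd);
    return -1;              /* nothing free in the range */
}
```

Calling this in place of a plain bind-to-port-zero is roughly the change that would be needed wherever LAM opens its TCP and UDP sockets.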

>> Additionally, we'd like each of the execute nodes to only need to
>>hold the public key to the head node, while the head node would hold
>>the public keys to each individual execute node. Right now we're just
>>testing with two nodes, so I'm not sure if this is possible as well.
>>
>>
>
>I assume you're talking about ssh keys...?
>
>There's two kinds of keys -- node keys and user keys.
>
>For node keys, it's easiest (and there are few negative security
>implications) to initially distribute a list of known node keys to
>all your nodes. Hence, every node knows the public key of every
>other node.
>
>For user keys, it's up to you. If your user private keys are protected
>by passphrases, there's little harm in distributing them far and wide
>(e.g., all the nodes in your cluster). If they're unprotected, then
>you need to take reasonable measures to protect them. This kind of
>issue hits at the heart of what many people hold religiously to be
>"truth" in security, so lots of people have different opinions here.
>:-)
>
>
>
I was referring to node keys -- and the reason I asked is that our very
security-conscious sysadmin does not like the idea of having a list of
known node keys distributed to every other node. You mention this is
the 'easiest' route -- what are the other options?
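One such option, matching the asymmetric model described earlier in the thread, is to populate each machine's ssh_known_hosts with only the host keys it actually needs: execute nodes trust just the head node, while the head node collects keys for every execute node. A sketch using ssh-keyscan (all hostnames are hypothetical):

```shell
# Sketch (hypothetical hostnames).  A node only needs the host keys
# of machines it *initiates* ssh connections to, so the key lists
# need not be symmetric.

# On each execute node -- trust only the head node:
ssh-keyscan headnode.example.edu >> /etc/ssh/ssh_known_hosts

# On the head node -- collect every execute node's key:
for node in exec01.example.edu exec02.example.edu; do
    ssh-keyscan "$node" >> /etc/ssh/ssh_known_hosts
done
```

Note that ssh-keyscan trusts whatever key the remote host presents at scan time, so a cautious admin would verify the collected keys out-of-band before distributing the file.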

Thanks!
- Beth