Although in theory you may have no need to access any of the individual
compute nodes, in practice there are many reasons you may need to
access a given compute node. You may need to monitor the data being
generated by a program, which may not be readily visible from the head node,
especially if your program writes to local scratch space. Computers are also
fallible things, and in the event of an error the scheduling system or your
script may leave data behind on a compute node that you need to retrieve.
On cluster systems, access to the compute nodes is restricted: only the head
node (or gateway nodes) and the other compute nodes may connect to them.
Access to a compute node therefore usually starts from the cluster head node,
and connecting to any particular compute node is as simple as using ssh from
there.
Example:
[jdpoisso@axiom ~]$ ssh compute-0-4
[jdpoisso@compute-0-4 ~]$
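Once logged in to the compute node, you can inspect and retrieve data directly. The commands below are only a sketch: the scratch directory (/scratch/jdpoisso) and the file names are assumptions and will differ between clusters and jobs, and copying back to your home directory assumes it is shared (for example over NFS) with the head node.
[jdpoisso@compute-0-4 ~]$ ls -lh /scratch/jdpoisso
[jdpoisso@compute-0-4 ~]$ tail /scratch/jdpoisso/output.log
[jdpoisso@compute-0-4 ~]$ cp /scratch/jdpoisso/results.dat ~/
[jdpoisso@compute-0-4 ~]$ exit
[jdpoisso@axiom ~]$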
Note: You may not be asked for a password when using ssh from the
head node to a compute node. In most cluster configurations the compute nodes
are configured to ``trust'' the head node. If you are asked for a password, it
may indicate a problem with the system or with your account.
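One quick way to check that this trust relationship is working is to run a single non-interactive command over ssh. The node name below is just an example; the command should print the node's hostname without prompting for a password, and with BatchMode=yes it will fail rather than prompt if the trust is broken.
[jdpoisso@axiom ~]$ ssh -o BatchMode=yes compute-0-4 hostname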
Note: The names of the individual compute nodes vary from cluster to
cluster, but most clusters follow a particular naming scheme, such as Greek
letters (alpha, beta, gamma, delta, etc.) or numbers (one, two, three, four,
etc.). On the axiom and amino clusters, the node names refer to their physical
locations, so compute-0-4 is the fourth system in rack zero. Some clusters also
provide shorthand names; on the amino cluster, for example, compute-0-4 can
also be called c0-4.
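Because the names follow a predictable scheme, it is also easy to run a quick command across a range of nodes from the head node. The sketch below assumes nodes compute-0-0 through compute-0-3 exist; adjust the range to match your cluster.
[jdpoisso@axiom ~]$ for n in 0 1 2 3; do ssh compute-0-$n uptime; done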