@@ -10,15 +10,16 @@ specific instances (i.e., compute nodes).
 
 For instance, port 12345 may map to port 22 of the first instance of a
 compute node in the pool for the public IP address 1.2.3.4. The next compute
-node in the pool may have port 22 mapped to 12346 on the load balancer.
+node in the pool may have port 22 mapped to port 12346 on the load balancer.
 
 This allows many compute nodes to sit behind one public IP address.
 
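The sequential port mapping described above can be sketched with a bit of shell arithmetic. This is a sketch only: the base port 12345 and the IP address 1.2.3.4 are the illustrative values from the text, the user name is a placeholder, and actual port assignments should always be confirmed with `pool grls`:

```shell
# Sketch only: assumes NAT ports are assigned sequentially starting at 12345
# for the example pool above; confirm real mappings with `pool grls`.
BASE_PORT=12345
CARDINAL=1                        # zero-based index of the target compute node
SSH_PORT=$((BASE_PORT + CARDINAL))
echo "ssh -p ${SSH_PORT} <ssh user>@1.2.3.4"
```

Here the second compute node (index 1) would be reached on port 12346 of the load balancer, matching the example above.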
 ## Interactive SSH
 By adding an SSH user to the pool (which can be automatically done for you
-via the `ssh` block in the pool config), you can interactively log in to
-compute nodes in the pool and execute any command on the remote machine,
-including Docker commands via `sudo`.
+via the `ssh` block in the pool config upon pool creation or through the
+`pool asu` command), you can interactively log in to compute nodes in the
+pool and execute any command on the remote machine, including Docker
+commands via `sudo`.
 
 You can utilize the `pool ssh` command to automatically connect to any
 compute node in the pool without having to manually resort to `pool grls`
@@ -31,7 +32,14 @@ created to the compute node specified.
 If using `--cardinal`, it requires the zero-based index of the node
 within the list of nodes as enumerated by `pool grls`. If using
 `--nodeid`, then the exact compute node id within the pool specified in
-the pool config must be used.
+the pool config must be used. For example:
+
+```shell
+SHIPYARD_CONFIGDIR=. shipyard pool ssh --cardinal 0
+```
+
+would create an interactive SSH session with the first compute node in the
+pool as listed by `pool grls`.
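The `--nodeid` form follows the same pattern; the node id below is a placeholder, not a real id, and this requires a live pool to run against. Substitute an actual compute node id as reported by `pool grls`:

```shell
# NODE_ID is a placeholder; substitute an actual id reported by `pool grls`
NODE_ID="<node id>"
SHIPYARD_CONFIGDIR=. shipyard pool ssh --nodeid "$NODE_ID"
```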
 
 ## Securely Connecting to the Docker Socket Remotely via SSH Tunneling
 To take advantage of this feature, you must install Docker locally on your
@@ -94,16 +102,20 @@ export DOCKER_HOST=:
 docker run --rm -it busybox
 ```
 
-would create a busybox container on the remote similar to the prior command.
+would create a busybox container on the remote compute node similar to
+the prior command.
 
 To run a CUDA/GPU enabled docker image remotely with nvidia-docker, first you
 must install
 [nvidia-docker locally](https://github.com/NVIDIA/nvidia-docker/wiki/Installation)
 in addition to docker as per the initial requirement. You can install
 nvidia-docker locally even without an Nvidia GPU or CUDA installed. It is
-simply required for the local command execution. You can then launch your
-CUDA-enabled Docker image on the remote compute node on N-series the same
-as any other Docker image except invoking with `nvidia-docker` instead:
+simply required for the local command execution. If you do not have an Nvidia
+GPU available and install `nvidia-docker`, you will most likely encounter an
+error with the nvidia-docker service failing to start, but this is expected
+and can be ignored. You can then launch your CUDA-enabled Docker image on
+the remote compute node on Azure N-series VMs the same as any other Docker
+image except by invoking with the `nvidia-docker` command instead:
 
 ```shell
 DOCKER_HOST=: nvidia-docker run --rm -it nvidia/cuda nvidia-smi
@@ -121,6 +133,6 @@ unset DOCKER_HOST
 ```
 
 Finally, please remember that the `ssh_docker_tunnel_shipyard.sh` script
-is refreshed and is specific for the pool at the time of pool creation,
-resize, when an SSH user is added or when the remote login settings are
-listed.
+is generated and is specific for the pool as specified in the pool
+configuration file at the time of pool creation, resize, when an SSH user
+is added or when the remote login settings are listed.