This repository was archived by the owner on Mar 20, 2023. It is now read-only.

Commit ae7e5df

Tag for 2.3.1 release
- Update some docs
1 parent b69334d commit ae7e5df

File tree

4 files changed (+32, -14 lines)

CHANGELOG.md

Lines changed: 6 additions & 0 deletions
```diff
@@ -1,14 +1,20 @@
 # Change Log
 
 ## [Unreleased]
+
+## [2.3.1] - 2017-01-03
 ### Added
 - Add support for nvidia-docker with ssh docker tunnel
 
+### Fixed
+- Fix multi-job bug with jpcmd
+
 ## [2.3.0] - 2016-12-15
 ### Added
 - `pool ssh` command. Please see the usage doc for more information.
 - `shm_size` json property added to the json object within the `tasks` array
 of a job. Please see the configuration doc for more information.
+- SSH, Interactive Sessions and Docker SSH Tunnel guide
 
 ### Changed
 - Improve usability of the generated SSH docker tunnel script
```

convoy/version.py

Lines changed: 1 addition & 1 deletion
```diff
@@ -22,4 +22,4 @@
 # FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
 # DEALINGS IN THE SOFTWARE.
 
-__version__ = '2.3.0'
+__version__ = '2.3.1'
```
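The version bump above can be sanity-checked locally. As an illustration only (this snippet is not part of the repository), GNU `sort -V` orders release strings so that the new tag sorts last:

```shell
# Illustrative only: confirm '2.3.1' orders after '2.3.0' using
# version sort; these variables are not from Batch Shipyard itself.
old='2.3.0'
new='2.3.1'
latest=$(printf '%s\n%s\n' "$old" "$new" | sort -V | tail -n 1)
echo "$latest"
```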

docs/01-batch-shipyard-installation.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -64,7 +64,7 @@ DISTRIB_ID=centos DISTRIB_RELEASE=6.x ./install.sh -3
 The following distributions will not work with the `install.sh` script:
 * CentOS < 6.0
 * Debian < 8
-* Fedora < 12
+* Fedora < 13
 * OpenSUSE < 13.1
 * RHEL < 6.0
 * SLES < 12
```
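The hunk header above shows that `install.sh` accepts distro overrides (`DISTRIB_ID=centos DISTRIB_RELEASE=6.x ./install.sh -3`). The following is a reconstructed sketch of the kind of minimum-version gate the doc describes, using the minimums listed in the diff; it is not the actual script logic:

```shell
# Reconstructed sketch of a distro minimum-version gate; the real
# install.sh logic may differ. Minimums below are from this doc.
# Note: this sketch only handles integer releases, not e.g. "6.x".
DISTRIB_ID=fedora
DISTRIB_RELEASE=12
supported=yes
case "$DISTRIB_ID" in
    fedora) [ "$DISTRIB_RELEASE" -lt 13 ] && supported=no ;;
    debian) [ "$DISTRIB_RELEASE" -lt 8 ] && supported=no ;;
esac
echo "$supported"
```

With `fedora` release 12, the gate reports the distribution as unsupported, matching the corrected `Fedora < 13` line.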

docs/85-batch-shipyard-ssh-docker-tunnel.md

Lines changed: 24 additions & 12 deletions
````diff
@@ -10,15 +10,16 @@ specific instances (i.e., compute nodes).
 
 For instance, port 12345 may map to port 22 of the first instance of a
 compute node in the pool for the public IP address 1.2.3.4. The next compute
-node in the pool may have port 22 mapped to 12346 on the load balancer.
+node in the pool may have port 22 mapped to port 12346 on the load balancer.
 
 This allows many compute nodes to sit behind one public IP address.
 
 ## Interactive SSH
 By adding an SSH user to the pool (which can be automatically done for you
-via the `ssh` block in the pool config), you can interactively log in to
-compute nodes in the pool and execute any command on the remote machine,
-including Docker commands via `sudo`.
+via the `ssh` block in the pool config upon pool creation or through the
+`pool asu` command), you can interactively log in to compute nodes in the
+pool and execute any command on the remote machine, including Docker
+commands via `sudo`.
 
 You can utilize the `pool ssh` command to automatically connect to any
 compute node in the pool without having to manually resort to `pool grls`
@@ -31,7 +32,14 @@ created to the compute node specified.
 If using `--cardinal` it requires the natural counting number from zero
 associated with the list of nodes as enumerated by `pool grls`. If using
 `--nodeid`, then the exact compute node id within the pool specified in
-the pool config must be used.
+the pool config must be used. For example:
+
+```shell
+SHIPYARD_CONFIGDIR=. shipyard pool ssh --cardinal 0
+```
+
+would create an interactive SSH session with the first compute node in the
+pool as listed by `pool grls`.
 
 ## Securely Connecting to the Docker Socket Remotely via SSH Tunneling
 To take advantage of this feature, you must install Docker locally on your
@@ -94,16 +102,20 @@ export DOCKER_HOST=:
 docker run --rm -it busybox
 ```
 
-would create a busybox container on the remote similar to the prior command.
+would create a busybox container on the remote compute node similar to
+the prior command.
 
 To run a CUDA/GPU enabled docker image remotely with nvidia-docker, first you
 must install
 [nvidia-docker locally](https://github.com/NVIDIA/nvidia-docker/wiki/Installation)
 in addition to docker as per the initial requirement. You can install
 nvidia-docker locally even without an Nvidia GPU or CUDA installed. It is
-simply required for the local command execution. You can then launch your
-CUDA-enabled Docker image on the remote compute node on N-series the same
-as any other Docker image except invoking with `nvidia-docker` instead:
+simply required for the local command execution. If you do not have an Nvidia
+GPU available and install `nvidia-docker` you will most likely encounter an
+error with the nvidia docker service failing to start, but this is ok. You
+can then launch your CUDA-enabled Docker image on the remote compute node
+on Azure N-series VMs the same as any other Docker image except invoking
+with the `nvidia-docker` command instead:
 
 ```shell
 DOCKER_HOST=: nvidia-docker run --rm -it nvidia/cuda nvidia-smi
@@ -121,6 +133,6 @@ unset DOCKER_HOST
 ```
 
 Finally, please remember that the `ssh_docker_tunnel_shipyard.sh` script
-is refreshed and is specific for the pool at the time of pool creation,
-resize, when an SSH user is added or when the remote login settings are
-listed.
+is generated and is specific for the pool as specified in the pool
+configuration file at the time of pool creation, resize, when an SSH user
+is added or when the remote login settings are listed.
````
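The guide's example describes sequential SSH port mapping behind the load balancer (port 12345 for the first compute node, 12346 for the next). A sketch of that mapping follows; it assumes the sequential layout and placeholder values from the guide's example only, since the actual port assignments must always be read from `pool grls`:

```shell
# Assumed sequential mapping from the guide's example; actual ports
# must come from `pool grls` and are not guaranteed to be sequential.
# myuser and 1.2.3.4 are placeholder values, not real settings.
base_port=12345
cardinal=1                      # second node, counting from zero
node_port=$((base_port + cardinal))
echo "ssh -p ${node_port} myuser@1.2.3.4"
```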
