This repository was archived by the owner on Mar 20, 2023. It is now read-only.

Commit ff49d18

Tag for 3.8.0 release

1 parent 826c46a

File tree

5 files changed: 127 additions & 46 deletions

.vsts/pipeline.yml

Lines changed: 26 additions & 11 deletions
@@ -246,18 +246,33 @@ jobs:
       set -o pipefail
       docker version
       docker login "$(docker.servername)" -u="$(docker.username)" -p="$(docker.password)"
+      export DOCKER_CLI_EXPERIMENTAL=enabled
       singularity_version=$(grep -m1 _SINGULARITY_VERSION convoy/misc.py | cut -d "'" -f 2)
-      echo "Replicating Singularity verison $singularity_version images to MCR"
-      dhImage="alfpark/singularity:${singularity_version}-mnt"
-      docker pull "$dhImage"
-      mcrImage="$(docker.servername)/public/azure-batch/shipyard:${singularity_version}-singularity-mnt"
-      docker tag "$dhImage" "$mcrImage"
-      docker push "$mcrImage"
-      dhImage="alfpark/singularity:${singularity_version}-mnt-resource"
-      docker pull "$dhImage"
-      mcrImage="$(docker.servername)/public/azure-batch/shipyard:${singularity_version}-singularity-mnt-resource"
-      docker tag "$dhImage" "$mcrImage"
-      docker push "$mcrImage"
+      echo "Replicating Singularity version $singularity_version images to MCR"
+      chkImage=mcr.microsoft.com/azure-batch/shipyard:${singularity_version}-singularity-mnt
+      set +e
+      if docker manifest inspect "$chkImage"; then
+        echo "$chkImage exists, skipping replication"
+      else
+        set -e
+        dhImage="alfpark/singularity:${singularity_version}-mnt"
+        mcrImage="$(docker.servername)/public/azure-batch/shipyard:${singularity_version}-singularity-mnt"
+        docker pull "$dhImage"
+        docker tag "$dhImage" "$mcrImage"
+        docker push "$mcrImage"
+      fi
+      chkImage=mcr.microsoft.com/azure-batch/shipyard:${singularity_version}-singularity-mnt-resource
+      set +e
+      if docker manifest inspect "$chkImage"; then
+        echo "$chkImage exists, skipping replication"
+      else
+        set -e
+        dhImage="alfpark/singularity:${singularity_version}-mnt-resource"
+        mcrImage="$(docker.servername)/public/azure-batch/shipyard:${singularity_version}-singularity-mnt-resource"
+        docker pull "$dhImage"
+        docker tag "$dhImage" "$mcrImage"
+        docker push "$mcrImage"
+      fi
    displayName: Replicate Singularity Container Images
    condition: and(succeeded(), ne(variables['ARTIFACT_CLI'], ''))
  - template: ./pyenv.yml
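
The revised step is idempotent: before replicating each Singularity image from Docker Hub to MCR, it probes the public MCR endpoint with `docker manifest inspect` (which is why `DOCKER_CLI_EXPERIMENTAL=enabled` is exported) and only pulls, tags, and pushes when the tag is missing. Below is a minimal standalone sketch of that check-then-replicate pattern with the repeated block factored into a helper; the `replicate_image` function name and the `myregistry.example.com` destination are placeholders standing in for the pipeline's `$(docker.servername)` variable, and the sketch simplifies by probing the push destination directly rather than the separate mcr.microsoft.com endpoint used above.

```bash
#!/usr/bin/env bash
# Minimal sketch of the check-then-replicate pattern above. The helper name
# and the destination registry are placeholders, not part of the pipeline.
set -euo pipefail
# "docker manifest inspect" is an experimental CLI command in Docker 19.03.
export DOCKER_CLI_EXPERIMENTAL=enabled

# replicate_image <source image> <destination image>
replicate_image() {
    local src="$1" dst="$2"
    # A zero exit status means the destination tag already exists.
    if docker manifest inspect "$dst" > /dev/null 2>&1; then
        echo "$dst exists, skipping replication"
        return 0
    fi
    docker pull "$src"
    docker tag "$src" "$dst"
    docker push "$dst"
}

singularity_version=$(grep -m1 _SINGULARITY_VERSION convoy/misc.py | cut -d "'" -f 2)
for suffix in mnt mnt-resource; do
    replicate_image \
        "alfpark/singularity:${singularity_version}-${suffix}" \
        "myregistry.example.com/public/azure-batch/shipyard:${singularity_version}-singularity-${suffix}"
done
```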

CHANGELOG.md

Lines changed: 62 additions & 1 deletion
@@ -2,6 +2,66 @@
 
 ## [Unreleased]
 
+## [3.8.0] - 2019-08-13
+### Added
+- Revamped Singularity support, including support for Singularity 3,
+SIF images, and pull support from ACR registries for SIF images via ORAS.
+Please see the global and jobs configuration docs for more information.
+([#146](https://github.com/Azure/batch-shipyard/issues/146))
+- New MPI interface in jobs configuration for seamless multi-instance task
+executions with automatic configuration for SR-IOV RDMA VM sizes with support
+for popular MPI runtimes including OpenMPI, MPICH, Intel MPI, and MVAPICH
+([#287](https://github.com/Azure/batch-shipyard/issues/287))
+- Support for Hb/Hc SR-IOV RDMA VM sizes
+([#277](https://github.com/Azure/batch-shipyard/issues/277))
+- Support for NC/NV/H Promo VM sizes
+- Support for user-specified job preparation and release tasks on the host
+([#202](https://github.com/Azure/batch-shipyard/issues/202))
+- Support for conditional output data
+([#230](https://github.com/Azure/batch-shipyard/issues/230))
+- Support for bring your own public IP addresses on Batch pools.
+Please see the pool configuration doc and the
+[Virtual Networks and Public IPs guide](docs/64-batch-shipyard-byovnet.md)
+for more information.
+- Support for Shared Image Gallery for custom images
+- Support for CentOS HPC 7.6 native conversion
+- Additional Slurm configuration options
+- New recipes: mpiBench across various configurations,
+OpenFOAM-Infiniband-OpenMPI, OSUMicroBenchmarks-Infiniband-MVAPICH
+
+### Changed
+- **Breaking Change:** the `singularity_images` property in the global
+configuration has been modified to accomodate Singularity 3 support.
+Please see the global configuration doc for more information.
+([#146](https://github.com/Azure/batch-shipyard/issues/146))
+- **Breaking Change:** the `gpu` property in the jobs configuration has
+been changed to `gpus` to accommodate the new native GPU execution
+support in Docker 19.03. Please see the jobs configuration doc for
+more information.
+([#293](https://github.com/Azure/batch-shipyard/issues/293))
+- `pool images` commands now support Singularity
+- Non-native task execution is now proxied via script
+([#235](https://github.com/Azure/batch-shipyard/issues/235))
+- Batch Shipyard images have been migrated to the Microsoft Container Registry
+([#278](https://github.com/Azure/batch-shipyard/issues/278))
+- Updated Docker CE to 19.03.1
+- Updated blobxfer to 1.9.0
+- Updated LIS to 4.3.3
+- Updated NC/ND driver to 418.67, NV driver to 430.30
+- Updated Batch Insights to 1.3.0
+- Updated dependencies to latest, where applicable
+- Updated Python to 3.7.4 for pre-built binaries
+- Updated Docker images to use Alpine 3.10
+- Various recipe updates to showcase the new MPI schema, HPLinpack and HPCG
+updates to SR-IOV RDMA VM sizes
+
+### Fixed
+- Cargo Batch service client update missed
+([#274](https://github.com/Azure/batch-shipyard/issues/274), [#296](https://github.com/Azure/batch-shipyard/issues/296))
+- Premium File Shares were not enumerating correctly with AAD
+([#294](https://github.com/Azure/batch-shipyard/issues/294))
+- Per-job autoscratch setup failing for more than 2 nodes
+
 ### Removed
 - Python 3.4 support
 
@@ -1532,7 +1592,8 @@ transfer is disabled
 #### Added
 - Initial release
 
-[Unreleased]: https://github.com/Azure/batch-shipyard/compare/3.7.1...HEAD
+[Unreleased]: https://github.com/Azure/batch-shipyard/compare/3.8.0...HEAD
+[3.8.0]: https://github.com/Azure/batch-shipyard/compare/3.7.1...3.8.0
 [3.7.1]: https://github.com/Azure/batch-shipyard/compare/3.7.0...3.7.1
 [3.7.0]: https://github.com/Azure/batch-shipyard/compare/3.6.1...3.7.0
 [3.6.1]: https://github.com/Azure/batch-shipyard/compare/3.6.0...3.6.1
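
The Changed section above renames the jobs configuration `gpu` property to `gpus` to track Docker 19.03's native GPU support. For context, here is a brief hedged sketch of what that native support looks like at the Docker CLI level; the CUDA image tag is only an example, and Batch Shipyard's own `gpus` schema is documented in its jobs configuration doc rather than reproduced here.

```bash
# Docker 19.03 exposes NVIDIA GPUs natively through the --gpus flag,
# replacing the older nvidia-docker2 runtime wrapper. Requires the NVIDIA
# container toolkit on the host; the image tag below is just an example.
docker run --rm --gpus all nvidia/cuda:10.1-base nvidia-smi

# A specific device (or a device count) can also be requested:
docker run --rm --gpus device=0 nvidia/cuda:10.1-base nvidia-smi
```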

README.md

Lines changed: 18 additions & 15 deletions
@@ -1,8 +1,6 @@
 [![Build Status](https://azurebatch.visualstudio.com/batch-shipyard/_apis/build/status/batch-shipyard-CI)](https://azurebatch.visualstudio.com/batch-shipyard/_build/latest?definitionId=11)
 [![Build Status](https://travis-ci.org/Azure/batch-shipyard.svg?branch=master)](https://travis-ci.org/Azure/batch-shipyard)
 [![Build status](https://ci.appveyor.com/api/projects/status/3a0j0gww57o6nkpw/branch/master?svg=true)](https://ci.appveyor.com/project/alfpark/batch-shipyard)
-[![Docker Pulls](https://img.shields.io/docker/pulls/alfpark/batch-shipyard.svg)](https://hub.docker.com/r/alfpark/batch-shipyard)
-[![Image Layers](https://images.microbadger.com/badges/image/alfpark/batch-shipyard:latest-cli.svg)](http://microbadger.com/images/alfpark/batch-shipyard)
 
 # Batch Shipyard
 <img src="https://azurebatchshipyard.blob.core.windows.net/github/README-dash.gif" alt="dashboard" width="1024" />
@@ -28,13 +26,21 @@ in Azure, independent of any integrated Azure Batch functionality.
 [Kata Containers](https://katacontainers.io/) tuned for Azure Batch
 compute nodes
 * Automated deployment of container images required for tasks to compute nodes
+* Support for container registries including
+[Azure Container Registry](https://azure.microsoft.com/services/container-registry/)
+for both Docker and Singularity images (ORAS), other Internet-accessible
+public and private registries, and support for
+the [Sylabs Singularity Library](https://cloud.sylabs.io/library) and
+[Singularity Hub](https://singularity-hub.org/)
 * Transparent support for GPU-accelerated container applications on both
 [Docker](https://github.com/NVIDIA/nvidia-docker) and Singularity
 on [Azure N-Series VM instances](https://docs.microsoft.com/azure/virtual-machines/linux/sizes-gpu)
-* Support for Docker Registries including
-[Azure Container Registry](https://azure.microsoft.com/services/container-registry/),
-other Internet-accessible public and private registries, and support for
-the [Singularity Hub](https://singularity-hub.org/) Container Registry
+* Transparent assist for running Docker and Singularity containers utilizing
+Infiniband/RDMA on HPC Azure VM instances including
+[A-Series](https://docs.microsoft.com/azure/virtual-machines/linux/sizes-hpc),
+[H-Series](https://docs.microsoft.com/azure/virtual-machines/linux/sizes-hpc),
+[Hb/Hc-Series](https://docs.microsoft.com/azure/virtual-machines/linux/sizes-hpc),
+and [N-Series](https://docs.microsoft.com/azure/virtual-machines/linux/sizes-gpu)
 
 ### Data Management and Shared File Systems
 * Comprehensive [data movement](https://batch-shipyard.readthedocs.io/en/latest/70-batch-shipyard-data-movement/)
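
The registry bullet added above covers pulling Singularity (SIF) images from Azure Container Registry via ORAS as well as from the Sylabs Library and Singularity Hub. As a rough illustration of what those pulls look like with the Singularity 3 CLI itself (Batch Shipyard drives this for you from its configuration files), here is a hedged sketch: the registry hostname and image paths are placeholders, and passing ACR credentials via the Docker-style environment variables is an assumption carried over from `docker://` pulls.

```bash
# Placeholders: myregistry.azurecr.io and the image paths are examples only.
# Assumed: oras:// pulls honor the same credential env vars as docker:// pulls.
export SINGULARITY_DOCKER_USERNAME=myacruser
export SINGULARITY_DOCKER_PASSWORD='myacrpassword'

# Pull a SIF image pushed to Azure Container Registry via ORAS (Singularity 3.x)
singularity pull myimage.sif oras://myregistry.azurecr.io/myteam/myimage:1.0

# Pull from the Sylabs Singularity Library and from Singularity Hub
singularity pull library://sylabsed/examples/lolcow
singularity pull shub://vsoch/hello-world
```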
@@ -90,13 +96,8 @@ to accommodate MPI and multi-node cluster applications packaged as Docker or
 Singularity containers on compute pools with automatic job completion and
 task termination
 * Seamless, direct high-level configuration support for popular MPI runtimes
-including OpenMPI, MPICH, MVAPICH, and Intel MPI
-* Transparent assist for running Docker and Singularity containers utilizing
-Infiniband/RDMA for MPI on HPC low-latency Azure VM instances including
-[A-Series](https://docs.microsoft.com/azure/virtual-machines/linux/sizes-hpc),
-[H-Series](https://docs.microsoft.com/azure/virtual-machines/linux/sizes-hpc),
-[Hb/Hc-Series](https://docs.microsoft.com/azure/virtual-machines/linux/sizes-hpc),
-and [N-Series](https://docs.microsoft.com/azure/virtual-machines/linux/sizes-gpu)
+including OpenMPI, MPICH, MVAPICH, and Intel MPI with automatic configuration
+for Infiniband, including SR-IOV RDMA VM sizes
 * Seamless integration with Azure Batch job, task and file concepts along with
 full pass-through of the
 [Azure Batch API](https://azure.microsoft.com/documentation/articles/batch-api-basics/)
@@ -111,9 +112,11 @@ tasks at set intervals
 * Support for [Low Priority Compute Nodes](https://docs.microsoft.com/azure/batch/batch-low-pri-vms)
 * Support for deploying Batch compute nodes into a specified
 [Virtual Network](https://batch-shipyard.readthedocs.io/en/latest/64-batch-shipyard-byovnet/)
+and pre-defined public IP addresses
 * Automatic setup of SSH or RDP users to all nodes in the compute pool and
 optional creation of SSH tunneling scripts to Docker Hosts on compute nodes
 * Support for [custom host images](https://batch-shipyard.readthedocs.io/en/latest/63-batch-shipyard-custom-images/)
+including Shared Image Gallery
 * Support for [Windows Containers](https://docs.microsoft.com/virtualization/windowscontainers/about/)
 on compliant Windows compute node pools with the ability to activate
 [Azure Hybrid Use Benefit](https://azure.microsoft.com/pricing/hybrid-benefit/)
@@ -134,8 +137,8 @@ and [iOS](https://itunes.apple.com/us/app/microsoft-azure/id1219013620?mt=8)
 app.
 
 Simply request a Cloud Shell session and type `shipyard` to invoke the CLI;
-no installation is required. Try Batch Shipyard now from your browser:
-[![Launch Cloud Shell](https://shell.azure.com/images/launchcloudshell.png "Launch Cloud Shell")](https://shell.azure.com)
+no installation is required. Try Batch Shipyard now
+[in your browser](https://shell.azure.com).
 
 ## Documentation and Recipes
 Please refer to the

convoy/version.py

Lines changed: 1 addition & 1 deletion
@@ -22,4 +22,4 @@
 # FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
 # DEALINGS IN THE SOFTWARE.
 
-__version__ = '3.7.1'
+__version__ = '3.8.0'

docs/64-batch-shipyard-byovnet.md

Lines changed: 20 additions & 18 deletions
@@ -39,24 +39,9 @@ at least the `Virtual Machine Contributor` role permission or a
 * `Microsoft.Network/publicIPAddresses/read`
 * `Microsoft.Network/publicIPAddresses/join/action`
 
-## Public IPs
-For pools that are not internode communication enabled, more than 1 public IP
-and load balancer may be created for the pool. If you are not bringing your
-own public IPs, they are allocated in the subscription that has allocated the
-virtual network. If you are not bringing your own public IPs, ensure that
-the sufficient Public IP quota has been granted for the subscription of the
-virtual network (and is sufficient for any pool resizes that may occur).
+## Virtual Networks
 
-If you are bringing your own public IPs, you must supply a sufficient number
-of public IPs in the pool configuration for the maximum number of compute
-nodes you intend to deploy for the pool. The current requirements are
-1 public IP per 50 dedicated nodes or 20 low priority nodes.
-
-Note that enabling internode communication is not recommended unless
-running MPI (multinstance) jobs as this will restrict the upper-bound
-scalability of the pool.
-
-## `virtual_network` Pool configuration
+### `virtual_network` Pool configuration
 To deploy Batch compute nodes into a subnet within a Virtual Network that
 you specify, you will need to define the `virtual_network` property in the
 pool configuration file. The template is:
@@ -100,7 +85,24 @@ on-premises, then you may have to add
 to that subnet. Please follow the instructions found in this
 [document](https://docs.microsoft.com/azure/batch/batch-virtual-network#user-defined-routes-for-forced-tunneling).
 
-## `public_ips` Pool configuration
+## Public IPs
+For pools that are not internode communication enabled, more than 1 public IP
+and load balancer may be created for the pool. If you are not bringing your
+own public IPs, they are allocated in the subscription that has allocated the
+virtual network. If you are not bringing your own public IPs, ensure that
+the sufficient Public IP quota has been granted for the subscription of the
+virtual network (and is sufficient for any pool resizes that may occur).
+
+If you are bringing your own public IPs, you must supply a sufficient number
+of public IPs in the pool configuration for the maximum number of compute
+nodes you intend to deploy for the pool. The current requirements are
+1 public IP per 50 dedicated nodes or 20 low priority nodes.
+
+Note that enabling internode communication is not recommended unless
+running MPI (multinstance) jobs as this will restrict the upper-bound
+scalability of the pool.
+
+### `public_ips` Pool configuration
 To deploy Batch compute nodes with pre-defined public IPs that
 you specify, you will need to define the `public_ips` property in the
 pool configuration file. The template is:
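
Both subsections above end with pool configuration templates that fall outside this hunk; see the full document for the `virtual_network` and `public_ips` schemas. As supporting context only, here is a hedged Azure CLI sketch for pre-creating the resources those properties reference. All names, the region, and the address ranges are placeholders, exact flag spellings vary slightly across `az` versions, and the Standard-SKU/static requirement for bring-your-own public IPs is an assumption to verify against the pool configuration doc.

```bash
# Placeholders throughout: resource group, names, region, address space.
RG=my-batch-rg
LOC=eastus

# Virtual network and subnet that the pool's virtual_network property
# would reference; size the subnet for the maximum expected node count.
az network vnet create \
    --resource-group "$RG" --name my-batch-vnet --location "$LOC" \
    --address-prefix 10.0.0.0/16 \
    --subnet-name batch-subnet --subnet-prefix 10.0.0.0/20

# Bring-your-own public IPs for the pool's public_ips property.
# Sizing rule from the section above: 1 public IP per 50 dedicated nodes
# or 20 low priority nodes; e.g. 100 dedicated nodes -> 2 public IPs.
for i in 0 1; do
    az network public-ip create \
        --resource-group "$RG" --name "batch-pool-pip-$i" --location "$LOC" \
        --sku Standard --allocation-method Static
done

# The resulting resource IDs are what the pool configuration would list:
az network public-ip show \
    --resource-group "$RG" --name batch-pool-pip-0 --query id -o tsv
```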
