This repository was archived by the owner on Mar 20, 2023. It is now read-only.

Commit d6da749

Tag for 3.9.1 release
1 parent 85985d6 commit d6da749

File tree

3 files changed: +31 −3 lines changed

CHANGELOG.md

Lines changed: 21 additions & 1 deletion

```diff
@@ -2,8 +2,27 @@
 
 ## [Unreleased]
 
+## [3.9.1] - 2019-12-13
+### Added
+- Support `--no-wait` on pool creation to allow the command to skip
+waiting for the pool to become idle
+- Allow ability to ignore GPU warnings, please see pool configuration
+- Ubuntu 18.04 SR-IOV IB/RDMA Packer script
+
+### Changed
+- **Breaking Change:** improved per-job autoscratch setup. As part of this
+change the `auto_scratch` property in the jobs configuration has changed.
+  - Provide ability to setup via dependency or blocking behavior
+  - Allow specifying the number of VMs to span
+  - Allow specifying the per-job autoscratch task id
+- Allow multiple multi-instance tasks per job in non-`native` mode
+- Update GlusterFS version to 7
+
 ### Fixed
 - Fix merge task regression with enhanced autogenerated task id support
+- Fix job schedule submission regression
+([#329](https://github.com/Azure/batch-shipyard/issues/329))
+- Fix per-job autoscratch provisioning due to upstream dependency changes
 
 ## [3.9.0] - 2019-11-15 (SC19 Edition)
 ### Added
@@ -1707,7 +1726,8 @@ transfer is disabled
 #### Added
 - Initial release
 
-[Unreleased]: https://github.com/Azure/batch-shipyard/compare/3.9.0...HEAD
+[Unreleased]: https://github.com/Azure/batch-shipyard/compare/3.9.1...HEAD
+[3.9.1]: https://github.com/Azure/batch-shipyard/compare/3.9.0...3.9.1
 [3.9.0]: https://github.com/Azure/batch-shipyard/compare/3.8.2...3.9.0
 [3.8.2]: https://github.com/Azure/batch-shipyard/compare/3.8.1...3.8.2
 [3.8.1]: https://github.com/Azure/batch-shipyard/compare/3.8.0...3.8.1
```
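The changelog footer links follow a fixed GitHub compare-URL convention: each release links the previous tag to the new tag, and `Unreleased` links the latest tag to `HEAD`. A minimal sketch of that convention (hypothetical helper, not part of the repository):

```python
def compare_url(prev, new):
    """Build a GitHub compare link in the style of the changelog footer."""
    base = "https://github.com/Azure/batch-shipyard/compare"
    return f"{base}/{prev}...{new}"

# e.g. the new footer entry added by this commit
link_3_9_1 = compare_url("3.9.0", "3.9.1")
# and the retargeted Unreleased entry
link_unreleased = compare_url("3.9.1", "HEAD")
```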

convoy/version.py

Lines changed: 1 addition & 1 deletion

```diff
@@ -22,4 +22,4 @@
 # FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
 # DEALINGS IN THE SOFTWARE.
 
-__version__ = '3.9.0'
+__version__ = '3.9.1'
```
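Version strings like the bumped `__version__` order correctly when compared as integer tuples rather than as raw strings; a minimal sketch (hypothetical helper, not part of convoy/version.py):

```python
def parse_version(version):
    # '3.9.1' -> (3, 9, 1) so comparisons are numeric, not lexicographic
    return tuple(int(part) for part in version.split("."))

# The release bump in this commit orders as expected
assert parse_version("3.9.1") > parse_version("3.9.0")
# Tuple comparison also handles cases where string comparison fails,
# e.g. '3.10.0' sorts below '3.9.1' as a string
assert parse_version("3.10.0") > parse_version("3.9.1")
```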

docs/63-batch-shipyard-custom-images.md

Lines changed: 9 additions & 1 deletion

```diff
@@ -28,7 +28,7 @@ It is **strongly recommended** to use Shared Image Gallery resources instead
 of directly using an Azure Managed Image for increased reliability, robustness
 and performance of scale out (i.e., pool allocation with target node counts
 and resize up) operations with Azure Batch pools. These improvements hold even
-for Shared Image Gallery resource with a replica count of 1.
+for a Shared Image Gallery resource with a replica count of 1.
 
 This guide will focus on creating Shared Image Gallery resources for use with
 Azure Batch and Batch Shipyard.
@@ -263,12 +263,20 @@ and the required user-land software for Infiniband installed. It is best to
 base a custom image off of the existing Azure platform images that support
 Infiniband/RDMA.
 
+#### MPI Libraries
+If you are utilizing MPI, the associated runtime(s) must be installed such
+that they are invocable by the calling programs.
+
 #### Storage Cluster Auto-Linking and Mounting
 If mounting a storage cluster, the required NFSv4 or GlusterFS client tooling
 must be installed and invocable such that the auto-link mount functionality
 is operable. Both clients need not be installed unless you are mounting
 both types of storage clusters.
 
+#### Per-Job Autoscratch
+If utilizing the per-job autoscratch feature, then BeeGFS Beeond must be
+installed so that a shared file system can be created.
+
 #### GlusterFS On Compute
 If a GlusterFS on compute shared data volume is required, then GlusterFS
 server and client tooling must be installed and invocable so the shared
```
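The custom-image requirements added in this doc (MPI runtimes, storage-cluster clients, BeeGFS for autoscratch) all reduce to "the tool must be invocable." A minimal pre-bake check sketch; the helper and the launcher names are illustrative defaults, not anything mandated by Batch Shipyard:

```python
import shutil


def first_available(commands):
    """Return the path of the first command found on PATH, else None."""
    for name in commands:
        path = shutil.which(name)
        if path is not None:
            return path
    return None


# Example: verify an MPI launcher is invocable before capturing the image
mpi_launcher = first_available(("mpirun", "mpiexec"))
if mpi_launcher is None:
    print("warning: no MPI launcher found; MPI jobs will fail on this image")
```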
