Commit e9d6834

Merge pull request #250340 from jvenka/jvenka-patch-2

HBv3 update

2 parents babb372 + 936958c
3 files changed: +8 −8 lines

articles/virtual-machines/hbv3-series.md

Lines changed: 1 addition & 1 deletion
@@ -18,7 +18,7 @@ All HBv3-series VMs feature 200 Gb/sec HDR InfiniBand from NVIDIA Networking to
 
 [Premium Storage](premium-storage-performance.md): Supported<br>
 [Premium Storage caching](premium-storage-performance.md): Supported<br>
-[Ultra Disks](disks-types.md#ultra-disks): Supported ([Learn more](https://techcommunity.microsoft.com/t5/azure-compute/ultra-disk-storage-for-hpc-and-gpu-vms/ba-p/2189312) about availability, usage and performance) <br>
+[Ultra Disks](disks-types.md#ultra-disks): Not supported<br>
 [Live Migration](maintenance-and-updates.md): Not Supported<br>
 [Memory Preserving Updates](maintenance-and-updates.md): Not Supported<br>
 [VM Generation Support](generation-2.md): Generation 1 and 2<br>

articles/virtual-machines/hbv4-series-overview.md

Lines changed: 1 addition & 1 deletion
@@ -110,7 +110,7 @@ When paired in a striped array, the NVMe SSD provides up to 12 GB/s reads and 7
 | Cores | 176, 144, 96, 48, or 24 (SMT disabled) |
 | CPU | AMD EPYC 9V33X |
 | CPU Frequency (non-AVX) | 2.4 GHz base, 3.7 GHz peak boost |
-| Memory | 688 GB (RAM per core depends on VM size) |
+| Memory | 704 GB (RAM per core depends on VM size) |
 | Local Disk | 2 * 1.8 TB NVMe (block), 480 GB SSD (page file) |
 | InfiniBand | 400 Gb/s Mellanox ConnectX-7 NDR InfiniBand |
 | Network | 80 Gb/s Ethernet (40 Gb/s usable) Azure second Gen SmartNIC |

articles/virtual-machines/hbv4-series.md

Lines changed: 6 additions & 6 deletions
@@ -14,7 +14,7 @@ ms.reviewer: wwilliams
 
 **Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
 
-HBv4-series VMs are optimized for various HPC workloads such as computational fluid dynamics, finite element analysis, frontend and backend EDA, rendering, molecular dynamics, computational geoscience, weather simulation, and financial risk analysis. HBv4 VMs feature up to 176 AMD EPYC™ 9V33X ("Genoa-X") CPU cores with AMD's 3D V-Cache, clock frequencies up to 3.7 GHz, and no simultaneous multithreading. HBv4-series VMs also provide 688 GB of RAM, 2.3 GB L3 cache. The 2.3 GB L3 cache per VM can deliver up to 5.7 TB/s of bandwidth to amplify up to 780 GB/s of bandwidth from DRAM, for a blended average of 1.2 TB/s of effective memory bandwidth across a broad range of customer workloads. The VMs also provide up to 12 GB/s (reads) and 7 GB/s (writes) of block device SSD performance.
+HBv4-series VMs are optimized for various HPC workloads such as computational fluid dynamics, finite element analysis, frontend and backend EDA, rendering, molecular dynamics, computational geoscience, weather simulation, and financial risk analysis. HBv4 VMs feature up to 176 AMD EPYC™ 9V33X ("Genoa-X") CPU cores with AMD's 3D V-Cache, clock frequencies up to 3.7 GHz, and no simultaneous multithreading. HBv4-series VMs also provide 704 GB of RAM, 2.3 GB L3 cache. The 2.3 GB L3 cache per VM can deliver up to 5.7 TB/s of bandwidth to amplify up to 780 GB/s of bandwidth from DRAM, for a blended average of 1.2 TB/s of effective memory bandwidth across a broad range of customer workloads. The VMs also provide up to 12 GB/s (reads) and 7 GB/s (writes) of block device SSD performance.
 
 
 All HBv4-series VMs feature 400 Gb/s NDR InfiniBand from NVIDIA Networking to enable supercomputer-scale MPI workloads. These VMs are connected in a non-blocking fat tree for optimized and consistent RDMA performance. NDR continues to support features like Adaptive Routing and the Dynamically Connected Transport (DCT). This newest generation of InfiniBand also brings greater support for offload of MPI collectives, optimized real-world latencies due to congestion control intelligence, and enhanced adaptive routing capabilities. These features enhance application performance, scalability, and consistency, and their usage is recommended.
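The bandwidth figures in the updated paragraph (5.7 TB/s from L3, 780 GB/s from DRAM, a blended 1.2 TB/s) can be sanity-checked with a simple linear blend by L3 hit rate. This is a back-of-envelope sketch: the ~8.5% hit rate below is an illustrative assumption chosen to reproduce the quoted blend, not a number published in the doc.

```python
# Back-of-envelope check of the HBv4 effective-bandwidth claim.
# Bandwidth figures are from the doc; the hit rate is an assumption.
L3_BW_TBS = 5.7      # amplified L3 (3D V-Cache) bandwidth, TB/s
DRAM_BW_TBS = 0.78   # DRAM bandwidth (780 GB/s), TB/s

def blended_bandwidth(l3_hit_rate: float) -> float:
    """Linear blend of traffic served from L3 vs. DRAM, in TB/s."""
    return l3_hit_rate * L3_BW_TBS + (1.0 - l3_hit_rate) * DRAM_BW_TBS

# A hit rate of roughly 8.5% already yields the quoted ~1.2 TB/s blend.
print(round(blended_bandwidth(0.085), 2))  # → 1.2
```

Under this simple model, even a modest fraction of traffic served from the large L3 lifts effective bandwidth well above the raw DRAM figure, which is consistent with the doc's "blended average" framing.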
@@ -31,11 +31,11 @@ All HBv4-series VMs feature 400 Gb/s NDR InfiniBand from NVIDIA Networking to en
 
 |Size |Physical CPU cores |Processor |Memory (GB) |Memory bandwidth (GB/s) |Base CPU frequency (GHz) |Single-core frequency (GHz, peak) |RDMA performance (Gb/s) |MPI support |Temp storage (TB) |Max data disks |Max Ethernet vNICs |
 |----|----|----|----|----|----|----|----|----|----|----|----|
-|Standard_HB176rs_v4 |176 |AMD EPYC 9V33X (Genoa-X) |688 |780 |2.4 |3.7 |400 |All |2 * 1.8 |32 |8 |
-|Standard_HB176-144rs_v4|144 |AMD EPYC 9V33X (Genoa-X) |688 |780 |2.4 |3.7 |400 |All |2 * 1.8 |32 |8 |
-|Standard_HB176-96rs_v4 |96 |AMD EPYC 9V33X (Genoa-X) |688 |780 |2.4 |3.7 |400 |All |2 * 1.8 |32 |8 |
-|Standard_HB176-48rs_v4 |48 |AMD EPYC 9V33X (Genoa-X) |688 |780 |2.4 |3.7 |400 |All |2 * 1.8 |32 |8 |
-|Standard_HB176-24rs_v4 |24 |AMD EPYC 9V33X (Genoa-X) |688 |780 |2.4 |3.7 |400 |All |2 * 1.8 |32 |8 |
+|Standard_HB176rs_v4 |176 |AMD EPYC 9V33X (Genoa-X) |704 |780 |2.4 |3.7 |400 |All |2 * 1.8 |32 |8 |
+|Standard_HB176-144rs_v4|144 |AMD EPYC 9V33X (Genoa-X) |704 |780 |2.4 |3.7 |400 |All |2 * 1.8 |32 |8 |
+|Standard_HB176-96rs_v4 |96 |AMD EPYC 9V33X (Genoa-X) |704 |780 |2.4 |3.7 |400 |All |2 * 1.8 |32 |8 |
+|Standard_HB176-48rs_v4 |48 |AMD EPYC 9V33X (Genoa-X) |704 |780 |2.4 |3.7 |400 |All |2 * 1.8 |32 |8 |
+|Standard_HB176-24rs_v4 |24 |AMD EPYC 9V33X (Genoa-X) |704 |780 |2.4 |3.7 |400 |All |2 * 1.8 |32 |8 |
 
 [!INCLUDE [hpc-include](./includes/hpc-include.md)]
 
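The size table above gives every HBv4 size the same 704 GB of RAM, so the constrained-core sizes get proportionally more memory per core, which is what the overview's "RAM per core depends on VM size" note means. A minimal sketch of that arithmetic, using the size names and figures from the table:

```python
# RAM per core for each HBv4 size; all sizes share 704 GB of RAM.
HBV4_RAM_GB = 704
HBV4_CORES = {
    "Standard_HB176rs_v4": 176,
    "Standard_HB176-144rs_v4": 144,
    "Standard_HB176-96rs_v4": 96,
    "Standard_HB176-48rs_v4": 48,
    "Standard_HB176-24rs_v4": 24,
}

# Memory per physical core scales inversely with enabled core count.
ram_per_core = {size: HBV4_RAM_GB / cores for size, cores in HBV4_CORES.items()}
for size, gb in ram_per_core.items():
    print(f"{size}: {gb:.2f} GB/core")  # full size: 4.00; 24-core size: 29.33
```

This is why the doc corrects the memory figure in both the per-size table and the prose: a single wrong total would skew every per-core ratio derived from it.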
4141
