
Commit 680f33f

fix typo
1 parent 589429b commit 680f33f

File tree: 1 file changed (+1, −1 lines)


pages/gpu/reference-content/migration-h100.mdx

Lines changed: 1 addition & 1 deletion
@@ -86,7 +86,7 @@ H100 PCIe-based GPU Instances are not End-of-Life (EOL), but due to limited avai

 #### Is H100-SXM-2-80G compatible with my current setup?

 Yes — it runs the same CUDA toolchain and supports standard frameworks (PyTorch, TensorFlow, etc.). No changes to your codebase are required when upgrading to an SXM-based GPU Instance.

-### Why is the H100-SXM better for multi-GPU workloads?
+#### Why is the H100-SXM better for multi-GPU workloads?

 The NVIDIA H100-SXM outperforms the H100-PCIe in multi-GPU configurations primarily because of its higher interconnect bandwidth and greater power budget. It uses fourth-generation NVLink and NVSwitch, delivering up to **900 GB/s of bidirectional bandwidth** for fast GPU-to-GPU communication. In contrast, the H100-PCIe is limited to a **theoretical maximum of 128 GB/s** over PCIe Gen 5, which becomes a bottleneck in communication-heavy workloads such as large-scale AI training and HPC.

 The H100-SXM also provides **HBM3 memory** with up to **3.35 TB/s of bandwidth**, compared to **2 TB/s** with the H100-PCIe's HBM2e, improving performance in memory-bound tasks.
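The bandwidth figures in the diff above can be made concrete with a back-of-envelope calculation: how long an idealized GPU-to-GPU transfer of one full set of fp16 gradients would take at each interconnect's peak bidirectional bandwidth. The 7B-parameter model size and the single-transfer pattern are illustrative assumptions, not figures from the docs page; real all-reduce times also depend on latency, topology, and NCCL overhead.

```python
# Illustrative comparison of NVLink vs. PCIe Gen 5 transfer times.
# Model size (7B params, fp16) is an assumed example, not a Scaleway figure.

GRAD_BYTES = 7e9 * 2   # 7B parameters x 2 bytes (fp16)
NVLINK_BW = 900e9      # H100-SXM: NVLink 4, 900 GB/s bidirectional
PCIE_BW = 128e9        # H100-PCIe: PCIe Gen 5 x16, 128 GB/s bidirectional

def transfer_seconds(nbytes: float, bandwidth: float) -> float:
    """Idealized transfer time: bytes / bandwidth, ignoring latency
    and protocol overhead."""
    return nbytes / bandwidth

t_nvlink = transfer_seconds(GRAD_BYTES, NVLINK_BW)
t_pcie = transfer_seconds(GRAD_BYTES, PCIE_BW)

print(f"NVLink: {t_nvlink * 1e3:.1f} ms")   # ~15.6 ms
print(f"PCIe:   {t_pcie * 1e3:.1f} ms")     # ~109.4 ms
print(f"ratio:  {t_pcie / t_nvlink:.1f}x")  # ~7.0x
```

Under these idealized assumptions the same gradient exchange is roughly 7x slower over PCIe Gen 5 than over NVLink, which is why communication-heavy multi-GPU training favors the SXM form factor.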

0 commit comments