Commit 0193d03

fix(gpu): fix gpu name
1 parent 648418b

2 files changed: +3 −17 lines

2 files changed

+3
-17
lines changed

pages/gpu/reference-content/choosing-gpu-instance-type.mdx

Lines changed: 0 additions & 14 deletions
@@ -126,17 +126,3 @@ Remember that there is no one-size-fits-all answer, and the right GPU Instance t
 | Storage | 64TB of a3i DDN low latency storage | 1.8 PB of a3i DDN low latency storage |
 | [MIG compatibility](/gpu/how-to/use-nvidia-mig-technology/) | Yes | Yes |
 | Inter-GPU bandwidth | Infiniband 400 Gb/s | Infiniband 400 Gb/s |
-
-### NVIDIA GH200 Superchip
-
-| | **[GH200 Grace Hopper™](https://www.scaleway.com/en/contact-gh200/)** |
-|---------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| GPU Type | NVIDIA [GH200 Grace Hopper™ Superchip](https://www.nvidia.com/en-us/data-center/grace-hopper-superchip/) |
-| NVIDIA architecture | GH200 Grace Hopper™ Architecture |
-| Performance | 990 TFLops (in FP166 Tensor Core) |
-| Specifications | - GH200 SuperChip with 72 ARM Neoverse V2 cores<br />- 480 GB of LPDDR5X DRAM<br />- 96GB of HBM3 GPU memory<br />(Memory is fully merged for up to 576GB of global usable memory) |
-| [MIG compatibility](/gpu/how-to/use-nvidia-mig-technology/) | Yes |
-| Inter-GPU bandwidth (for clusters up to 256 GH200) | NVlink Switch System 900 GB/s |
-| Format & Features | Single chip up to GH200 clusters. (For larger setup needs, [contact us](https://www.scaleway.com/en/contact-ai-supercomputers/)) |
-| Use cases | - Extra large LLM and DL model inference<br />- HPC |
-| What they are not made for | - Graphism<br /> - (Training) |

pages/gpu/reference-content/understanding-nvidia-nvlink.mdx

Lines changed: 3 additions & 3 deletions
@@ -7,7 +7,7 @@ dates:
 posted: 2025-03-13
 ---
 
-NVLink is NVIDIA's high-bandwidth, low-latency GPU-to-GPU interconnect with built-in resiliency features, available on Scaleway's [H100-SGX Instances](/gpu/reference-content/choosing-gpu-instance-type/#gpu-instances-and-ai-supercomputer-comparison-table). It was designed to significantly improve the performance and efficiency when connecting GPUs, CPUs, and other components within the same node.
+NVLink is NVIDIA's high-bandwidth, low-latency GPU-to-GPU interconnect with built-in resiliency features, available on Scaleway's [H100-SXM Instances](/gpu/reference-content/choosing-gpu-instance-type/#gpu-instances-and-ai-supercomputer-comparison-table). It was designed to significantly improve the performance and efficiency when connecting GPUs, CPUs, and other components within the same node.
 It provides much higher bandwidth (up to 900 GB/s total GPU-to-GPU bandwidth in an 8-GPU configuration) and lower latency compared to traditional PCIe Gen 4 (up to 32 GB/s per link).
 This allows more data to be transferred between GPUs in less time while also reducing latency.
 
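To make the bandwidth figures quoted in this hunk concrete, the short Python sketch below converts them into transfer times for a fixed payload. The 80 GB payload is an illustrative assumption (roughly one H100's HBM3 capacity), not a number from the changed files.

```python
# Rough transfer-time comparison for the bandwidths quoted above.
# PAYLOAD_GB is an assumed, illustrative figure (~one H100 HBM3 snapshot).
PAYLOAD_GB = 80

bandwidths_gbps = {
    "NVLink 4.0, 8-GPU aggregate (H100-SXM)": 900,  # from the paragraph above
    "PCIe 5.0 x16, bidirectional": 128,             # from the comparison table
    "PCIe Gen 4, per link": 32,                     # from the paragraph above
}

for name, gbps in bandwidths_gbps.items():
    print(f"{name}: {PAYLOAD_GB} GB in ~{PAYLOAD_GB / gbps:.2f} s")
```

At these rates the same 80 GB moves in roughly 0.09 s over aggregate NVLink versus about 2.5 s over a single PCIe Gen 4 link, which is the gap the interconnect is designed to close.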
@@ -21,7 +21,7 @@ Unified Memory Access allows GPUs to access each other's memory directly without
 ### Comparison: NVLink vs. PCIe
 NVLink and PCI Express (PCIe) are both used for GPU communication, but NVLink is specifically designed to address the bandwidth and latency bottlenecks of PCIe in multi-GPU setups.
 
-| Feature | NVLink 4.0 (H100-SGX) | PCIe 5.0 |
+| Feature | NVLink 4.0 (H100-SXM) | PCIe 5.0 |
 |-------------------|---------------------------|------------------------------------|
 | **Use case** | High-performance computing, deep learning | General-purpose computing, graphics |
 | **Bandwidth** | Up to 900 GB/s (aggregate, multi-GPU) | 128 GB/s (x16 bidirectional) |
@@ -31,4 +31,4 @@ NVLink and PCI Express (PCIe) are both used for GPU communication, but NVLink is
 | **Scalability** | Multi-GPU direct connection via NVSwitch | Limited by PCIe lanes |
 | **Efficiency** | Optimized for GPU workloads | More general-purpose |
 
-In summary, NVLink, available on [H100-SGX Instances](/gpu/reference-content/choosing-gpu-instance-type/#gpu-instances-and-ai-supercomputer-comparison-table), is **superior** for **multi-GPU AI and HPC** workloads due to its **higher bandwidth, lower latency, and memory-sharing capabilities**, while PCIe remains essential for broader system connectivity and general computing.
+In summary, NVLink, available on [H100-SXM Instances](/gpu/reference-content/choosing-gpu-instance-type/#gpu-instances-and-ai-supercomputer-comparison-table), is **superior** for **multi-GPU AI and HPC** workloads due to its **higher bandwidth, lower latency, and memory-sharing capabilities**, while PCIe remains essential for broader system connectivity and general computing.
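As a practical footnote to the comparison above, a minimal PyTorch sketch can verify that the GPUs on a multi-GPU node can address each other directly; whether a given pair actually rides NVLink or falls back to PCIe is visible with `nvidia-smi topo -m`. This is an assumed illustration for an H100-SXM-style node, not part of the files changed in this commit.

```python
# Minimal sketch: check GPU peer-to-peer reachability with PyTorch.
# On NVLink-connected parts (e.g. H100-SXM), peer access is normally
# reported for every pair; `nvidia-smi topo -m` shows the actual links.
import torch

n = torch.cuda.device_count()
for src in range(n):
    for dst in range(n):
        if src != dst:
            ok = torch.cuda.can_device_access_peer(src, dst)
            print(f"GPU {src} -> GPU {dst}: peer access = {ok}")

# A device-to-device copy takes the direct path when peer access exists:
if n >= 2:
    x = torch.randn(1024, 1024, device="cuda:0")
    y = x.to("cuda:1")  # GPU 0 -> GPU 1 over NVLink/PCIe
```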
