
Commit 8c6af7c

feat(gpu): update doc gpu (#4621)
* feat(gpu): add h100-sxm
* feat(gpu): update wording
* feat(gpu): update docs
* Apply suggestions from code review

Co-authored-by: Jessica <[email protected]>
1 parent 61c7aff commit 8c6af7c

File tree

10 files changed: +87 −8 lines changed

menu/navigation.json

Lines changed: 4 additions & 0 deletions

```diff
@@ -1825,6 +1825,10 @@
   {
     "label": "Understanding NVIDIA FP8 format",
     "slug": "understanding-nvidia-fp8"
+  },
+  {
+    "label": "Understanding NVIDIA NVLink",
+    "slug": "understanding-nvidia-nvlink"
   }
 ],
 "label": "Additional Content",
```

pages/account/reference-content/products-availability.mdx

Lines changed: 2 additions & 0 deletions

```diff
@@ -24,6 +24,8 @@ Scaleway products are available in multiple regions and locations worldwide. Thi
 | Product Category | Product | Paris region | Amsterdam region | Warsaw region |
 |---------------------------|---------------------------------------|------------------------|-------------------------|------------------------|
 | **Compute** | Instances | PAR1, PAR2, PAR3 | AMS1, AMS2, AMS3 | WAW1, WAW2, WAW3 |
+| | GPU H100-SXM-X-80G | PAR2 | Not available yet | Not available yet |
 | | GPU H100-X-80G | PAR2 | Not available yet | WAW2 |
 | | GPU L40S-X-48G | PAR2 | Not available yet | WAW2 |
 | | GPU L4-X-24G | PAR2 | Not available yet | WAW2 |
```

pages/gpu/how-to/use-nvidia-mig-technology.mdx

Lines changed: 1 addition & 1 deletion

```diff
@@ -79,7 +79,7 @@ By default, the MIG feature of NVIDIA GPUs is disabled. To use it with your GPU
 MIG is now enabled for the GPU Instance.
 
 ## How to list MIG Profiles
-The NVIDIA driver provides several predefined profiles you can choose from while setting up the MIG (Multi-Instance GPU) feature on the H100.
+The NVIDIA driver provides several predefined profiles you can choose from while setting up the MIG (Multi-Instance GPU) feature on the H100 and H100-SXM.
 
 These profiles determine the sizes and capabilities of the MIG partitions that users can generate. Additionally, the driver supplies details regarding placements, which specify the types and quantities of Instances that can be established.
```
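For reference, here is a minimal sketch of querying those predefined profiles from a script, assuming an H100 or H100-SXM GPU Instance with MIG mode enabled and the NVIDIA driver installed; `nvidia-smi mig -lgip` lists the GPU instance profiles the driver supports:

```python
import subprocess

# List the predefined MIG GPU instance profiles exposed by the NVIDIA driver.
# Each profile line shows the name (e.g. 1g.10gb), how many instances of it
# fit on the GPU, and its memory/SM allocation.
result = subprocess.run(
    ["nvidia-smi", "mig", "-lgip"],
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout)
```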

pages/gpu/how-to/use-scratch-storage-h100-instances.mdx

Lines changed: 7 additions & 2 deletions

```diff
@@ -13,7 +13,7 @@ categories:
 - compute
 ---
 
-Scaleway H100 and L40S GPU Instances are equipped with additional scratch storage. This form of temporary Local Storage operates differently from our regular Local Storage.
+Scaleway H100, H100-SXM, and L40S GPU Instances are equipped with additional scratch storage. This form of temporary Local Storage operates differently from our regular Local Storage.
 
 Scratch storage temporarily accommodates data during computational or data processing tasks. It is commonly used for storing intermediate results, processing input data, or holding output data before that data is moved to more permanent storage.
 
@@ -41,10 +41,15 @@ This enhancement allows us to provide the GPU with a substantial amount of scrat
 * for L40S-8-48G Instances: 12.8 TB
 * for H100-1-80G Instances: 3 TB
 * for H100-2-80G Instances: 6 TB
+* for H100-SXM-2-80G Instances: ~3 TB
+* for H100-SXM-4-80G Instances: ~6 TB
+* for H100-SXM-8-80G Instances: ~12 TB
 </Message>
 
 ## How can I add scratch storage to my GPU Instance using the Scaleway CLI or console?
-Scratch storage is automatically added when creating H100 and L40S Instances.
+Scratch storage is automatically added when creating H100, H100-SXM, and L40S Instances.
 
 ## How can I add scratch storage to my GPU Instance when using the API?
 You need to add an extra volume, for example:
```
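The diff is truncated before the example itself. As an illustrative sketch only (the volume type, field names, sizes, and IDs below are assumptions to verify against the current Scaleway Instances API reference), a server-creation request with an extra scratch volume might look like:

```python
import requests

ZONE = "fr-par-2"  # H100 Instances are available in PAR2
API_URL = f"https://api.scaleway.com/instance/v1/zones/{ZONE}/servers"
HEADERS = {"X-Auth-Token": "YOUR_SCW_SECRET_KEY"}  # placeholder credentials

# Sketch of a creation payload with an additional scratch volume alongside
# the boot volume. The "scratch" volume type and exact field names are
# assumptions; check them against the API documentation before use.
payload = {
    "name": "my-h100-instance",
    "commercial_type": "H100-1-80G",
    "project": "YOUR_PROJECT_ID",
    "image": "YOUR_IMAGE_ID",
    "volumes": {
        "0": {"boot": True, "volume_type": "sbs_volume", "size": 100_000_000_000},
        "1": {"name": "scratch-volume", "volume_type": "scratch", "size": 3_000_000_000_000},
    },
}

response = requests.post(API_URL, json=payload, headers=HEADERS)
response.raise_for_status()
print(response.json()["server"]["id"])
```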

pages/gpu/reference-content/choosing-gpu-instance-type.mdx

Lines changed: 21 additions & 3 deletions

```diff
@@ -22,7 +22,7 @@ It empowers European AI startups, giving them the tools (without the need for a
 
 ## How to choose the right GPU Instance type
 
-Scaleway provides a range of GPU Instance offers, from [GPU RENDER Instances](https://www.scaleway.com/en/gpu-render-instances/) and [H100 PCIe Instances](https://www.scaleway.com/en/h100-pcie-try-it-now/) to [custom build clusters](https://www.scaleway.com/en/ai-supercomputers/). There are several factors to consider when choosing the right GPU Instance type to ensure that it meets your performance, budget, and scalability requirements.
+Scaleway provides a range of GPU Instance offers, from [GPU RENDER Instances](https://www.scaleway.com/en/gpu-render-instances/) and [H100 SXM Instances](https://www.scaleway.com/en/gpu-instances/) to [custom-built clusters](https://www.scaleway.com/en/ai-supercomputers/). There are several factors to consider when choosing the right GPU Instance type to ensure that it meets your performance, budget, and scalability requirements.
 Below, you will find a guide to help you make an informed decision:
 
 * **Workload requirements:** Identify the nature of your workload. Are you running machine learning, deep learning, high-performance computing (HPC), data analytics, or graphics-intensive applications? Different Instance types are optimized for different types of workloads. For example, the H100 is not designed for graphics rendering. However, other models are. As [stated by Tim Dettmers](https://timdettmers.com/2023/01/30/which-gpu-for-deep-learning/), “Tensor Cores are most important, followed by the memory bandwidth of a GPU, the cache hierarchy, and only then FLOPS of a GPU.” For more information, refer to the [NVIDIA GPU portfolio](https://docs.nvidia.com/data-center-gpu/line-card.pdf).
@@ -34,9 +34,9 @@ Below, you will find a guide to help you make an informed decision:
 * **Scaling:** Consider the scalability requirements of your workload. The most efficient way to scale up your workload is by using:
     * Bigger GPU
     * Up to 2 PCIe GPU with [H100 Instances](https://www.scaleway.com/en/h100-pcie-try-it-now/) or 8 PCIe GPU with [L4](https://www.scaleway.com/en/l4-gpu-instance/) or [L40S](https://www.scaleway.com/en/contact-l40s/) Instances.
-    * An HGX-based server setup with 8x NVlink GPUs
+    * Or better, an HGX-based server setup with up to 8x NVLink-connected GPUs on [H100-SXM Instances](https://www.scaleway.com/en/gpu-instances/)
     * A [supercomputer architecture](https://www.scaleway.com/en/ai-supercomputers/) for a larger setup for workload-intensive tasks
-* Another way to scale your workload is to use [Kubernetes and MIG](/gpu/how-to/use-nvidia-mig-technology/): You can divide a single H100 GPU into as many as 7 MIG partitions. This means that instead of employing seven P100 GPUs to set up seven K8S pods, you could opt for a single H100 GPU with MIG to effectively deploy all seven K8S pods.
+* Another way to scale your workload is to use [Kubernetes and MIG](/gpu/how-to/use-nvidia-mig-technology/): You can divide a single H100 or H100-SXM GPU into as many as 7 MIG partitions. This means that instead of employing seven P100 GPUs to set up seven K8S pods, you could opt for a single H100 GPU with MIG to effectively deploy all seven K8S pods.
 * **Online resources:** Check for online resources, forums, and community discussions related to the specific GPU type you are considering. This can provide insights into common issues, best practices, and optimizations.
 
 Remember that there is no one-size-fits-all answer, and the right GPU Instance type will depend on your workload’s unique requirements and budget. It is important that you regularly reassess your choice as your workload evolves. Depending on which type best fits your evolving tasks, you can easily migrate from one GPU Instance type to another.
@@ -62,6 +62,24 @@ Remember that there is no one-size-fits-all answer, and the right GPU Instance t
 | Better used for | Image / Video encoding (4K) | 7B LLM Fine-Tuning / Inference | 70B LLM Fine-Tuning / Inference |
 | What they are not made for | Large models (especially LLM) | Graphic or video encoding use cases | Graphic or video encoding use cases |
 
+| | **[H100-SXM-2-80G](https://www.scaleway.com/en/TBD/)** | **[H100-SXM-4-80G](https://www.scaleway.com/en/TBD/)** | **[H100-SXM-8-80G](https://www.scaleway.com/en/TBD/)** |
+|---|---|---|---|
+| GPU Type | 2x [H100](https://www.nvidia.com/en-us/data-center/h100/) SXM | 4x [H100](https://www.nvidia.com/en-us/data-center/h100/) SXM | 8x [H100](https://www.nvidia.com/en-us/data-center/h100/) SXM |
+| NVIDIA architecture | Hopper 2022 | Hopper 2022 | Hopper 2022 |
+| Tensor Cores | Yes | Yes | Yes |
+| Performance (training in FP16 Tensor Cores) | 2x 1979 TFLOPS | 4x 1979 TFLOPS | 8x 1979 TFLOPS |
+| VRAM | 2x 80 GB HBM3 (memory bandwidth: 3.35 TB/s) | 4x 80 GB HBM3 (memory bandwidth: 3.35 TB/s) | 8x 80 GB HBM3 (memory bandwidth: 3.35 TB/s) |
+| CPU Type | Xeon Platinum 8452Y (2.0 GHz) | Xeon Platinum 8452Y (2.0 GHz) | Xeon Platinum 8452Y (2.0 GHz) |
+| vCPUs | 32 | 64 | 128 |
+| RAM | 240 GB DDR5 | 480 GB DDR5 | 960 GB DDR5 |
+| Storage | Boot on Block 5K | Boot on Block 5K | Boot on Block 5K |
+| [Scratch Storage](/gpu/how-to/use-scratch-storage-h100-instances/) | Yes (~3 TB) | Yes (~6 TB) | Yes (~12 TB) |
+| [MIG compatibility](/gpu/how-to/use-nvidia-mig-technology/) | Yes | Yes | Yes |
+| Bandwidth | 20 Gbps | 20 Gbps | 20 Gbps |
+| Network technology | [NVLink](/gpu/reference-content/understanding-nvidia-nvlink/) | [NVLink](/gpu/reference-content/understanding-nvidia-nvlink/) | [NVLink](/gpu/reference-content/understanding-nvidia-nvlink/) |
+| Better used for | LLM fine-tuning, LLM inference with lower quantization and/or larger parameter counts, fast computer vision model training | LLM fine-tuning, LLM inference with lower quantization and/or larger parameter counts, fast computer vision model training | Llama 4 or DeepSeek R1 inference |
+| What they are not made for | Training of LLM (single node), graphic or video encoding use cases | Training of LLM (single node), graphic or video encoding use cases | Training of LLM (single node), graphic or video encoding use cases |
+
 | | **[L4-1-24G](https://www.scaleway.com/en/l4-gpu-instance/)** | **[L4-2-24G](https://www.scaleway.com/en/l4-gpu-instance/)** | **[L4-4-24G](https://www.scaleway.com/en/l4-gpu-instance/)** | **[L4-8-24G](https://www.scaleway.com/en/l4-gpu-instance/)** |
 |---|---|---|---|---|
 | GPU Type | 1x [L4](https://www.nvidia.com/en-us/data-center/l4/) PCIe4 | 2x [L4](https://www.nvidia.com/en-us/data-center/l4/) PCIe4 | 4x [L4](https://www.nvidia.com/en-us/data-center/l4/) PCIe4 | 8x [L4](https://www.nvidia.com/en-us/data-center/l4/) PCIe4 |
```
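To make the Kubernetes-and-MIG scaling path described above concrete, here is a hedged sketch of splitting one H100 into seven partitions. Profile names vary by GPU and driver; `1g.10gb` is the typical smallest profile on an H100 80 GB, but confirm with `nvidia-smi mig -lgip` first:

```python
import subprocess

# Hypothetical example: split one H100 80 GB GPU into seven 1g.10gb MIG
# partitions, each usable as a separate GPU resource (e.g. by a K8S pod).
# Requires MIG mode to be enabled on the GPU beforehand.
profiles = ",".join(["1g.10gb"] * 7)
subprocess.run(
    ["nvidia-smi", "mig", "-cgi", profiles, "-C"],  # -C also creates the compute instances
    check=True,
)

# Verify the resulting MIG devices.
subprocess.run(["nvidia-smi", "-L"], check=True)
```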

pages/gpu/reference-content/gpu-instances-bandwidth-overview.mdx

Lines changed: 3 additions & 0 deletions

```diff
@@ -35,6 +35,9 @@ GPU workloads often involve processing large datasets, requiring high-bandwidth
 
 | Instance Type | Internet Bandwidth | Block Bandwidth |
 |-------------------|-------------------------|---------------------|
+| H100-SXM-2-80G | 20 Gbit/s | 5 GiB/s |
+| H100-SXM-4-80G | 20 Gbit/s | 5 GiB/s |
+| H100-SXM-8-80G | 20 Gbit/s | 5 GiB/s |
 | H100-1-80G | 10 Gbit/s | 2 GiB/s |
 | H100-2-80G | 20 Gbit/s | 4 GiB/s |
 | L40S-1-48G | 2.5 Gbit/s | 1 GiB/s |
```

pages/gpu/reference-content/understanding-nvidia-fp8.mdx

Lines changed: 2 additions & 2 deletions

```diff
@@ -7,13 +7,13 @@ content:
   paragraph: This section provides information about NVIDIA FP8 (8-bit floating point) format
 tags: NVIDIA FP8 GPU cloud
 dates:
-  validation: 2024-10-14
+  validation: 2025-03-13
   posted: 2023-10-23
 categories:
   - compute
 ---
 
-Scaleway offers GPU Instances featuring [L4, L40S and H100 GPUs](https://www.scaleway.com/en/h100-pcie-try-it-now/) that support FP8 (8-bit floating point), a revolutionary datatype introduced by NVIDIA. It enables higher throughput of matrix multipliers and convolutions.
+Scaleway offers GPU Instances featuring [L4, L40S, H100, and H100-SXM GPUs](/gpu/reference-content/choosing-gpu-instance-type/) that support FP8 (8-bit floating point), a datatype introduced by NVIDIA that enables higher throughput for matrix multiplications and convolutions.
 
 FP8 is an 8-bit floating point standard which was jointly developed by NVIDIA, ARM, and Intel to speed up AI development by improving memory efficiency during AI training and inference processes.
```
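As a quick illustration of the format itself, here is a sketch assuming PyTorch 2.1+ on an FP8-capable GPU; production FP8 training typically goes through libraries such as NVIDIA's Transformer Engine, which manage the required scaling:

```python
import torch

# FP8 E4M3 (4 exponent bits, 3 mantissa bits): half the memory of FP16.
x = torch.randn(1024, 1024, dtype=torch.float16, device="cuda")
x_fp8 = x.to(torch.float8_e4m3fn)

print(x.element_size(), "byte(s) vs", x_fp8.element_size(), "byte(s) per element")

# Plain PyTorch arithmetic on FP8 tensors is limited; upcast to compute.
x_back = x_fp8.to(torch.float16)
print("max quantization error:", (x - x_back).abs().max().item())
```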

pages/gpu/reference-content/understanding-nvidia-nvlink.mdx

Lines changed: 40 additions & 0 deletions

```diff
@@ -0,0 +1,40 @@
+---
+meta:
+  title: Understanding NVIDIA NVLink
+  description: This section provides information about NVIDIA NVLink
+content:
+  h1: Understanding NVIDIA NVLink
+  paragraph: This section provides information about NVIDIA NVLink
+tags: NVIDIA NVLink
+dates:
+  validation: 2025-03-13
+  posted: 2025-03-13
+categories:
+  - compute
+---
+
+NVLink is NVIDIA's high-bandwidth, low-latency GPU-to-GPU interconnect with built-in resiliency features, available on Scaleway's [H100-SXM Instances](/gpu/reference-content/choosing-gpu-instance-type/#gpu-instances-and-ai-supercomputer-comparison-table). It is designed to significantly improve performance and efficiency when connecting GPUs, CPUs, and other components within the same node.
+It provides much higher bandwidth (up to 900 GB/s of total GPU-to-GPU bandwidth in an 8-GPU configuration) and lower latency than traditional PCIe Gen 4 (up to 32 GB/s per link).
+This allows more data to be transferred between GPUs in less time while also reducing latency.
+
+The high bandwidth and low latency make NVLink ideal for applications that require real-time data synchronization and processing, such as AI and HPC use cases.
+NVLink provides up to 900 GB/s of total bandwidth for multi-GPU I/O and shared memory accesses, which is 7x the bandwidth of PCIe Gen 5.
+NVLink allows direct GPU-to-GPU interconnection, improving data transfer efficiency and reducing the need for CPU intervention, which can introduce bottlenecks.
+
+NVLink supports the connection of multiple GPUs, enabling the creation of powerful multi-GPU systems capable of handling more complex and demanding workloads.
+Unified Memory Access allows GPUs to access each other's memory directly without CPU mediation, which is particularly beneficial for large-scale AI and HPC workloads.
+
+### Comparison: NVLink vs. PCIe
+NVLink and PCI Express (PCIe) are both used for GPU communication, but NVLink is specifically designed to address the bandwidth and latency bottlenecks of PCIe in multi-GPU setups.
+
+| Feature | NVLink 4.0 (H100-SXM) | PCIe 5.0 |
+|-------------------|---------------------------|------------------------------------|
+| **Use case** | High-performance computing, deep learning | General-purpose computing, graphics |
+| **Bandwidth** | Up to 900 GB/s (aggregate, multi-GPU) | 128 GB/s (x16, bidirectional) |
+| **Latency** | Lower than PCIe (sub-microsecond) | Higher than NVLink |
+| **Communication** | Direct GPU-to-GPU | Through CPU or PCIe switch |
+| **Memory sharing** | Unified memory space across GPUs | Requires CPU intervention (higher overhead) |
+| **Scalability** | Multi-GPU direct connection via NVSwitch | Limited by PCIe lanes |
+| **Efficiency** | Optimized for GPU workloads | More general-purpose |
+
+In summary, NVLink, available on [H100-SXM Instances](/gpu/reference-content/choosing-gpu-instance-type/#gpu-instances-and-ai-supercomputer-comparison-table), is **superior** for **multi-GPU AI and HPC** workloads due to its **higher bandwidth, lower latency, and memory-sharing capabilities**, while PCIe remains essential for broader system connectivity and general computing.
```
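For readers who want to verify NVLink connectivity on a multi-GPU Instance, a small sketch (assuming the NVIDIA driver and PyTorch are installed):

```python
import subprocess
import torch

# Print the GPU interconnect matrix: entries such as NV18 indicate NVLink
# between a GPU pair, while PIX/PHB/SYS indicate PCIe paths.
topo = subprocess.run(
    ["nvidia-smi", "topo", "-m"],
    capture_output=True, text=True, check=True,
)
print(topo.stdout)

# Check CUDA peer-to-peer access (carried over NVLink when available)
# between the first two GPUs.
if torch.cuda.device_count() >= 2:
    print("P2P GPU0 <-> GPU1:", torch.cuda.can_device_access_peer(0, 1))
```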
