
Commit e27a94f

feat(gpu): add h100-sxm
1 parent 8c34ba3 commit e27a94f

9 files changed: +89 additions, -7 deletions


menu/navigation.json

Lines changed: 4 additions & 0 deletions
@@ -1817,6 +1817,10 @@
 {
   "label": "Understanding NVIDIA FP8 format",
   "slug": "understanding-nvidia-fp8"
+},
+{
+  "label": "Understanding NVIDIA NVLink",
+  "slug": "understanding-nvidia-nvlink"
 }
 ],
 "label": "Additional Content",

pages/gpu/how-to/use-nvidia-mig-technology.mdx

Lines changed: 1 addition & 1 deletion
@@ -79,7 +79,7 @@ By default, the MIG feature of NVIDIA GPUs is disabled. To use it with your GPU
 MIG is now enabled for the GPU Instance.
 
 ## How to list MIG Profiles
-The NVIDIA driver provides several predefined profiles you can choose from while setting up the MIG (Multi-Instance GPU) feature on the H100.
+The NVIDIA driver provides several predefined profiles you can choose from while setting up the MIG (Multi-Instance GPU) feature on the H100 and H100-SXM.
 
 These profiles determine the sizes and functionalities available for the MIG partitions that users can generate. Additionally, the driver supplies details regarding placements, which specify the types and quantities of Instances that can be established.
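For reference, the predefined profiles and placements mentioned above can be listed with `nvidia-smi` once MIG is enabled; a minimal sketch, assuming the NVIDIA driver is installed on the Instance:

```bash
# List the GPU instance profiles the driver predefines for this GPU
sudo nvidia-smi mig -lgip

# List the possible placements for those profiles
sudo nvidia-smi mig -lgipp
```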

pages/gpu/how-to/use-scratch-storage-h100-instances.mdx

Lines changed: 8 additions & 2 deletions
@@ -13,7 +13,7 @@ categories:
 - compute
 ---
 
-Scaleway H100 and L40S GPU Instances are equipped with additional scratch storage. This form of temporary Local Storage operates differently from our regular Local Storage.
+Scaleway H100, H100-SXM and L40S GPU Instances are equipped with additional scratch storage. This form of temporary Local Storage operates differently from our regular Local Storage.
 
 Scratch storage temporarily accommodates data during computational or data processing tasks. It is commonly used for storing intermediate results, processing input data, or holding output data before that data is moved to more permanent storage.
 
@@ -41,10 +41,16 @@ This enhancement allows us to provide the GPU with a substantial amount of scrat
 * for L40S-8-48G Instances: 12.8 TB
 * for H100-1-80G Instances: 3 TB
 * for H100-2-80G Instances: 6 TB
+* for H100-SXM-1-80G Instances: ~1.5 TB
+* for H100-SXM-2-80G Instances: ~3 TB
+* for H100-SXM-4-80G Instances: ~6 TB
+* for H100-SXM-8-80G Instances: ~12 TB
+
 </Message>
 
 ## How can I add scratch storage to my GPU Instance using the Scaleway CLI or console?
-Scratch storage is automatically added when creating H100 and L40S Instances.
+Scratch storage is automatically added when creating H100, H100-SXM and L40S Instances.
 
 ## How can I add scratch storage to my GPU Instance when using the API?
 You need to add an extra volume, for example:
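The example payload itself is not shown in this diff. Purely as an illustrative sketch of the idea (the zone, volume key, and `scratch` volume type below are assumptions to check against the Instances API reference, and required fields such as the image are omitted), an extra scratch volume declared at Instance creation could look like:

```bash
# Hypothetical sketch: create an H100-SXM Instance with an extra scratch volume.
# Field names and values are assumptions; refer to the Instances API documentation.
curl -X POST "https://api.scaleway.com/instance/v1/zones/fr-par-2/servers" \
  -H "X-Auth-Token: $SCW_SECRET_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "my-gpu-instance",
    "commercial_type": "H100-SXM-2-80G",
    "project": "'"$SCW_PROJECT_ID"'",
    "volumes": {
      "1": { "name": "scratch-volume", "volume_type": "scratch" }
    }
  }'
```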

pages/gpu/reference-content/choosing-gpu-instance-type.mdx

Lines changed: 20 additions & 2 deletions
@@ -34,9 +34,9 @@ Below, you will find a guide to help you make an informed decision:
 * **Scaling:** Consider the scalability requirements of your workload. The most efficient way to scale up your workload is by using:
     * A bigger GPU
     * Up to 2 PCIe GPUs with [H100 Instances](https://www.scaleway.com/en/h100-pcie-try-it-now/) or 8 PCIe GPUs with [L4](https://www.scaleway.com/en/l4-gpu-instance/) or [L40S](https://www.scaleway.com/en/contact-l40s/) Instances.
-    * An HGX-based server setup with 8x NVlink GPUs
+    * An HGX-based server setup with up to 8x NVLink-connected GPUs, using [H100-SXM Instances](<ADD LINK>)
     * A [supercomputer architecture](https://www.scaleway.com/en/ai-supercomputers/) for a larger setup for workload-intensive tasks
-    * Another way to scale your workload is to use [Kubernetes and MIG](/gpu/how-to/use-nvidia-mig-technology/): You can divide a single H100 GPU into as many as 7 MIG partitions. This means that instead of employing seven P100 GPUs to set up seven K8S pods, you could opt for a single H100 GPU with MIG to effectively deploy all seven K8S pods.
+    * Another way to scale your workload is to use [Kubernetes and MIG](/gpu/how-to/use-nvidia-mig-technology/): You can divide a single H100 or H100-SXM GPU into as many as 7 MIG partitions. This means that instead of employing seven P100 GPUs to set up seven K8S pods, you could opt for a single H100 GPU with MIG to effectively deploy all seven K8S pods.
 * **Online resources:** Check for online resources, forums, and community discussions related to the specific GPU type you are considering. This can provide insights into common issues, best practices, and optimizations.
 
 Remember that there is no one-size-fits-all answer, and the right GPU Instance type will depend on your workload’s unique requirements and budget. It is important that you regularly reassess your choice as your workload evolves. Depending on which type best fits your evolving tasks, you can easily migrate from one GPU Instance type to another.
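To make the Kubernetes-and-MIG scaling option above more concrete, a single H100 or H100-SXM GPU can be carved into seven MIG partitions, one per pod; a minimal sketch, assuming the `1g.10gb` profile name exposed by your driver version:

```bash
# Create seven 1g.10gb GPU instances and their compute instances (-C) on GPU 0
sudo nvidia-smi mig -i 0 -cgi 1g.10gb,1g.10gb,1g.10gb,1g.10gb,1g.10gb,1g.10gb,1g.10gb -C

# List the resulting MIG devices that can then be scheduled to K8S pods
nvidia-smi -L
```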
@@ -62,6 +62,24 @@ Remember that there is no one-size-fits-all answer, and the right GPU Instance t
 | Better used for | Image / Video encoding (4K) | 7B LLM Fine-Tuning / Inference | 70B LLM Fine-Tuning / Inference |
 | What they are not made for | Large models (especially LLM) | Graphic or video encoding use cases | Graphic or video encoding use cases |
 
+| | **[H100-SXM-1-80G](https://www.scaleway.com/en/TBD/)** | **[H100-SXM-2-80G](https://www.scaleway.com/en/TBD/)** | **[H100-SXM-4-80G](https://www.scaleway.com/en/TBD/)** | **[H100-SXM-8-80G](https://www.scaleway.com/en/TBD/)** |
+|---------------------------------------------------------------------|--------------------|--------------------|--------------------|--------------------|
+| GPU Type | 1x [H100](https://www.nvidia.com/en-us/data-center/h100/) SXM | 2x [H100](https://www.nvidia.com/en-us/data-center/h100/) SXM | 4x [H100](https://www.nvidia.com/en-us/data-center/h100/) SXM | 8x [H100](https://www.nvidia.com/en-us/data-center/h100/) SXM |
+| NVIDIA architecture | Hopper 2022 | Hopper 2022 | Hopper 2022 | Hopper 2022 |
+| Tensor Cores | Yes | Yes | Yes | Yes |
+| Performance (training in FP16 Tensor Cores) | 1x 1513 TFLOPS | 2x 1513 TFLOPS | 4x 1513 TFLOPS | 8x 1513 TFLOPS |
+| VRAM | 1x 80 GB HBM2E (Memory bandwidth: 2TB/s) | 2x 80 GB HBM2E (Memory bandwidth: 2TB/s) | 4x 80 GB HBM2E (Memory bandwidth: 2TB/s) | 8x 80 GB HBM2E (Memory bandwidth: 2TB/s) |
+| CPU Type | Xeon Platinum 8452Y (2.0 GHz) | Xeon Platinum 8452Y (2.0 GHz) | Xeon Platinum 8452Y (2.0 GHz) | Xeon Platinum 8452Y (2.0 GHz) |
+| vCPUs | 16 | 32 | 64 | 128 |
+| RAM | 120 GB DDR5 | 240 GB DDR5 | 480 GB DDR5 | 960 GB DDR5 |
+| Storage | Boot on Block 5K | Boot on Block 5K | Boot on Block 5K | Boot on Block 5K |
+| [Scratch Storage](/gpu/how-to/use-scratch-storage-h100-instances/) | Yes (~1.5 TB) | Yes (~3 TB) | Yes (~6 TB) | Yes (~12 TB) |
+| [MIG compatibility](/gpu/how-to/use-nvidia-mig-technology/) | Yes | Yes | Yes | Yes |
+| Bandwidth | 10 Gbps | 20 Gbps | 20 Gbps | 20 Gbps |
+| Network technology | [NVLink](/gpu/reference-content/understanding-nvidia-nvlink/) | [NVLink](/gpu/reference-content/understanding-nvidia-nvlink/) | [NVLink](/gpu/reference-content/understanding-nvidia-nvlink/) | [NVLink](/gpu/reference-content/understanding-nvidia-nvlink/) |
+| Better used for | *To be defined* | *To be defined* | *To be defined* | *To be defined* |
+| What they are not made for | *To be defined* | *To be defined* | *To be defined* | *To be defined* |
+
 | | **[L4-1-24G](https://www.scaleway.com/en/l4-gpu-instance/)** | **[L4-2-24G](https://www.scaleway.com/en/l4-gpu-instance/)** | **[L4-4-24G](https://www.scaleway.com/en/l4-gpu-instance/)** | **[L4-8-24G](https://www.scaleway.com/en/l4-gpu-instance/)** |
 |---------------------------------------------------------------------|--------------------|--------------------|--------------------|--------------------|
 | GPU Type | 1x [L4](https://www.nvidia.com/en-us/data-center/l4/) PCIe4 | 2x [L4](https://www.nvidia.com/en-us/data-center/l4/) PCIe4 | 4x [L4](https://www.nvidia.com/en-us/data-center/l4/) PCIe4 | 8x [L4](https://www.nvidia.com/en-us/data-center/l4/) PCIe4 |

pages/gpu/reference-content/gpu-instances-bandwidth-overview.mdx

Lines changed: 4 additions & 0 deletions
@@ -35,6 +35,10 @@ GPU workloads often involve processing large datasets, requiring high-bandwidth
 
 | Instance Type | Internet Bandwidth | Block Bandwidth |
 |-------------------|-------------------------|---------------------|
+| H100-SXM-1-80G | 10 Gbit/s | 5 GiB/s |
+| H100-SXM-2-80G | 20 Gbit/s | 5 GiB/s |
+| H100-SXM-4-80G | 20 Gbit/s | 5 GiB/s |
+| H100-SXM-8-80G | 20 Gbit/s | 5 GiB/s |
 | H100-1-80G | 10 Gbit/s | 2 GiB/s |
 | H100-2-80G | 20 Gbit/s | 4 GiB/s |
 | L40S-1-48G | 2.5 Gbit/s | 1 GiB/s |

pages/gpu/reference-content/understanding-nvidia-fp8.mdx

Lines changed: 2 additions & 2 deletions
@@ -7,13 +7,13 @@ content:
   paragraph: This section provides information about NVIDIA FP8 (8-bit floating point) format
 tags: NVIDIA FP8 GPU cloud
 dates:
-  validation: 2024-10-14
+  validation: 2025-03-13
   posted: 2023-10-23
 categories:
   - compute
 ---
 
-Scaleway offers GPU Instances featuring [L4, L40S and H100 GPUs](https://www.scaleway.com/en/h100-pcie-try-it-now/) that support FP8 (8-bit floating point), a revolutionary datatype introduced by NVIDIA. It enables higher throughput of matrix multipliers and convolutions.
+Scaleway offers GPU Instances featuring [L4, L40S, H100 and H100-SXM GPUs](/gpu/reference-content/choosing-gpu-instance-type/) that support FP8 (8-bit floating point), a revolutionary datatype introduced by NVIDIA. It enables higher throughput of matrix multipliers and convolutions.
 
 FP8 is an 8-bit floating point standard which was jointly developed by NVIDIA, ARM, and Intel to speed up AI development by improving memory efficiency during AI training and inference processes.

pages/gpu/reference-content/understanding-nvidia-nvlink.mdx

Lines changed: 42 additions & 0 deletions
@@ -0,0 +1,42 @@
+---
+meta:
+  title: Understanding NVIDIA NVLink
+  description: This section provides information about NVIDIA NVLink
+content:
+  h1: Understanding NVIDIA NVLink
+  paragraph: This section provides information about NVIDIA NVLink
+tags: NVIDIA NVLink
+dates:
+  validation: 2025-03-13
+  posted: 2025-03-13
+categories:
+  - compute
+---
+
+NVLink is NVIDIA's high-bandwidth, low-latency GPU-to-GPU interconnect with built-in resiliency features, available on Scaleway's [H100-SXM Instances](/gpu/reference-content/choosing-gpu-instance-type/#gpu-instances-and-ai-supercomputer-comparison-table). It is designed to significantly improve performance and efficiency when connecting GPUs, CPUs, and other components within the same node.
+It provides much higher bandwidth (up to 900 GB/s of total GPU-to-GPU bandwidth in an 8-GPU configuration) and lower latency than traditional PCIe Gen 4 (up to 32 GB/s per x16 link).
+This allows more data to be transferred between GPUs in less time while also reducing latency.
+
+The high bandwidth and low latency make NVLink ideal for applications that require real-time data synchronization and processing, such as AI and HPC use cases.
+NVLink provides up to 900 GB/s of total bandwidth for multi-GPU I/O and shared memory accesses, which is 7x the bandwidth of PCIe Gen 5.
+NVLink allows direct GPU-to-GPU interconnection, improving data transfer efficiency and reducing the need for CPU intervention, which can introduce bottlenecks.
+
+NVLink supports the connection of multiple GPUs, enabling the creation of powerful multi-GPU systems capable of handling more complex and demanding workloads.
+With Unified Memory Access, NVLink enables direct memory access between GPUs without CPU mediation, enhancing efficiency in large-scale AI and HPC workloads.
+
+### Comparison: NVLink vs. PCIe
+NVLink and PCI Express (PCIe) are both used for GPU communication, but NVLink is specifically designed to address the bandwidth and latency bottlenecks of PCIe in multi-GPU setups.
+
+| Feature | NVLink 4.0 (H100-SXM) | PCIe 5.0 |
+|-------------------|---------------------------|------------------------------------|
+| **Use case** | High-performance computing, deep learning | General-purpose computing, graphics |
+| **Bandwidth** | Up to 900 GB/s (aggregate, multi-GPU) | 128 GB/s (x16, bidirectional) |
+| **Latency** | Lower than PCIe (sub-microsecond) | Higher than NVLink |
+| **Communication** | Direct GPU-to-GPU | Through CPU or PCIe switch |
+| **Memory sharing** | Unified memory space across GPUs | Requires CPU intervention (higher overhead) |
+| **Scalability** | Multi-GPU direct connection via NVSwitch | Limited by PCIe lanes |
+| **Efficiency** | Optimized for GPU workloads | More general-purpose |
+
+**Unified Memory Access** allows GPUs to access each other's memory directly without CPU mediation, which is particularly beneficial for large-scale AI and HPC workloads.
+
+In summary, NVLink, available on [H100-SXM Instances](/gpu/reference-content/choosing-gpu-instance-type/#gpu-instances-and-ai-supercomputer-comparison-table), is **superior** for **multi-GPU AI and HPC** workloads due to its **higher bandwidth, lower latency, and memory-sharing capabilities**, while **PCIe remains essential** for broader system connectivity and general computing.
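On an H100-SXM Instance, the NVLink topology described above can be inspected from the guest OS with `nvidia-smi`; a minimal sketch, assuming the NVIDIA driver is installed:

```bash
# Show how GPUs are interconnected (NV# entries indicate NVLink connections)
nvidia-smi topo -m

# Show the state and per-link speed of each NVLink link
nvidia-smi nvlink --status
```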

pages/instances/faq.mdx

Lines changed: 4 additions & 0 deletions
@@ -151,6 +151,10 @@ You can change the storage type and flexible IP after the Instance creation, whi
 
 | Range | Available in | Price |
 |-------------------|------------------------|-------------------|
+| H100-SXM-1-80G | TBD | €X.XX/hour¹ |
+| H100-SXM-2-80G | TBD | €X.XX/hour¹ |
+| H100-SXM-4-80G | TBD | €X.XX/hour¹ |
+| H100-SXM-8-80G | TBD | €X.XX/hour¹ |
 | H100-1-80G | PAR2, WAW2 | €2.52/hour¹ |
 | H100-2-80G | PAR2, WAW2 | €5.04/hour¹ |
 | L40S-1-48G | PAR2 | €1.40/hour¹ |
