pages/gpu/how-to/use-nvidia-mig-technology.mdx (+1 -1)
@@ -79,7 +79,7 @@ By default, the MIG feature of NVIDIA GPUs is disabled. To use it with your GPU
MIG is now enabled for the GPU Instance.
## How to list MIG Profiles
- The NVIDIA driver provides several predefined profiles you can choose from while setting up the MIG (Multi-Instance GPU) feature on the H100.
+ The NVIDIA driver provides several predefined profiles you can choose from while setting up the MIG (Multi-Instance GPU) feature on the H100 and H100-SXM.
These profiles determine the sizes and functionalities of the MIG partitions that users can create. Additionally, the driver supplies details regarding placements, which specify the types and quantities of Instances that can be established.
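To inspect these profiles on a running GPU Instance, the driver's own tooling can list them directly. A minimal sketch, assuming the NVIDIA driver is installed and MIG mode is already enabled:

```bash
# List the GPU instance profiles the driver predefines (IDs, memory, compute slices)
sudo nvidia-smi mig -lgip

# List the possible placements for those profiles
sudo nvidia-smi mig -lgipp
```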
pages/gpu/how-to/use-scratch-storage-h100-instances.mdx (+8 -2)
@@ -13,7 +13,7 @@ categories:
- compute
---
- Scaleway H100 and L40S GPU Instances are equipped with additional scratch storage. This form of temporary Local Storage operates differently from our regular Local Storage.
+ Scaleway H100, H100-SXM and L40S GPU Instances are equipped with additional scratch storage. This form of temporary Local Storage operates differently from our regular Local Storage.
Scratch storage temporarily accommodates data during computational or data processing tasks. It is commonly used for storing intermediate results, processing input data, or holding output data before that data is moved to more permanent storage.
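On such an Instance, a quick way to confirm the scratch device is present is to list the local block devices. A minimal sketch; the device name and whether the image pre-formats or pre-mounts it are assumptions to verify on your setup:

```bash
# List block devices; the scratch volume appears as an additional local disk
lsblk

# Check for an existing filesystem before using it (device name is an assumption)
sudo blkid /dev/nvme1n1
```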
@@ -41,10 +41,16 @@ This enhancement allows us to provide the GPU with a substantial amount of scratch storage
* for L40S-8-48G Instances: 12.8 TB
* for H100-1-80G Instances: 3 TB
* for H100-2-80G Instances: 6 TB
+ * for H100-SXM-1-80G Instances: ~1.5 TB
+ * for H100-SXM-2-80G Instances: ~3 TB
+ * for H100-SXM-4-80G Instances: ~6 TB
+ * for H100-SXM-8-80G Instances: ~12 TB
</Message>
## How can I add scratch storage to my GPU Instance using the Scaleway CLI or console?
- Scratch storage is automatically added when creating H100 and L40S Instances.
+ Scratch storage is automatically added when creating H100, H100-SXM and L40S Instances.
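For example, a minimal CLI sketch might look like the following; the Instance type, zone, and image label here are assumptions to adapt to your account:

```bash
# Create an H100 Instance; the scratch volume is attached automatically
scw instance server create \
  type=H100-1-80G \
  zone=fr-par-2 \
  image=ubuntu_jammy_gpu_os_12
```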
## How can I add scratch storage to my GPU Instance when using the API?
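A hedged sketch of the equivalent API call follows. The endpoint and headers match the public Instances API, but the exact scratch-volume payload (notably the `scratch` volume type and its size) is an assumption to check against the API reference:

```bash
# Create a server with an extra scratch volume (project/image UUIDs are placeholders)
curl -X POST "https://api.scaleway.com/instance/v1/zones/fr-par-2/servers" \
  -H "X-Auth-Token: $SCW_SECRET_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "my-h100",
    "commercial_type": "H100-1-80G",
    "project": "<project-uuid>",
    "image": "<image-uuid>",
    "volumes": {
      "1": { "name": "scratch-volume", "volume_type": "scratch", "size": 3000000000000 }
    }
  }'
```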
pages/gpu/reference-content/choosing-gpu-instance-type.mdx (+20 -2)
@@ -34,9 +34,9 @@ Below, you will find a guide to help you make an informed decision:
* **Scaling:** Consider the scalability requirements of your workload. The most efficient way to scale up your workload is by using:
* Bigger GPU
* Up to 2 PCIe GPUs with [H100 Instances](https://www.scaleway.com/en/h100-pcie-try-it-now/) or 8 PCIe GPUs with [L4](https://www.scaleway.com/en/l4-gpu-instance/) or [L40S](https://www.scaleway.com/en/contact-l40s/) Instances.
- * An HGX-based server setup with 8x NVlink GPUs
+ * An HGX-based server setup with up to 8x NVLink GPUs with [H100-SXM Instances](<ADD LINK>)
* A [supercomputer architecture](https://www.scaleway.com/en/ai-supercomputers/) for larger setups and workload-intensive tasks
- * Another way to scale your workload is to use [Kubernetes and MIG](/gpu/how-to/use-nvidia-mig-technology/): You can divide a single H100 GPU into as many as 7 MIG partitions. This means that instead of employing seven P100 GPUs to set up seven K8S pods, you could opt for a single H100 GPU with MIG to effectively deploy all seven K8S pods.
+ * Another way to scale your workload is to use [Kubernetes and MIG](/gpu/how-to/use-nvidia-mig-technology/): You can divide a single H100 or H100-SXM GPU into as many as 7 MIG partitions. This means that instead of employing seven P100 GPUs to set up seven K8S pods, you could opt for a single H100 GPU with MIG to effectively deploy all seven K8S pods (see the sketch after this list).
* **Online resources:** Check for online resources, forums, and community discussions related to the specific GPU type you are considering. This can provide insights into common issues, best practices, and optimizations.
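As a rough sketch of that MIG split on the GPU side (assuming MIG mode is already enabled on GPU 0; profile names vary by GPU model):

```bash
# Create seven 1g.10gb GPU instances on GPU 0, each with a default compute instance
sudo nvidia-smi mig -i 0 \
  -cgi 1g.10gb,1g.10gb,1g.10gb,1g.10gb,1g.10gb,1g.10gb,1g.10gb -C

# List the resulting MIG devices that a Kubernetes device plugin can expose as pod resources
nvidia-smi -L
```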
Remember that there is no one-size-fits-all answer, and the right GPU Instance type will depend on your workload’s unique requirements and budget. It is important that you regularly reassess your choice as your workload evolves. Depending on which type best fits your evolving tasks, you can easily migrate from one GPU Instance type to another.
@@ -62,6 +62,24 @@ Remember that there is no one-size-fits-all answer, and the right GPU Instance type
| Better used for | Image / Video encoding (4K) | 7B LLM Fine-Tuning / Inference | 70B LLM Fine-Tuning / Inference |
| What they are not made for | Large models (especially LLM) | Graphic or video encoding use cases | Graphic or video encoding use cases |
pages/gpu/reference-content/understanding-nvidia-fp8.mdx (+2 -2)
@@ -7,13 +7,13 @@ content:
paragraph: This section provides information about NVIDIA FP8 (8-bit floating point) format
tags: NVIDIA FP8 GPU cloud
dates:
- validation: 2024-10-14
+ validation: 2025-03-13
posted: 2023-10-23
categories:
- compute
---
- Scaleway offers GPU Instances featuring [L4, L40Sand H100 GPUs](https://www.scaleway.com/en/h100-pcie-try-it-now/) that support FP8 (8-bit floating point), a revolutionary datatype introduced by NVIDIA. It enables higher throughput of matrix multipliers and convolutions.
+ Scaleway offers GPU Instances featuring [L4, L40S, H100 and H100-SXM GPUs](/gpu/reference-content/choosing-gpu-instance-type/) that support FP8 (8-bit floating point), a revolutionary datatype introduced by NVIDIA. It enables higher throughput of matrix multipliers and convolutions.
FP8 is an 8-bit floating point standard which was jointly developed by NVIDIA, ARM, and Intel to speed up AI development by improving memory efficiency during AI training and inference processes.
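For reference, the published specification defines two encodings, both following the usual floating-point value formula; a brief sketch of the layouts:

```latex
v = (-1)^{s} \cdot 2^{\,E - \mathrm{bias}} \cdot 1.M_2
% E4M3: 1 sign, 4 exponent, 3 mantissa bits; bias 7;  max finite value ~ 448
% E5M2: 1 sign, 5 exponent, 2 mantissa bits; bias 15; max finite value = 57344
```

E4M3 favors precision and is typically used for forward-pass weights and activations, while E5M2 favors dynamic range and suits gradients.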
pages/gpu/reference-content/understanding-nvidia-nvlink.mdx (new file)
+ description: This section provides information about NVIDIA NVLink
+ content:
+   h1: Understanding NVIDIA NVLink
+   paragraph: This section provides information about NVIDIA NVLink
+ tags: NVIDIA NVLink
+ dates:
+   validation: 2025-03-13
+   posted: 2025-03-13
+ categories:
+   - compute
+ ---
+
+ NVLink is NVIDIA's high-bandwidth, low-latency GPU-to-GPU interconnect with built-in resiliency features, available on Scaleway's [H100-SXM Instances](/gpu/reference-content/choosing-gpu-instance-type/#gpu-instances-and-ai-supercomputer-comparison-table). It was designed to significantly improve performance and efficiency when connecting GPUs, CPUs, and other components within the same node.
+ It provides much higher bandwidth (up to 900 GB/s of total GPU-to-GPU bandwidth in an 8-GPU configuration) and lower latency compared to traditional PCIe Gen 4 (up to 32 GB/s per link).
+ This allows more data to be transferred between GPUs in less time.
+
+ The high bandwidth and low latency make NVLink ideal for applications that require real-time data synchronization and processing, such as AI and HPC use cases.
+ NVLink provides up to 900 GB/s of total bandwidth for multi-GPU I/O and shared memory accesses, which is 7x the bandwidth of PCIe Gen 5.
+ NVLink allows direct GPU-to-GPU interconnection, improving data transfer efficiency and reducing the need for CPU intervention, which can introduce bottlenecks.
+
+ NVLink supports the connection of multiple GPUs, enabling the creation of powerful multi-GPU systems capable of handling more complex and demanding workloads.
+ With Unified Memory Access, NVLink enables direct memory access between GPUs without CPU mediation, enhancing efficiency in large-scale AI and HPC workloads.
+
+ ### Comparison: NVLink vs. PCIe
+
+ NVLink and PCI Express (PCIe) are both used for GPU communication, but NVLink is specifically designed to address the bandwidth and latency bottlenecks of PCIe in multi-GPU setups.
+
+ | Feature | NVLink | PCIe Gen 5 |
+ |---|---|---|
+ |**Bandwidth**| Up to 900 GB/s (aggregate, multi-GPU) | 128 GB/s (x16 bidirectional) |
+ |**Latency**| Lower than PCIe (sub-microsecond) | Higher compared to NVLink |
+ |**Communication**| Direct GPU-to-GPU | Through CPU or PCIe switch |
+ |**Memory sharing**| Unified memory space across GPUs | Requires CPU intervention (higher overhead) |
+ |**Scalability**| Multi-GPU direct connection via NVSwitch | Limited by PCIe lanes |
+ |**Efficiency**| Optimized for GPU workloads | More general-purpose |
+
+ **Unified Memory Access** allows GPUs to access each other's memory directly without CPU mediation, which is particularly beneficial for large-scale AI and HPC workloads.
+
+ In summary, NVLink, available on [H100-SXM Instances](/gpu/reference-content/choosing-gpu-instance-type/#gpu-instances-and-ai-supercomputer-comparison-table), is **superior** for **multi-GPU AI and HPC** workloads due to its **higher bandwidth, lower latency, and memory-sharing capabilities**, while **PCIe remains essential** for broader system connectivity and general computing.
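On an H100-SXM Instance, you can verify this topology with standard NVIDIA tooling; a minimal sketch:

```bash
# Show the GPU interconnect matrix: NV# entries indicate NVLink paths,
# while PHB/PXB/PIX entries indicate PCIe paths
nvidia-smi topo -m

# Show per-GPU NVLink link status and speeds
nvidia-smi nvlink -s
```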