Commit 849db97

Apply suggestions from code review
Co-authored-by: Jessica <[email protected]>
1 parent: a2173b9

File tree: 2 files changed (+2, −2 lines)


pages/gpu/reference-content/choosing-gpu-instance-type.mdx

Lines changed: 1 addition & 1 deletion
@@ -78,7 +78,7 @@ Remember that there is no one-size-fits-all answer, and the right GPU Instance t
 | Bandwidth | 20 Gbps | 20 Gbps | 20 Gbps |
 | Network technology | [NVLink](/gpu/reference-content/understanding-nvidia-nvlink/) | [NVLink](/gpu/reference-content/understanding-nvidia-nvlink/) | [NVLink](/gpu/reference-content/understanding-nvidia-nvlink/) |
 | Better used for | LLM fine-tuning, LLM inference with lower quantization and/or larger parameter counts, fast computer vision training model training | LLM fine-tuning, LLM inference with lower quantization and/or larger parameter counts, fast computer vision training model training | Llama 4 or Deepseek R1 inference |
-| What they are not made for | Training of LLM (single node), Graphic or video encoding use cases | Training of LLM (single node), Graphic or video encoding use cases | Training of LLM (single node), Graphic or video encoding use cases |
+| What they are not made for | Training of LLM (single node), graphic or video encoding use cases | Training of LLM (single node), Graphic or video encoding use cases | Training of LLM (single node), graphic or video encoding use cases |
 
 | | **[L4-1-24G](https://www.scaleway.com/en/l4-gpu-instance/)** | **[L4-2-24G](https://www.scaleway.com/en/l4-gpu-instance/)** | **[L4-4-24G](https://www.scaleway.com/en/l4-gpu-instance/)** | **[L4-8-24G](https://www.scaleway.com/en/l4-gpu-instance/)** |
 |---------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------|

pages/gpu/reference-content/understanding-nvidia-fp8.mdx

Lines changed: 1 addition & 1 deletion
@@ -13,7 +13,7 @@ categories:
 - compute
 ---
 
-Scaleway offers GPU Instances featuring [L4, L40S, H100 and H100-SXM GPUs](/gpu/reference-content/choosing-gpu-instance-type/) that support FP8 (8-bit floating point), a revolutionary datatype introduced by NVIDIA. It enables higher throughput of matrix multipliers and convolutions.
+Scaleway offers GPU Instances featuring [L4, L40S, H100, and H100-SXM GPUs](/gpu/reference-content/choosing-gpu-instance-type/) that support FP8 (8-bit floating point), a revolutionary datatype introduced by NVIDIA. It enables higher throughput of matrix multipliers and convolutions.
 
 FP8 is an 8-bit floating point standard which was jointly developed by NVIDIA, ARM, and Intel to speed up AI development by improving memory efficiency during AI training and inference processes.
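As an aside on the FP8 standard mentioned in the diff above: the NVIDIA/Arm/Intel proposal defines two 8-bit layouts, E4M3 (1 sign, 4 exponent, 3 mantissa bits, bias 7) and E5M2. The sketch below is an illustrative, software-only decoder for the E4M3 variant, not part of the documentation being edited; the function name is hypothetical.

```python
def decode_fp8_e4m3(byte: int) -> float:
    """Decode one FP8 E4M3 byte: 1 sign, 4 exponent, 3 mantissa bits, bias 7."""
    sign = -1.0 if (byte >> 7) & 1 else 1.0
    exp = (byte >> 3) & 0xF      # 4 exponent bits
    man = byte & 0x7             # 3 mantissa bits
    if exp == 0xF and man == 0x7:
        # E4M3 has no infinities; only this bit pattern is NaN
        return float("nan")
    if exp == 0:
        # subnormal: no implicit leading 1, fixed exponent 2^-6
        return sign * (man / 8) * 2.0 ** -6
    return sign * (1 + man / 8) * 2.0 ** (exp - 7)

print(decode_fp8_e4m3(0x38))  # 1.0
print(decode_fp8_e4m3(0x3C))  # 1.5
```

The coarse 3-bit mantissa is why FP8 trades precision for the memory- and throughput-efficiency benefits the paragraph above describes.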
1919
