diff --git a/pages/gpu/reference-content/choosing-gpu-instance-type.mdx b/pages/gpu/reference-content/choosing-gpu-instance-type.mdx
index f4786a0d90..4ff3cdd65f 100644
--- a/pages/gpu/reference-content/choosing-gpu-instance-type.mdx
+++ b/pages/gpu/reference-content/choosing-gpu-instance-type.mdx
@@ -56,7 +56,7 @@ Remember that there is no one-size-fits-all answer, and the right GPU Instance t
 | Better used for | Image / Video encoding (4K) | 7B LLM Fine-Tuning / Inference | 70B LLM Fine-Tuning / Inference |
 | What they are not made for | Large models (especially LLM) | Graphic or video encoding use cases | Graphic or video encoding use cases |
-| | **[H100-SXM-2-80G](https://www.scaleway.com/en/)** | **[H100-SXM-4-80G](https://www.scaleway.com/en/)** | **[H100-SXM-8-80G](https://www.scaleway.com/en/)** |
+| | **[H100-SXM-2-80G](https://www.scaleway.com/en/h100-pcie-try-it-now/)** | **[H100-SXM-4-80G](https://www.scaleway.com/en/h100-pcie-try-it-now/)** | **[H100-SXM-8-80G](https://www.scaleway.com/en/h100-pcie-try-it-now/)** |
 |--------------------------------------------------------------------|-------------------------------------------------------------------|-------------------------------------------------------------------|-------------------------------------------------------------------|
 | GPU Type | 2x [H100-SXM](https://www.nvidia.com/en-us/data-center/h100/) SXM | 4x [H100-SXM](https://www.nvidia.com/en-us/data-center/h100/) SXM | 8x [H100-SXM](https://www.nvidia.com/en-us/data-center/h100/) SXM |
 | NVIDIA architecture | Hopper 2022 | Hopper 2022 | Hopper 2022 |