Commit a369cc5

fix(gpu): fix link
1 parent: 73bd6e7

File tree: 1 file changed (+2, −2 lines)

pages/gpu/reference-content/choosing-gpu-instance-type.mdx (2 additions, 2 deletions)

@@ -16,7 +16,7 @@ It empowers European AI startups, giving them the tools (without the need for a
 
 ## How to choose the right GPU Instance type
 
-Scaleway provides a range of GPU Instance offers, from [GPU RENDER Instances](https://www.scaleway.com/en/gpu-render-instances/) and [H100 SXM Instances](/gpu/reference-content/choosing-gpu-instance-type/) to [custom build clusters](https://www.scaleway.com/en/ai-supercomputers/). There are several factors to consider when choosing the right GPU Instance type to ensure that it meets your performance, budget, and scalability requirements.
+Scaleway provides a range of GPU Instance offers, from [GPU RENDER Instances](https://www.scaleway.com/en/gpu-render-instances/) and [H100 SXM Instances](https://www.scaleway.com/en/h100-pcie-try-it-now/) to [custom build clusters](https://www.scaleway.com/en/ai-supercomputers/). There are several factors to consider when choosing the right GPU Instance type to ensure that it meets your performance, budget, and scalability requirements.
 Below, you will find a guide to help you make an informed decision:
 
 * **Workload requirements:** Identify the nature of your workload. Are you running machine learning, deep learning, high-performance computing (HPC), data analytics, or graphics-intensive applications? Different Instance types are optimized for different types of workloads. For example, the H100 is not designed for graphics rendering. However, other models are. As [stated by Tim Dettmers](https://timdettmers.com/2023/01/30/which-gpu-for-deep-learning/), “Tensor Cores are most important, followed by the memory bandwidth of a GPU, the cache hierarchy, and only then FLOPS of a GPU.”. For more information, refer to the [NVIDIA GPU portfolio](https://docs.nvidia.com/data-center-gpu/line-card.pdf).
@@ -136,7 +136,7 @@ Remember that there is no one-size-fits-all answer, and the right GPU Instance t
 | Performance | 990 TFLops (in FP166 Tensor Core) |
 | Specifications | - GH200 SuperChip with 72 ARM Neoverse V2 cores<br />- 480 GB of LPDDR5X DRAM<br />- 96GB of HBM3 GPU memory<br />(Memory is fully merged for up to 576GB of global usable memory) |
 | [MIG compatibility](/gpu/how-to/use-nvidia-mig-technology/) | Yes |
-| Inter-GPU bandwidth (for clusters up to 256 GH200) | NVlink Switch System 900 GB/s |
+| Inter-GPU bandwidth (for clusters up to 256 GH200) | NVlink Switch System 900 GB/s |
 | Format & Features | Single chip up to GH200 clusters. (For larger setup needs, [contact us](https://www.scaleway.com/en/contact-ai-supercomputers/)) |
 | Use cases | - Extra large LLM and DL model inference<br />- HPC |
 | What they are not made for | - Graphism<br /> - (Training) |
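The link replaced in the first hunk pointed back at the page it appeared on. As a minimal sketch of how this class of regression could be caught automatically (a hypothetical helper, not part of the Scaleway docs tooling), one can scan an MDX source for markdown links whose target resolves to the page's own path:

```python
import re

# Matches markdown links of the form [label](target).
LINK_RE = re.compile(r"\[([^\]]+)\]\(([^)\s]+)\)")

def find_self_links(mdx_text: str, page_path: str) -> list:
    """Return (label, target) pairs whose target points back at page_path.

    Trailing slashes are ignored so "/x" and "/x/" compare equal.
    """
    hits = []
    for label, target in LINK_RE.findall(mdx_text):
        if target.rstrip("/") == page_path.rstrip("/"):
            hits.append((label, target))
    return hits

# The self-referencing link fixed by this commit would be flagged:
text = "... [H100 SXM Instances](/gpu/reference-content/choosing-gpu-instance-type/) ..."
print(find_self_links(text, "/gpu/reference-content/choosing-gpu-instance-type"))
# → [('H100 SXM Instances', '/gpu/reference-content/choosing-gpu-instance-type/')]
```

Run across every page in the repo, a check like this turns a silent self-link into a CI failure instead of a follow-up fix commit.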

0 commit comments