Commit c427afa

fix(gpu): fix instance naming (#4957)
1 parent 2ff0954 commit c427afa

File tree

1 file changed: +2 −2 lines


pages/gpu/reference-content/choosing-gpu-instance-type.mdx

Lines changed: 2 additions & 2 deletions
@@ -62,7 +62,7 @@ Remember that there is no one-size-fits-all answer, and the right GPU Instance t
 | Better used for | Image / Video encoding (4K) | 7B LLM Fine-Tuning / Inference | 70B LLM Fine-Tuning / Inference |
 | What they are not made for | Large models (especially LLM) | Graphic or video encoding use cases | Graphic or video encoding use cases |

-| | **[H100-SXM-2-80G](https://www.scaleway.com/en/)** | **[H100-SXM-4-80G](https://www.scaleway.com/en/)** | **[H100-SXM-80G](https://www.scaleway.com/en/)** |
+| | **[H100-SXM-2-80G](https://www.scaleway.com/en/)** | **[H100-SXM-4-80G](https://www.scaleway.com/en/)** | **[H100-SXM-8-80G](https://www.scaleway.com/en/)** |
 |--------------------------------------------------------------------|-------------------------------------------------------------------|-------------------------------------------------------------------|-------------------------------------------------------------------|
 | GPU Type | 2x [H100-SXM](https://www.nvidia.com/en-us/data-center/h100/) SXM | 4x [H100-SXM](https://www.nvidia.com/en-us/data-center/h100/) SXM | 8x [H100-SXM](https://www.nvidia.com/en-us/data-center/h100/) SXM |
 | NVIDIA architecture | Hopper 2022 | Hopper 2022 | Hopper 2022 |
@@ -141,4 +141,4 @@ Remember that there is no one-size-fits-all answer, and the right GPU Instance t
 | Inter-GPU bandwidth (for clusters up to 256 GH200) | NVlink Switch System 900 GB/s |
 | Format & Features | Single chip up to GH200 clusters. (For larger setup needs, [contact us](https://www.scaleway.com/en/contact-ai-supercomputers/)) |
 | Use cases | - Extra large LLM and DL model inference<br />- HPC |
-| What they are not made for | - Graphism<br /> - (Training) |
+| What they are not made for | - Graphism<br /> - (Training) |
