Merged
@@ -56,7 +56,7 @@ Remember that there is no one-size-fits-all answer, and the right GPU Instance t
 | Better used for | Image / Video encoding (4K) | 7B LLM Fine-Tuning / Inference | 70B LLM Fine-Tuning / Inference |
 | What they are not made for | Large models (especially LLM) | Graphic or video encoding use cases | Graphic or video encoding use cases |
 
-| | **[H100-SXM-2-80G](https://www.scaleway.com/en/)** | **[H100-SXM-4-80G](https://www.scaleway.com/en/)** | **[H100-SXM-8-80G](https://www.scaleway.com/en/)** |
+| | **[H100-SXM-2-80G](https://www.scaleway.com/en/h100-pcie-try-it-now/)** | **[H100-SXM-4-80G](https://www.scaleway.com/en/h100-pcie-try-it-now/)** | **[H100-SXM-8-80G](https://www.scaleway.com/en/h100-pcie-try-it-now/)** |
 |--------------------------------------------------------------------|-------------------------------------------------------------------|-------------------------------------------------------------------|-------------------------------------------------------------------|
 | GPU Type | 2x [H100-SXM](https://www.nvidia.com/en-us/data-center/h100/) SXM | 4x [H100-SXM](https://www.nvidia.com/en-us/data-center/h100/) SXM | 8x [H100-SXM](https://www.nvidia.com/en-us/data-center/h100/) SXM |
 | NVIDIA architecture | Hopper 2022 | Hopper 2022 | Hopper 2022 |