---
title: GPU H100 SXM Instances with 2 and 4 GPUs now available in par-2
status: changed
date: 2025-08-11
category: compute
product: gpu-instances
---

Following the launch of our latest H100-SXM GPU Instances, which deliver industry-leading conversational AI performance and speed up large language models (LLMs), we are delighted to announce that these Instances are now available in 2-GPU and 4-GPU sizes. NVLink GPU-to-GPU communication and the new 4-GPU size bring even more possibilities and higher performance to your deployments. Available in the Paris (par-2) region.

Key features include:

- NVIDIA H100 SXM 80 GB GPUs (Hopper architecture)
- 4th-generation Tensor Cores
- 4th-generation NVLink, offering 900 GB/s of GPU-to-GPU interconnect
- Transformer Engine
- Available now in 2, 4, and 8 GPUs per VM (additional stock deployments ongoing)