
Commit f82deba

Update changelog/august2025/2025-08-11-gpu-instances-changed-gpu-h100-sxm-instances-with-2-and.mdx
1 parent bfa4bf4 commit f82deba

File tree

1 file changed (+3 −2 lines)

changelog/august2025/2025-08-11-gpu-instances-changed-gpu-h100-sxm-instances-with-2-and.mdx

Lines changed: 3 additions & 2 deletions
@@ -6,10 +6,11 @@ category: compute
 product: gpu-instances
 ---
 
-Following the launch of our latest H100-SXM GPU Instances, delivering industry-leading conversational AI and speeding up large language models (LLMs), we are delighted to announce the availability of these instances in 2 GPUs and 4 GPUs sizes. The NVlink GPU-GPU communications and the 4 GPUs size brings even more possibilities and higher performance for your deployments. Available in the Paris (PAR2) region.
+Following the launch of our H100-SXM GPU Instances — delivering industry-leading conversational AI performance and accelerating large language models (LLMs) — we’re pleased to announce the availability of new 2-GPU and 4-GPU configurations.
 
-Key features include:
+With NVLink GPU-to-GPU communication, the 4-GPU option unlocks even greater possibilities and higher performance for your deployments. Now available in the Paris (PAR2) region.
 
+Key features include:
 - Nvidia H100 SXM 80 GB (Hopper architecture)
 - 4th generation Tensor cores
 - 4th generation NVLink, which offers 900 GB/s of GPU-to-GPU interconnect
