articles/aks/gpu-multi-instance.md
+2 −4 (2 additions, 4 deletions)
@@ -1,5 +1,5 @@
 ---
-title: Multi-instance GPU Node pool (preview)
+title: Multi-instance GPU Node pool
 description: Learn how to create a Multi-instance GPU Node pool and schedule tasks on it
 services: container-service
 ms.topic: article
@@ -13,8 +13,6 @@ Nvidia's A100 GPU can be divided in up to seven independent instances. Each inst
 
 This article will walk you through how to create a multi-instance GPU node pool on Azure Kubernetes Service clusters and schedule tasks.
 
-[!INCLUDE [preview features callout](./includes/preview/preview-callout.md)]
-
 ## GPU Instance Profile
 
 GPU Instance Profiles define how a GPU will be partitioned. The following table shows the available GPU Instance Profile for the `Standard_ND96asr_v4`, the only instance type that supports the A100 GPU at this time.
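For context on the GPU Instance Profile paragraph above: the profile is applied when the multi-instance GPU node pool is created. A minimal sketch using the Azure CLI `--gpu-instance-profile` flag; the resource group, cluster, and node pool names are placeholders, not values from this PR.

```azurecli
# Create a node pool on Standard_ND96asr_v4 and partition each A100 GPU
# using the MIG1g instance profile (seven 1g slices per GPU).
# Resource group, cluster, and node pool names are placeholders.
az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name migpool \
    --node-vm-size Standard_ND96asr_v4 \
    --node-count 1 \
    --gpu-instance-profile MIG1g
```

The GPU instance profile cannot be changed after the node pool is created, so the partition size is chosen up front when adding the pool.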