
Commit 07db008

Merge pull request #263572 from tamram/tamram24-0116
add preview moniker to NAP article
2 parents 64af179 + 29c12df commit 07db008

File tree

4 files changed: +53 -44 lines changed

- articles/aks/concepts-clusters-workloads.md
- articles/aks/node-autoprovision.md
- articles/aks/supported-kubernetes-versions.md
- includes/container-service-limits.md

articles/aks/concepts-clusters-workloads.md

Lines changed: 7 additions & 5 deletions
@@ -3,7 +3,7 @@ title: Azure Kubernetes Services (AKS) Core Basic Concepts
 description: Learn about the core components that make up workloads and clusters in Kubernetes and their counterparts on Azure Kubernetes Services (AKS).
 ms.topic: conceptual
 ms.custom: build-2023
-ms.date: 12/13/2023
+ms.date: 01/16/2024
 ---
 
 # Core Kubernetes concepts for Azure Kubernetes Service
@@ -88,13 +88,14 @@ If you need advanced configuration and control on your Kubernetes node container
 AKS uses node resources to help the node function as part of your cluster. This usage can create a discrepancy between your node's total resources and the allocatable resources in AKS. Remember this information when setting requests and limits for user deployed pods.
 
 To find a node's allocatable resources, run:
+
 ```kubectl
 kubectl describe node [NODE_NAME]
 ```
 
 To maintain node performance and functionality, AKS reserves resources on each node. As a node grows larger in resources, the resource reservation grows due to a higher need for management of user-deployed pods.
 
->[!NOTE]
+> [!NOTE]
 > Using AKS add-ons such as Container Insights (OMS) will consume additional node resources.
 
 Two types of resources are reserved:
@@ -103,9 +104,9 @@ Two types of resources are reserved:
 
 Reserved CPU is dependent on node type and cluster configuration, which may cause less allocatable CPU due to running additional features.
 
-| CPU cores on host | 1 | 2 | 4 | 8 | 16 | 32|64|
-|---|---|---|---|---|---|---|---|
-|Kube-reserved (millicores)|60|100|140|180|260|420|740|
+| CPU cores on host | 1 | 2 | 4 | 8 | 16 | 32 | 64 |
+|----------------------------|----|-----|-----|-----|-----|-----|-----|
+| Kube-reserved (millicores) | 60 | 100 | 140 | 180 | 260 | 420 | 740 |
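As a hedged worked example of the table above: an 8-core host exposes roughly 8000 millicores, of which about 180 millicores are kube-reserved, leaving on the order of 7820 millicores allocatable before memory reservations and eviction thresholds are also subtracted.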
 
 #### Memory
 
@@ -237,6 +238,7 @@ A pod is a logical resource, but application workloads run on the containers. Po
 A *deployment* represents identical pods managed by the Kubernetes Deployment Controller. A deployment defines the number of pod *replicas* to create. The Kubernetes Scheduler ensures that additional pods are scheduled on healthy nodes if pods or nodes encounter problems.
 
 You can update deployments to change the configuration of pods, container image used, or attached storage. The Deployment Controller:
+
 * Drains and terminates a given number of replicas.
 * Creates replicas from the new deployment definition.
 * Continues the process until all replicas in the deployment are updated.
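As a hedged illustration of the deployment concept described in this hunk, a minimal Deployment manifest with three replicas (the name, image, and resource values are placeholder assumptions, not taken from the article):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-app                # placeholder name
spec:
  replicas: 3                     # number of identical pod replicas the Deployment Controller maintains
  selector:
    matchLabels:
      app: sample-app
  template:
    metadata:
      labels:
        app: sample-app
    spec:
      containers:
      - name: sample-app
        image: mcr.microsoft.com/azuredocs/aks-helloworld:v1   # placeholder image
        resources:
          requests:               # requests and limits count against the node's allocatable resources
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 250m
            memory: 256Mi
```

Changing `replicas` or `image` and re-applying the manifest triggers the rolling-update behavior listed above: drain and terminate some replicas, create replacements from the new definition, and repeat until all replicas are updated.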

articles/aks/node-autoprovision.md

Lines changed: 42 additions & 35 deletions
@@ -1,20 +1,24 @@
 ---
-title: Node autoprovisioning (Preview)
-description: Learn about Azure Kubernetes Service (AKS) Node autoprovisioning
+title: Node autoprovisioning (preview)
+description: Learn about Azure Kubernetes Service (AKS) node autoprovisioning (preview).
 ms.topic: article
 ms.custom: devx-track-azurecli
-ms.date: 10/19/2023
+ms.date: 01/18/2024
 ms.author: juda
 #Customer intent: As a cluster operator or developer, how to scale my cluster based on workload requirements and right size my nodes automatically
 ---
 
-# Node autoprovision
-When deploying workloads onto AKS, you need to make a decision about the node pool configuration regarding the VM size needed. As your workloads become more complex, and require different CPU, Memory and capabilities to run, the overhead of having to design your VM configuration for numerous resource requests becomes difficult.
+# Node autoprovisioning (preview)
 
-Node autoprovision (NAP) decides based on pending pod resource requirements the optimal VM configuration to run those workloads in the most efficient and cost effective manner.
+When you deploy workloads onto AKS, you need to make a decision about the node pool configuration regarding the VM size needed. As your workloads become more complex, and require different CPU, memory, and capabilities to run, the overhead of having to design your VM configuration for numerous resource requests becomes difficult.
+
+Node autoprovisioning (NAP) (preview) decides based on pending pod resource requirements the optimal VM configuration to run those workloads in the most efficient and cost effective manner.
 
 NAP is based on the Open Source [Karpenter](https://karpenter.sh) project, and the [AKS provider](https://github.com/Azure/karpenter) is also Open Source. NAP automatically deploys and configures and manages Karpenter on your AKS clusters.
 
+> [!IMPORTANT]
+> Node autoprovisioning (NAP) for AKS is currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
 
 ## Before you begin
 
@@ -60,13 +64,14 @@ NAP is based on the Open Source [Karpenter](https://karpenter.sh) project, and t
 ```
 
 ## Limitations
-* Windows and Azure Linux node pools aren't supported yet
-* Kubelet configuration through Node pool configuration is not supported
-* NAP can only be enabled on new clusters currently
+
+- Windows and Azure Linux node pools aren't supported yet
+- Kubelet configuration through Node pool configuration is not supported
+- NAP can only be enabled on new clusters currently
 
 ## Enable node autoprovisioning
-To enable node autoprovisioning, create a new cluster using the az aks create command and set --node-provisioning-mode to "Auto". You'll also need to use overlay networking and the cilium network policy.
 
+To enable node autoprovisioning, create a new cluster using the az aks create command and set --node-provisioning-mode to "Auto". You'll also need to use overlay networking and the cilium network policy.
 
 ### [Azure CLI](#tab/azure-cli)
 
@@ -76,6 +81,7 @@ az aks create --name karpuktest --resource-group karpuk --node-provisioning-mode
 ```
 
 ### [Azure ARM](#tab/azure-arm)
+
 ```azurecli-interactive
 az deployment group create --resource-group napcluster --template-file ./nap.json
 ```
@@ -125,16 +131,17 @@ az deployment group create --resource-group napcluster --template-file ./nap.jso
 ]
 }
 ```
+
 ---
+
 ## Node pools
-Node autoprovision uses a list of VM SKUs as a starting point to decide which is best suited for the workloads that are in a pending state. Having control over what SKU you want in the initial pool allows you to specify specific SKU families, or VM types and the maximum amount of resources a provisioner uses.
 
+Node autoprovision uses a list of VM SKUs as a starting point to decide which is best suited for the workloads that are in a pending state. Having control over what SKU you want in the initial pool allows you to specify specific SKU families, or VM types and the maximum amount of resources a provisioner uses.
 
 If you have specific VM SKUs that are reserved instances, for example, you may wish to only use those VMs as the starting pool.
 
 You can have multiple node pool definitions in a cluster, but AKS deploys a default node pool definition that you can modify:
 
-
 ```yaml
 apiVersion: karpenter.sh/v1beta1
 kind: NodePool
@@ -172,29 +179,27 @@ spec:
 - D
 ```
 
-### Supported node provisioner requirements
+### Supported node provisioner requirements
 
-#### SKU selectors with well known labels
+#### SKU selectors with well known labels
 
-| Selector | Description | Example |
----|---|---|
-| karpenter.azure.com/sku-family | VM SKU Family | D, F, L etc. |
-| karpenter.azure.com/sku-name | Explicit SKU name | Standard_A1_v2 |
-| karpenter.azure.com/sku-version | SKU version (without "v", can use 1) | 1 , 2 |
+| Selector | Description | Example |
+|--|--|--|
+| karpenter.azure.com/sku-family | VM SKU Family | D, F, L etc. |
+| karpenter.azure.com/sku-name | Explicit SKU name | Standard_A1_v2 |
+| karpenter.azure.com/sku-version | SKU version (without "v", can use 1) | 1 , 2 |
 | karpenter.sh/capacity-type | VM allocation type (Spot / On Demand) | spot or on-demand |
 | karpenter.azure.com/sku-cpu | Number of CPUs in VM | 16 |
-| karpenter.azure.com/sku-memory | Memory in VM in MiB | 131072 |
-| karpenter.azure.com/sku-gpu-name | GPU name | A100 |
-| karpenter.azure.com/sku-gpu-manufacturer | GPU manufacturer | nvidia |
+| karpenter.azure.com/sku-memory | Memory in VM in MiB | 131072 |
+| karpenter.azure.com/sku-gpu-name | GPU name | A100 |
+| karpenter.azure.com/sku-gpu-manufacturer | GPU manufacturer | nvidia |
 | karpenter.azure.com/sku-gpu-count | GPU count per VM | 2 |
 | karpenter.azure.com/sku-networking-accelerated | Whether the VM has accelerated networking | [true, false] |
 | karpenter.azure.com/sku-storage-premium-capable | Whether the VM supports Premium IO storage | [true, false] |
 | karpenter.azure.com/sku-storage-ephemeralos-maxsize | Size limit for the Ephemeral OS disk in Gb | 92 |
-| topology.kubernetes.io/zone | The Availability Zone(s) | [uksouth-1,uksouth-2,uksouth-3] |
-| kubernetes.io/os | Operating System (Linux only during preview) | linux |
-| kubernetes.io/arch | CPU architecture (AMD64 or ARM64) | [amd64, arm64] |
-
-
+| topology.kubernetes.io/zone | The Availability Zone(s) | [uksouth-1,uksouth-2,uksouth-3] |
+| kubernetes.io/os | Operating System (Linux only during preview) | linux |
+| kubernetes.io/arch | CPU architecture (AMD64 or ARM64) | [amd64, arm64] |
 
 To list the VM SKU capabilities and allowed values, use the `vm list-skus` command from the Azure CLI.
 
@@ -203,7 +208,8 @@ az vm list-skus --resource-type virtualMachines --location <location> --query '[
 ```
 
 ## Node pool limits
-By default, NAP attempts to schedule your workloads within the Azure quota you have available. You can also specify the upper limit of resources that is used by a Nodepool, specifying limits within the Node pool spec.
+
+By default, NAP attempts to schedule your workloads within the Azure quota you have available. You can also specify the upper limit of resources that is used by a node pool, specifying limits within the node pool spec.
 
 ```
 # Resource limits constrain the total size of the cluster.
@@ -213,23 +219,26 @@ By default, NAP attempts to schedule your workloads within the Azure quota you h
 memory: 1000Gi
 ```
 
-
 ## Node pool weights
-When you have multiple Nodepools defined, it's possible to set a preference of where a workload should be scheduled. Define the relative weight on your Node pool definitions.
+
+When you have multiple node pools defined, it's possible to set a preference of where a workload should be scheduled. Define the relative weight on your Node pool definitions.
 
 ```
 # Priority given to the node pool when the scheduler considers which to select. Higher weights indicate higher priority when comparing node pools.
 # Specifying no weight is equivalent to specifying a weight of 0.
 weight: 10
 ```
 
-## Kubernetes and node image updates
+## Kubernetes and node image updates
+
 AKS with NAP manages the Kubernetes version upgrades and VM OS disk updates for you by default.
 
 ### Kubernetes upgrades
+
 Kubernetes upgrades for NAP node pools follows the Control Plane Kubernetes version. If you perform a cluster upgrade, your NAP nodes are updated automatically to follow the same versioning.
 
 ### Node image updates
+
 By default NAP node pool virtual machines are automatically updated when a new image is available. If you wish to pin a node pool at a certain node image version, you can set the imageVersion on the node class:
 
 ```kubectl
@@ -266,14 +275,12 @@ spec:
 
 Removing the imageVersion spec would revert the node pool to be updated to the latest node image version.
 
-
 ## Node disruption
 
 When the workloads on your nodes scale down, NAP uses disruption rules on the Node pool specification to decide when and how to remove those nodes and potentially reschedule your workloads to be more efficient.
 
 You can remove a node manually using `kubectl delete node`, but NAP can also control when it should optimize your nodes.
 
-
 ```yaml
 disruption:
 # Describes which types of Nodes NAP should consider for consolidation
@@ -288,7 +295,8 @@ You can remove a node manually using `kubectl delete node`, but NAP can also con
 consolidateAfter: 30s
 ```
 
-## Monitoring selection events
+## Monitoring selection events
+
 Node autoprovision produces cluster events that can be used to monitor deployment and scheduling decisions being made. You can view events through the Kubernetes events stream.
 
 ```
@@ -297,4 +305,3 @@ kubectl get events -A --field-selector source=karpenter -w
 
 [az-extension-add]: /cli/azure/extension#az-extension-add
 [az-extension-update]: /cli/azure/extension#az-extension-update
-[az-feature-register]: /cli/azure/feature#az-feature-register
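Pulling together the limits and weight snippets shown in the hunks above, a hedged sketch of how they might sit in a single NodePool spec (the pool name and the CPU value are illustrative assumptions):

```yaml
apiVersion: karpenter.sh/v1beta1
kind: NodePool
metadata:
  name: default                  # placeholder name
spec:
  # Higher weight means this node pool is preferred when several pools could satisfy a pending pod.
  weight: 10
  # Upper bound on the total resources this node pool may provision across all of its nodes.
  limits:
    cpu: "400"                   # illustrative value
    memory: 1000Gi
  # template.spec (requirements, nodeClassRef, and so on) omitted for brevity.
```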

articles/aks/supported-kubernetes-versions.md

Lines changed: 3 additions & 3 deletions
@@ -144,11 +144,11 @@ New Supported Version List
 
 ## Platform support policy
 
-Platform support policy is a reduced support plan for certain unsupported kubernetes versions. During platform support, customers only receive support from Microsoft for AKS/Azure platform related issues. Any issues related to Kubernetes functionality and components aren't supported.
+Platform support policy is a reduced support plan for certain unsupported Kubernetes versions. During platform support, customers only receive support from Microsoft for AKS/Azure platform related issues. Any issues related to Kubernetes functionality and components aren't supported.
 
-Platform support policy applies to clusters in an n-3 version (where n is the latest supported AKS GA minor version), before the cluster drops to n-4. For example, kubernetes v1.25 is considered platform support when v1.28 is the latest GA version. However, during the v1.29 GA release, v1.25 will then auto-upgrade to v1.26. If you are a running an n-2 version, the moment it becomes n-3 it also becomes deprecated, and you enter into the platform support policy.
+Platform support policy applies to clusters in an n-3 version (where n is the latest supported AKS GA minor version), before the cluster drops to n-4. For example, Kubernetes v1.25 is considered platform support when v1.28 is the latest GA version. However, during the v1.29 GA release, v1.25 will then auto-upgrade to v1.26. If you are a running an n-2 version, the moment it becomes n-3 it also becomes deprecated, and you enter into the platform support policy.
 
-AKS relies on the releases and patches from [kubernetes](https://kubernetes.io/releases/), which is an Open Source project that only supports a sliding window of three minor versions. AKS can only guarantee [full support](#kubernetes-version-support-policy) while those versions are being serviced upstream. Since there's no more patches being produced upstream, AKS can either leave those versions unpatched or fork. Due to this limitation, platform support doesn't support anything from relying on kubernetes upstream.
+AKS relies on the releases and patches from [Kubernetes](https://kubernetes.io/releases/), which is an Open Source project that only supports a sliding window of three minor versions. AKS can only guarantee [full support](#kubernetes-version-support-policy) while those versions are being serviced upstream. Since there's no more patches being produced upstream, AKS can either leave those versions unpatched or fork. Due to this limitation, platform support doesn't support anything from relying on Kubernetes upstream.
 
 This table outlines support guidelines for Community Support compared to Platform support.
 
includes/container-service-limits.md

Lines changed: 1 addition & 1 deletion
@@ -12,7 +12,7 @@ ms.custom: include file
 | Resource | Limit |
 |--|:-|
 | Maximum clusters per subscription | 5000 <br />Note: spread clusters across different regions to account for Azure API throttling limits |
-| Maximum nodes per cluster with Virtual Machine Scale Sets and [Standard Load Balancer SKU][standard-load-balancer] | 5000 across all [node-pools][node-pool] (default limit: 1000) <br />Note: Running more than a 1000 nodes per cluster requires increasing the default node limit quota. [Contact support][Contact Support] for assistance. |
+| Maximum nodes per cluster with Virtual Machine Scale Sets and [Standard Load Balancer SKU][standard-load-balancer] | 5000 across all [node pools][node-pool] (default limit: 1000) <br />Note: Running more than a 1000 nodes per cluster requires increasing the default node limit quota. [Contact support][Contact Support] for assistance. |
 | Maximum nodes per node pool (Virtual Machine Scale Sets node pools) | 1000 |
 | Maximum node pools per cluster | 100 |
 | Maximum pods per node: with [Kubenet][Kubenet] networking plug-in<sup>1</sup> | Maximum: 250 <br /> Azure CLI default: 110 <br /> Azure Resource Manager template default: 110 <br /> Azure portal deployment default: 30 |
