articles/aks/concepts-clusters-workloads.md (16 additions & 17 deletions)
@@ -54,7 +54,7 @@ The control plane includes the following core Kubernetes components:
|*kube-scheduler*| When you create or scale applications, the scheduler determines what nodes can run the workload and starts the identified nodes. |
|*kube-controller-manager*| The controller manager oversees a number of smaller controllers that perform actions such as replicating pods and handling node operations. |

-While you don't need to configure control plane components, you can't access the control plane directly. Kubernetes control plane and node upgrades are orchestrated through the Azure CLI or Azure portal. To troubleshoot possible issues, you can review the control plane logs using Azure Monitor.
+Keep in mind that you can't directly access the control plane. Kubernetes control plane and node upgrades are orchestrated through the Azure CLI or Azure portal. To troubleshoot possible issues, you can review the control plane logs using Azure Monitor.

> [!NOTE]
> If you want to configure or directly access a control plane, you can deploy a self-managed Kubernetes cluster using [Cluster API Provider Azure][cluster-api-provider-azure].
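The paragraph above notes that upgrades are orchestrated through the Azure CLI or Azure portal. As a minimal sketch of that CLI flow (illustrative only, not part of this file's diff; the cluster name, resource group, and target version are placeholders):

```azurecli-interactive
# List the Kubernetes versions this cluster can upgrade to.
az aks get-upgrades --resource-group myResourceGroup --name myAKSCluster --output table

# Upgrade the control plane and nodes to one of the versions returned above (placeholder version shown).
az aks upgrade --resource-group myResourceGroup --name myAKSCluster --kubernetes-version 1.29.2
```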
@@ -77,7 +77,7 @@ The Azure VM size for your nodes defines CPUs, memory, size, and the storage typ
In AKS, the VM image for your cluster's nodes is based on Ubuntu Linux, [Azure Linux](use-azure-linux.md), or Windows Server 2022. When you create an AKS cluster or scale out the number of nodes, the Azure platform automatically creates and configures the requested number of VMs. Agent nodes are billed as standard VMs, so any VM size discounts, including [Azure reservations][reservation-discounts], are automatically applied.

-For managed disks, the default disk size and performance is assigned according to the selected VM SKU and vCPU count. For more information, see [Default OS disk sizing](cluster-configuration.md#default-os-disk-sizing).
+For managed disks, default disk size and performance are assigned according to the selected VM SKU and vCPU count. For more information, see [Default OS disk sizing](cluster-configuration.md#default-os-disk-sizing).

> [!NOTE]
> If you need advanced configuration and control on your Kubernetes node container runtime and OS, you can deploy a self-managed cluster using [Cluster API Provider Azure][cluster-api-provider-azure].
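If the SKU-based default doesn't fit, the OS disk size can also be set explicitly at creation time. A minimal sketch (illustrative only, not part of this diff; the names and the 128 GiB size are placeholders):

```azurecli-interactive
# Create a cluster whose nodes use a 128 GiB managed OS disk instead of the SKU default.
az aks create --name myAKSCluster --resource-group myResourceGroup --node-osdisk-size 128
```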
@@ -86,15 +86,15 @@ For managed disks, the default disk size and performance is assigned according t
AKS supports Ubuntu 22.04 and Azure Linux 2.0 as the node operating system (OS) for clusters with Kubernetes 1.25 and higher. Ubuntu 18.04 can also be specified at node pool creation for Kubernetes versions 1.24 and below.

-AKS supports Windows Server 2022 as the default OS for Windows node pools in clusters with Kubernetes 1.25 and higher. Windows Server 2019 can also be specified at node pool creation for Kubernetes versions 1.32 and below. Windows Server 2019 is being retired after Kubernetes version 1.32 reaches end of life (EOL) and isn't supported in future releases. For more information about this retirement, see the [AKS release notes][aks-release-notes].
+AKS supports Windows Server 2022 as the default OS for Windows node pools in clusters with Kubernetes 1.25 and higher. Windows Server 2019 can also be specified at node pool creation for Kubernetes versions 1.32 and below. Windows Server 2019 is being retired after Kubernetes version 1.32 reaches end of life and isn't supported in future releases. For more information about this retirement, see the [AKS release notes][aks-release-notes].

### Container runtime configuration

A container runtime is software that executes containers and manages container images on a node. The runtime helps abstract away sys-calls or OS-specific functionality to run containers on Linux or Windows. For Linux node pools, `containerd` is used on Kubernetes version 1.19 and higher. For Windows Server 2019 and 2022 node pools, `containerd` is generally available and is the only runtime option on Kubernetes version 1.23 and higher. As of May 2023, Docker is retired and no longer supported. For more information about this retirement, see the [AKS release notes][aks-release-notes].

[`Containerd`](https://containerd.io/) is an [OCI](https://opencontainers.org/) (Open Container Initiative) compliant core container runtime that provides the minimum set of required functionality to execute containers and manage images on a node. With `containerd`-based nodes and node pools, the kubelet talks directly to `containerd` using the CRI (container runtime interface) plugin, removing extra hops in the data flow when compared to the Docker CRI implementation. As such, you see better pod startup latency and less resource (CPU and memory) usage.

-`Containerd` works on every GA version of Kubernetes in AKS, and in every newer Kubernetes version above v1.19, and supports all Kubernetes and AKS features.
+`Containerd` works on every GA version of Kubernetes in AKS, in every Kubernetes version starting from v1.19, and supports all Kubernetes and AKS features.

> [!IMPORTANT]
> Clusters with Linux node pools created on Kubernetes v1.19 or higher default to the `containerd` container runtime. Clusters with node pools on earlier supported Kubernetes versions receive Docker for their container runtime. Linux node pools will be updated to `containerd` once the node pool Kubernetes version is updated to a version that supports `containerd`.
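To confirm which runtime a given node pool is actually using (an illustrative check, not part of this diff), the standard node listing reports each node's container runtime and version:

```bash
# The CONTAINER-RUNTIME column shows, for example, containerd://1.7.x on containerd-based nodes.
kubectl get nodes -o wide
```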
@@ -148,7 +148,7 @@ Reserved memory in AKS includes the sum of two values:
**AKS 1.29 and later**

-1. **`kubelet` daemon** has the *memory.available<100Mi* eviction rule by default. This ensures that a node always has at least 100Mi allocatable at all times. When a host is below that available memory threshold, the `kubelet` triggers the termination of one of the running pods and frees up memory on the host machine.
+1. **`kubelet` daemon** has the *memory.available<100Mi* eviction rule by default. This rule ensures that a node has at least 100Mi allocatable at all times. When a host is below that available memory threshold, the `kubelet` triggers the termination of one of the running pods and frees up memory on the host machine.
2. **A rate of memory reservations** set according to the lesser value of: *20MB * Max Pods supported on the Node + 50MB* or *25% of the total system memory resources*.

**Examples**:
@@ -159,19 +159,18 @@ Reserved memory in AKS includes the sum of two values:
**AKS versions prior to 1.29**

-1. **`kubelet` daemon** is installed on all Kubernetes agent nodes to manage container creation and termination. By default on AKS, `kubelet` daemon has the *memory.available<750Mi* eviction rule, ensuring a node must always have at least 750Mi allocatable at all times. When a host is below that available memory threshold, the `kubelet` will trigger to terminate one of the running pods and free up memory on the host machine.
-
+1. **`kubelet` daemon** has the *memory.available<750Mi* eviction rule by default. This rule ensures that a node has at least 750Mi allocatable at all times. When a host is below that available memory threshold, the `kubelet` triggers the termination of one of the running pods and frees up memory on the host machine.
2. **A regressive rate of memory reservations** for the kubelet daemon to properly function (*kube-reserved*).
* 25% of the first 4GB of memory
* 20% of the next 4GB of memory (up to 8GB)
* 10% of the next 8GB of memory (up to 16GB)
* 6% of the next 112GB of memory (up to 128GB)
-* 2% of any memory above 128GB
+* 2% of any memory more than 128GB

> [!NOTE]
> AKS reserves an extra 2GB for system processes in Windows nodes that isn't part of the calculated memory.

-Memory and CPU allocation rules are designed to do the following:
+Memory and CPU allocation rules are designed to:

* Keep agent nodes healthy, including some hosting system pods critical to cluster health.
* Cause the node to report less allocatable memory and CPU than it would report if it weren't part of a Kubernetes cluster.
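To make the regressive rates above concrete (an illustrative calculation, not part of this diff): on a node with 16GB of memory running an AKS version prior to 1.29, kube-reserved memory is 25% of 4GB + 20% of 4GB + 10% of 8GB = 1GB + 0.8GB + 0.8GB = 2.6GB, and the 750Mi eviction threshold applies on top of that.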
@@ -241,7 +240,7 @@ When you create an AKS cluster, you specify an Azure resource group to create th
* The virtual network for the cluster
* The storage for the cluster

-The node resource group is assigned a name by default with the following format: *MC_resourceGroupName_clusterName_location*. During cluster creation, you have the option to specify the name assigned to your node resource group. When using an Azure Resource Manager template, you can define the name using the `nodeResourceGroup` property. When using Azure CLI, you use the `--node-resource-group` parameter with the `az aks create` command, as shown in the following example:
+The node resource group is assigned a name by default with the following format: *MC_resourceGroupName_clusterName_location*. During cluster creation, you can specify the name assigned to your node resource group. When using an Azure Resource Manager template, you can define the name using the `nodeResourceGroup` property. When using Azure CLI, you use the `--node-resource-group` parameter with the `az aks create` command, as shown in the following example:

```azurecli-interactive
az aks create --name myAKSCluster --resource-group myResourceGroup --node-resource-group myNodeResourceGroup
@@ -276,7 +275,7 @@ When you create a pod, you can define *resource requests* for a certain amount o
For more information, see [Kubernetes pods][kubernetes-pods] and [Kubernetes pod lifecycle][kubernetes-pod-lifecycle].

-A pod is a logical resource, but application workloads run on the containers. Pods are typically ephemeral, disposable resources. Individually scheduled pods miss some of the high availability and redundancy Kubernetes features. Instead, pods are deployed and managed by Kubernetes *Controllers*, such as the Deployment Controller.
+A pod is a logical resource, but application workloads run on the containers. Pods are typically ephemeral, disposable resources. Individually scheduled pods miss some of the high availability and redundancy Kubernetes features. Instead, Kubernetes *Controllers*, such as the Deployment Controller, deploy and manage pods.

## Deployments and YAML manifests
@@ -329,8 +328,8 @@ A breakdown of the deployment specifications in the YAML manifest file is as fol
| ----------------- | ------------- |
| `.apiVersion` | Specifies the API group and API resource you want to use when creating the resource. |
| `.kind` | Specifies the type of resource you want to create. |
-| `.metadata.name` | Specifies the name of the deployment. This file will run the *nginx* image from Docker Hub. |
-| `.spec.replicas` | Specifies how many pods to create. This file will create three duplicate pods. |
+| `.metadata.name` | Specifies the name of the deployment. This example YAML file runs the *nginx* image from Docker Hub. |
+| `.spec.replicas` | Specifies how many pods to create. This example YAML file creates three duplicate pods. |
| `.spec.selector` | Specifies which pods will be affected by this deployment. |
| `.spec.selector.matchLabels` | Contains a map of *{key, value}* pairs that allow the deployment to find and manage the created pods. |
| `.spec.selector.matchLabels.app` | Has to match `.spec.template.metadata.labels`. |
@@ -345,9 +344,9 @@ A breakdown of the deployment specifications in the YAML manifest file is as fol
| `.spec.spec.resources.requests` | Specifies the minimum amount of compute resources required. |
| `.spec.spec.resources.requests.cpu` | Specifies the minimum amount of CPU required. |
| `.spec.spec.resources.requests.memory` | Specifies the minimum amount of memory required. |
-| `.spec.spec.resources.limits` | Specifies the maximum amount of compute resources allowed. This limit is enforced by the kubelet. |
-| `.spec.spec.resources.limits.cpu` | Specifies the maximum amount of CPU allowed. This limit is enforced by the kubelet. |
-| `.spec.spec.resources.limits.memory` | Specifies the maximum amount of memory allowed. This limit is enforced by the kubelet. |
+| `.spec.spec.resources.limits` | Specifies the maximum amount of compute resources allowed. The kubelet enforces this limit. |
+| `.spec.spec.resources.limits.cpu` | Specifies the maximum amount of CPU allowed. The kubelet enforces this limit. |
+| `.spec.spec.resources.limits.memory` | Specifies the maximum amount of memory allowed. The kubelet enforces this limit. |

More complex applications can be created by including services, such as load balancers, within the YAML manifest.
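For reference, a minimal deployment matching the fields described in the tables above (a sketch only, not the manifest from the article itself; the image tag, labels, and resource values are illustrative) can be applied straight from the shell:

```bash
# Minimal nginx Deployment: three replicas, a matchLabels selector, and
# per-container resource requests and limits (the limits are enforced by the kubelet).
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
        resources:
          requests:
            cpu: 250m
            memory: 64Mi
          limits:
            cpu: 500m
            memory: 256Mi
EOF
```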
@@ -361,7 +360,7 @@ To use Helm, install the Helm client on your computer, or use the Helm client in
## StatefulSets and DaemonSets

-Using the Kubernetes Scheduler, the Deployment Controller runs replicas on any available node with available resources. While this approach might be sufficient for stateless applications, the Deployment Controller isn't ideal for applications that require the following specifications:
+The Deployment Controller uses the Kubernetes Scheduler and runs replicas on any available node with available resources. While this approach might be sufficient for stateless applications, the Deployment Controller isn't ideal for applications that require the following specifications:

* A persistent naming convention or storage.
* A replica to exist on each select node within a cluster.