Commit 8009bd9

Acrolinx updates to concepts-clusters-workloads.md

1 parent 8d1da18

articles/aks/concepts-clusters-workloads.md

Lines changed: 16 additions & 17 deletions
@@ -54,7 +54,7 @@ The control plane includes the following core Kubernetes components:
 | *kube-scheduler* | When you create or scale applications, the scheduler determines what nodes can run the workload and starts the identified nodes. |
 | *kube-controller-manager* | The controller manager oversees a number of smaller controllers that perform actions such as replicating pods and handling node operations. |
 
-While you don't need to configure control plane components, you can't access the control plane directly. Kubernetes control plane and node upgrades are orchestrated through the Azure CLI or Azure portal. To troubleshoot possible issues, you can review the control plane logs using Azure Monitor.
+Keep in mind that you can't directly access the control plane. Kubernetes control plane and node upgrades are orchestrated through the Azure CLI or Azure portal. To troubleshoot possible issues, you can review the control plane logs using Azure Monitor.
 
 > [!NOTE]
 > If you want to configure or directly access a control plane, you can deploy a self-managed Kubernetes cluster using [Cluster API Provider Azure][cluster-api-provider-azure].
@@ -77,7 +77,7 @@ The Azure VM size for your nodes defines CPUs, memory, size, and the storage typ
 
 In AKS, the VM image for your cluster's nodes is based on Ubuntu Linux, [Azure Linux](use-azure-linux.md), or Windows Server 2022. When you create an AKS cluster or scale out the number of nodes, the Azure platform automatically creates and configures the requested number of VMs. Agent nodes are billed as standard VMs, so any VM size discounts, including [Azure reservations][reservation-discounts], are automatically applied.
 
-For managed disks, the default disk size and performance is assigned according to the selected VM SKU and vCPU count. For more information, see [Default OS disk sizing](cluster-configuration.md#default-os-disk-sizing).
+For managed disks, default disk size and performance are assigned according to the selected VM SKU and vCPU count. For more information, see [Default OS disk sizing](cluster-configuration.md#default-os-disk-sizing).
 
 > [!NOTE]
 > If you need advanced configuration and control on your Kubernetes node container runtime and OS, you can deploy a self-managed cluster using [Cluster API Provider Azure][cluster-api-provider-azure].
@@ -86,15 +86,15 @@ For managed disks, the default disk size and performance is assigned according t
 
 AKS supports Ubuntu 22.04 and Azure Linux 2.0 as the node operating system (OS) for clusters with Kubernetes 1.25 and higher. Ubuntu 18.04 can also be specified at node pool creation for Kubernetes versions 1.24 and below.
 
-AKS supports Windows Server 2022 as the default OS for Windows node pools in clusters with Kubernetes 1.25 and higher. Windows Server 2019 can also be specified at node pool creation for Kubernetes versions 1.32 and below. Windows Server 2019 is being retired after Kubernetes version 1.32 reaches end of life (EOL) and isn't supported in future releases. For more information about this retirement, see the [AKS release notes][aks-release-notes].
+AKS supports Windows Server 2022 as the default OS for Windows node pools in clusters with Kubernetes 1.25 and higher. Windows Server 2019 can also be specified at node pool creation for Kubernetes versions 1.32 and below. Windows Server 2019 is being retired after Kubernetes version 1.32 reaches end of life and isn't supported in future releases. For more information about this retirement, see the [AKS release notes][aks-release-notes].
 
 ### Container runtime configuration
 
 A container runtime is software that executes containers and manages container images on a node. The runtime helps abstract away sys-calls or OS-specific functionality to run containers on Linux or Windows. For Linux node pools, `containerd` is used on Kubernetes version 1.19 and higher. For Windows Server 2019 and 2022 node pools, `containerd` is generally available and is the only runtime option on Kubernetes version 1.23 and higher. As of May 2023, Docker is retired and no longer supported. For more information about this retirement, see the [AKS release notes][aks-release-notes].
 
 [`Containerd`](https://containerd.io/) is an [OCI](https://opencontainers.org/) (Open Container Initiative) compliant core container runtime that provides the minimum set of required functionality to execute containers and manage images on a node. With `containerd`-based nodes and node pools, the kubelet talks directly to `containerd` using the CRI (container runtime interface) plugin, removing extra hops in the data flow when compared to the Docker CRI implementation. As such, you see better pod startup latency and less resource (CPU and memory) usage.
 
-`Containerd` works on every GA version of Kubernetes in AKS, and in every newer Kubernetes version above v1.19, and supports all Kubernetes and AKS features.
+`Containerd` works on every GA version of Kubernetes in AKS, in every Kubernetes version starting from v1.19, and supports all Kubernetes and AKS features.
 
 > [!IMPORTANT]
 > Clusters with Linux node pools created on Kubernetes v1.19 or higher default to the `containerd` container runtime. Clusters with node pools on a earlier supported Kubernetes versions receive Docker for their container runtime. Linux node pools will be updated to `containerd` once the node pool Kubernetes version is updated to a version that supports `containerd`.
@@ -148,7 +148,7 @@ Reserved memory in AKS includes the sum of two values:
 
 **AKS 1.29 and later**
 
-1. **`kubelet` daemon** has the *memory.available<100Mi* eviction rule by default. This ensures that a node always has at least 100Mi allocatable at all times. When a host is below that available memory threshold, the `kubelet` triggers the termination of one of the running pods and frees up memory on the host machine.
+1. **`kubelet` daemon** has the *memory.available<100Mi* eviction rule by default. This rule ensures that a node has at least 100Mi allocatable at all times. When a host is below that available memory threshold, the `kubelet` triggers the termination of one of the running pods and frees up memory on the host machine.
 2. **A rate of memory reservations** set according to the lesser value of: *20MB * Max Pods supported on the Node + 50MB* or *25% of the total system memory resources*.
 
 **Examples**:
@@ -159,19 +159,18 @@ Reserved memory in AKS includes the sum of two values:
 
 **AKS versions prior to 1.29**
 
-1. **`kubelet` daemon** is installed on all Kubernetes agent nodes to manage container creation and termination. By default on AKS, `kubelet` daemon has the *memory.available<750Mi* eviction rule, ensuring a node must always have at least 750Mi allocatable at all times. When a host is below that available memory threshold, the `kubelet` will trigger to terminate one of the running pods and free up memory on the host machine.
-
+1. **`kubelet` daemon** has the *memory.available<750Mi* eviction rule by default. This rule ensures that a node has at least 750Mi allocatable at all times. When a host is below that available memory threshold, the `kubelet` triggers the termination of one of the running pods and frees up memory on the host machine.
 2. **A regressive rate of memory reservations** for the kubelet daemon to properly function (*kube-reserved*).
    * 25% of the first 4GB of memory
    * 20% of the next 4GB of memory (up to 8GB)
    * 10% of the next 8GB of memory (up to 16GB)
    * 6% of the next 112GB of memory (up to 128GB)
-   * 2% of any memory above 128GB
+   * 2% of any memory more than 128GB
 
 > [!NOTE]
 > AKS reserves an extra 2GB for system processes in Windows nodes that isn't part of the calculated memory.
 
-Memory and CPU allocation rules are designed to do the following:
+Memory and CPU allocation rules are designed to:
 
 * Keep agent nodes healthy, including some hosting system pods critical to cluster health.
 * Cause the node to report less allocatable memory and CPU than it would report if it weren't part of a Kubernetes cluster.
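The two memory-reservation schemes in this file can be sanity-checked with a short calculation. The following Python sketch is illustrative only, not an official AKS tool; the function names are invented, and the eviction thresholds from the text (750Mi before 1.29, 100Mi in 1.29 and later) are reserved in addition to these values:

```python
def kube_reserved_pre_129(total_gb: float) -> float:
    """Regressive kube-reserved memory (GB) for AKS versions prior to 1.29."""
    tiers = [(4, 0.25),    # 25% of the first 4GB of memory
             (4, 0.20),    # 20% of the next 4GB (up to 8GB)
             (8, 0.10),    # 10% of the next 8GB (up to 16GB)
             (112, 0.06)]  # 6% of the next 112GB (up to 128GB)
    reserved, remaining = 0.0, total_gb
    for size, rate in tiers:
        portion = min(remaining, size)
        reserved += portion * rate
        remaining -= portion
    return reserved + remaining * 0.02  # 2% of any memory more than 128GB


def memory_reservation_129(total_gb: float, max_pods: int) -> float:
    """AKS 1.29+: lesser of (20MB * max pods + 50MB) and 25% of total memory, in GB."""
    return min((20 * max_pods + 50) / 1024, 0.25 * total_gb)


# An 8GB node reserves 25% of 4GB + 20% of 4GB = 1.8GB under the pre-1.29 rule,
# but only min(20MB * 30 + 50MB, 2GB) = 650MB ≈ 0.63GB under the 1.29+ rule with 30 max pods.
print(round(kube_reserved_pre_129(8.0), 2), round(memory_reservation_129(8.0, 30), 2))
```

The extra 2GB reserved on Windows nodes, mentioned in the note above, isn't included in these figures.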
@@ -241,7 +240,7 @@ When you create an AKS cluster, you specify an Azure resource group to create th
 * The virtual network for the cluster
 * The storage for the cluster
 
-The node resource group is assigned a name by default with the following format: *MC_resourceGroupName_clusterName_location*. During cluster creation, you have the option to specify the name assigned to your node resource group. When using an Azure Resource Manager template, you can define the name using the `nodeResourceGroup` property. When using Azure CLI, you use the `--node-resource-group` parameter with the `az aks create` command, as shown in the following example:
+The node resource group is assigned a name by default with the following format: *MC_resourceGroupName_clusterName_location*. During cluster creation, you can specify the name assigned to your node resource group. When using an Azure Resource Manager template, you can define the name using the `nodeResourceGroup` property. When using Azure CLI, you use the `--node-resource-group` parameter with the `az aks create` command, as shown in the following example:
 
 ```azurecli-interactive
 az aks create --name myAKSCluster --resource-group myResourceGroup --node-resource-group myNodeResourceGroup
@@ -276,7 +275,7 @@ When you create a pod, you can define *resource requests* for a certain amount o
 
 For more information, see [Kubernetes pods][kubernetes-pods] and [Kubernetes pod lifecycle][kubernetes-pod-lifecycle].
 
-A pod is a logical resource, but application workloads run on the containers. Pods are typically ephemeral, disposable resources. Individually scheduled pods miss some of the high availability and redundancy Kubernetes features. Instead, pods are deployed and managed by Kubernetes *Controllers*, such as the Deployment Controller.
+A pod is a logical resource, but application workloads run on the containers. Pods are typically ephemeral, disposable resources. Individually scheduled pods miss some of the high availability and redundancy Kubernetes features. Instead, Kubernetes *Controllers*, such as the Deployment Controller, deploy and manage pods.
 
 ## Deployments and YAML manifests
 
@@ -329,8 +328,8 @@ A breakdown of the deployment specifications in the YAML manifest file is as fol
 | ----------------- | ------------- |
 | `.apiVersion` | Specifies the API group and API resource you want to use when creating the resource. |
 | `.kind` | Specifies the type of resource you want to create. |
-| `.metadata.name` | Specifies the name of the deployment. This file will run the *nginx* image from Docker Hub. |
-| `.spec.replicas` | Specifies how many pods to create. This file will create three duplicate pods. |
+| `.metadata.name` | Specifies the name of the deployment. This example YAML file runs the *nginx* image from Docker Hub. |
+| `.spec.replicas` | Specifies how many pods to create. This example YAML file creates three duplicate pods. |
 | `.spec.selector` | Specifies which pods will be affected by this deployment. |
 | `.spec.selector.matchLabels` | Contains a map of *{key, value}* pairs that allow the deployment to find and manage the created pods. |
 | `.spec.selector.matchLabels.app` | Has to match `.spec.template.metadata.labels`. |
@@ -345,9 +344,9 @@ A breakdown of the deployment specifications in the YAML manifest file is as fol
 | `.spec.spec.resources.requests` | Specifies the minimum amount of compute resources required. |
 | `.spec.spec.resources.requests.cpu` | Specifies the minimum amount of CPU required. |
 | `.spec.spec.resources.requests.memory` | Specifies the minimum amount of memory required. |
-| `.spec.spec.resources.limits` | Specifies the maximum amount of compute resources allowed. This limit is enforced by the kubelet. |
-| `.spec.spec.resources.limits.cpu` | Specifies the maximum amount of CPU allowed. This limit is enforced by the kubelet. |
-| `.spec.spec.resources.limits.memory` | Specifies the maximum amount of memory allowed. This limit is enforced by the kubelet. |
+| `.spec.spec.resources.limits` | Specifies the maximum amount of compute resources allowed. The kubelet enforces this limit. |
+| `.spec.spec.resources.limits.cpu` | Specifies the maximum amount of CPU allowed. The kubelet enforces this limit. |
+| `.spec.spec.resources.limits.memory` | Specifies the maximum amount of memory allowed. The kubelet enforces this limit. |
 
 More complex applications can be created by including services, such as load balancers, within the YAML manifest.
 
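The fields described in the tables above map onto a manifest such as the following minimal sketch. The *nginx* image and three replicas come from the table's own descriptions; the label values and resource quantities are illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx            # must match .spec.template.metadata.labels
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx      # the nginx image from Docker Hub
          resources:
            requests:       # minimum compute resources required
              cpu: 100m
              memory: 128Mi
            limits:         # maximum allowed; the kubelet enforces these
              cpu: 250m
              memory: 256Mi
```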
@@ -361,7 +360,7 @@ To use Helm, install the Helm client on your computer, or use the Helm client in
 
 ## StatefulSets and DaemonSets
 
-Using the Kubernetes Scheduler, the Deployment Controller runs replicas on any available node with available resources. While this approach might be sufficient for stateless applications, the Deployment Controller isn't ideal for applications that require the following specifications:
+The Deployment Controller uses the Kubernetes Scheduler and runs replicas on any available node with available resources. While this approach might be sufficient for stateless applications, the Deployment Controller isn't ideal for applications that require the following specifications:
 
 * A persistent naming convention or storage.
 * A replica to exist on each select node within a cluster.
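A workload that needs a replica on each node fits a DaemonSet rather than a Deployment. A minimal sketch (the name, labels, and container are illustrative placeholders):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-agent          # illustrative name for a per-node agent
spec:
  selector:
    matchLabels:
      app: node-agent
  template:
    metadata:
      labels:
        app: node-agent
    spec:
      containers:
        - name: node-agent
          image: busybox    # placeholder image; a real agent would go here
          command: ["sleep", "3600"]
```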
