
Commit 9f7bda2

Merge pull request #228171 from MGoedtel/AKSAvailZone222
updated create cluster across zones section
2 parents b102114 + f3a4be0 commit 9f7bda2


articles/aks/availability-zones.md

Lines changed: 35 additions & 33 deletions
@@ -4,65 +4,67 @@ description: Learn how to create a cluster that distributes nodes across availab
services: container-service
ms.custom: fasttrack-edit, references_regions, devx-track-azurecli
ms.topic: article
-ms.date: 03/31/2022
+ms.date: 02/22/2023

---

# Create an Azure Kubernetes Service (AKS) cluster that uses availability zones

-An Azure Kubernetes Service (AKS) cluster distributes resources such as nodes and storage across logical sections of underlying Azure infrastructure. This deployment model when using availability zones, ensures nodes in a given availability zone are physically separated from those defined in another availability zone. AKS clusters deployed with multiple availability zones configured across a cluster provide a higher level of availability to protect against a hardware failure or a planned maintenance event.
+An Azure Kubernetes Service (AKS) cluster distributes resources such as nodes and storage across logical sections of underlying Azure infrastructure. Using availability zones physically separates nodes from other nodes deployed to different availability zones. AKS clusters deployed with multiple availability zones configured across a cluster provide a higher level of availability to protect against a hardware failure or a planned maintenance event.

-By defining node pools in a cluster to span multiple zones, nodes in a given node pool are able to continue operating even if a single zone has gone down. Your applications can continue to be available even if there is a physical failure in a single datacenter if orchestrated to tolerate failure of a subset of nodes.
+By defining node pools in a cluster to span multiple zones, nodes in a given node pool are able to continue operating even if a single zone has gone down. Your applications can continue to be available even if there's a physical failure in a single datacenter if orchestrated to tolerate failure of a subset of nodes.

This article shows you how to create an AKS cluster and distribute the node components across availability zones.

## Before you begin

-You need the Azure CLI version 2.0.76 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
+You need the Azure CLI version 2.0.76 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].

## Limitations and region availability

-AKS clusters can be created using availability zones in any Azure region that has availability zones.
+AKS clusters can use availability zones in any Azure region that has availability zones.

The following limitations apply when you create an AKS cluster using availability zones:

-* You can only define availability zones when the cluster or node pool is created.
-* Availability zone settings can't be updated after the cluster is created. You also can't update an existing, non-availability zone cluster to use availability zones.
+* You can only define availability zones during creation of the cluster or node pool.
+* It is not possible to update an existing non-availability zone cluster to use availability zones after creating the cluster.
* The chosen node size (VM SKU) selected must be available across all availability zones selected.
-* Clusters with availability zones enabled require use of Azure Standard Load Balancers for distribution across zones. This load balancer type can only be defined at cluster create time. For more information and the limitations of the standard load balancer, see [Azure load balancer standard SKU limitations][standard-lb-limitations].
+* Clusters with availability zones enabled require using Azure Standard Load Balancers for distribution across zones. You can only define this load balancer type at cluster create time. For more information and the limitations of the standard load balancer, see [Azure load balancer standard SKU limitations][standard-lb-limitations].

### Azure disk availability zone support

-- Volumes that use Azure managed LRS disks are not zone-redundant resources, those volumes cannot be attached across zones and must be co-located in the same zone as a given node hosting the target pod.
-- Volumes that use Azure managed ZRS disks(supported by Azure Disk CSI driver v1.5.0+) are zone-redundant resources, those volumes can be scheduled on all zone and non-zone agent nodes.
+- Volumes that use Azure managed LRS disks aren't zone-redundant resources. Attaching those volumes across zones isn't supported, so you need to co-locate a volume in the same zone as the node hosting the target pod.
+- Volumes that use Azure managed ZRS disks (supported by Azure Disk CSI driver v1.5.0 and later) are zone-redundant resources. You can schedule those volumes on all zone and non-zone agent nodes.

-Kubernetes is aware of Azure availability zones since version 1.12. You can deploy a PersistentVolumeClaim object referencing an Azure Managed Disk in a multi-zone AKS cluster and [Kubernetes will take care of scheduling](https://kubernetes.io/docs/setup/best-practices/multiple-zones/#storage-access-for-zones) any pod that claims this PVC in the correct availability zone.
+Kubernetes is aware of Azure availability zones since version 1.12. You can deploy a PersistentVolumeClaim object referencing an Azure Managed Disk in a multi-zone AKS cluster and [Kubernetes takes care of scheduling](https://kubernetes.io/docs/setup/best-practices/multiple-zones/#storage-access-for-zones) any pod that claims this PVC in the correct availability zone.

### Azure Resource Manager templates and availability zones

-When *creating* an AKS cluster, if you explicitly define a [null value in a template][arm-template-null] with syntax such as `"availabilityZones": null`, the Resource Manager template treats the property as if it doesn't exist, which means your cluster won’t have availability zones enabled. Also, if you create a cluster with a Resource Manager template that omits the availability zones property, availability zones are disabled.
+When *creating* an AKS cluster, understand the following details about specifying availability zones in a template:

-You can't update settings for availability zones on an existing cluster, so the behavior is different when updating an AKS cluster with Resource Manager templates. If you explicitly set a null value in your template for availability zones and *update* your cluster, there are no changes made to your cluster for availability zones. However, if you omit the availability zones property with syntax such as `"availabilityZones": []`, the deployment attempts to disable availability zones on your existing AKS cluster and **fails**.
+* If you explicitly define a [null value in a template][arm-template-null], for example by specifying `"availabilityZones": null`, the Resource Manager template treats the property as if it doesn't exist. This means your cluster doesn't deploy in an availability zone.
+* If you don't include the `"availabilityZones":` property in your Resource Manager template, your cluster doesn't deploy in an availability zone.
+* You can't update settings for availability zones on an existing cluster, so the behavior is different when you update an AKS cluster with Resource Manager templates. If you explicitly set a null value in your template for availability zones and *update* your cluster, it doesn't update your cluster for availability zones. However, if you omit the availability zones property with syntax such as `"availabilityZones": []`, the deployment attempts to disable availability zones on your existing AKS cluster and **fails**.
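For orientation, the following fragment sketches where the `availabilityZones` property sits inside the `agentPoolProfiles` section of a `Microsoft.ContainerService/managedClusters` resource. It's a trimmed, illustrative fragment rather than a complete deployable template; the pool name, node count, and VM size are placeholder values.

```json
"agentPoolProfiles": [
  {
    "name": "nodepool1",
    "count": 3,
    "vmSize": "Standard_DS2_v2",
    "availabilityZones": [ "1", "2", "3" ]
  }
]
```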

## Overview of availability zones for AKS clusters

-Availability zones are a high-availability offering that protects your applications and data from datacenter failures. Zones are unique physical locations within an Azure region. Each zone is made up of one or more datacenters equipped with independent power, cooling, and networking. To ensure resiliency, there's always more than one zone in all zone enabled regions. The physical separation of availability zones within a region protects applications and data from datacenter failures.
+Availability zones are a high-availability offering that protects your applications and data from datacenter failures. Zones are unique physical locations within an Azure region. Each zone includes one or more datacenters equipped with independent power, cooling, and networking. To ensure resiliency, there's always more than one zone in all zone enabled regions. The physical separation of availability zones within a region protects applications and data from datacenter failures.

For more information, see [What are availability zones in Azure?][az-overview].

-AKS clusters that are deployed using availability zones can distribute nodes across multiple zones within a single region. For example, a cluster in the *East US 2* region can create nodes in all three availability zones in *East US 2*. This distribution of AKS cluster resources improves cluster availability as they're resilient to failure of a specific zone.
+AKS clusters deployed using availability zones can distribute nodes across multiple zones within a single region. For example, a cluster in the *East US 2* region can create nodes in all three availability zones in *East US 2*. This distribution of AKS cluster resources improves cluster availability as they're resilient to failure of a specific zone.

![AKS node distribution across availability zones](media/availability-zones/aks-availability-zones.png)

-If a single zone becomes unavailable, your applications continue to run if the cluster is spread across multiple zones.
+If a single zone becomes unavailable, your applications continue to run on clusters configured to spread across multiple zones.

## Create an AKS cluster across availability zones

-When you create a cluster using the [az aks create][az-aks-create] command, the `--zones` parameter defines which zones agent nodes are deployed into. The control plane components such as etcd or the API are spread across the available zones in the region if you define the `--zones` parameter at cluster creation time. The specific zones which the control plane components are spread across are independent of what explicit zones are selected for the initial node pool.
+When you create a cluster using the [az aks create][az-aks-create] command, the `--zones` parameter specifies the zones to deploy agent nodes into. The control plane components such as etcd or the API spread across the available zones in the region during cluster deployment. The specific zones that the control plane components spread across are independent of what explicit zones you select for the initial node pool.

-If you don't define any zones for the default agent pool when you create an AKS cluster, control plane components are not guaranteed to spread across availability zones. You can add additional node pools using the [az aks nodepool add][az-aks-nodepool-add] command and specify `--zones` for new nodes, but it will not change how the control plane has been spread across zones. Availability zone settings can only be defined at cluster or node pool create-time.
+If you don't specify any zones for the default agent pool when you create an AKS cluster, the control plane components aren't present in availability zones. You can add more node pools using the [az aks nodepool add][az-aks-nodepool-add] command and specify `--zones` for new nodes. The command converts the AKS control plane to spread across availability zones.
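For reference, a minimal sketch of adding a zone-spanning node pool with that command follows; the pool name *mynodepool* and the node count and zone values are illustrative.

```azurecli-interactive
# Add a node pool named mynodepool and spread its three nodes across zones 1, 2, and 3.
az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name mynodepool \
    --node-count 3 \
    --zones 1 2 3
```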

-The following example creates an AKS cluster named *myAKSCluster* in the resource group named *myResourceGroup*. A total of *3* nodes are created - one agent in zone *1*, one in *2*, and then one in *3*.
+The following example creates an AKS cluster named *myAKSCluster* in the resource group named *myResourceGroup* with a total of three nodes: one agent in zone *1*, one in zone *2*, and one in zone *3*.

```azurecli-interactive
az group create --name myResourceGroup --location eastus2
@@ -79,11 +81,11 @@ az aks create \

It takes a few minutes to create the AKS cluster.
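The hunk above truncates the article's full `az aks create` example. As a rough sketch only (the exact flags in the article may differ), a zone-spanning create command looks something like the following:

```azurecli-interactive
# Create a cluster with three nodes spread across availability zones 1, 2, and 3.
az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --generate-ssh-keys \
    --vm-set-type VirtualMachineScaleSets \
    --load-balancer-sku standard \
    --node-count 3 \
    --zones 1 2 3
```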

-When deciding what zone a new node should belong to, a given AKS node pool will use a [best effort zone balancing offered by underlying Azure Virtual Machine Scale Sets][vmss-zone-balancing]. A given AKS node pool is considered "balanced" if each zone has the same number of VMs or +\- 1 VM in all other zones for the scale set.
+When deciding what zone a new node should belong to, a specified AKS node pool uses a [best effort zone balancing offered by underlying Azure Virtual Machine Scale Sets][vmss-zone-balancing]. The AKS node pool is "balanced" when each zone has the same number of VMs, or is within one VM of all other zones in the scale set.

## Verify node distribution across zones

-When the cluster is ready, list the agent nodes in the scale set to see what availability zone they're deployed in.
+When the cluster is ready, check which availability zone each agent node in the scale set is deployed in.

First, get the AKS cluster credentials using the [az aks get-credentials][az-aks-get-credentials] command:

@@ -93,7 +95,7 @@ az aks get-credentials --resource-group myResourceGroup --name myAKSCluster

Next, use the [kubectl describe][kubectl-describe] command to list the nodes in the cluster and filter on the `topology.kubernetes.io/zone` value. The following example is for a Bash shell.

-```console
+```bash
kubectl describe nodes | grep -e "Name:" -e "topology.kubernetes.io/zone"
```

@@ -108,15 +110,15 @@ Name: aks-nodepool1-28993262-vmss000002
topology.kubernetes.io/zone=eastus2-3
```

-As you add additional nodes to an agent pool, the Azure platform automatically distributes the underlying VMs across the specified availability zones.
+As you add more nodes to an agent pool, the Azure platform automatically distributes the underlying VMs across the specified availability zones.

-Note that in newer Kubernetes versions (1.17.0 and later), AKS is using the newer label `topology.kubernetes.io/zone` in addition to the deprecated `failure-domain.beta.kubernetes.io/zone`. You can get the same result as above with by running the following script:
+With Kubernetes versions 1.17.0 and later, AKS uses the newer label `topology.kubernetes.io/zone` and the deprecated `failure-domain.beta.kubernetes.io/zone`. You can get the same result as the `kubectl describe nodes` command in the previous step by running the following script:

-```console
+```console
kubectl get nodes -o custom-columns=NAME:'{.metadata.name}',REGION:'{.metadata.labels.topology\.kubernetes\.io/region}',ZONE:'{metadata.labels.topology\.kubernetes\.io/zone}'
```

-Which will give you a more succinct output:
+The following example shows the more succinct output:

```console
NAME REGION ZONE
@@ -127,7 +129,7 @@ aks-nodepool1-34917322-vmss000002 eastus eastus-3

## Verify pod distribution across zones

-As documented in [Well-Known Labels, Annotations and Taints][kubectl-well_known_labels], Kubernetes uses the `topology.kubernetes.io/zone` label to automatically distribute pods in a replication controller or service across the different zones available. In order to test this, you can scale up your cluster from 3 to 5 nodes, to verify correct pod spreading:
+As documented in [Well-Known Labels, Annotations and Taints][kubectl-well_known_labels], Kubernetes uses the `topology.kubernetes.io/zone` label to automatically distribute pods in a replication controller or service across the different zones available. To test this, scale your cluster from 3 to 5 nodes and verify that the pods spread correctly:

```azurecli-interactive
az aks scale \
@@ -136,7 +138,7 @@ az aks scale \
--node-count 5
```

-When the scale operation completes after a few minutes, the command `kubectl describe nodes | grep -e "Name:" -e "topology.kubernetes.io/zone"` in a Bash shell should give an output similar to this sample:
+When the scale operation completes after a few minutes, run the command `kubectl describe nodes | grep -e "Name:" -e "topology.kubernetes.io/zone"` in a Bash shell. The output resembles the following example:

```console
Name: aks-nodepool1-28993262-vmss000000
@@ -151,14 +153,14 @@ Name: aks-nodepool1-28993262-vmss000004
topology.kubernetes.io/zone=eastus2-2
```

-We now have two additional nodes in zones 1 and 2. You can deploy an application consisting of three replicas. We will use NGINX as an example:
+You now have two more nodes in zones 1 and 2. You can deploy an application consisting of three replicas. The following example uses NGINX:

-```console
+```bash
kubectl create deployment nginx --image=mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
kubectl scale deployment nginx --replicas=3
```

-By viewing nodes where your pods are running, you see pods are running on the nodes corresponding to three different availability zones. For example, with the command `kubectl describe pod | grep -e "^Name:" -e "^Node:"` in a Bash shell you would get an output similar to this:
+By viewing the nodes where your pods are running, you see that the pods are running on nodes in three different availability zones. For example, with the command `kubectl describe pod | grep -e "^Name:" -e "^Node:"` in a Bash shell, you see the following example output:

```console
Name: nginx-6db489d4b7-ktdwg
@@ -169,11 +171,11 @@ Name: nginx-6db489d4b7-xz6wj
Node: aks-nodepool1-28993262-vmss000004/10.240.0.8
```

-As you can see from the previous output, the first pod is running on node 0, which is located in the availability zone `eastus2-1`. The second pod is running on node 2, which corresponds to `eastus2-3`, and the third one in node 4, in `eastus2-2`. Without any additional configuration, Kubernetes is spreading the pods correctly across all three availability zones.
+As you can see from the previous output, the first pod is running on node 0 located in the availability zone `eastus2-1`. The second pod is running on node 2, corresponding to `eastus2-3`, and the third one on node 4, in `eastus2-2`. Without any extra configuration, Kubernetes spreads the pods correctly across all three availability zones.

## Next steps

-This article detailed how to create an AKS cluster that uses availability zones. For more considerations on highly available clusters, see [Best practices for business continuity and disaster recovery in AKS][best-practices-bc-dr].
+This article described how to create an AKS cluster using availability zones. For more considerations on highly available clusters, see [Best practices for business continuity and disaster recovery in AKS][best-practices-bc-dr].

<!-- LINKS - internal -->
[install-azure-cli]: /cli/azure/install-azure-cli
