As you add additional nodes to an agent pool, the Azure platform automatically distributes the underlying VMs across the specified availability zones.
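For example, you could add a new node pool that spans all three zones with the `--zones` parameter; the pool name `nodepool2` below is only illustrative:

```azurecli-interactive
az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name nodepool2 \
    --node-count 3 \
    --zones 1 2 3
```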
Note that in newer Kubernetes versions (1.17.0 and later), AKS uses the newer label `topology.kubernetes.io/zone` in addition to the deprecated `failure-domain.beta.kubernetes.io/zone`.
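For example, on a cluster running Kubernetes 1.17.0 or later you can display the newer label as an extra column when listing nodes:

```console
kubectl get nodes -L topology.kubernetes.io/zone
```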
## Verify pod distribution across zones
As documented in [Well-Known Labels, Annotations and Taints][kubectl-well_known_labels], Kubernetes uses the `failure-domain.beta.kubernetes.io/zone` label to automatically distribute pods in a replication controller or service across the available zones. To test this, scale up your cluster from 3 to 5 nodes and verify that the pods spread correctly:
```azurecli-interactive
az aks scale \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --node-count 5
```
When the scale operation completes after a few minutes, the command `kubectl describe nodes | grep -e "Name:" -e "failure-domain.beta.kubernetes.io/zone"` should give an output similar to this sample:
```console
Name:       aks-nodepool1-28993262-vmss000000
            failure-domain.beta.kubernetes.io/zone=eastus2-1
Name:       aks-nodepool1-28993262-vmss000001
            failure-domain.beta.kubernetes.io/zone=eastus2-2
Name:       aks-nodepool1-28993262-vmss000002
            failure-domain.beta.kubernetes.io/zone=eastus2-3
Name:       aks-nodepool1-28993262-vmss000003
            failure-domain.beta.kubernetes.io/zone=eastus2-1
Name:       aks-nodepool1-28993262-vmss000004
            failure-domain.beta.kubernetes.io/zone=eastus2-2
```
As you can see, there are now two additional nodes in zones 1 and 2. You can now deploy an application consisting of three replicas; the following example uses NGINX:
```console
kubectl run nginx --image=nginx --replicas=3
```
If you verify the nodes where your pods are running, you will see that the pods are running on nodes that correspond to three different Availability Zones. For example, with the command `kubectl describe pod | grep -e "^Name:" -e "^Node:"` you would get output similar to the following (the pod names and node IP addresses shown are illustrative and will differ in your cluster):
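```console
Name:         nginx-6db489d4b7-ktdwd
Node:         aks-nodepool1-28993262-vmss000000/10.240.0.4
Name:         nginx-6db489d4b7-v7zvl
Node:         aks-nodepool1-28993262-vmss000002/10.240.0.6
Name:         nginx-6db489d4b7-xz6wj
Node:         aks-nodepool1-28993262-vmss000004/10.240.0.8
```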
As you can see from the previous output, the first pod is running on node 0, which is located in the Availability Zone `eastus2-1`. The second pod is running on node 2, which corresponds to `eastus2-3`, and the third one on node 4, in `eastus2-2`. Without any additional configuration, Kubernetes spreads the pods correctly across all three Availability Zones.
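As an alternative quick check, `kubectl get pods -o wide` lists the node that each pod was scheduled on in its NODE column:

```console
kubectl get pods -o wide
```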
## Next steps
This article detailed how to create an AKS cluster that uses availability zones. For more considerations on highly available clusters, see [Best practices for business continuity and disaster recovery in AKS][best-practices-bc-dr].