
Commit 51510ec

Pod distribution across AZ
Included examples to illustrate automatic AZ-spread of pods, as well as the new label format in 1.17

1 file changed

articles/aks/availability-zones.md

Lines changed: 49 additions & 0 deletions
@@ -4,6 +4,7 @@ description: Learn how to create a cluster that distributes nodes across availab
services: container-service
author: mlearned

ms.custom: fasttrack-edit
ms.service: container-service
ms.topic: article
ms.date: 06/24/2019
@@ -118,6 +119,53 @@ Name: aks-nodepool1-28993262-vmss000002

As you add additional nodes to an agent pool, the Azure platform automatically distributes the underlying VMs across the specified availability zones.

Note that in Kubernetes versions 1.17.0 and later, AKS uses the newer label `topology.kubernetes.io/zone` in addition to the deprecated `failure-domain.beta.kubernetes.io/zone`.
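
As an optional check (assuming a cluster running Kubernetes 1.17.0 or later, which is not required by the rest of this walkthrough), you can list the newer label with the same `kubectl describe` pattern used elsewhere in this article:

```console
kubectl describe nodes | grep -e "Name:" -e "topology.kubernetes.io/zone"
```
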
## Verify pod distribution across zones

As documented in [Well-Known Labels, Annotations and Taints][kubectl-well_known_labels], Kubernetes uses the `failure-domain.beta.kubernetes.io/zone` label to automatically distribute pods in a replication controller or service across the available zones. To test this, scale your cluster from 3 to 5 nodes and verify correct pod spreading:

```azurecli-interactive
az aks scale \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --node-count 5
```

When the scale operation completes after a few minutes, the command `kubectl describe nodes | grep -e "Name:" -e "failure-domain.beta.kubernetes.io/zone"` should give an output similar to this sample:

```console
Name:       aks-nodepool1-28993262-vmss000000
                failure-domain.beta.kubernetes.io/zone=eastus2-1
Name:       aks-nodepool1-28993262-vmss000001
                failure-domain.beta.kubernetes.io/zone=eastus2-2
Name:       aks-nodepool1-28993262-vmss000002
                failure-domain.beta.kubernetes.io/zone=eastus2-3
Name:       aks-nodepool1-28993262-vmss000003
                failure-domain.beta.kubernetes.io/zone=eastus2-1
Name:       aks-nodepool1-28993262-vmss000004
                failure-domain.beta.kubernetes.io/zone=eastus2-2
```

As you can see, there are now two additional nodes in zones 1 and 2. You can now deploy an application consisting of three replicas, using NGINX as an example:

```console
kubectl run nginx --image=nginx --replicas=3
```
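
In newer kubectl releases, `kubectl run` only creates a single pod and no longer accepts `--replicas`. If that is the case on your client, an equivalent way to get the same three-replica NGINX deployment (a sketch, not part of the original walkthrough) is:

```console
kubectl create deployment nginx --image=nginx
kubectl scale deployment nginx --replicas=3
```
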

If you check which nodes your pods are running on, you will see that they are running on nodes that correspond to three different availability zones. For example, with the command `kubectl describe pod | grep -e "^Name:" -e "^Node:"` you would get output similar to this:

```console
Name:         nginx-6db489d4b7-ktdwg
Node:         aks-nodepool1-28993262-vmss000000/10.240.0.4
Name:         nginx-6db489d4b7-v7zvj
Node:         aks-nodepool1-28993262-vmss000002/10.240.0.6
Name:         nginx-6db489d4b7-xz6wj
Node:         aks-nodepool1-28993262-vmss000004/10.240.0.8
```

As you can see from the previous output, the first pod is running on node 0, which is located in the availability zone `eastus2-1`. The second pod is running on node 2, which corresponds to `eastus2-3`, and the third one on node 4, in `eastus2-2`. Without any additional configuration, Kubernetes spreads the pods correctly across all three availability zones.

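
A more compact way to see the same pod-to-node mapping (a convenience, not part of this commit's walkthrough) is `kubectl get pods -o wide`, which prints the node each pod is scheduled on in its NODE column:

```console
kubectl get pods -o wide
```
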
## Next steps

This article detailed how to create an AKS cluster that uses availability zones. For more considerations on highly available clusters, see [Best practices for business continuity and disaster recovery in AKS][best-practices-bc-dr].
@@ -140,3 +188,4 @@ This article detailed how to create an AKS cluster that uses availability zones.

<!-- LINKS - external -->
[kubectl-describe]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#describe
[kubectl-well_known_labels]: https://kubernetes.io/docs/reference/kubernetes-api/labels-annotations-taints/
