
Commit 75057fc

Merge pull request #102653 from erjosito/patch-12
Pod distribution across AZ
2 parents: 9891941 + da86ec5

File tree

1 file changed: +54 −5 lines changed

articles/aks/availability-zones.md

Lines changed: 54 additions & 5 deletions
```diff
@@ -1,16 +1,17 @@
 ---
-title: Use Availability Zones in Azure Kubernetes Service (AKS)
+title: Use availability zones in Azure Kubernetes Service (AKS)
 description: Learn how to create a cluster that distributes nodes across availability zones in Azure Kubernetes Service (AKS)
 services: container-service
 author: mlearned
 
+ms.custom: fasttrack-edit
 ms.service: container-service
 ms.topic: article
 ms.date: 06/24/2019
 ms.author: mlearned
 ---
 
-# Create an Azure Kubernetes Service (AKS) cluster that uses Availability Zones
+# Create an Azure Kubernetes Service (AKS) cluster that uses availability zones
 
 An Azure Kubernetes Service (AKS) cluster distributes resources such as the nodes and storage across logical sections of the underlying Azure compute infrastructure. This deployment model makes sure that the nodes run across separate update and fault domains in a single Azure datacenter. AKS clusters deployed with this default behavior provide a high level of availability to protect against a hardware failure or planned maintenance event.
 
```

```diff
@@ -54,11 +55,11 @@ Volumes that use Azure managed disks are currently not zonal resources. Pods res
 
 If you must run stateful workloads, use taints and tolerations in your pod specs to tell the Kubernetes scheduler to create pods in the same zone as your disks. Alternatively, use network-based storage such as Azure Files that can attach to pods as they're scheduled between zones.
 
-## Overview of Availability Zones for AKS clusters
+## Overview of availability zones for AKS clusters
 
-Availability Zones is a high-availability offering that protects your applications and data from datacenter failures. Zones are unique physical locations within an Azure region. Each zone is made up of one or more datacenters equipped with independent power, cooling, and networking. To ensure resiliency, there's a minimum of three separate zones in all enabled regions. The physical separation of Availability Zones within a region protects applications and data from datacenter failures. Zone-redundant services replicate your applications and data across Availability Zones to protect from single-points-of-failure.
+Availability zones is a high-availability offering that protects your applications and data from datacenter failures. Zones are unique physical locations within an Azure region. Each zone is made up of one or more datacenters equipped with independent power, cooling, and networking. To ensure resiliency, there's a minimum of three separate zones in all enabled regions. The physical separation of availability zones within a region protects applications and data from datacenter failures. Zone-redundant services replicate your applications and data across availability zones to protect from single-points-of-failure.
 
-For more information, see [What are Availability Zones in Azure?][az-overview].
+For more information, see [What are availability zones in Azure?][az-overview].
 
 AKS clusters that are deployed using availability zones can distribute nodes across multiple zones within a single region. For example, a cluster in the *East US 2* region can create nodes in all three availability zones in *East US 2*. This distribution of AKS cluster resources improves cluster availability as they're resilient to failure of a specific zone.
```
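
For context on the hunk above: a cluster that spans a region's zones is created with the `--zones` parameter of `az aks create`. A minimal sketch, using the same illustrative resource names (`myResourceGroup`, `myAKSCluster`) that appear later in this diff; it requires an authenticated Azure CLI session and is not part of this commit:

```shell
# Sketch: create an AKS cluster whose nodes span three availability zones.
# Resource names are the article's example names, not real resources.
az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --generate-ssh-keys \
    --node-count 3 \
    --zones 1 2 3
```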

````diff
@@ -118,6 +119,53 @@ Name:     aks-nodepool1-28993262-vmss000002
 
 As you add additional nodes to an agent pool, the Azure platform automatically distributes the underlying VMs across the specified availability zones.
 
+Note that in newer Kubernetes versions (1.17.0 and later), AKS uses the newer label `topology.kubernetes.io/zone` in addition to the deprecated `failure-domain.beta.kubernetes.io/zone`.
+
+## Verify pod distribution across zones
+
+As documented in [Well-Known Labels, Annotations and Taints][kubectl-well_known_labels], Kubernetes uses the `failure-domain.beta.kubernetes.io/zone` label to automatically distribute pods in a replication controller or service across the different zones available. To test this, scale your cluster from 3 to 5 nodes and verify that the pods spread correctly:
+
+```azurecli-interactive
+az aks scale \
+    --resource-group myResourceGroup \
+    --name myAKSCluster \
+    --node-count 5
+```
+
+When the scale operation completes after a few minutes, the command `kubectl describe nodes | grep -e "Name:" -e "failure-domain.beta.kubernetes.io/zone"` should give an output similar to this sample:
+
+```console
+Name:       aks-nodepool1-28993262-vmss000000
+            failure-domain.beta.kubernetes.io/zone=eastus2-1
+Name:       aks-nodepool1-28993262-vmss000001
+            failure-domain.beta.kubernetes.io/zone=eastus2-2
+Name:       aks-nodepool1-28993262-vmss000002
+            failure-domain.beta.kubernetes.io/zone=eastus2-3
+Name:       aks-nodepool1-28993262-vmss000003
+            failure-domain.beta.kubernetes.io/zone=eastus2-1
+Name:       aks-nodepool1-28993262-vmss000004
+            failure-domain.beta.kubernetes.io/zone=eastus2-2
+```
+
+As you can see, we now have two additional nodes in zones 1 and 2. You can deploy an application consisting of three replicas. We will use NGINX as an example:
+
+```console
+kubectl run nginx --image=nginx --replicas=3
+```
+
+If you verify the nodes where your pods are running, you will see that the pods are running on nodes corresponding to three different availability zones. For example, with the command `kubectl describe pod | grep -e "^Name:" -e "^Node:"` you would get an output similar to this:
+
+```console
+Name:         nginx-6db489d4b7-ktdwg
+Node:         aks-nodepool1-28993262-vmss000000/10.240.0.4
+Name:         nginx-6db489d4b7-v7zvj
+Node:         aks-nodepool1-28993262-vmss000002/10.240.0.6
+Name:         nginx-6db489d4b7-xz6wj
+Node:         aks-nodepool1-28993262-vmss000004/10.240.0.8
+```
+
+As you can see from the previous output, the first pod is running on node 0, which is located in the availability zone `eastus2-1`. The second pod is running on node 2, which corresponds to `eastus2-3`, and the third one on node 4, in `eastus2-2`. Without any additional configuration, Kubernetes spreads the pods correctly across all three availability zones.
+
 ## Next steps
 
 This article detailed how to create an AKS cluster that uses availability zones. For more considerations on highly available clusters, see [Best practices for business continuity and disaster recovery in AKS][best-practices-bc-dr].
````
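
The verification step in the diff eyeballs the `kubectl describe nodes` output. To tally nodes per zone programmatically, you can split each zone label on `=` and count the values. A minimal sketch, run here against a hard-coded sample that mirrors the diff's example output; on a real cluster you would instead pipe `kubectl describe nodes | grep "failure-domain.beta.kubernetes.io/zone"` into the same `awk`:

```shell
# Sample label lines copied from the article's example output (hypothetical cluster).
sample='failure-domain.beta.kubernetes.io/zone=eastus2-1
failure-domain.beta.kubernetes.io/zone=eastus2-2
failure-domain.beta.kubernetes.io/zone=eastus2-3
failure-domain.beta.kubernetes.io/zone=eastus2-1
failure-domain.beta.kubernetes.io/zone=eastus2-2'

# Split on '=', tally each zone value, and print counts in a stable order.
printf '%s\n' "$sample" \
  | awk -F= '{count[$2]++} END {for (zone in count) print zone, count[zone]}' \
  | sort
# Prints:
# eastus2-1 2
# eastus2-2 2
# eastus2-3 1
```

A balanced spread shows roughly equal counts per zone, as above.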
```diff
@@ -140,3 +188,4 @@ This article detailed how to create an AKS cluster that uses availability zones.
 
 <!-- LINKS - external -->
 [kubectl-describe]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#describe
+[kubectl-well_known_labels]: https://kubernetes.io/docs/reference/kubernetes-api/labels-annotations-taints/
```
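
Related to the diff's note on the newer `topology.kubernetes.io/zone` label: on Kubernetes 1.18 and later you can also make zone spreading explicit with `topologySpreadConstraints`, rather than relying on the scheduler's default behavior. An illustrative sketch, not part of this commit; the deployment name, file name, and replica count are arbitrary:

```shell
# Write an illustrative manifest that uses the newer zone label as the topology key.
cat > nginx-spread.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      topologySpreadConstraints:
        - maxSkew: 1                               # zones may differ by at most one pod
          topologyKey: topology.kubernetes.io/zone # spread across availability zones
          whenUnsatisfiable: ScheduleAnyway        # prefer, but do not require, even spread
          labelSelector:
            matchLabels:
              app: nginx
      containers:
        - name: nginx
          image: nginx
EOF
# Then apply it to the cluster: kubectl apply -f nginx-spread.yaml
```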
