Commit 8db106d

Merge pull request #190772 from julie-ng/patch-2
AKS Availability Zones - update zone topology label per k8s docs
2 parents d4f74d3 + 9bc2fb0

1 file changed (+12 −12)

articles/aks/availability-zones.md

````diff
@@ -114,21 +114,21 @@ First, get the AKS cluster credentials using the [az aks get-credentials][az-aks
 az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
 ```

-Next, use the [kubectl describe][kubectl-describe] command to list the nodes in the cluster and filter on the *failure-domain.beta.kubernetes.io/zone* value. The following example is for a Bash shell.
+Next, use the [kubectl describe][kubectl-describe] command to list the nodes in the cluster and filter on the `topology.kubernetes.io/zone` value. The following example is for a Bash shell.

 ```console
-kubectl describe nodes | grep -e "Name:" -e "failure-domain.beta.kubernetes.io/zone"
+kubectl describe nodes | grep -e "Name:" -e "topology.kubernetes.io/zone"
 ```

 The following example output shows the three nodes distributed across the specified region and availability zones, such as *eastus2-1* for the first availability zone and *eastus2-2* for the second availability zone:

 ```console
 Name:       aks-nodepool1-28993262-vmss000000
-            failure-domain.beta.kubernetes.io/zone=eastus2-1
+            topology.kubernetes.io/zone=eastus2-1
 Name:       aks-nodepool1-28993262-vmss000001
-            failure-domain.beta.kubernetes.io/zone=eastus2-2
+            topology.kubernetes.io/zone=eastus2-2
 Name:       aks-nodepool1-28993262-vmss000002
-            failure-domain.beta.kubernetes.io/zone=eastus2-3
+            topology.kubernetes.io/zone=eastus2-3
 ```

 As you add additional nodes to an agent pool, the Azure platform automatically distributes the underlying VMs across the specified availability zones.
````
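As a side note to this hunk (not part of the diff): piping `kubectl describe` through `grep` works, but `kubectl get` can also print a label as its own column, which may be a more convenient way to inspect node zones. A minimal sketch, assuming only a reachable cluster:

```console
# Print each node with the renamed zone label as an extra column.
# -L (--label-columns) is a standard flag of `kubectl get`.
kubectl get nodes -L topology.kubernetes.io/zone
```

The output should be the usual `kubectl get nodes` table with an additional zone column containing values such as `eastus2-1`.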
````diff
@@ -150,7 +150,7 @@ aks-nodepool1-34917322-vmss000002   eastus   eastus-3

 ## Verify pod distribution across zones

-As documented in [Well-Known Labels, Annotations and Taints][kubectl-well_known_labels], Kubernetes uses the `failure-domain.beta.kubernetes.io/zone` label to automatically distribute pods in a replication controller or service across the different zones available. In order to test this, you can scale up your cluster from 3 to 5 nodes, to verify correct pod spreading:
+As documented in [Well-Known Labels, Annotations and Taints][kubectl-well_known_labels], Kubernetes uses the `topology.kubernetes.io/zone` label to automatically distribute pods in a replication controller or service across the different zones available. In order to test this, you can scale up your cluster from 3 to 5 nodes, to verify correct pod spreading:

 ```azurecli-interactive
 az aks scale \
````
````diff
@@ -159,19 +159,19 @@ az aks scale \
   --node-count 5
 ```

-When the scale operation completes after a few minutes, the command `kubectl describe nodes | grep -e "Name:" -e "failure-domain.beta.kubernetes.io/zone"` in a Bash shell should give an output similar to this sample:
+When the scale operation completes after a few minutes, the command `kubectl describe nodes | grep -e "Name:" -e "topology.kubernetes.io/zone"` in a Bash shell should give an output similar to this sample:

 ```console
 Name:       aks-nodepool1-28993262-vmss000000
-            failure-domain.beta.kubernetes.io/zone=eastus2-1
+            topology.kubernetes.io/zone=eastus2-1
 Name:       aks-nodepool1-28993262-vmss000001
-            failure-domain.beta.kubernetes.io/zone=eastus2-2
+            topology.kubernetes.io/zone=eastus2-2
 Name:       aks-nodepool1-28993262-vmss000002
-            failure-domain.beta.kubernetes.io/zone=eastus2-3
+            topology.kubernetes.io/zone=eastus2-3
 Name:       aks-nodepool1-28993262-vmss000003
-            failure-domain.beta.kubernetes.io/zone=eastus2-1
+            topology.kubernetes.io/zone=eastus2-1
 Name:       aks-nodepool1-28993262-vmss000004
-            failure-domain.beta.kubernetes.io/zone=eastus2-2
+            topology.kubernetes.io/zone=eastus2-2
 ```

 We now have two additional nodes in zones 1 and 2. You can deploy an application consisting of three replicas. We will use NGINX as an example:
````
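To tie this hunk to the article's next step (an NGINX deployment with three replicas), here is a sketch of one way to check which zone each replica landed in. The deployment name and image are illustrative placeholders, not the article's exact commands:

```console
# Hypothetical deployment; --replicas requires kubectl v1.19 or later.
kubectl create deployment nginx --image=nginx --replicas=3

# Show the node each pod was scheduled onto.
kubectl get pods -o wide

# Cross-reference each node with its zone via the renamed label.
kubectl get nodes -L topology.kubernetes.io/zone
```

If the scheduler honored the zone topology, the three pods should land on nodes spanning different `topology.kubernetes.io/zone` values.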
