articles/aks/availability-zones.md: 12 additions, 12 deletions
````diff
@@ -114,21 +114,21 @@ First, get the AKS cluster credentials using the [az aks get-credentials][az-aks
 az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
 ```
 
-Next, use the [kubectl describe][kubectl-describe] command to list the nodes in the cluster and filter on the *failure-domain.beta.kubernetes.io/zone* value. The following example is for a Bash shell.
+Next, use the [kubectl describe][kubectl-describe] command to list the nodes in the cluster and filter on the `topology.kubernetes.io/zone` value. The following example is for a Bash shell.
@@ ... @@
 The following example output shows the three nodes distributed across the specified region and availability zones, such as *eastus2-1* for the first availability zone and *eastus2-2* for the second availability zone:
 
 ```console
 Name:       aks-nodepool1-28993262-vmss000000
-            failure-domain.beta.kubernetes.io/zone=eastus2-1
+            topology.kubernetes.io/zone=eastus2-1
 Name:       aks-nodepool1-28993262-vmss000001
-            failure-domain.beta.kubernetes.io/zone=eastus2-2
+            topology.kubernetes.io/zone=eastus2-2
 Name:       aks-nodepool1-28993262-vmss000002
-            failure-domain.beta.kubernetes.io/zone=eastus2-3
+            topology.kubernetes.io/zone=eastus2-3
 ```
 
 As you add additional nodes to an agent pool, the Azure platform automatically distributes the underlying VMs across the specified availability zones.
````
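The extraction drops a few lines in the first hunk (marked `@@ ... @@` above), presumably the command whose output is quoted there. The second hunk below names that command verbatim; on a Bash shell it is:

```bash
kubectl describe nodes | grep -e "Name:" -e "topology.kubernetes.io/zone"
```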
A later hunk in the same file (its `@@` header was lost in extraction) makes the matching change to the pod-spreading paragraph:

````diff
@@ ... @@
-As documented in [Well-Known Labels, Annotations and Taints][kubectl-well_known_labels], Kubernetes uses the `failure-domain.beta.kubernetes.io/zone` label to automatically distribute pods in a replication controller or service across the different zones available. In order to test this, you can scale up your cluster from 3 to 5 nodes, to verify correct pod spreading:
+As documented in [Well-Known Labels, Annotations and Taints][kubectl-well_known_labels], Kubernetes uses the `topology.kubernetes.io/zone` label to automatically distribute pods in a replication controller or service across the different zones available. In order to test this, you can scale up your cluster from 3 to 5 nodes, to verify correct pod spreading:
 
 ```azurecli-interactive
 az aks scale \
````
````diff
@@ -159,19 +159,19 @@ az aks scale \
   --node-count 5
 ```
````
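Not part of the original article, but a handy spot check once the scale operation finishes: `kubectl get` can print the zone label as a column instead of grepping `describe` output. A minimal sketch, assuming the renamed label:

```bash
# Show each node with its topology.kubernetes.io/zone label in a column;
# -L (--label-columns) is a standard flag of `kubectl get`.
kubectl get nodes -L topology.kubernetes.io/zone
```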
The same hunk continues:

````diff
-When the scale operation completes after a few minutes, the command `kubectl describe nodes | grep -e "Name:" -e "failure-domain.beta.kubernetes.io/zone"` in a Bash shell should give an output similar to this sample:
+When the scale operation completes after a few minutes, the command `kubectl describe nodes | grep -e "Name:" -e "topology.kubernetes.io/zone"` in a Bash shell should give an output similar to this sample:
 
 ```console
 Name:       aks-nodepool1-28993262-vmss000000
-            failure-domain.beta.kubernetes.io/zone=eastus2-1
+            topology.kubernetes.io/zone=eastus2-1
 Name:       aks-nodepool1-28993262-vmss000001
-            failure-domain.beta.kubernetes.io/zone=eastus2-2
+            topology.kubernetes.io/zone=eastus2-2
 Name:       aks-nodepool1-28993262-vmss000002
-            failure-domain.beta.kubernetes.io/zone=eastus2-3
+            topology.kubernetes.io/zone=eastus2-3
 Name:       aks-nodepool1-28993262-vmss000003
-            failure-domain.beta.kubernetes.io/zone=eastus2-1
+            topology.kubernetes.io/zone=eastus2-1
 Name:       aks-nodepool1-28993262-vmss000004
-            failure-domain.beta.kubernetes.io/zone=eastus2-2
+            topology.kubernetes.io/zone=eastus2-2
 ```
 
 We now have two additional nodes in zones 1 and 2. You can deploy an application consisting of three replicas. We will use NGINX as an example:
````
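The deployment command itself is cut off at the end of this extract. A minimal sketch of the step it introduces, assuming a stock `nginx` image and the default scheduler's zone spreading (the deployment name and image here are illustrative, not necessarily the article's exact values):

```bash
# Hypothetical example: create an NGINX deployment with three replicas;
# the default scheduler spreads them across the zone-labeled nodes.
kubectl create deployment nginx --image=nginx --replicas=3

# Inspect which node (and therefore which zone) each replica landed on.
# `kubectl create deployment nginx` labels the pods app=nginx by default.
kubectl describe pod -l app=nginx | grep -e "^Name:" -e "^Node:"
```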