articles/iot-operations/manage-mqtt-broker/howto-configure-availability-scale.md
The frontend subfield defines the settings for the frontend pods. The two main settings are:

- **Replicas**: The number of frontend replicas (pods) to deploy. Increasing the number of frontend replicas provides high availability in case one of the frontend pods fails.
- **Workers**: The number of logical frontend workers per replica. Each worker can consume at most one CPU core.
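As a hedged sketch of how these settings might appear in a Broker resource, the replica and worker counts sit together in a cardinality section; the exact field names (`cardinality`, `frontend`, `backendChain`, and their subfields) are assumed here for illustration and aren't taken from this article:

```yaml
# Hypothetical sketch only; field names are assumed, not confirmed by this article.
spec:
  cardinality:
    frontend:
      replicas: 2   # number of frontend pods, for high availability
      workers: 2    # logical workers per replica; each uses at most one CPU core
    backendChain:   # backend settings shown as assumed placeholders
      partitions: 2
      redundancyFactor: 2
      workers: 2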
#### Backend chain

The backend chain subfield defines the settings for the backend partitions. The three main settings are:
To learn more, see [Azure CLI support for advanced MQTT broker configuration](ht

---

## Multi-node deployment

To ensure high availability and resilience with multi-node deployments, the Azure IoT Operations MQTT broker automatically sets [anti-affinity rules](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity) for backend pods. These rules are predefined and cannot be modified.

### Purpose of anti-affinity rules
The anti-affinity rules ensure that backend pods from the same partition don't run on the same node. This helps to distribute the load and provides resilience against node failures. Specifically, backend pods from the same partition have anti-affinity with each other.
### Verifying anti-affinity settings

To verify the anti-affinity settings for a backend pod, use the following command:
```sh
kubectl get pod aio-broker-backend-1-0 -n azure-iot-operations -o yaml | grep affinity -A 15
```
The output shows the anti-affinity configuration, similar to the following:
```yaml
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - podAffinityTerm:
        labelSelector:
          matchExpressions:
          - key: chain-number
            operator: In
            values:
            - "1"
        topologyKey: kubernetes.io/hostname
      weight: 100
```
These are the only anti-affinity rules set for the broker.
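Because the rule uses `preferredDuringSchedulingIgnoredDuringExecution`, the scheduler spreads same-partition pods across nodes when it can but doesn't strictly guarantee it. As a hedged sketch, you can check where the pods actually landed with `-o wide`; the `chain-number` label selector below is assumed from the anti-affinity output shown earlier:

```sh
# List backend pods for partition 1 along with the node each one is running on.
# The chain-number label is an assumption based on the matchExpressions key above.
kubectl get pods -n azure-iot-operations -l chain-number=1 -o wide
```

If the pods for the partition list different values in the NODE column, the anti-affinity rule took effect.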