articles/iot-operations/manage-mqtt-broker/howto-configure-availability-scale.md (+42 −8 lines)

@@ -7,7 +7,7 @@ ms.topic: how-to
ms.subservice: azure-mqtt-broker
ms.custom:
  - ignite-2023
-ms.date: 11/11/2024
+ms.date: 01/13/2025

#CustomerIntent: As an operator, I want to understand the settings for the MQTT broker so that I can configure it for high availability and scale.
ms.service: azure-iot-operations

@@ -24,7 +24,7 @@ For a list of the available settings, see the [Broker](/rest/api/iotoperations/b
## Configure scaling settings

> [!IMPORTANT]
-> This setting requires modifying the Broker resource and can only be configured at initial deployment time using the Azure CLI or Azure Portal. A new deployment is required if Broker configuration changes are needed. To learn more, see [Customize default Broker](./overview-broker.md#customize-default-broker).
+> This setting requires modifying the Broker resource and can only be configured at initial deployment time using the Azure CLI or Azure portal. A new deployment is required if Broker configuration changes are needed. To learn more, see [Customize default Broker](./overview-broker.md#customize-default-broker).

To configure the scaling settings of MQTT broker, specify the **cardinality** fields in the specification of the Broker resource during Azure IoT Operations deployment.
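
As a hedged illustration of setting cardinality at deployment time (a sketch only: the `--broker-frontend-replicas`, `--broker-frontend-workers`, `--broker-backend-part`, `--broker-backend-rf`, and `--broker-backend-workers` flag names and the resource names are assumptions to verify against the `az iot ops create` reference):

```bash
# Hypothetical deployment with explicit broker cardinality.
# Flag names are assumptions; verify with `az iot ops create --help`.
az iot ops create \
  --cluster myCluster \
  --resource-group myResourceGroup \
  --name myAioInstance \
  --broker-frontend-replicas 2 \
  --broker-frontend-workers 2 \
  --broker-backend-part 2 \
  --broker-backend-rf 2 \
  --broker-backend-workers 2
```
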
@@ -140,8 +140,10 @@ For example, if your cluster has three nodes, each with eight CPU cores, then se
## Configure memory profile

+The memory profile specifies the broker's memory usage for resource-limited environments. You can choose from predefined memory profiles with different memory usage characteristics. The setting configures the memory usage of the frontend and backend replicas, and it interacts with the cardinality settings to determine the broker's total memory usage.
+
> [!IMPORTANT]
-> This setting requires modifying the Broker resource and can only be configured at initial deployment time using the Azure CLI or Azure Portal. A new deployment is required if Broker configuration changes are needed. To learn more, see [Customize default Broker](./overview-broker.md#customize-default-broker).
+> This setting requires modifying the Broker resource and can only be configured at initial deployment time using the Azure CLI or Azure portal. A new deployment is required if Broker configuration changes are needed. To learn more, see [Customize default Broker](./overview-broker.md#customize-default-broker).

To configure the memory profile settings of MQTT broker, specify the memory profile fields in the specification of the Broker resource during Azure IoT Operations deployment.
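
A similar hedged sketch for selecting the profile at deployment time (the `--broker-mem-profile` flag name and its accepted values are assumptions to verify against the optional parameters referenced below):

```bash
# Hypothetical deployment selecting the Low memory profile.
# Flag name and value casing are assumptions; verify with `az iot ops create --help`.
az iot ops create \
  --cluster myCluster \
  --resource-group myResourceGroup \
  --name myAioInstance \
  --broker-mem-profile Low
```
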
@@ -165,7 +167,7 @@ To learn more, see [`az iot ops create` optional parameters](/cli/azure/iot/ops#
---

-There are a few memory profiles to choose from, each with different memory usage characteristics.
+There are predefined memory profiles with different memory usage characteristics.

### Tiny

@@ -191,27 +193,59 @@ Recommendations when using this profile:
- Only one or two frontends should be used.
- Clients shouldn't send large packets. You should only send packets smaller than 10 MiB.

+If the memory profile is set to *Low* and each incoming message is 10 MB, here's how to figure out the throughput limit in messages per second:
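
The worked calculation doesn't appear above. As a rough sketch under loudly stated assumptions (a hypothetical 100 MiB buffer budget for the *Low* profile, with memory as the only bottleneck):

```bash
# Hedged sketch, not the product's actual figures: assume a hypothetical
# 100 MiB buffer budget and 10 MB messages. The budget bounds how many
# messages can be buffered at once; sustained messages/s then depends on
# how quickly the broker drains that buffer.
echo $(( (100 * 1024 * 1024) / (10 * 1000 * 1000) ))  # => 10 messages buffered
```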

### Medium

+Use this profile when you need to handle a moderate number of connections and messages.
+
Medium is the default profile.

- Maximum memory usage of each frontend replica is approximately 1.9 GiB, but the actual maximum memory usage might be higher.
- Maximum memory usage of each backend replica is approximately 1.5 GiB multiplied by the number of backend workers, but the actual maximum memory usage might be higher.

### High

+Use this profile when you need to handle a large number of connections and messages.

- Maximum memory usage of each frontend replica is approximately 4.9 GiB, but the actual maximum memory usage might be higher.
- Maximum memory usage of each backend replica is approximately 5.8 GiB multiplied by the number of backend workers, but the actual maximum memory usage might be higher.

+## Calculate total memory usage
+
+The memory profile setting specifies the memory usage for each frontend and backend replica and interacts with the cardinality settings. You can calculate the total memory usage using a formula over the following variables:
+
+| Variable | Description |
+|----------|-------------|
+| `M_fe` | The memory usage of each frontend replica |
+| `P_be` | The number of backend partitions |
+| `RF_be` | The backend redundancy factor |
+| `M_be` | The memory usage of each backend replica |
+| `W_be` | The number of workers per backend replica |
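
The formula itself doesn't appear above. A hedged reconstruction from these variables and the per-profile figures, with `R_fe` (the number of frontend replicas) assumed as an additional variable:

Total memory ≈ (`R_fe` × `M_fe`) + (`P_be` × `RF_be` × `W_be` × `M_be`)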

+For example, if you select the *Medium* memory profile with two frontend replicas, two backend partitions, and a backend redundancy factor of two, the total memory usage would be:
+
+In comparison, with the *Tiny* memory profile and the same two frontend replicas, two backend partitions, and backend redundancy factor of two, the total memory usage would be:
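
The computed totals don't appear above. A hedged sketch for the *Medium* case, assuming one worker per backend replica (`W_be = 1`) and the approximate figures quoted earlier:

```bash
# Hedged arithmetic using the Medium profile's approximate figures.
# Frontend: R_fe x M_fe                = 2 x 1.9 GiB
# Backend:  P_be x RF_be x W_be x M_be = 2 x 2 x 1 x 1.5 GiB
awk 'BEGIN { printf "frontend = %.1f GiB, backend = %.1f GiB\n", 2 * 1.9, 2 * 2 * 1 * 1.5 }'
# => frontend = 3.8 GiB, backend = 6.0 GiB
```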

To prevent resource starvation in the cluster, the broker is configured by default to [request Kubernetes CPU resource limits](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/). Scaling the number of replicas or workers proportionally increases the CPU resources required. A deployment error is emitted if there are insufficient CPU resources available in the cluster. This helps avoid situations where the requested broker cardinality lacks enough resources to run optimally, and it helps avoid potential CPU contention and pod evictions.

MQTT broker currently requests one (1.0) CPU unit per frontend worker and two (2.0) CPU units per backend worker. For more information, see [Kubernetes CPU resource units](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#meaning-of-cpu).
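
Stated as a formula (a hedged restatement reusing the variables above, plus an assumed `W_fe` for the number of workers per frontend replica):

Total CPU units requested ≈ (`R_fe` × `W_fe` × 1) + (`P_be` × `RF_be` × `W_be` × 2)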

-For example, the below cardinality would request the following CPU resources:
-- For frontends: 2 CPU units per frontend pod, totalling 6 CPU units.
-- For backends: 4 CPU units per backend pod (for two backend workers), times 2 (redundancy factor), times 3 (number of partitions), totalling 24 CPU units.
+For example, the following cardinality would request the following CPU resources:
+- For frontends: 2 CPU units per frontend pod, totaling 6 CPU units.
+- For backends: 4 CPU units per backend pod (for two backend workers), times 2 (redundancy factor), times 3 (number of partitions), totaling 24 CPU units.

```json
{
  "cardinality": {
    "frontend": {
      "replicas": 3,
      "workers": 2
    },
    "backendChain": {
      "partitions": 3,
      "redundancyFactor": 2,
      "workers": 2
    }
  }
}
```
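
A quick check of that math, taking the counts straight from the bullets above:

```bash
# Frontend: 3 replicas x 2 workers x 1 CPU unit per frontend worker
echo $(( 3 * 2 * 1 ))      # => 6
# Backend: 3 partitions x 2 redundancy factor x 2 workers x 2 CPU units per worker
echo $(( 3 * 2 * 2 * 2 ))  # => 24
```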

@@ -275,7 +309,7 @@ To verify the anti-affinity settings for a backend pod, use the following comman

```bash
kubectl get pod aio-broker-backend-1-0 -n azure-iot-operations -o yaml | grep affinity -A 15
```

-The output will show the anti-affinity configuration, similar to the following:
+The output shows the anti-affinity configuration, similar to the following:
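
The sample output doesn't appear above. An illustrative sketch of a standard Kubernetes pod anti-affinity block, not the broker's verbatim output (the weight, label selector, and topology key are assumptions):

```yaml
# Hypothetical shape only; the broker's actual selector and values may differ.
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100
      podAffinityTerm:
        labelSelector:
          matchLabels:
            app: broker-backend
        topologyKey: kubernetes.io/hostname
```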