articles/iot-operations/manage-mqtt-broker/howto-configure-availability-scale.md
To configure the scaling settings of the MQTT broker, specify the `cardinality` field in the specification of the *Broker* custom resource. For more information on setting the mode and cardinality settings using Azure CLI, see [az iot ops init](/cli/azure/iot/ops#az-iot-ops-init).
### Automatic deployment cardinality
To automatically determine the initial cardinality during deployment, omit the `cardinality` field in the *Broker* resource. The MQTT broker operator automatically deploys the appropriate number of pods based on the cluster hardware. This is useful for non-production scenarios where you don't need high availability or scale.
However, this is *not* auto-scaling. The operator doesn't automatically scale the number of pods based on the load. The operator only determines the initial number of pods to deploy based on the cluster hardware. As noted above, the cardinality can only be set at initial deployment time, and a new deployment is required if the cardinality settings need to be changed.
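To illustrate automatic cardinality, the following is a minimal sketch of a *Broker* resource with the `cardinality` field omitted, so the operator chooses the initial pod counts from the cluster hardware. The `apiVersion`, resource name, and namespace shown here are assumptions for illustration, not values confirmed by this article:

```yaml
# Hypothetical minimal Broker resource (apiVersion, name, and namespace are
# assumptions). With `cardinality` omitted from spec, the MQTT broker operator
# determines the initial number of frontend and backend pods automatically.
apiVersion: mqttbroker.iotoperations.azure.com/v1
kind: Broker
metadata:
  name: default
  namespace: azure-iot-operations
spec: {}
```

Because cardinality is fixed at deployment time, changing your mind later means redeploying the broker rather than editing this resource in place.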
### Configure cardinality directly
To configure the cardinality settings directly, specify the `cardinality` field, which has these subfields:
- `frontend`: This subfield defines the settings for the frontend pods, such as:
  - `replicas`: The number of frontend replicas (pods) to deploy. Increasing the number of frontend replicas increases the number of connections that the broker can handle, and it also provides high availability in case one of the frontend pods fails.
  - `workers`: The number of logical frontend workers per replica. Increasing the number of workers per frontend replica increases the number of connections that the frontend pod can handle.
- `backendChain`: This subfield defines the settings for the backend chains, such as:
  - `partitions`: The number of partitions to deploy. Increasing the number of partitions increases the number of messages that the broker can handle. Through a process called *sharding*, each partition is responsible for a portion of the messages, divided by topic ID and session ID. The frontend pods distribute message traffic across the partitions.
  - `redundancyFactor`: The number of backend replicas (pods) to deploy per partition. Increasing the redundancy factor increases the number of data copies to provide resiliency against node failures in the cluster.
  - `workers`: The number of workers to deploy per backend replica. The workers in a backend replica work together to store and deliver messages to clients. Increasing the number of workers per backend replica increases the number of messages that the backend pod can handle.
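The subfields above can be sketched as a *Broker* resource that sets cardinality explicitly. This is a hedged example, not a verbatim manifest from the product: the `apiVersion`, name, namespace, and the specific numeric values are assumptions chosen for illustration, while the field names under `cardinality` follow the subfields described in the list:

```yaml
# Hypothetical Broker resource with explicit cardinality (apiVersion, name,
# namespace, and the numeric values are illustrative assumptions).
apiVersion: mqttbroker.iotoperations.azure.com/v1
kind: Broker
metadata:
  name: default
  namespace: azure-iot-operations
spec:
  cardinality:
    frontend:
      replicas: 2        # more replicas -> more connections, frontend HA
      workers: 2         # logical workers per frontend replica
    backendChain:
      partitions: 2      # messages are sharded across partitions
      redundancyFactor: 2  # data copies per partition, for node-failure resiliency
      workers: 2         # workers per backend replica
```

With these values, the backend runs `partitions × redundancyFactor` pods (4 in this sketch), so even small increases multiply the pod count and resource footprint.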
Generally, increasing these values increases the broker's capacity to handle more connections and messages, and it improves availability in case one of the pods or nodes fails. However, increasing these values also increases the broker's resource consumption. Combined with memory profile settings, carefully tune the broker's resource consumption when increasing these values.