`articles/iot-operations/manage-mqtt-broker/howto-configure-availability-scale.md`
To configure the cardinality settings directly, specify the `cardinality` field. The `cardinality` field is a nested field that has these subfields (a sample manifest follows the list):

- `frontend`: This subfield defines the settings for the frontend pods, such as:
  - `replicas`: The number of frontend pods to deploy. Increasing the number of frontend replicas provides high availability in case one of the frontend pods fails.
  - `workers`: The number of logical frontend workers per replica. Increasing the number of workers per frontend replica improves CPU core utilization, because each worker can use at most one CPU core. For example, if your cluster has 3 nodes, each with 8 CPU cores, set the number of replicas to match the number of nodes (3) and increase the number of workers up to 8 per replica as you need more frontend throughput. This way, each frontend replica can use all the CPU cores on its node without workers competing for CPU resources.
- `backendChain`: This subfield defines the settings for the backend chains, such as:
  - `partitions`: The number of partitions to deploy. Increasing the number of partitions increases the number of messages that the broker can handle. Through a process called *sharding*, each partition is responsible for a portion of the messages, divided by topic ID and session ID. The frontend pods distribute message traffic across the partitions.
  - `redundancyFactor`: The number of backend pods to deploy per partition. Increasing the redundancy factor increases the number of data copies, to provide resiliency against node failures in the cluster.
  - `workers`: The number of workers to deploy per backend replica. The workers together take care of storing messages and delivering them to clients. Increasing the number of workers per backend replica increases the number of messages that the backend pod can handle. Each worker can consume up to 2 CPU cores, so be careful not to exceed the number of CPU cores in the cluster when increasing the number of workers per replica.
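
To make the shape of these settings concrete, here's a minimal sketch of a Broker manifest that sets the `cardinality` subfields described in this list. The subfield names come from this article; the `apiVersion` value, the resource name, the namespace, and the specific numbers are illustrative assumptions (sized for the three-node, eight-core example above), not values this article prescribes.

```yaml
apiVersion: mqttbroker.iotoperations.azure.com/v1  # assumed API version; verify against the CRD installed in your cluster
kind: Broker
metadata:
  name: default                    # assumed resource name
  namespace: azure-iot-operations  # assumed namespace
spec:
  cardinality:
    frontend:
      replicas: 3   # one frontend pod per node for high availability
      workers: 8    # up to the CPU cores per node, so workers don't compete for cores
    backendChain:
      partitions: 2        # messages are sharded across partitions by topic ID and session ID
      redundancyFactor: 2  # data copies per partition, for resiliency against node failures
      workers: 2           # each backend worker can consume up to 2 CPU cores
```

With these example numbers, the backend alone runs `partitions × redundancyFactor` = 4 pods, which shows how quickly pod count and resource consumption grow as you scale these values.
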
Generally, increasing these values increases the broker's capacity to handle more connections and messages, and it also provides high availability in case one of the pods or nodes fails. However, increasing these values also increases the broker's resource consumption. Combined with the memory profile settings, carefully tune the broker's resource consumption when you increase these values.
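
As a sketch of how those two knobs sit side by side, the fragment below assumes the memory profile is set through a `memoryProfile` field on the same `spec`, with `Medium` as an assumed example value; check the memory profile documentation for the exact field name and supported values.

```yaml
spec:
  memoryProfile: Medium  # assumed field and value; controls per-pod memory limits
  cardinality:           # cardinality multiplies the per-pod cost set by the profile
    frontend:
      replicas: 3
      workers: 8
```

Because the memory profile sets the cost of each pod and cardinality sets how many pods run, raising both at once compounds quickly, which is why the two should be tuned together.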