Commit 2e4f56a

Clarify workers
1 parent 6247d9b commit 2e4f56a


articles/iot-operations/manage-mqtt-broker/howto-configure-availability-scale.md

Lines changed: 4 additions & 4 deletions
@@ -41,12 +41,12 @@ However, this is *not* auto-scaling. The operator doesn't automatically scale th
 To configure the cardinality settings directly, specify the `cardinality` field. The `cardinality` field is a nested field that has these subfields:

 - `frontend`: This subfield defines the settings for the frontend pods, such as:
-  - `replicas`: The number of frontend replicas pods to deploy. Increasing the number of frontend replicas increases the number of connections that the broker can handle, and it also provides high availability in case one of the frontend pods fails.
-  - `workers`: The number of logical frontend workers per replica. Increasing the number of workers per frontend replica increases the number of connections that the frontend pod can handle.
+  - `replicas`: The number of frontend pods to deploy. Increasing the number of frontend replicas provides high availability in case one of the frontend pods fails.
+  - `workers`: The number of logical frontend workers per replica. Increasing the number of workers per frontend replica improves CPU core utilization because each worker can use at most one CPU core. For example, if your cluster has 3 nodes, each with 8 CPU cores, set the number of replicas to match the number of nodes (3) and increase the number of workers up to 8 per replica as you need more frontend throughput. This way, each frontend replica can use all the CPU cores on its node without workers competing for CPU resources.
 - `backendChain`: This subfield defines the settings for the backend chains, such as:
   - `partitions`: The number of partitions to deploy. Increasing the number of partitions increases the number of messages that the broker can handle. Through a process called *sharding*, each partition is responsible for a portion of the messages, divided by topic ID and session ID. The frontend pods distribute message traffic across the partitions.
-  - `redundancyFactor`: The number of backend replicas (pods) to deploy per partition. Increasing the redundancy factor increases the number of data copies to provide resiliency against node failures in the cluster.
-  - `workers`: The number of workers to deploy per backend replica. The workers take care of storing and delivering messages to clients together. Increasing the number of workers per backend replica increases the number of messages that the backend pod can handle.
+  - `redundancyFactor`: The number of backend pods to deploy per partition. Increasing the redundancy factor increases the number of data copies to provide resiliency against node failures in the cluster.
+  - `workers`: The number of workers to deploy per backend replica. The workers together take care of storing and delivering messages to clients. Increasing the number of workers per backend replica increases the number of messages that the backend pod can handle. Each worker can consume up to 2 CPU cores, so be careful not to exceed the total number of CPU cores in the cluster when increasing the number of workers per replica.

 Generally, increasing these values increases the broker's capacity to handle more connections and messages, and it also provides high availability in case one of the pods or nodes fails. However, increasing these values also increases the resource consumption of the broker. Combined with memory profile settings, carefully tune the broker's resource consumption when increasing these values.
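The guidance in the updated text maps directly onto the `cardinality` block of the broker configuration. Below is a minimal sketch of how these settings might look in a Broker custom resource; the `apiVersion`, `kind`, and metadata names are illustrative assumptions, while the field names under `cardinality` (`frontend.replicas`, `frontend.workers`, `backendChain.partitions`, `backendChain.redundancyFactor`, `backendChain.workers`) come from the article text:

```yaml
# Hypothetical Broker resource illustrating the cardinality fields described above.
# apiVersion, kind, and metadata are assumptions; field names follow the article.
apiVersion: mqttbroker.iotoperations.azure.com/v1  # assumed API group/version
kind: Broker
metadata:
  name: default
  namespace: azure-iot-operations  # illustrative namespace
spec:
  cardinality:
    frontend:
      replicas: 3   # one per node in the 3-node example, for high availability
      workers: 2    # each worker uses at most 1 CPU core; raise toward 8 (cores per node) for more throughput
    backendChain:
      partitions: 2        # messages are sharded across partitions by topic ID and session ID
      redundancyFactor: 2  # backend pods per partition, for resiliency against node failures
      workers: 2           # each backend worker can consume up to 2 CPU cores
```

With these numbers the backend runs partitions × redundancyFactor = 4 pods, each with 2 workers that can consume up to 2 CPU cores, so it's worth checking that this total, plus the frontend workers, stays within the cluster's core count before raising any of the values.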
