articles/iot-operations/discover-manage-assets/concept-opcua-message-format.md (+2, -2)
@@ -20,7 +20,7 @@ The connector for OPC UA publishes messages from OPC UA servers to the MQTT brok
The payload of an OPC UA message is a JSON object that contains the telemetry data from the OPC UA server. The following example shows the payload of a message from the sample thermostat asset used in the quickstarts. Use the following command to subscribe to messages in the `azure-iot-operations/data` topic:
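A minimal sketch of such a subscribe command, run from a pod inside the cluster. The SAT token and CA cert paths are the ones described later in this file; the host name `aio-broker`, port `18883`, and topic filter are assumptions for illustration:

```bash
# Sketch only, not the article's exact command. Assumes the MQTT broker's
# in-cluster service name (aio-broker) and TLS port (18883); the SAT token
# and CA cert paths match the description in this article.
mosquitto_sub --host aio-broker --port 18883 \
  --topic "azure-iot-operations/data/#" --verbose --debug \
  -V mqttv5 \
  --cafile /var/run/certs/ca.crt \
  -D CONNECT authentication-method 'K8S-SAT' \
  -D CONNECT authentication-data "$(cat /var/run/secrets/tokens/broker-sat)"
```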
The output from the previous command looks like the following example:
@@ -43,7 +43,7 @@ Client $server-generated/05a22b94-c5a2-4666-9c62-837431ca6f7e received PUBLISH (
The headers in the messages published by the connector for OPC UA are based on the [CloudEvents specification for OPC UA](https://github.com/cloudevents/spec/blob/main/cloudevents/extensions/opcua.md). The headers from an OPC UA message become user properties in a message published to the MQTT broker. The following example shows the user properties of a message from the sample thermostat asset used in the quickstarts. Use the following command to subscribe to messages in the `azure-iot-operations/data` topic:
- The mosquitto client uses the service account token mounted at `/var/run/secrets/tokens/mq-sat` to authenticate with the broker. The token is valid for 24 hours. The client also uses the default root CA cert mounted at `/var/run/certs/ca.crt` to verify the broker's TLS certificate chain.
+ The mosquitto client uses the service account token mounted at `/var/run/secrets/tokens/broker-sat` to authenticate with the broker. The token is valid for 24 hours. The client also uses the default root CA cert mounted at `/var/run/certs/ca.crt` to verify the broker's TLS certificate chain.
articles/iot-operations/manage-mqtt-broker/howto-configure-availability-scale.md (+15, -9)
@@ -30,19 +30,25 @@ For a list of the available settings, see the [Broker](/rest/api/iotoperationsmq
- To configure the scaling settings MQTT broker, you need to specify the `mode` and `cardinality` fields in the specification of the *Broker* custom resource. For more information on setting the mode and cardinality settings using Azure CLI, see [az iot ops init](/cli/azure/iot/ops#az-iot-ops-init).
+ To configure the scaling settings of the MQTT broker, specify the `cardinality` fields in the specification of the *Broker* custom resource. For more information on setting the mode and cardinality settings using Azure CLI, see [az iot ops init](/cli/azure/iot/ops#az-iot-ops-init).
- The `cardinality` field is a nested field that has these subfields:
+ ### Automatic deployment cardinality
+
+ To automatically determine the initial cardinality during deployment, omit the `cardinality` field in the *Broker* resource. The MQTT broker operator automatically deploys the appropriate number of pods based on the number of available nodes at the time of deployment. This is useful for non-production scenarios where you don't need high availability or scale.
+
+ However, this is *not* auto-scaling. The operator doesn't automatically scale the number of pods based on load; it only determines the initial number of pods to deploy based on the cluster hardware. As noted above, cardinality can only be set at initial deployment time, and a new deployment is required if the cardinality settings need to change.
+
+ ### Configure cardinality directly
+
+ To configure the cardinality settings directly, specify the `cardinality` field. The `cardinality` field is a nested field that has these subfields:
- `frontend`: This subfield defines the settings for the frontend pods, such as:
- - `replicas`: The number of frontend pods to deploy. This subfield is required if the `mode` field is set to `distributed`.
- - `workers`: The number of workers to deploy per frontend; currently it must be set to `1`. This subfield is required if the `mode` field is set to `distributed`.
+ - `replicas`: The number of frontend pods to deploy. Increasing the number of frontend replicas provides high availability in case one of the frontend pods fails.
+ - `workers`: The number of logical frontend workers per replica. Increasing the number of workers per frontend replica improves CPU core utilization because each worker can use at most one CPU core. For example, if your cluster has 3 nodes, each with 8 CPU cores, set the number of replicas to match the number of nodes (3) and increase the number of workers up to 8 per replica as you need more frontend throughput. This way, each frontend replica can use all the CPU cores on the node without workers competing for CPU resources.
- `backendChain`: This subfield defines the settings for the backend chains, such as:
- - `redundancyFactor`: The number of data copies in each backend chain. This subfield is required if the `mode` field is set to `distributed`.
- - `partitions`: The number of partitions to deploy. This subfield is required if the `mode` field is set to `distributed`.
- - `workers`: The number of workers to deploy per backend; currently it must be set to `1`. This subfield is required if the `mode` field is set to `distributed`.
- If the `cardinality` field is omitted, the MQTT broker operator automatically deploys the appropriate number of pods based on the cluster hardware.
+ - `partitions`: The number of partitions to deploy. Increasing the number of partitions increases the number of messages that the broker can handle. Through a process called *sharding*, each partition is responsible for a portion of the messages, divided by topic ID and session ID. The frontend pods distribute message traffic across the partitions.
+ - `redundancyFactor`: The number of backend pods to deploy per partition. Increasing the redundancy factor increases the number of data copies, providing resiliency against node failures in the cluster.
+ - `workers`: The number of workers to deploy per backend replica. The workers together handle storing and delivering messages to clients. Increasing the number of workers per backend replica increases the number of messages that the backend pod can handle. Each worker can consume up to 2 CPU cores, so take care not to exceed the total number of CPU cores in the cluster when increasing workers per replica.
+
+ When you increase these values, the broker's capacity to handle more connections and messages improves, and high availability improves in case of pod or node failures. However, this also leads to higher resource consumption. So when adjusting cardinality values, consider the memory profile settings and balance these factors to optimize the broker's resource usage.
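To make the subfields concrete, here's a sketch of a Broker spec with explicit cardinality. Only the `cardinality` subfields come from the list above; the `apiVersion`, resource name, namespace, and numeric values are assumptions for illustration, and cardinality can only be set at initial deployment time, not patched onto a running broker:

```bash
# Sketch only: writes a Broker spec fragment for review. The cardinality
# subfields mirror the list above; apiVersion/kind/metadata values are
# assumptions, and cardinality is fixed at initial deployment time.
cat > broker-cardinality.yaml <<'EOF'
apiVersion: mqttbroker.iotoperations.azure.com/v1
kind: Broker
metadata:
  name: default
  namespace: azure-iot-operations
spec:
  cardinality:
    frontend:
      replicas: 3         # e.g. one replica per node for high availability
      workers: 8          # up to one worker per CPU core on each node
    backendChain:
      partitions: 3       # shards message traffic by topic ID and session ID
      redundancyFactor: 2 # data copies per partition for node-failure resiliency
      workers: 1          # each backend worker can consume up to 2 CPU cores
EOF
```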
articles/iot-operations/manage-mqtt-broker/howto-configure-tls-auto.md (+1, -1)
@@ -259,7 +259,7 @@ The `--cafile` argument enables TLS on the mosquitto client and specifies that t
Replace `$HOST` with the appropriate host:
- - If connecting from [within the same cluster](howto-test-connection.md#connect-from-a-pod-within-the-cluster-with-default-configuration), replace with the service name given (`my-new-tls-listener` in the example) or the service `CLUSTER-IP`.
+ - If connecting from [within the same cluster](howto-test-connection.md#connect-to-the-default-listener-inside-the-cluster), replace with the service name given (`my-new-tls-listener` in the example) or the service `CLUSTER-IP`.
- If connecting from outside the cluster, use the service `EXTERNAL-IP`.
Remember to specify authentication methods if needed.
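For example, a test publish over TLS might look like the following sketch. `$HOST` follows the list above, and the `--cafile` usage matches the hunk context; the port, topic, message, and CA file name are assumptions for illustration:

```bash
# Sketch: test the TLS listener with mosquitto_pub. Replace $HOST per the
# list above; port 8883 and the ca.crt path are assumptions, not values
# taken from the article.
mosquitto_pub -h $HOST -p 8883 -t "test/topic" -m "hello" --cafile ca.crt
```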