`articles/iot-operations/manage-mqtt-broker/howto-broker-diagnostics.md` (1 addition, 1 deletion)
@@ -20,7 +20,7 @@ By using diagnostic settings, you can configure metrics, logs, and self-check fo
## Metrics
-Metrics provide information about the current and past health and status of the MQTT broker. These metrics are emitted in OpenTelemetry Protocol format (OTLP). You can convert them to Prometheus format by using an OpenTelemetry Collector and route them to Azure Managed Grafana dashboards by using Azure Monitor managed service for Prometheus. To learn more, see [Configure observability and monitoring](../configure-observability-monitoring/howto-configure-observability.md).
+Metrics provide information about the current and past health and status of the MQTT broker. These metrics are emitted in OpenTelemetry Protocol (OTLP) format. You can convert them to Prometheus format by using an OpenTelemetry Collector and route them to Azure Managed Grafana dashboards by using Azure Monitor managed service for Prometheus. To learn more, see [Configure observability and monitoring](../configure-observability-monitoring/howto-configure-observability.md).
For the full list of metrics available, see [MQTT broker metrics](../reference/observability-metrics-mqtt-broker.md).
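For context, a minimal OpenTelemetry Collector configuration that receives OTLP metrics and exposes them on a Prometheus scrape endpoint could look like the following sketch. The endpoint addresses are illustrative assumptions, not values from the article.

```yaml
# Sketch: receive OTLP metrics and expose them in Prometheus format.
# Endpoint addresses are illustrative; adjust them for your cluster.
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317   # the broker's OTLP metrics arrive here
exporters:
  prometheus:
    endpoint: 0.0.0.0:8889       # scraped by Prometheus or Azure Monitor
service:
  pipelines:
    metrics:
      receivers: [otlp]
      exporters: [prometheus]
```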
`articles/iot-operations/manage-mqtt-broker/howto-configure-availability-scale.md` (4 additions, 4 deletions)
@@ -26,7 +26,7 @@ For a list of the available settings, see the [Broker](/rest/api/iotoperations/b
> [!IMPORTANT]
> This setting requires that you modify the Broker resource. It's configured only at initial deployment by using the Azure CLI or the Azure portal. A new deployment is required if Broker configuration changes are needed. To learn more, see [Customize default Broker](./overview-broker.md#customize-default-broker).
-To configure the scaling settings of an MQTT broker, specify the **cardinality** fields in the specification of the Broker resource during Azure IoT Operations deployment.
+To configure the scaling settings of the MQTT broker, specify the **cardinality** fields in the specification of the Broker resource during Azure IoT Operations deployment.
### Automatic deployment cardinality
@@ -60,7 +60,7 @@ To learn more, see [Azure CLI support for advanced MQTT broker configuration](ht
The MQTT broker operator automatically deploys the appropriate number of pods based on the number of available nodes at the time of the deployment. This capability is useful for nonproduction scenarios where you don't need high availability or scale.
-However, this capability is *not* autoscaling. The operator doesn't automatically scale the number of pods based on the load. The operator only determines the initial number of pods to deploy based on the cluster hardware. As noted previously, cardinality is set only at initial deployment time. A new deployment is required if the cardinality settings need to be changed.
+This capability is *not* autoscaling. The operator doesn't automatically scale the number of pods based on the load. The operator determines the initial number of pods to deploy only based on the cluster hardware. As noted previously, cardinality is set only at initial deployment time. A new deployment is required if the cardinality settings need to be changed.
### Configure cardinality directly
@@ -131,7 +131,7 @@ The backend chain subfield defines the settings for the backend partitions. The
#### Considerations
-When you increase the cardinality values, the broker's capacity to handle more connections and messages generally improves, and it enhances high availability if there are pod or node failures. However, this increased capacity also leads to higher resource consumption. So, when you adjust cardinality values, consider the [memory profile settings](#configure-memory-profile) and broker's [CPU resource requests](#cardinality-and-kubernetes-resource-limits). Increasing the number of workers per frontend replica can help increase CPU core utilization if you discover that frontend CPU utilization is a bottleneck. Increasing the number of backend workers can help with the message throughput if backend CPU utilization is a bottleneck.
+When you increase the cardinality values, the broker's capacity to handle more connections and messages generally improves, and it enhances high availability if there are pod or node failures. This increased capacity also leads to higher resource consumption. So when you adjust cardinality values, consider the [memory profile settings](#configure-memory-profile) and broker's [CPU resource requests](#cardinality-and-kubernetes-resource-limits). Increasing the number of workers per frontend replica can help increase CPU core utilization if you discover that frontend CPU utilization is a bottleneck. Increasing the number of backend workers can help with the message throughput if backend CPU utilization is a bottleneck.
For example, if your cluster has three nodes, each with eight CPU cores, then set the number of frontend replicas to match the number of nodes (3) and set the number of workers to 1. Set the number of backend partitions to match the number of nodes (3) and set the backend workers to 1. Set the redundancy factor as desired (2 or 3). Increase the number of frontend workers if you discover that frontend CPU utilization is a bottleneck. Remember that backend and frontend workers might compete for CPU resources with each other and other pods.
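As an illustrative sketch, the three-node example above might map to cardinality fields like these in the Broker resource specification. The field names and casing are assumptions based on the terminology in this diff; verify them against the Broker resource reference for your release.

```yaml
# Sketch of cardinality settings for a three-node cluster.
# Field names are assumptions; check the Broker resource schema.
spec:
  cardinality:
    frontend:
      replicas: 3          # one frontend replica per node
      workers: 1           # raise if frontend CPU is the bottleneck
    backendChain:
      partitions: 3        # match the number of nodes
      redundancyFactor: 2  # 2 or 3, as desired
      workers: 1           # raise if backend CPU limits throughput
```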
@@ -140,7 +140,7 @@ For example, if your cluster has three nodes, each with eight CPU cores, then se
> [!IMPORTANT]
> This setting requires you to modify the Broker resource. It's configured only at initial deployment by using the Azure CLI or the Azure portal. A new deployment is required if Broker configuration changes are needed. To learn more, see [Customize default Broker](./overview-broker.md#customize-default-broker).
-To configure the memory profile settings of an MQTT broker, specify the memory profile fields in the specification of the Broker resource during IoT Operations deployment.
+To configure the memory profile settings of the MQTT broker, specify the memory profile fields in the specification of the Broker resource during IoT Operations deployment.
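As a hedged sketch, a memory profile setting might look like this in the Broker resource specification; the `memoryProfile` field name and the `Medium` value are assumptions, not values confirmed by this diff.

```yaml
# Sketch: memory profile in the Broker resource specification.
# Field name and value are assumptions; consult the Broker reference.
spec:
  memoryProfile: Medium
```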
`articles/iot-operations/manage-mqtt-broker/howto-disk-backed-message-buffer.md` (1 addition, 1 deletion)
@@ -157,7 +157,7 @@ For example, to use a persistent volume with a capacity of 1 gigabyte, specify t
An [emptyDir volume](https://kubernetes.io/docs/concepts/storage/volumes/#emptydir) is the least preferred option after persistent volume.
-Only use an `emptyDir` volume when you use a cluster with filesystem quotas. For more information, see details on the [Filesystem project quota tab](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-emphemeralstorage-consumption). If the feature isn't enabled, the cluster does periodic scanning that doesn't enforce any limit and allows the host node to fill disk space and mark the whole host node as unhealthy.
+Use an `emptyDir` volume only when you use a cluster with filesystem quotas. For more information, see the [Filesystem project quota tab](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-emphemeralstorage-consumption). If the feature isn't enabled, the cluster does periodic scanning that doesn't enforce any limit and allows the host node to fill disk space and mark the whole host node as unhealthy.
For example, to use an `emptyDir` volume with a capacity of 1 gigabyte, specify the following parameters in your Broker resource:
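The parameters themselves fall outside this diff. As a hedged sketch, an `emptyDir`-backed buffer capped at 1 gigabyte might be expressed as follows; the `diskBackedMessageBuffer` and `maxSize` field names are assumptions.

```yaml
# Sketch: disk-backed message buffer using an emptyDir volume.
# Field names are assumptions; with no volume claim specified, the
# broker would fall back to an emptyDir limited to maxSize.
spec:
  diskBackedMessageBuffer:
    maxSize: "1G"   # cap the emptyDir volume at 1 gigabyte
```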