
Commit b17254d

Add missing IDs and issues
1 parent b72a2da commit b17254d

3 files changed (+111, −19)

articles/iot-operations/connect-to-cloud/howto-create-dataflow.md

Lines changed: 1 addition & 1 deletion
@@ -447,7 +447,7 @@ sourceSettings:
### Specify source schema

-When using MQTT or Kafka as the source, you can specify a [schema](concept-schema-registry.md) to display the list of data points in the operations experience web UI. Using a schema to deserialize and validate incoming messages [isn't currently supported](../troubleshoot/known-issues.md#data-flows).
+When using MQTT or Kafka as the source, you can specify a [schema](concept-schema-registry.md) to display the list of data points in the operations experience web UI. Using a schema to deserialize and validate incoming messages [isn't currently supported](../troubleshoot/known-issues.md#data-flows-issues).

If the source is an asset, the schema is automatically inferred from the asset definition.

articles/iot-operations/manage-mqtt-broker/howto-broker-diagnostics.md

Lines changed: 1 addition & 1 deletion
@@ -42,7 +42,7 @@ The validation process checks if the system works correctly by comparing the tes
The diagnostics probe periodically runs MQTT operations (PING, CONNECT, PUBLISH, SUBSCRIBE, UNSUBSCRIBE) on the MQTT broker and monitors the corresponding ACKs and traces to check for latency, message loss, and correctness of the replication protocol.

> [!IMPORTANT]
-> The self-check diagnostics probe publishes messages to the `azedge/dmqtt/selftest` topic. Don't publish or subscribe to diagnostic probe topics that start with `azedge/dmqtt/selftest`. Publishing or subscribing to these topics might affect the probe or self-test checks and result in invalid results. Invalid results might be listed in diagnostic probe logs, metrics, or dashboards. For example, you might see the issue "Path verification failed for probe event with operation type 'Publish'" in the diagnostics-probe logs. For more information, see [Known issues](../troubleshoot/known-issues.md#mqtt-broker).
+> The self-check diagnostics probe publishes messages to the `azedge/dmqtt/selftest` topic. Don't publish or subscribe to diagnostic probe topics that start with `azedge/dmqtt/selftest`. Publishing or subscribing to these topics might affect the probe or self-test checks and result in invalid results. Invalid results might be listed in diagnostic probe logs, metrics, or dashboards. For example, you might see the issue "Path verification failed for probe event with operation type 'Publish'" in the diagnostics-probe logs. For more information, see [Known issues](../troubleshoot/known-issues.md#mqtt-broker-issues).
>
> Even though the MQTT broker's [diagnostics](../manage-mqtt-broker/howto-broker-diagnostics.md) produces telemetry on its own topic, you might still get messages from the self-test when you subscribe to `#` topic. This is a limitation and expected behavior.
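The reserved-prefix rule in the note above can also be enforced in client code before publishing or subscribing. This is a minimal sketch under assumptions: the helper name is hypothetical and not part of any Azure IoT Operations SDK.

```shell
#!/bin/sh
# Sketch: guard against touching the broker's reserved self-test topics.
# Returns success (0) for any topic under azedge/dmqtt/selftest, which
# clients should neither publish nor subscribe to.
is_reserved_diagnostics_topic() {
  case "$1" in
    azedge/dmqtt/selftest|azedge/dmqtt/selftest/*) return 0 ;;
    *) return 1 ;;
  esac
}
```

A client script could call this helper and skip or reject any topic for which it returns success.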

articles/iot-operations/troubleshoot/known-issues.md

Lines changed: 109 additions & 17 deletions
@@ -13,6 +13,8 @@ This article lists the current known issues for Azure IoT Operations.
## Deploy, update, and uninstall issues

+This section lists current known issues that might occur when you deploy, update, or uninstall Azure IoT Operations.
+
### Unable to retrieve some image pull secrets

---
@@ -123,6 +125,8 @@ To work around this issue, follow these steps:
## MQTT broker issues

+This section lists current known issues for the MQTT broker.
+
### MQTT broker high memory usage

---
@@ -174,7 +178,7 @@ There's currently no workaround for this issue.
---

-Issue ID: 0000
+Issue ID: 1567

---
@@ -188,11 +192,13 @@ There's currently no workaround for this issue.
## Azure IoT Layered Network Management (preview) issues

+This section lists current known issues for Azure IoT Layered Network Management.
+
### Layered Network Management service doesn't get an IP address

---

-Issue ID: 0000
+Issue ID: 7864

---
@@ -214,7 +220,7 @@ To learn more, see [Networking | K3s](https://docs.k3s.io/networking#traefik-ing
---

-Issue ID: 0000
+Issue ID: 7955

---
@@ -228,11 +234,13 @@ To work around this issue, upgrade to Ubuntu 22.04 and reinstall K3S.
## Connector for OPC UA issues

+This section lists current known issues for the connector for OPC UA.
+
### Connector pod doesn't restart after configuration change

---

-Issue ID: 0000
+Issue ID: 7518

---
@@ -242,13 +250,11 @@ Log signature: N/A
When you add a new asset with a new asset endpoint profile to the OPC UA broker and trigger a reconfiguration, the deployment of the `opc.tcp` pods changes to accommodate the new secret mounts for username and password. If the new mount fails for some reason, the pod does not restart and therefore the old flow for the correctly configured assets stops as well.

-To work around this issue...
-
### OPC UA servers reject application certificate

---

-Issue ID: 0000
+Issue ID: 7679

---
@@ -262,7 +268,7 @@ The subject name and application URI must exactly match the provided certificate
---

-Issue ID: 0000
+Issue ID: 8446

---
@@ -272,13 +278,83 @@ Log signature: N/A
Providing a new invalid OPC UA application instance certificate after a successful installation of AIO can lead to connection errors. To resolve the issue, delete your Azure IoT Operations instances and restart the installation.

+## Connector for media and connector for ONVIF issues
+
+This section lists current known issues for the connector for media and the connector for ONVIF.
+
+### Cleanup of unused media-connector resources
+
+---
+
+Issue ID: 2142
+
+---
+
+Log signature: N/A
+
+---
+
+If you delete all the `Microsoft.Media` asset endpoint profiles, the deployment for media processing isn't deleted.
+
+To work around this issue, run the following command using the full name of your media connector deployment:
+
+```bash
+kubectl delete deployment aio-opc-media-... -n azure-iot-operations
+```
+
+### Cleanup of unused onvif-connector resources
+
+---
+
+Issue ID: 3322
+
+---
+
+Log signature: N/A
+
+---
+
+If you delete all the `Microsoft.Onvif` asset endpoint profiles, the deployment for media processing isn't deleted.
+
+To work around this issue, run the following command using the full name of your ONVIF connector deployment:
+
+```bash
+kubectl delete deployment aio-opc-onvif-... -n azure-iot-operations
+```
+
+### AssetType CRD removal process doesn't complete
+
+---
+
+Issue ID: 6065
+
+---
+
+Log signature: `"Error HelmUninstallUnknown: Helm encountered an error while attempting to uninstall the release aio-118117837-connectors in the namespace azure-iot-operations. (caused by: Unknown: 1 error occurred: * timed out waiting for the condition"`
+
+---
+
+Sometimes, when you attempt to uninstall Azure IoT Operations from the cluster, the system can get into a state where the CRD removal job is stuck in a pending state, which blocks the cleanup of Azure IoT Operations.
+
+To work around this issue, manually delete the CRD and finish the uninstall by completing the following steps:
+
+1. Delete the AssetType CRD manually: `kubectl delete crd assettypes.opcuabroker.iotoperations.azure.com --ignore-not-found=true`
+
+1. Delete the job definition: `kubectl delete job aio-opc-delete-crds-job-<version> -n azure-iot-operations`
+
+1. Find the Helm release for the connectors; it's the one with the `-connectors` suffix: `helm ls -a -n azure-iot-operations`
+
+1. Uninstall the Helm release without running the hook: `helm uninstall aio-<id>-connectors -n azure-iot-operations --no-hooks`
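The numbered uninstall steps can be collected into a script for review before running. This sketch only builds and prints the command list rather than executing anything; the `<version>` and `<id>` placeholders come from the steps themselves and must be replaced with values from your cluster.

```shell
#!/bin/sh
# Sketch: collect the manual CRD cleanup commands from the steps above.
# <version> and <id> are placeholders; find the release id with:
#   helm ls -a -n azure-iot-operations
NS=azure-iot-operations
CLEANUP_CMDS="kubectl delete crd assettypes.opcuabroker.iotoperations.azure.com --ignore-not-found=true
kubectl delete job aio-opc-delete-crds-job-<version> -n $NS
helm uninstall aio-<id>-connectors -n $NS --no-hooks"
# Print the commands so they can be reviewed, edited, and run one by one.
echo "$CLEANUP_CMDS"
```

Running the printed commands in order mirrors steps 1, 2, and 4; step 3 (`helm ls -a -n azure-iot-operations`) supplies the release id.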
## OPC PLC simulator issues

+This section lists current known issues for the OPC PLC simulator.
+
### The simulator doesn't send data to the MQTT broker after you create an asset endpoint

---

-Issue ID: 0000
+Issue ID: 8616

---
@@ -335,11 +411,13 @@ kubectl delete pod aio-opc-opc.tcp-1-f95d76c54-w9v9c -n azure-iot-operations
## Data flows issues

+This section lists current known issues for data flows.
+
### Data flow resources aren't visible in the operations experience web UI

---

-Issue ID: 0000
+Issue ID: 8724

---
@@ -355,7 +433,7 @@ There's currently no workaround for this issue.
---

-Issue ID: 0000
+Issue ID: 8750

---
@@ -369,7 +447,7 @@ X.509 authentication for custom Kafka endpoints isn't currently supported.
---

-Issue ID: 0000
+Issue ID: 8794

---
@@ -383,7 +461,7 @@ When you create a data flow, you can specify a schema in the source configuratio
---

-Issue ID: 0000
+Issue ID: 8841

---
@@ -400,7 +478,7 @@ To work around this issue, create the [multi-line secrets through Azure Key Vaul
---

-Issue ID: 0000
+Issue ID: 8891

---
@@ -416,7 +494,7 @@ To work around this issue, add randomness to the data flow names in your deploym
---

-Issue ID: 0000
+Issue ID: 8953

---
@@ -432,7 +510,7 @@ To work around this issue, restart your data flow pods.
---

-Issue ID: 0000
+Issue ID: 9289

---
@@ -448,7 +526,7 @@ To work around this issue, avoid using control characters in Kafka headers.
---

-Issue ID: 0000
+Issue ID: 9411

---
@@ -486,3 +564,17 @@ To work around this issue, use the following steps to manually delete the data f
1. Run `kubectl delete pod aio-dataflow-operator-0 -n azure-iot-operations` to delete the data flow operator pod. Deleting the pod clears the crash status and restarts the pod.

1. Wait for the operator pod to restart and deploy the data flow.
+
+### Data flows error metrics
+
+---
+
+Issue ID: 2382
+
+---
+
+Log signature: N/A
+
+---
+
+Data flows mark message retries and reconnects as errors, so a data flow might look unhealthy. This behavior occurs only in earlier versions of data flows. Review the logs to determine whether the data flow is healthy.
