articles/iot-operations/connect-to-cloud/howto-create-dataflow.md (1 addition, 1 deletion)
@@ -447,7 +447,7 @@ sourceSettings:

### Specify source schema

-When using MQTT or Kafka as the source, you can specify a [schema](concept-schema-registry.md) to display the list of data points in the operations experience web UI. Using a schema to deserialize and validate incoming messages [isn't currently supported](../troubleshoot/known-issues.md#data-flows).
+When using MQTT or Kafka as the source, you can specify a [schema](concept-schema-registry.md) to display the list of data points in the operations experience web UI. Using a schema to deserialize and validate incoming messages [isn't currently supported](../troubleshoot/known-issues.md#data-flows-issues).

If the source is an asset, the schema is automatically inferred from the asset definition.
articles/iot-operations/manage-mqtt-broker/howto-broker-diagnostics.md (1 addition, 1 deletion)
@@ -42,7 +42,7 @@ The validation process checks if the system works correctly by comparing the tes
The diagnostics probe periodically runs MQTT operations (PING, CONNECT, PUBLISH, SUBSCRIBE, UNSUBSCRIBE) on the MQTT broker and monitors the corresponding ACKs and traces to check for latency, message loss, and correctness of the replication protocol.

> [!IMPORTANT]
-> The self-check diagnostics probe publishes messages to the `azedge/dmqtt/selftest` topic. Don't publish or subscribe to diagnostic probe topics that start with `azedge/dmqtt/selftest`. Publishing or subscribing to these topics might affect the probe or self-test checks and result in invalid results. Invalid results might be listed in diagnostic probe logs, metrics, or dashboards. For example, you might see the issue "Path verification failed for probe event with operation type 'Publish'" in the diagnostics-probe logs. For more information, see [Known issues](../troubleshoot/known-issues.md#mqtt-broker).
+> The self-check diagnostics probe publishes messages to the `azedge/dmqtt/selftest` topic. Don't publish or subscribe to diagnostic probe topics that start with `azedge/dmqtt/selftest`. Publishing or subscribing to these topics might affect the probe or self-test checks and result in invalid results. Invalid results might be listed in diagnostic probe logs, metrics, or dashboards. For example, you might see the issue "Path verification failed for probe event with operation type 'Publish'" in the diagnostics-probe logs. For more information, see [Known issues](../troubleshoot/known-issues.md#mqtt-broker-issues).
>
> Even though the MQTT broker's [diagnostics](../manage-mqtt-broker/howto-broker-diagnostics.md) produces telemetry on its own topic, you might still get messages from the self-test when you subscribe to `#` topic. This is a limitation and expected behavior.
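As an illustration of that note, a wildcard subscription sees the self-test traffic alongside your own telemetry, while a narrower topic filter avoids it. This is a minimal sketch only; the hostname, port, and TLS options are placeholders for however your broker listener is exposed:

```bash
# Wildcard subscription: expect extra messages on topics under azedge/dmqtt/selftest.
# The host, port, and CA file below are placeholders for your MQTT broker listener.
mosquitto_sub -h <broker-host> -p 8883 --cafile <ca.crt> -t '#' -v

# Subscribe to your own application topics instead of '#' to avoid the self-test traffic.
mosquitto_sub -h <broker-host> -p 8883 --cafile <ca.crt> -t 'my-app/#' -v
```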
articles/iot-operations/troubleshoot/known-issues.md

This section lists current known issues for Azure IoT Layered Network Management.
+
### Layered Network Management service doesn't get an IP address

---

-Issue ID: 0000
+Issue ID: 7864

---

@@ -214,7 +220,7 @@ To learn more, see [Networking | K3s](https://docs.k3s.io/networking#traefik-ing

---

-Issue ID: 0000
+Issue ID: 7955

---

@@ -228,11 +234,13 @@ To work around this issue, upgrade to Ubuntu 22.04 and reinstall K3S.

## Connector for OPC UA issues

+This section lists current known issues for the connector for OPC UA.
+
### Connector pod doesn't restart after configuration change

---

-Issue ID: 0000
+Issue ID: 7518

---

@@ -242,13 +250,11 @@ Log signature: N/A

When you add a new asset with a new asset endpoint profile to the OPC UA broker and trigger a reconfiguration, the deployment of the `opc.tcp` pods changes to accommodate the new secret mounts for username and password. If the new mount fails for some reason, the pod does not restart and therefore the old flow for the correctly configured assets stops as well.

-To work around this issue...
-
### OPC UA servers reject application certificate

---

-Issue ID: 0000
+Issue ID: 7679

---

@@ -262,7 +268,7 @@ The subject name and application URI must exactly match the provided certificate

---

-Issue ID: 0000
+Issue ID: 8446

---

@@ -272,13 +278,83 @@ Log signature: N/A

Providing a new invalid OPC UA application instance certificate after a successful installation of AIO can lead to connection errors. To resolve the issue, delete your Azure IoT Operations instances and restart the installation.

+## Connector for media and connector for ONVIF issues
+
+This section lists current known issues for the connector for media and the connector for ONVIF.
+
+### Cleanup of unused media-connector resources
+
+---
+
+Issue ID: 2142
+
+---
+
+Log signature: N/A
+
+---
+
+If you delete all the `Microsoft.Media` asset endpoint profiles the deployment for media processing is not deleted.
+
+To work around this issue, run the following command using the full name of your media connector deployment:
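The command itself isn't included in this excerpt. As a sketch of the kind of cleanup that step describes, assuming the leftover resource is a Kubernetes deployment and using a placeholder name you'd replace with the full name from your cluster:

```bash
# List deployments to find the full name of the media connector deployment (placeholder below).
kubectl get deployments -n azure-iot-operations

# Delete the leftover media-processing deployment by its full name.
kubectl delete deployment <full-media-connector-deployment-name> -n azure-iot-operations
```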
+### AssetType CRD removal process doesn't complete
+
+---
+
+Issue ID: 6065
+
+---
+
+Log signature: `"Error HelmUninstallUnknown: Helm encountered an error while attempting to uninstall the release aio-118117837-connectors in the namespace azure-iot-operations. (caused by: Unknown: 1 error occurred: * timed out waiting for the condition"`
+
+---
+
+Sometimes, when you attempt to uninstall Azure IoT Operations from the cluster, the system can get to a state where CRD removal job is stuck in pending state and that blocks cleanup of Azure IoT Operations.
+
+To work around this issue, you need to manually delete the CRD and finish the uninstall. To do this, complete the following steps:
+
+1. Find the Helm release for the connectors, it's the one with `-connectors` suffix: `helm ls -a -n azure-iot-operations`
+
+1. Uninstall Helm release without running the hook: `helm uninstall aio-<id>-connectors -n azure-iot-operations --no-hooks`
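The step that actually deletes the stuck CRD isn't shown in this excerpt. A hedged sketch of the full sequence, where the CRD name is a placeholder you'd confirm from your own cluster:

```bash
# 1. Find the connectors Helm release (the one with the -connectors suffix).
helm ls -a -n azure-iot-operations

# 2. Uninstall it without running the hooks that are stuck.
helm uninstall aio-<id>-connectors -n azure-iot-operations --no-hooks

# 3. Locate and delete the leftover AssetType CRD; the name below is a placeholder.
kubectl get crds | grep -i assettype
kubectl delete crd <assettype-crd-name>
```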
+
## OPC PLC simulator issues

+This section lists current known issues for the OPC PLC simulator.
+
### The simulator doesn't send data to the MQTT broker after you create an asset endpoint

---

-Issue ID: 0000
+Issue ID: 8616

---

@@ -335,11 +411,13 @@ kubectl delete pod aio-opc-opc.tcp-1-f95d76c54-w9v9c -n azure-iot-operations

## Data flows issues

+This section lists current known issues for data flows.
+
### Data flow resources aren't visible in the operations experience web UI

---

-Issue ID: 0000
+Issue ID: 8724

---

@@ -355,7 +433,7 @@ There's currently no workaround for this issue.

---

-Issue ID: 0000
+Issue ID: 8750

---

@@ -369,7 +447,7 @@ X.509 authentication for custom Kafka endpoints isn't currently supported.

---

-Issue ID: 0000
+Issue ID: 8794

---

@@ -383,7 +461,7 @@ When you create a data flow, you can specify a schema in the source configuratio

---

-Issue ID: 0000
+Issue ID: 8841

---

@@ -400,7 +478,7 @@ To work around this issue, create the [multi-line secrets through Azure Key Vaul

---

-Issue ID: 0000
+Issue ID: 8891

---

@@ -416,7 +494,7 @@ To work around this issue, add randomness to the data flow names in your deploym

---

-Issue ID: 0000
+Issue ID: 8953

---

@@ -432,7 +510,7 @@ To work around this issue, restart your data flow pods.

---

-Issue ID: 0000
+Issue ID: 9289

---

@@ -448,7 +526,7 @@ To work around this issue, avoid using control characters in Kafka headers.

---

-Issue ID: 0000
+Issue ID: 9411

---

@@ -486,3 +564,17 @@ To work around this issue, use the following steps to manually delete the data f
1. Run `kubectl delete pod aio-dataflow-operator-0 -n azure-iot-operations` to delete the data flow operator pod. Deleting the pod clears the crash status and restarts the pod.

1. Wait for the operator pod to restart and deploy the data flow.
+
+### Data flows error metrics
+
+---
+
+Issue ID: 2382
+
+---
+
+Log signature: N/A
+
+---
+
+Data flows marks message retries and reconnects as errors, and as a result data flows may look unhealthy. This behavior is only seen in previous versions of data flows. Review the logs to determine if the data flow is healthy.
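A minimal way to make that log check with standard kubectl commands; the pod name is whatever the listing returns for your data flow deployment:

```bash
# Find the data flow pods, then read their logs to separate retry/reconnect noise from real failures.
kubectl get pods -n azure-iot-operations | grep -i dataflow
kubectl logs <dataflow-pod-name> -n azure-iot-operations
```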