articles/iot-operations/troubleshoot/known-issues.md
If you deploy Azure IoT Operations in GitHub Codespaces, shutting down and restarting the codespace causes issues.

Currently, there's no workaround for the issue. If you need a cluster that supports shutting down and restarting, choose one of the options in [Prepare your Azure Arc-enabled Kubernetes cluster](../deploy-iot-ops/howto-prepare-cluster.md).
### Helm package enters a stuck state during update

---

Issue ID: 9928

---

Log signature: `"Message: Update failed for this resource, as there is a conflicting operation in progress. Please try after sometime."`

---
When you update Azure IoT Operations, the Helm package might enter a stuck state, preventing any helm install or upgrade operations from proceeding. This scenario results in the error message `Update failed for this resource, as there is a conflicting operation in progress. Please try after sometime.`, which blocks further updates.
To work around this issue, follow these steps:
1. Identify the stuck components by running the following command:

    ```sh
    helm list -n azure-iot-operations --pending
    ```

    In the output, look for the release name of components, `<component-release-name>`, which have a status of `pending-upgrade` or `pending-install`. This issue might affect the following components:

    - `-adr`
    - `-akri`
    - `-connectors`
    - `-mqttbroker`
    - `-dataflows`
    - `-schemaregistry`
1. Using the `<component-release-name>` values from step 1, retrieve the revision history of each stuck release. Run the following command for **each component from step 1**. For example, if the `-adr` and `-mqttbroker` components are stuck, run the command twice, once for each component.

    Make sure to replace `<component-release-name>` with the release name of each stuck component. In the output, look for the last revision that has a status of `Deployed` or `Superseded` and note the revision number.
1. Using the **revision number from step 2**, roll back the Helm release to the last successful revision. Run the following command for each component, `<component-release-name>`, and its revision number, `<revision-number>`, from steps 1 and 2.

    > You need to repeat steps 2 and 3 for each component that is stuck. Reattempt the upgrade only after all components are rolled back to the last successful revision.
1. After the rollback of each component is complete, reattempt the upgrade using the following command:
    ```sh
    az iot ops update
    ```
    If you receive a message stating `Nothing to upgrade or upgrade complete`, force the upgrade by appending:

    ```sh
    az iot ops upgrade ....... --release-train stable
    ```
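The exact commands for steps 2 and 3 aren't shown in this excerpt. The following is a sketch of what they typically look like, based on standard Helm usage rather than on this article; the placeholders in angle brackets must be replaced with your own values from steps 1 and 2:

```sh
# Step 2 (sketch): list the revision history of a stuck release.
helm history <component-release-name> -n azure-iot-operations

# Step 3 (sketch): roll back to the last revision whose status was
# deployed or superseded in the history output.
helm rollback <component-release-name> <revision-number> -n azure-iot-operations
```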
## MQTT broker issues
This section lists current known issues for the MQTT broker.
This section lists current known issues for the connector for media and the connector for ONVIF.
### Cleanup of unused media-connector resources

---

Issue ID: 2142

---

Log signature: N/A

---

If you delete all the `Microsoft.Media` asset endpoint profiles, the deployment for media processing isn't deleted.

To work around this issue, run the following command using the full name of your media connector deployment:
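The command itself isn't shown in this excerpt. A sketch of its likely shape, assuming the deployment runs in the `azure-iot-operations` namespace; `<full-deployment-name>` is a placeholder for the full media connector deployment name, not a value taken from this article:

```sh
# Delete the leftover media-connector deployment by its full name.
kubectl delete deployment <full-deployment-name> -n azure-iot-operations
```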
### AssetType CRD removal process doesn't complete

---
Log signature: N/A

---
X.509 authentication for custom Kafka endpoints isn't currently supported.

### Data points aren't validated against a schema

---

Issue ID: 8794

---

Log signature: N/A

---

When you create a data flow, you can specify a schema in the source configuration. However, deserializing and validating messages using a schema isn't supported yet. Specifying a schema in the source configuration only allows the operations experience to display the list of data points, but the data points aren't validated against the schema.
### Connection failures with Azure Event Grid

---

When you connect multiple IoT Operations instances to the same Event Grid MQTT namespace, you might encounter connection failures.
To work around this issue, add randomness to the data flow names in your deployment templates.
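One way to add that randomness is to generate a short unique suffix at deployment time and append it to each data flow name. This is an illustrative sketch with made-up names, not the article's own tooling; in a Bicep or ARM template, the built-in `uniqueString()` function serves the same purpose:

```sh
# Build a data flow name with a random 8-character hex suffix so that two
# instances deployed from the same template don't end up with identical names.
suffix=$(head -c 4 /dev/urandom | od -An -tx1 | tr -d ' \n')
dataflow_name="temperature-dataflow-${suffix}"
echo "$dataflow_name"
```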
### Data flow errors after a network disruption

---

Issue ID: 8953

---

Log signature: N/A

---

When the network connection is disrupted, data flows might encounter errors sending messages because of a mismatched producer ID.
To work around this issue, restart your data flow pods.
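A sketch of one way to restart them, assuming the pods run in the `azure-iot-operations` namespace; the label selector is a placeholder, not taken from this article:

```sh
# Delete the data flow pods; their controller recreates them automatically.
# Find the actual label with: kubectl get pods -n azure-iot-operations --show-labels
kubectl delete pods -n azure-iot-operations -l <dataflow-pod-label>
```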
### Disconnections from Kafka endpoints

---

Issue ID: 9289

---

Log signature: N/A

---
If you use control characters in Kafka headers, you might encounter disconnections. Control characters in Kafka headers, such as `0x01`, `0x02`, `0x03`, and `0x04`, are UTF-8 compliant, but the IoT Operations MQTT broker rejects them. This issue happens during the data flow process when Kafka headers are converted to MQTT properties using a UTF-8 parser. Packets with control characters might be treated as invalid, rejected by the broker, and lead to data flow failures.
To work around this issue, avoid using control characters in Kafka headers.
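As a pre-flight check, you can scan header values for control characters before producing to Kafka. This is an illustrative sketch, not part of the product tooling; the header value is a made-up example:

```sh
# Scan a header value for ASCII control characters before publishing.
# The value below deliberately contains 0x02 to show the rejection path.
header_value=$(printf 'trace\002id')
if printf '%s' "$header_value" | LC_ALL=C grep -q '[[:cntrl:]]'; then
  echo "unsafe: header contains control characters"
else
  echo "safe"
fi
```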