When you add a new asset with a new asset endpoint profile to the OPC UA broker and trigger a reconfiguration, the deployment of the `opc.tcp` pods changes to accommodate the new secret mounts for username and password. If the new mount fails for some reason, the pod doesn't restart and therefore the old flow for the correctly configured assets stops as well.
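A minimal diagnostic sketch for this issue, assuming the default `azure-iot-operations` namespace; the pod name filter is an assumption based on the `opc.tcp` deployment name mentioned above:

```bash
# Find the OPC UA connector (opc.tcp) pods.
kubectl get pods -n azure-iot-operations | grep opc.tcp

# Inspect the pod events for failed secret volume mounts for the username and password.
kubectl describe pod <opc-tcp-pod-name> -n azure-iot-operations
```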
### An OPC UA server modeled as a device can only have one inbound endpoint of type "Microsoft.OpcUa"

---

Issue ID: 2411

---

Log signature:

`2025-07-24T13:29:30.280Z aio-opc-supervisor-85b8c78df5-26tn5 - Maintaining the new asset test-opcua-asset | - | 1 is skipped because the endpoint profile test-opcua.opcplc-e2e-anon-000000 is not present`

---

When you create an OPC UA device, you can only have one inbound endpoint of type `Microsoft.OpcUa`. Currently, any other endpoints aren't used.

Workaround: Create multiple devices, each with a single endpoint, if you want to use namespace assets.

An OPC UA namespaced asset can only have a single dataset. Currently, any other datasets aren't used.

Workaround: Create multiple namespace assets, each with a single dataset.

### Data spike every 2.5 hours with some OPC UA simulators

---

Issue ID: 6513

---

Log signature: Increased message volume every 2.5 hours

---

Data values spike every 2.5 hours when you use particular OPC UA simulators, which causes CPU and memory spikes. This issue isn't seen with the OPC PLC simulator used in the quickstarts. No data is lost, but you can see an increase in the volume of data published from the server to the MQTT broker.

### No message schema generated if selected nodes in a dataset reference the same complex data type definition

---

Issue ID: 7369

---

Log signature: `An item with the same key has already been added. Key: <element name of the data type>`

---

No message schema is generated if selected nodes in a dataset reference the same complex data type definition (a UDT of type struct or enum).

If you select data points (node IDs) for a dataset that share non-OPC UA namespace complex type definitions (struct or enum), the JSON schema isn't generated and the default open schema is shown when you create a data flow instead. For example, if the dataset contains three values of a data type, the following table shows whether a schema is successfully generated. You can substitute `int` with any OPC UA built-in or primitive type such as `string`, `double`, `float`, or `long`:

| Type of Value 1 | Type of Value 2 | Type of Value 3 | Successfully generates schema |

To work around this issue, you can either:

- Split the dataset across two or more assets.
- Manually upload a schema.
- Use the default nonschema experience in the data flow designer.
## Connector for media and connector for ONVIF issues
This section lists current known issues for the connector for media and the connector for ONVIF.

---

Sometimes, when you attempt to uninstall Azure IoT Operations from the cluster, the system reaches a state where the CRD removal job is stuck in a pending state, which blocks the cleanup of Azure IoT Operations.

To work around this issue, complete the following steps to manually delete the CRD and finish the uninstall (the commands are sketched after the steps):

1. Uninstall the Helm release without running the hook: `helm uninstall aio-<id>-connectors -n azure-iot-operations --no-hooks`
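The remaining cleanup might look like the following sketch. The `grep` filter assumes the Azure IoT Operations CRD names contain `iotoperations`; verify the actual names in your cluster before deleting anything:

```bash
# List the custom resource definitions installed by Azure IoT Operations
# (the name filter is an assumption; confirm against your cluster).
kubectl get crds | grep iotoperations

# Manually delete the CRD that the stuck removal job reports.
kubectl delete crd <crd-name>

# Uninstall the connectors Helm release without running the hook.
helm uninstall aio-<id>-connectors -n azure-iot-operations --no-hooks
```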
### Media and ONVIF devices with an underscore character in the endpoint name are ignored

---

Issue ID: 5712

---

Log signature: N/A

---

If you create a media or ONVIF device with an endpoint name that contains an underscore ("_") character, the connector for media ignores the device.

To work around this issue, use a hyphen ("-") instead of an underscore in the endpoint name.

### Media connector doesn't use the path in destination configuration

---

Issue ID: 6797

---

Log signature: N/A

---

Media assets with a task type of "snapshot-to-fs" or "clip-to-fs" don't honor the path in the destination configuration. Instead, they use the "Additional configuration" path field.

### Media connector ignores MQTT topic setting in asset

---

Issue ID: 6780

---

Log signature: N/A

---

The media connector ignores the MQTT destination topic setting in the asset. Instead, it uses the default topic: `/azure-iot-operations/data/<asset-name>/snapshot-to-mqtt`.
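As a quick check, you can subscribe to the default topic to see where snapshots are actually published. This is a minimal sketch using the Mosquitto client; the broker host, port, and any TLS or authentication settings are placeholders that depend on your MQTT broker configuration:

```bash
# Subscribe to the default snapshot topic for a given asset.
# Replace the host, port, and asset name to match your deployment;
# add TLS and authentication options if your broker requires them.
mosquitto_sub -h <broker-host> -p <broker-port> \
  -t "/azure-iot-operations/data/<asset-name>/snapshot-to-mqtt" -v
```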
### Media connector inbound endpoint addresses aren't fully validated

---

Issue ID: 2679

---

Log signature: N/A

---

In the public preview release, the media connector accepts device inbound endpoint addresses with the following schemes: `async`, `cache`, `concat`, `concatf`, `crypto`, `data`, `fd`, `ffrtmpcrypt`, `ffrtmphttp`, `file`, `ftp`, `gopher`, `gophers`, `hls`, `http`, `httpproxy`, `https`, `mmsh`, `mmst`, `pipe`, `rtmp`, `rtmpe`, `rtmps`, `rtmpt`, `rtmpte`, `rtmpts`, `rtp`, `srtp`, `subfile`, `tcp`, `tls`, `udp`, `udplite`, `unix`, `ipfx`, `ipns`.

This enables input data from multiple source types. However, because the output configuration is based on the `streamConfiguration`, the possibilities for using data from these sources are limited.

## Asset discovery with Akri services issues

This section lists current known issues for asset discovery with Akri services.

### Asset discovery doesn't work for one hour after upgrade

---

Issue ID: 0407

---

Log signature: N/A

---

When you upgrade the Akri services, you might experience some loss of messages and assets for an hour after the upgrade.

To work around this issue, wait for an hour after the upgrade and run the asset detection scenario again.
282
255
283
-
1. Run `kubectl get pods -n azure-iot-operations`.
284
-
In the output, Verify _aio-dataflow-operator-0_ is only data flow operator pod running.
256
+
## Data flows issues
285
257
286
-
1. Run `kubectl logs --namespace azure-iot-operations aio-dataflow-operator-0` to check the logs for the data flow operator pod.
258
+
This section lists current known issues for data flows.
287
259
288
-
In the output, check for the final log entry:
260
+
### Data flow resources aren't visible in the operations experience web UI
289
261
290
-
`Dataflow pod had error: Bad pod condition: Pod 'aio-dataflow-operator-0' container 'aio-dataflow-operator' stuck in a bad state due to 'CrashLoopBackOff'`
262
+
---
291
263
292
-
1. Run the _kubectl logs_ command again with the `--previous` option.
`Failed to create webhook cert resources: Failed to update ApiError: Internal error occurred: failed calling webhook "webhook.cert-manager.io" [...]`.
299
-
Issue ID:2382
300
-
If you see both log entries from the two _kubectl log_ commands, the cert-manager wasn't ready or running.
270
+
---
301
271
302
-
1. Run `kubectl delete pod aio-dataflow-operator-0 -n azure-iot-operations` to delete the data flow operator pod. Deleting the pod clears the crash status and restarts the pod.
272
+
Data flow custom resources created in your cluster using Kubernetes aren't visible in the operations experience web UI. This result is expected because [managing Azure IoT Operations components using Kubernetes is in preview](../deploy-iot-ops/howto-manage-update-uninstall.md#manage-components-using-kubernetes-deployment-manifests-preview), and synchronizing resources from the edge to the cloud isn't currently supported.
303
273
304
-
1. Wait for the operator pod to restart and deploy the data flow.
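For example, a minimal sketch of this workaround, assuming a Bicep template named `dataflow.bicep` with a `dataflowName` parameter (both hypothetical names), generates a random suffix at deployment time:

```bash
# Generate a short random suffix so each instance gets a unique data flow name,
# which in turn produces a unique MQTT client ID.
SUFFIX=$(openssl rand -hex 3)

# Pass the randomized name to the deployment (template and parameter names are illustrative).
az deployment group create \
  --resource-group <resource-group> \
  --template-file dataflow.bicep \
  --parameters dataflowName="temperature-dataflow-${SUFFIX}"
```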
### Data flow deployment doesn't complete

---

Issue ID: 9411

---

Log signature:

`"Dataflow pod had error: Bad pod condition: Pod 'aio-dataflow-operator-0' container 'aio-dataflow-operator' stuck in a bad state due to 'CrashLoopBackOff'"`

`"Failed to create webhook cert resources: Failed to update ApiError: Internal error occurred: failed calling webhook "webhook.cert-manager.io" [...]"`

---

When you create a new data flow, it might not finish deployment. The cause is that `cert-manager` wasn't ready or running.

To work around this issue, use the following steps to manually delete the data flow operator pod to clear the crash status (the commands are collected in the sketch after the list):

1. Run `kubectl get pods -n azure-iot-operations`.

    In the output, verify that _aio-dataflow-operator-0_ is the only data flow operator pod running.

1. Run `kubectl logs --namespace azure-iot-operations aio-dataflow-operator-0` to check the logs for the data flow operator pod.

    In the output, check for the final log entry:

    `Dataflow pod had error: Bad pod condition: Pod 'aio-dataflow-operator-0' container 'aio-dataflow-operator' stuck in a bad state due to 'CrashLoopBackOff'`

1. Run the _kubectl logs_ command again with the `--previous` option.

    In the output, check for the log entry:

    `Failed to create webhook cert resources: Failed to update ApiError: Internal error occurred: failed calling webhook "webhook.cert-manager.io" [...]`

    If you see both log entries from the two _kubectl logs_ commands, cert-manager wasn't ready or running.

1. Run `kubectl delete pod aio-dataflow-operator-0 -n azure-iot-operations` to delete the data flow operator pod. Deleting the pod clears the crash status and restarts the pod.

1. Wait for the operator pod to restart and deploy the data flow.
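Collected in one place, the commands from the preceding steps look like the following sketch; the namespace and pod name come from the issue description:

```bash
# Verify that aio-dataflow-operator-0 is the only data flow operator pod running.
kubectl get pods -n azure-iot-operations

# Check the current and previous logs for the CrashLoopBackOff and webhook cert errors.
kubectl logs --namespace azure-iot-operations aio-dataflow-operator-0
kubectl logs --namespace azure-iot-operations aio-dataflow-operator-0 --previous

# Delete the operator pod to clear the crash status; it restarts and redeploys the data flow.
kubectl delete pod aio-dataflow-operator-0 -n azure-iot-operations
```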
### Data flows error metrics

---

Issue ID: 2382

---

Log signature: N/A

---

Data flows marks message retries and reconnects as errors, and as a result data flows might look unhealthy. This behavior is only seen in previous versions of data flows. Review the logs to determine whether the data flow is healthy.
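A minimal sketch of how you might review those logs, assuming the default `azure-iot-operations` namespace and that the data flow pod names contain `dataflow` (both assumptions to adapt to your deployment):

```bash
# Find the data flow pods (names vary by deployment).
kubectl get pods -n azure-iot-operations | grep dataflow

# Review a data flow pod's logs; entries for retries and reconnects marked as
# errors don't necessarily mean the data flow is unhealthy.
kubectl logs -n azure-iot-operations <data-flow-pod-name>
```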