articles/iot-operations/connect-to-cloud/concept-schema-registry.md (1 addition, 1 deletion)
@@ -97,7 +97,7 @@ Each dataflow source can optionally specify a message schema. Currently, dataflo
 Asset sources have a predefined message schema that was created by the connector for OPC UA.

-Schemas can be uploaded for MQTT sources. Currently, Azure IoT Operations supports JSON for source schemas, also known as input schemas. In the operations experience, you can select an existing schema or upload one while defining an MQTT source:
+Schemas can be uploaded for message broker sources. Currently, Azure IoT Operations supports JSON for source schemas, also known as input schemas. In the operations experience, you can select an existing schema or upload one while defining a message broker source:

 :::image type="content" source="./media/concept-schema-registry/upload-schema.png" alt-text="Screenshot that shows uploading a message schema in the operations experience portal.":::
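The source schemas referenced in this change are JSON documents describing the shape of incoming messages. As an illustrative sketch only (the draft version and field names are assumptions, not taken from the changed articles), an uploaded input schema might look like:

```json
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "type": "object",
  "properties": {
    "deviceId": { "type": "string" },
    "temperature": { "type": "number" }
  },
  "required": ["deviceId"]
}
```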
articles/iot-operations/connect-to-cloud/howto-create-dataflow.md (44 additions, 26 deletions)
@@ -173,26 +173,27 @@ To configure a source for the dataflow, specify the endpoint reference and a lis
 If the default endpoint isn't used as the source, it must be used as the [destination](#destination). To learn more, see [Dataflows must use local MQTT broker endpoint](./howto-configure-dataflow-endpoint.md#dataflows-must-use-local-mqtt-broker-endpoint).

-### Option 1: Use default MQTT endpoint as source
+### Option 1: Use default message broker endpoint as source

 # [Portal](#tab/portal)

-1. Under **Source details**, select **MQTT**.
+1. Under **Source details**, select **Message broker**.

-   :::image type="content" source="media/howto-create-dataflow/dataflow-source-mqtt.png" alt-text="Screenshot using operations experience to select MQTT as the source endpoint.":::
+   :::image type="content" source="media/howto-create-dataflow/dataflow-source-mqtt.png" alt-text="Screenshot using operations experience to select message broker as the source endpoint.":::

-1. Enter the following settings for the MQTT source:
-   | MQTT topic | The MQTT topic filter to subscribe to for incoming messages. See [Configure MQTT or Kafka topics](#configure-data-sources-mqtt-or-kafka-topics). |
+1. Enter the following settings for the message broker source:
+   | Dataflow endpoint | Select *default* to use the default MQTT message broker endpoint. |
+   | Topic | The topic filter to subscribe to for incoming messages. See [Configure MQTT or Kafka topics](#configure-data-sources-mqtt-or-kafka-topics). |
    | Message schema | The schema to use to deserialize the incoming messages. See [Specify schema to deserialize data](#specify-source-schema). |

 1. Select **Apply**.

 # [Bicep](#tab/bicep)

-The MQTT endpoint is configured in the Bicep template file. For example, the following endpoint is a source for the dataflow.
+The message broker endpoint is configured in the Bicep template file. For example, the following endpoint is a source for the dataflow.

 ```bicep
 sourceSettings: {
@@ -208,7 +209,7 @@ Here, `dataSources` allow you to specify multiple MQTT or Kafka topics without n

 # [Kubernetes (preview)](#tab/kubernetes)

-For example, to configure a source using an MQTT endpoint and two MQTT topic filters, use the following configuration:
+For example, to configure a source using a message broker endpoint and two topic filters, use the following configuration:

 ```yaml
 sourceSettings:
@@ -256,14 +257,26 @@ Once configured, the data from the asset reached the dataflow via the local MQTT
 If you created a custom MQTT or Kafka dataflow endpoint (for example, to use with Event Grid or Event Hubs), you can use it as the source for the dataflow. Remember that storage type endpoints, like Data Lake or Fabric OneLake, can't be used as source.

-To configure, use Kubernetes YAML or Bicep. Replace placeholder values with your custom endpoint name and topics.
-
 # [Portal](#tab/portal)

-Using a custom MQTT or Kafka endpoint as a source is currently not supported in the operations experience.
+1. Under **Source details**, select **Message broker**.
+
+   :::image type="content" source="media/howto-create-dataflow/dataflow-source-custom.png" alt-text="Screenshot using operations experience to select a custom message broker as the source endpoint.":::
+
+1. Enter the following settings for the message broker source:
+   | Dataflow endpoint | Use the **Reselect** button to select a custom MQTT or Kafka dataflow endpoint. For more information, see [Configure MQTT dataflow endpoints](howto-configure-mqtt-endpoint.md) or [Configure Azure Event Hubs and Kafka dataflow endpoints](howto-configure-kafka-endpoint.md).|
+   | Topic | The topic filter to subscribe to for incoming messages. See [Configure MQTT or Kafka topics](#configure-data-sources-mqtt-or-kafka-topics). |
+   | Message schema | The schema to use to deserialize the incoming messages. See [Specify schema to deserialize data](#specify-source-schema). |
+
+1. Select **Apply**.

 # [Bicep](#tab/bicep)

+Replace placeholder values with your custom endpoint name and topics.
+
 ```bicep
 sourceSettings: {
   endpointRef: '<CUSTOM_ENDPOINT_NAME>'
@@ -277,6 +290,8 @@ sourceSettings: {

 # [Kubernetes (preview)](#tab/kubernetes)

+Replace placeholder values with your custom endpoint name and topics.
+
 ```yaml
 sourceSettings:
   endpointRef: <CUSTOM_ENDPOINT_NAME>
@@ -298,20 +313,20 @@ When the source is an MQTT (Event Grid included) endpoint, you can use the MQTT

 # [Portal](#tab/portal)

-In the operations experience dataflow **Source details**, select **MQTT**, then use the **MQTT topic** field to specify the MQTT topic filter to subscribe to for incoming messages.
+In the operations experience dataflow **Source details**, select **Message broker**, then use the **Topic** field to specify the MQTT topic filter to subscribe to for incoming messages.

 > [!NOTE]
-> Only one MQTT topic filter can be specified in the operations experience. To use multiple MQTT topic filters, use Bicep or Kubernetes.
+> Only one topic filter can be specified in the operations experience. To use multiple topic filters, use Bicep or Kubernetes.

 # [Bicep](#tab/bicep)

 ```bicep
 sourceSettings: {
-  endpointRef: '<MQTT_ENDPOINT_NAME>'
+  endpointRef: '<MESSAGE_BROKER_ENDPOINT_NAME>'
   dataSources: [
-    '<MQTT_TOPIC_FILTER_1>'
-    '<MQTT_TOPIC_FILTER_2>'
-    // Add more MQTT topic filters as needed
+    '<TOPIC_FILTER_1>'
+    '<TOPIC_FILTER_2>'
+    // Add more topic filters as needed
   ]
 }
 ```
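The `+` and `#` wildcard semantics used by these topic filters can be sketched with a small matcher. This is illustrative only, not the dataflow implementation:

```python
def topic_matches(topic_filter: str, topic: str) -> bool:
    """Check whether a topic matches an MQTT-style topic filter.

    '+' matches exactly one level; '#' (valid only as the last level)
    matches that level and everything below it.
    """
    filter_levels = topic_filter.split("/")
    topic_levels = topic.split("/")
    for i, level in enumerate(filter_levels):
        if level == "#":
            return True  # multi-level wildcard swallows the rest
        if i >= len(topic_levels):
            return False  # filter is longer than the topic
        if level != "+" and level != topic_levels[i]:
            return False  # literal level mismatch
    return len(filter_levels) == len(topic_levels)

print(topic_matches("thermostats/+/telemetry/temperature",
                    "thermostats/device1/telemetry/temperature"))  # True
```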
@@ -334,14 +349,14 @@ Here, the wildcard `+` is used to select all devices under the `thermostats` and

 ```yaml
 sourceSettings:
-  endpointRef: <MQTT_ENDPOINT_NAME>
+  endpointRef: <ENDPOINT_NAME>
   dataSources:
-    - <MQTT_TOPIC_FILTER_1>
-    - <MQTT_TOPIC_FILTER_2>
-    # Add more MQTT topic filters as needed
+    - <TOPIC_FILTER_1>
+    - <TOPIC_FILTER_2>
+    # Add more topic filters as needed
 ```

-Example with multiple MQTT topic filters with wildcards:
+Example with multiple topic filters with wildcards:

 ```yaml
 sourceSettings:
@@ -357,11 +372,11 @@ Here, the wildcard `+` is used to select all devices under the `thermostats` and

 ##### Shared subscriptions

-To use shared subscriptions with MQTT sources, you can specify the shared subscription topic in the form of `$shared/<GROUP_NAME>/<TOPIC_FILTER>`.
+To use shared subscriptions with message broker sources, you can specify the shared subscription topic in the form of `$shared/<GROUP_NAME>/<TOPIC_FILTER>`.

 # [Portal](#tab/portal)

-In operations experience dataflow **Source details**, select **MQTT** and use the **MQTT topic** field to specify the shared subscription group and topic.
+In the operations experience dataflow **Source details**, select **Message broker** and use the **Topic** field to specify the shared subscription group and topic.

 # [Bicep](#tab/bicep)
@@ -384,7 +399,7 @@ sourceSettings:
 ---

-If the instance count in the [dataflow profile](howto-configure-dataflow-profile.md) is greater than one, shared subscription is automatically enabled for all dataflows that use MQTT source. In this case, the `$shared` prefix is added and the shared subscription group name automatically generated. For example, if you have a dataflow profile with an instance count of 3, and your dataflow uses an MQTT endpoint as source configured with topics `topic1` and `topic2`, they are automatically converted to shared subscriptions as `$shared/<GENERATED_GROUP_NAME>/topic1` and `$shared/<GENERATED_GROUP_NAME>/topic2`.
+If the instance count in the [dataflow profile](howto-configure-dataflow-profile.md) is greater than one, shared subscription is automatically enabled for all dataflows that use a message broker source. In this case, the `$shared` prefix is added and the shared subscription group name is automatically generated. For example, if you have a dataflow profile with an instance count of 3, and your dataflow uses a message broker endpoint as source configured with topics `topic1` and `topic2`, they are automatically converted to shared subscriptions as `$shared/<GENERATED_GROUP_NAME>/topic1` and `$shared/<GENERATED_GROUP_NAME>/topic2`.

 You can explicitly create a topic named `$shared/mygroup/topic` in your configuration. However, adding the `$shared` topic explicitly isn't recommended since the `$shared` prefix is automatically added when needed. Dataflows can make optimizations with the group name if it isn't set. For example, if `$shared` isn't set, dataflows only have to operate over the topic name.
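The explicit shared-subscription form described above can be sketched in the Kubernetes YAML configuration like this (the endpoint name `default`, the group `mygroup`, and the topic `topic1` are illustrative, not values from the docs):

```yaml
sourceSettings:
  endpointRef: default
  dataSources:
    # Explicit shared subscription: group "mygroup", topic filter "topic1".
    # Usually unnecessary, since the $shared prefix is added automatically
    # when the dataflow profile instance count is greater than one.
    - $shared/mygroup/topic1
```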
@@ -402,7 +417,10 @@ To configure the Kafka topics:

 # [Portal](#tab/portal)

-Using a Kafka endpoint as a source is currently not supported in the operations experience.
+In the operations experience dataflow **Source details**, select **Message broker**, then use the **Topic** field to specify the Kafka topic filter to subscribe to for incoming messages.
+
+> [!NOTE]
+> Only one topic filter can be specified in the operations experience. To use multiple topic filters, use Bicep or Kubernetes.

 # [Bicep](#tab/bicep)
@@ -443,7 +461,7 @@ To configure the schema used to deserialize the incoming messages from a source:

 # [Portal](#tab/portal)

-In operations experience dataflow **Source details**, select **MQTT** and use the **Message schema** field to specify the schema. You can use the **Upload** button to upload a schema file first. To learn more, see [Understand message schemas](concept-schema-registry.md).
+In the operations experience dataflow **Source details**, select **Message broker** and use the **Message schema** field to specify the schema. You can use the **Upload** button to upload a schema file first. To learn more, see [Understand message schemas](concept-schema-registry.md).