articles/iot-operations/connect-to-cloud/howto-configure-dataflow-endpoint.md (18 additions, 0 deletions)
Use the following table to choose the endpoint type to configure:
|[Microsoft Fabric OneLake](howto-configure-fabric-endpoint.md)| For uploading data to Microsoft Fabric OneLake lakehouses. | No | Yes |
|[Local storage](howto-configure-local-storage-endpoint.md)| For sending data to a locally available persistent volume, through which you can upload data via Azure Container Storage enabled by Azure Arc edge volumes. | No | Yes |
## Dataflows must use local MQTT broker endpoint
When you create a dataflow, you specify the source and destination endpoints. The dataflow moves data from the source endpoint to the destination endpoint. You can use the same endpoint for multiple dataflows, and you can use the same endpoint as both the source and destination in a dataflow.
However, using custom endpoints as both the source and destination in a dataflow isn't supported. This means the built-in MQTT broker in Azure IoT Operations must be either the source or destination for every dataflow. To avoid dataflow deployment failures, use the [default MQTT dataflow endpoint](./howto-configure-mqtt-endpoint.md#default-endpoint) as either the source or destination for every dataflow.
Specifically, each dataflow must use an MQTT endpoint with the host `aio-broker` as either its source or destination. Using the default endpoint isn't strictly required: you can create other dataflow endpoints that point to the local MQTT broker, as long as the host is `aio-broker`. However, to avoid confusion and manageability issues, use the default endpoint.
The following table shows the supported scenarios:
| Scenario | Supported |
|----------|-----------|
| Default endpoint as source | Yes |
| Default endpoint as destination | Yes |
| Custom endpoint as source | Yes, if destination is default endpoint or an MQTT endpoint with host `aio-broker`|
| Custom endpoint as destination | Yes, if source is default endpoint or an MQTT endpoint with host `aio-broker`|
| Custom endpoint as source and destination | No, unless one of them is an MQTT endpoint with host `aio-broker` |
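
For example, a dataflow with a custom Kafka endpoint as the source and the default endpoint as the destination satisfies the requirement. A minimal sketch of the operations list follows; the endpoint name `my-kafka-endpoint` and the topic names are hypothetical:

```yaml
# Custom endpoint as source, default local MQTT broker endpoint as destination.
- operationType: Source
  sourceSettings:
    endpointRef: my-kafka-endpoint      # hypothetical custom Kafka endpoint
    dataSources:
      - sensor-events
- operationType: Destination
  destinationSettings:
    endpointRef: default                # host aio-broker, satisfies the requirement
    dataDestination: azure-iot-operations/data/sensors
```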
## Reuse endpoints
Think of each dataflow endpoint as a bundle of configuration settings: where the data comes from or goes to (the `host` value), how to authenticate with the endpoint, and other settings like TLS configuration or batching preference. Create an endpoint once, and you can reuse it in multiple dataflows where these settings are the same.
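
As a sketch, two dataflows can reference the same custom endpoint by name and differ only in topics. The endpoint name `my-eventhub-endpoint` and the topic names here are hypothetical:

```yaml
# Dataflow 1: thermostat telemetry to Event Hubs
- operationType: Source
  sourceSettings:
    endpointRef: default
    dataSources:
      - thermostats/telemetry
- operationType: Destination
  destinationSettings:
    endpointRef: my-eventhub-endpoint   # reused custom endpoint
    dataDestination: thermostat-hub

# Dataflow 2: humidifier telemetry through the same endpoint
- operationType: Source
  sourceSettings:
    endpointRef: default
    dataSources:
      - humidifiers/telemetry
- operationType: Destination
  destinationSettings:
    endpointRef: my-eventhub-endpoint   # same endpoint, different topic
    dataDestination: humidifier-hub
```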
articles/iot-operations/connect-to-cloud/howto-configure-mqtt-endpoint.md (12 additions, 5 deletions)
ms.author: patricka
ms.service: azure-iot-operations
ms.subservice: azure-data-flows
ms.topic: how-to
ms.date: 11/01/2024
ai-usage: ai-assisted
#CustomerIntent: As an operator, I want to understand how to configure dataflow endpoints for MQTT sources and destinations in Azure IoT Operations so that I can send data to and from MQTT brokers.
MQTT dataflow endpoints are used for MQTT sources and destinations.
## Azure IoT Operations local MQTT broker
Azure IoT Operations provides a [built-in local MQTT broker](../manage-mqtt-broker/overview-iot-mq.md) that you can use with dataflows. You can use the MQTT broker as a source to receive messages from other systems or as a destination to send messages to other systems.
### Default endpoint
When you deploy Azure IoT Operations, an MQTT broker dataflow endpoint named "default" is created with default settings. You can use this endpoint as a source or destination for dataflows.
> [!IMPORTANT]
> The default endpoint **must always be used as either the source or destination in every dataflow**. To learn more, see [Dataflows must use local MQTT broker endpoint](./howto-configure-dataflow-endpoint.md#dataflows-must-use-local-mqtt-broker-endpoint).
36
+
37
+
The default endpoint uses the following settings:
- Host: `aio-broker:18883` through the [default MQTT broker listener](../manage-mqtt-broker/howto-configure-brokerlistener.md#default-brokerlistener)
- Authentication: service account token (SAT) through the [default BrokerAuthentication resource](../manage-mqtt-broker/howto-configure-authentication.md#default-brokerauthentication-resource)
- TLS: Enabled
- Trusted CA certificate: The default CA certificate `azure-iot-operations-aio-ca-trust-bundle` from the [default root CA](../deploy-iot-ops/concept-default-root-ca.md)
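
Put together, the default endpoint corresponds to a resource roughly like the following Kubernetes-style sketch. The exact field names and structure are approximations for illustration, not a verbatim schema:

```yaml
kind: DataflowEndpoint
metadata:
  name: default
  namespace: azure-iot-operations
spec:
  endpointType: Mqtt
  mqttSettings:
    host: aio-broker:18883               # default MQTT broker listener
    authentication:
      method: ServiceAccountToken        # SAT from the default BrokerAuthentication resource
    tls:
      mode: Enabled
      trustedCaCertificateConfigMapRef: azure-iot-operations-aio-ca-trust-bundle
```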
> [!CAUTION]
> Don't delete the default endpoint. If you delete the default endpoint, you must recreate it with the same settings.
To view or edit the default MQTT broker endpoint settings:
You can also create new local MQTT broker endpoints with custom settings. For example, you can create a new MQTT broker endpoint using a different port, authentication, or authorization settings. However, you must still always use the default endpoint as either the source or destination in every dataflow, even if you create new endpoints.
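
For example, a custom endpoint that still targets the local broker might look like this sketch. The name, port, and authentication method are hypothetical; the host must remain `aio-broker` for the endpoint to count toward the requirement:

```yaml
kind: DataflowEndpoint
metadata:
  name: my-local-broker                  # hypothetical custom endpoint
spec:
  endpointType: Mqtt
  mqttSettings:
    host: aio-broker:8885                # different listener port, same aio-broker host
    authentication:
      method: X509Certificate            # illustrative alternative to SAT
```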
#CustomerIntent: As an operator, I want to understand how to create a dataflow to connect data sources.
:::image type="content" source="media/howto-create-dataflow/dataflow.svg" alt-text="Diagram of a dataflow showing flow from source to transform then destination.":::
To define the source and destination, you need to configure the dataflow endpoints. The transformation is optional and can include operations like enriching the data, filtering the data, and mapping the data to another field.
> [!IMPORTANT]
> Each dataflow must have the Azure IoT Operations local MQTT broker default endpoint [as *either* the source or destination](#proper-dataflow-configuration).
You can use the operations experience in Azure IoT Operations to create a dataflow. The operations experience provides a visual interface to configure the dataflow. You can also use Bicep to create a dataflow using a Bicep template file, or use Kubernetes to create a dataflow using a YAML file.
Review the following sections to learn how to configure the operation types of the dataflow.
To configure a source for the dataflow, specify the endpoint reference and a list of data sources for the endpoint.
### Use asset as source
# [Portal](#tab/portal)
Configuring an asset as a source is only available in the operations experience.
---
When using an asset as the source, the asset definition is used to infer the schema for the dataflow. The asset definition includes the schema for the asset's datapoints. To learn more, see [Manage asset configurations remotely](../discover-manage-assets/howto-manage-assets-remotely.md).
Once configured, the data from the asset reaches the dataflow via the local MQTT broker. So, when you use an asset as the source, the dataflow effectively uses the local MQTT broker default endpoint as its source.
### Use default MQTT endpoint as source
# [Portal](#tab/portal)
Because `dataSources` allows you to specify MQTT or Kafka topics without modifying the endpoint configuration, you can reuse the endpoint for multiple dataflows even if the topics are different.
---
If the default endpoint isn't used as the source, it must be used as the [destination](#destination). To learn more, see [Dataflows must use local MQTT broker endpoint](./howto-configure-dataflow-endpoint.md#dataflows-must-use-local-mqtt-broker-endpoint).
### Use custom MQTT or Kafka dataflow endpoint as source
For more information about schema registry, see Understand message schemas.
To configure a destination for the dataflow, specify the endpoint reference and data destination. You can specify a list of data destinations for the endpoint.
To send data to a destination other than the local MQTT broker, create a dataflow endpoint. To learn how, see [Configure dataflow endpoints](howto-configure-dataflow-endpoint.md). If the destination isn't the local MQTT broker, the local MQTT broker must be the source. To learn more, see [Dataflows must use local MQTT broker endpoint](./howto-configure-dataflow-endpoint.md#dataflows-must-use-local-mqtt-broker-endpoint).
> [!IMPORTANT]
> Storage endpoints require a schema reference. If you've created storage destination endpoints for Microsoft Fabric OneLake, ADLS Gen 2, Azure Data Explorer, or local storage, you must specify a schema reference.
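
As a sketch, a schema reference might appear alongside a storage destination like this. The endpoint name and schema URI are hypothetical, and the exact `schemaRef` format and placement should be checked against the schema registry documentation:

```yaml
- operationType: builtInTransformation
  builtInTransformationSettings:
    schemaRef: aio-sr://exampleNamespace/myTelemetrySchema:1.0.0   # hypothetical schema reference
    serializationFormat: Parquet
- operationType: Destination
  destinationSettings:
    endpointRef: my-adls-endpoint        # hypothetical ADLS Gen 2 endpoint
    dataDestination: telemetry-container
```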
## Example
The following example is a dataflow configuration that uses the MQTT endpoint for the source and destination. The source filters the data from the MQTT topic `azure-iot-operations/data/thermostat`. The transformation converts the temperature to Fahrenheit and filters the data where the temperature multiplied by the humidity is less than 100000. The destination sends the data to the MQTT topic `factory`.
```yaml
    # Filter the data from the MQTT topic azure-iot-operations/data/thermostat
    dataSources:
      - azure-iot-operations/data/thermostat
  # Transformation optional
  - operationType: builtInTransformation
    builtInTransformationSettings:
      # Filter the data where temperature * "Tag 10" < 100000
      filter:
        - inputs:
            - 'temperature.Value'
            - '"Tag 10".Value'
          expression: '$1 * $2 < 100000'
      map:
        # Passthrough all values by default
        - inputs:
            - '*'
          output: '*'
        # Convert temperature to Fahrenheit and output it to TemperatureF
        - inputs:
            - temperature.Value
          output: TemperatureF
          expression: cToF($1)
        # Extract the "Tag 10" value and output it to Humidity
        - inputs:
            - '"Tag 10".Value'
          output: 'Humidity'
  - operationType: Destination
    destinationSettings:
      # Use the default MQTT endpoint as the destination
      endpointRef: default
      # Send the data to the MQTT topic factory
      dataDestination: factory
```
---
To see more examples of dataflow configurations, see [Azure REST API - Dataflow](/rest/api/iotoperations/dataflow/create-or-update#examples) and the [quickstart Bicep](https://github.com/Azure-Samples/explore-iot-operations/blob/main/samples/quickstarts/quickstart.bicep).
## Verify a dataflow is working
Follow [Tutorial: Bi-directional MQTT bridge to Azure Event Grid](tutorial-mqtt-bridge.md) to verify the dataflow is working.
To ensure the dataflow is working as expected, verify the following:
- The default MQTT dataflow endpoint [must be used as *either* the source or destination](./howto-configure-dataflow-endpoint.md#dataflows-must-use-local-mqtt-broker-endpoint).
- The dataflow profile exists and is referenced in the dataflow configuration.
- Source is either an MQTT endpoint, Kafka endpoint, or an asset. [Storage type endpoints can't be used as a source](./howto-configure-dataflow-endpoint.md).
- When using Event Grid as the source, the [dataflow profile instance count](./howto-configure-dataflow-profile.md#scaling) is set to 1 because Event Grid MQTT broker doesn't support shared subscriptions.
- When using Event Hubs as the source, each event hub in the namespace is a separate Kafka topic and must be specified as the data source.
- Transformation, if used, is configured with proper syntax, including consideration for proper [escaping of special characters](./concept-dataflow-mapping.md#escaping).
- When using storage type endpoints as destination, a [schema reference is specified](#serialize-data-according-to-a-schema).
## Next steps
- [Map data by using dataflows](concept-dataflow-mapping.md)