articles/iot-operations/connect-to-cloud/howto-create-dataflow.md
14 additions & 14 deletions
@@ -5,7 +5,7 @@ author: PatAltimore
ms.author: patricka
ms.subservice: azure-data-flows
ms.topic: how-to
-ms.date: 10/04/2024
+ms.date: 10/07/2024
ai-usage: ai-assisted

#CustomerIntent: As an operator, I want to understand how to create a dataflow to connect data sources.
@@ -60,16 +60,16 @@ To create a dataflow in the operations experience portal, select **Dataflow** >

# [Bicep](#tab/bicep)

-This Bicep template file from [Bicep File to create Dataflow](https://github.com/Azure-Samples/explore-iot-operations/blob/main/samples/quickstarts/dataflow.bicep) deploys the necessary resources for dataflows.
+The [Bicep File to create Dataflow](https://github.com/Azure-Samples/explore-iot-operations/blob/main/samples/quickstarts/dataflow.bicep) deploys the necessary resources for dataflows.

-1. Download the file to your local, and replace the values for `customLocationName`, `aioInstanceName`, `schemaRegistryName`, `opcuaSchemaName`, and `persistentVCName`.
+1. Download the template file and replace the values for `customLocationName`, `aioInstanceName`, `schemaRegistryName`, `opcuaSchemaName`, and `persistentVCName`.
+1. Deploy the resources using the [az stack group](/azure/azure-resource-manager/bicep/deployment-stacks?tabs=azure-powershell) command in your terminal:

-2. Next, deploy the resources using the [az stack group](/azure/azure-resource-manager/bicep/deployment-stacks?tabs=azure-powershell) command in your terminal:
+```azurecli
+az stack group create --name MyDeploymentStack --resource-group $RESOURCE_GROUP --template-file /workspaces/explore-iot-operations/<filename>.bicep --action-on-unmanage 'deleteResources' --deny-settings-mode 'none' --yes
+```

-```azurecli
-az stack group create --name MyDeploymentStack --resource-group $RESOURCE_GROUP --template-file /workspaces/explore-iot-operations/<filename>.bicep --action-on-unmanage 'deleteResources' --deny-settings-mode 'none' --yes
-```
-The overall structure of a dataflow configuration for Bicep is as follows:
+The overall structure of a dataflow configuration for Bicep is as follows:
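For the download-and-replace step above, the values called out (`customLocationName`, `aioInstanceName`, `schemaRegistryName`, `opcuaSchemaName`, and `persistentVCName`) are typically declared near the top of the sample file. The following is a minimal sketch, assuming they appear as `param` declarations; every example value is a placeholder, not content from the sample:

```bicep
// Hypothetical sketch only: replace each placeholder with the names from your own deployment.
param customLocationName string = 'my-custom-location'
param aioInstanceName string = 'my-aio-instance'
param schemaRegistryName string = 'my-schema-registry'
param opcuaSchemaName string = 'opcua-output-delta'
param persistentVCName string = 'my-persistent-vc'
```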
@@ -180,7 +180,7 @@ Configuring an asset as a source is only available in the operations experience

# [Bicep](#tab/bicep)

-The MQTT endpoint is configured in the Bicep template file. This endpoint serves as a source for the dataflow, using the following configuration:
+The MQTT endpoint is configured in the Bicep template file. For example, the following endpoint is a source for the dataflow.

```bicep
{
@@ -196,9 +196,9 @@ The MQTT endpoint is configured in the Bicep template file. This endpoint serves
}
```

-`dataSources`: This is an array of MQTT topic(s) that define where the data will be sourced from. In this example, `azure-iot-operations/data/thermostat` refers to one of the topics in the dataSources array where thermostat data is being published.
+The `dataSources` setting is an array of MQTT topics that define the data source. In this example, `azure-iot-operations/data/thermostat` refers to one of the topics in the dataSources array where thermostat data is published.

-Datasources allow you to specify multiple MQTT or Kafka topics without needing to modify the endpoint configuration. This means the same endpoint can be reused across multiple dataflows, even if the topics vary. To learn more, see [Reuse dataflow endpoints](./howto-configure-dataflow-endpoint.md#reuse-endpoints).
+Datasources allow you to specify multiple MQTT or Kafka topics without needing to modify the endpoint configuration. This means the same endpoint can be reused across multiple dataflows, even if the topics vary. For more information, see [Reuse dataflow endpoints](./howto-configure-dataflow-endpoint.md#reuse-endpoints).

<!-- TODO: Put the right article link here -->
For more information about creating an MQTT endpoint as a dataflow source, see [MQTT Endpoint](howto-configure-mqtt-endpoint.md).
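To illustrate the endpoint-reuse point, the sketch below shows a source operation that lists more than one topic against a single endpoint. It's a minimal example under assumptions: the property names `operationType`, `sourceSettings`, and `endpointRef`, the endpoint name `default`, and the second topic `azure-iot-operations/data/humidifier` aren't taken from this diff.

```bicep
// Minimal sketch (assumed property and endpoint names): one MQTT endpoint, several topics.
{
  operationType: 'Source'
  sourceSettings: {
    endpointRef: 'default'
    dataSources: [
      'azure-iot-operations/data/thermostat'
      'azure-iot-operations/data/humidifier' // hypothetical second topic
    ]
  }
}
```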
@@ -324,9 +324,9 @@ In the operations experience portal, select **Dataflow** > **Add transform (opti
}
```

-#### Specify output schema to transform data
+### Specify output schema to transform data

-The following configuration demonstrates how to define an output schema in your Bicep file. In this example, the schema defines fields such as `asset_id`, `asset_name`, `location`, `temperature`, `manufacturer`, `production_date`, and `serial_number`. Each field is assigned a specific data type (e.g., `string`) and marked as non-nullable. This ensures all incoming messages contain these fields with valid data.
+The following configuration demonstrates how to define an output schema in your Bicep file. In this example, the schema defines fields such as `asset_id`, `asset_name`, `location`, `temperature`, `manufacturer`, `production_date`, and `serial_number`. Each field is assigned a data type and marked as non-nullable. The assignment ensures all incoming messages contain these fields with valid data.

```bicep
var assetDeltaSchema = '''
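As a rough sketch of what such a schema definition can look like, the block below follows a Delta struct JSON layout; the `$schema` value, the wrapper shape, and the field types are assumptions and may differ from the sample file's actual `assetDeltaSchema` content.

```bicep
// Hypothetical sketch of a Delta schema body; the real sample's serialization may differ.
// Field types are illustrative only.
var assetDeltaSchemaSketch = '''
{
  "$schema": "Delta/1.0",
  "type": "object",
  "properties": {
    "type": "struct",
    "fields": [
      { "name": "asset_id", "type": "string", "nullable": false, "metadata": {} },
      { "name": "asset_name", "type": "string", "nullable": false, "metadata": {} },
      { "name": "location", "type": "string", "nullable": false, "metadata": {} },
      { "name": "temperature", "type": "double", "nullable": false, "metadata": {} },
      { "name": "manufacturer", "type": "string", "nullable": false, "metadata": {} },
      { "name": "production_date", "type": "string", "nullable": false, "metadata": {} },
      { "name": "serial_number", "type": "string", "nullable": false, "metadata": {} }
    ]
  }
}
'''
```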
@@ -677,7 +677,7 @@ To configure a destination for the dataflow, specify the endpoint reference and

# [Bicep](#tab/bicep)

-Here is an example of configuring Fabric OneLake as a destination with a static MQTT topic, after deploying Microsoft's Fabric OneLake dataflow endpoint:
+The following is an example of configuring Fabric OneLake as a destination with a static MQTT topic.
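A minimal sketch of what the destination side of that operation might look like, under assumptions: the property names `destinationSettings`, `endpointRef`, and `dataDestination`, plus the endpoint name `onelake-endpoint` and the table name `sensor-data`, are placeholders rather than content from this diff.

```bicep
// Hypothetical sketch: send the dataflow output to a Fabric OneLake dataflow endpoint.
{
  operationType: 'Destination'
  destinationSettings: {
    endpointRef: 'onelake-endpoint' // placeholder endpoint name
    dataDestination: 'sensor-data'  // placeholder OneLake table name
  }
}
```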