
Commit dccd337

More dataflow changes
1 parent affad6c commit dccd337

5 files changed (+16, -16 lines)

articles/iot-operations/connect-to-cloud/howto-configure-adx-endpoint.md

Lines changed: 1 addition & 1 deletion
@@ -67,7 +67,7 @@ If using system-assigned managed identity, in Azure portal, go to your Azure IoT
 
 # [Portal](#tab/portal)
 
-1. In the operations experience, select the **Dataflow endpoints** tab.
+1. In the operations experience, select the **Data flow endpoints** tab.
 
 1. Under **Create new data flow endpoint**, select **Azure Data Explorer** > **New**.
 
 :::image type="content" source="media/howto-configure-adx-endpoint/create-adx-endpoint.png" alt-text="Screenshot using operations experience to create an Azure Data Explorer data flow endpoint.":::

articles/iot-operations/connect-to-cloud/howto-configure-kafka-endpoint.md

Lines changed: 1 addition & 1 deletion
@@ -53,7 +53,7 @@ Once the Azure Event Hubs namespace and event hub is configured, you can create
 
 # [Portal](#tab/portal)
 
-1. In the [operations experience](https://iotoperations.azure.com/), select the **Dataflow endpoints** tab.
+1. In the [operations experience](https://iotoperations.azure.com/), select the **Data flow endpoints** tab.
 
 1. Under **Create new data flow endpoint**, select **Azure Event Hubs** > **New**.
 
 :::image type="content" source="media/howto-configure-kafka-endpoint/create-event-hubs-endpoint.png" alt-text="Screenshot using operations experience to create an Azure Event Hubs data flow endpoint.":::

articles/iot-operations/connect-to-cloud/howto-create-dataflow.md

Lines changed: 9 additions & 9 deletions
@@ -66,7 +66,7 @@ Once you have the prerequisites, you can start to create a data flow.
 
 # [Portal](#tab/portal)
 
-To create a data flow in [operations experience](https://iotoperations.azure.com/), select **Dataflow** > **Create data flow**. Then, you see the page where you can configure the source, transformation, and destination for the data flow.
+To create a data flow in [operations experience](https://iotoperations.azure.com/), select **Data flow** > **Create data flow**. Then, you see the page where you can configure the source, transformation, and destination for the data flow.
 
 :::image type="content" source="media/howto-create-dataflow/create-dataflow.png" alt-text="Screenshot using operations experience to create a data flow.":::

@@ -185,7 +185,7 @@ If the default endpoint isn't used as the source, it must be used as the [destin
 
 | Setting              | Description |
 | -------------------- | ------------------------------------------------------------------------------------------------- |
-| Dataflow endpoint    | Select *default* to use the default MQTT message broker endpoint. |
+| Data flow endpoint   | Select *default* to use the default MQTT message broker endpoint. |
 | Topic                | The topic filter to subscribe to for incoming messages. See [Configure MQTT or Kafka topics](#configure-data-sources-mqtt-or-kafka-topics). |
 | Message schema       | The schema to use to deserialize the incoming messages. See [Specify schema to deserialize data](#specify-source-schema). |

@@ -251,7 +251,7 @@ Configuring an asset as a source is only available in the operations experience.
 
 When using an asset as the source, the asset definition is used to infer the schema for the data flow. The asset definition includes the schema for the asset's datapoints. To learn more, see [Manage asset configurations remotely](../discover-manage-assets/howto-manage-assets-remotely.md).
 
-Once configured, the data from the asset reaches the dataflow via the local MQTT broker. So, when using an asset as the source, the data flow uses the local MQTT broker default endpoint as the source in actuality.
+Once configured, the data from the asset reaches the data flow via the local MQTT broker. So, when using an asset as the source, the data flow uses the local MQTT broker default endpoint as the source in actuality.
 
 ### Option 3: Use custom MQTT or Kafka data flow endpoint as source

@@ -267,7 +267,7 @@ If you created a custom MQTT or Kafka data flow endpoint (for example, to use wi
 
 | Setting              | Description |
 | -------------------- | ------------------------------------------------------------------------------------------------- |
-| Dataflow endpoint    | Use the **Reselect** button to select a custom MQTT or Kafka dataflow endpoint. For more information, see [Configure MQTT dataflow endpoints](howto-configure-mqtt-endpoint.md) or [Configure Azure Event Hubs and Kafka dataflow endpoints](howto-configure-kafka-endpoint.md).|
+| Data flow endpoint   | Use the **Reselect** button to select a custom MQTT or Kafka data flow endpoint. For more information, see [Configure MQTT data flow endpoints](howto-configure-mqtt-endpoint.md) or [Configure Azure Event Hubs and Kafka data flow endpoints](howto-configure-kafka-endpoint.md).|
 | Topic                | The topic filter to subscribe to for incoming messages. See [Configure MQTT or Kafka topics](#configure-data-sources-mqtt-or-kafka-topics). |
 | Message schema       | The schema to use to deserialize the incoming messages. See [Specify schema to deserialize data](#specify-source-schema). |

@@ -399,7 +399,7 @@ sourceSettings:
 ---
 
-If the instance count in the [data flow profile](howto-configure-dataflow-profile.md) is greater than one, shared subscription is automatically enabled for all data flows that use a message broker source. In this case, the `$shared` prefix is added and the shared subscription group name automatically generated. For example, if you have a dataflow profile with an instance count of 3, and your dataflow uses a message broker endpoint as source configured with topics `topic1` and `topic2`, they are automatically converted to shared subscriptions as `$shared/<GENERATED_GROUP_NAME>/topic1` and `$shared/<GENERATED_GROUP_NAME>/topic2`.
+If the instance count in the [data flow profile](howto-configure-dataflow-profile.md) is greater than one, shared subscription is automatically enabled for all data flows that use a message broker source. In this case, the `$shared` prefix is added and the shared subscription group name automatically generated. For example, if you have a data flow profile with an instance count of 3, and your data flow uses a message broker endpoint as source configured with topics `topic1` and `topic2`, they are automatically converted to shared subscriptions as `$shared/<GENERATED_GROUP_NAME>/topic1` and `$shared/<GENERATED_GROUP_NAME>/topic2`.
 
 You can explicitly create a topic named `$shared/mygroup/topic` in your configuration. However, adding the `$shared` topic explicitly isn't recommended since the `$shared` prefix is automatically added when needed. Data flows can make optimizations with the group name if it isn't set. For example, `$share` isn't set and data flows only has to operate over the topic name.
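For context on this hunk: `$shared/<group>/<topic>` is the standard MQTT shared-subscription form, so the auto-generated subscriptions behave like any other shared subscription on the broker, with each message delivered to only one member of the group. The following minimal Paho sketch (not part of this commit; the broker address, group name, and topic are illustrative assumptions) shows the equivalent subscription from a plain MQTT v5 client:

```python
# Minimal sketch of an MQTT v5 shared subscription, equivalent in form to the
# auto-generated "$shared/<GENERATED_GROUP_NAME>/topic1" described above.
# Broker address, group name, and topic are illustrative assumptions.
import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    # The broker delivers each message published to "topic1" to exactly one
    # member of the "mygroup" shared-subscription group.
    print(f"{msg.topic}: {msg.payload.decode()}")

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2, protocol=mqtt.MQTTv5)  # paho-mqtt 2.x
client.on_message = on_message
client.connect("localhost", 1883)  # assumed broker address
client.subscribe("$shared/mygroup/topic1")
client.loop_forever()
```

Running several copies of this client with the same group name load-balances messages across them, which mirrors what multiple data flow instances get from the auto-generated group.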

@@ -417,7 +417,7 @@ To configure the Kafka topics:
 
 # [Portal](#tab/portal)
 
-In the operations experience dataflow **Source details**, select **Message broker**, then use the **Topic** field to specify the Kafka topic filter to subscribe to for incoming messages.
+In the operations experience data flow **Source details**, select **Message broker**, then use the **Topic** field to specify the Kafka topic filter to subscribe to for incoming messages.
 
 > [!NOTE]
 > Only one topic filter can be specified in the operations experience. To use multiple topic filters, use Bicep or Kubernetes.
@@ -500,7 +500,7 @@ This section is an introduction to data flow transforms. For more detailed infor
 
 # [Portal](#tab/portal)
 
-In the operations experience, select **Dataflow** > **Add transform (optional)**.
+In the operations experience, select **Data flow** > **Add transform (optional)**.
 
 :::image type="content" source="media/howto-create-dataflow/dataflow-transform.png" alt-text="Screenshot using operations experience to add a transform to a data flow.":::

@@ -867,7 +867,7 @@ To send data to a destination other than the local MQTT broker, create a data fl
 
 :::image type="content" source="media/howto-create-dataflow/dataflow-destination.png" alt-text="Screenshot using operations experience to select Event Hubs destination endpoint.":::
 
-Storage endpoints require a [schema for serialization](./concept-schema-registry.md). If you choose a Microsoft Fabric OneLake, Azure Data Lake Storage, Azure Data Explorer, or Local Storage destination endpoint, you must [specify a schema reference](#serialize-data-according-to-a-schema). For example, to serialize the data to a Microsoft Fabric endpoint in Delta format, you need to upload a schema to the schema registry and reference it in the dataflow destination endpoint configuration.
+Storage endpoints require a [schema for serialization](./concept-schema-registry.md). If you choose a Microsoft Fabric OneLake, Azure Data Lake Storage, Azure Data Explorer, or Local Storage destination endpoint, you must [specify a schema reference](#serialize-data-according-to-a-schema). For example, to serialize the data to a Microsoft Fabric endpoint in Delta format, you need to upload a schema to the schema registry and reference it in the data flow destination endpoint configuration.
 
 :::image type="content" source="media/howto-create-dataflow/serialization-schema.png" alt-text="Screenshot using operations experience to choose output schema and serialization format.":::

@@ -994,7 +994,7 @@ The following example is a data flow configuration that uses the MQTT endpoint f
 
 # [Portal](#tab/portal)
 
-:::image type="content" source="media/howto-create-dataflow/dataflow-example.png" alt-text="Screenshot showing the operations experience dataflow example with a source endpoint, transforms, and a destination endpoint." lightbox="media/howto-create-dataflow/dataflow-example.png":::
+:::image type="content" source="media/howto-create-dataflow/dataflow-example.png" alt-text="Screenshot showing the operations experience data flow example with a source endpoint, transforms, and a destination endpoint." lightbox="media/howto-create-dataflow/dataflow-example.png":::
 
 # [Bicep](#tab/bicep)

articles/iot-operations/deploy-iot-ops/concept-production-examples.md

Lines changed: 4 additions & 4 deletions
@@ -43,7 +43,7 @@ The following table shows the MQTT broker configuration for the single node exam
 
 The end-to-end data flow in the example looks like this:
 
-`Assets -> PLC -> Connector for OPC UA -> MQTT broker -> Dataflows -> Event Hubs`
+`Assets -> PLC -> Connector for OPC UA -> MQTT broker -> Data flows -> Event Hubs`
 
 The data volumes in the example are:

@@ -52,7 +52,7 @@ The data volumes in the example are:
 - The connector for OPC UA sends 125 message/second to the MQTT broker.
 - One data flow pipeline pushes 6,250 tags to an Event Hubs endpoint.
 
-In this example, Microsoft recommends using Event Hubs because you can only create one dataflow instance with a 4-core CPU. If you choose Event Grid, it can only handle 100 messages/sec.
+In this example, Microsoft recommends using Event Hubs because you can only create one data flow instance with a 4-core CPU. If you choose Event Grid, it can only handle 100 messages/sec.
 
 ### Performance

@@ -94,7 +94,7 @@ In this example, an asset doesn't represent a real piece of equipment, but is a
 
 The first end-to-end data flow in the example looks like this:
 
-`Assets -> PLC -> Connector for OPC UA -> MQTT broker -> Dataflows -> Event Hubs`
+`Assets -> PLC -> Connector for OPC UA -> MQTT broker -> Data flows -> Event Hubs`
 
 The data volumes in the first data flow in the example are:

@@ -105,7 +105,7 @@ The data volumes in the first data flow in the example are:
 
 The second end-to-end data flow in the example looks like this:
 
-`MQTT client (Paho) -> MQTT Broker -> Dataflows -> Event Hubs`
+`MQTT client (Paho) -> MQTT Broker -> Data flows -> Event Hubs`
 
 The data volumes in the second data flow in the example are:
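As a side note on this second pipeline, the `MQTT client (Paho)` stage is a plain MQTT publisher feeding the broker. A minimal sketch of such a load-generating client follows (not part of this commit; the broker address, topic, payload shape, and publish rate are illustrative assumptions, not the figures from the example):

```python
# Minimal sketch of a Paho test client playing the "MQTT client (Paho)" role
# at the head of the second pipeline. Broker address, topic, payload, and
# rate are illustrative assumptions.
import json
import time

import paho.mqtt.client as mqtt

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)  # paho-mqtt 2.x
client.connect("localhost", 1883)  # assumed broker address
client.loop_start()

try:
    while True:
        payload = json.dumps({"temperature": 21.5, "ts": time.time()})
        client.publish("test/telemetry", payload, qos=1)  # assumed topic
        time.sleep(0.01)  # assumed rate: ~100 messages/second
finally:
    client.loop_stop()
    client.disconnect()
```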

articles/iot-operations/end-to-end-tutorials/tutorial-upload-telemetry-to-cloud.md

Lines changed: 1 addition & 1 deletion
@@ -117,7 +117,7 @@ To create the data flow:
 
 Your new data flow endpoint is created and displays in the list on the **Data flow endpoints** page.
 
-1. Select **Data flows** and then select **+ Create data flow**. The **\<new-dataflow\>** page displays:
+1. Select **Data flows** and then select **+ Create data flow**. The **\<new-data-flow\>** page displays:
 
 :::image type="content" source="media/tutorial-upload-telemetry-to-cloud/new-dataflow.png" alt-text="Screenshot of the data flows page.":::
