articles/iot-operations/connect-to-cloud/howto-configure-adx-endpoint.md (1 addition, 1 deletion)

@@ -67,7 +67,7 @@ If using system-assigned managed identity, in Azure portal, go to your Azure IoT

 # [Portal](#tab/portal)

-1. In the operations experience, select the **Dataflow endpoints** tab.
+1. In the operations experience, select the **Data flow endpoints** tab.
 1. Under **Create new data flow endpoint**, select **Azure Data Explorer** > **New**.

 :::image type="content" source="media/howto-configure-adx-endpoint/create-adx-endpoint.png" alt-text="Screenshot using operations experience to create an Azure Data Explorer data flow endpoint.":::

articles/iot-operations/connect-to-cloud/howto-configure-kafka-endpoint.md (1 addition, 1 deletion)

@@ -53,7 +53,7 @@ Once the Azure Event Hubs namespace and event hub is configured, you can create

 # [Portal](#tab/portal)

-1. In the [operations experience](https://iotoperations.azure.com/), select the **Dataflow endpoints** tab.
+1. In the [operations experience](https://iotoperations.azure.com/), select the **Data flow endpoints** tab.
 1. Under **Create new data flow endpoint**, select **Azure Event Hubs** > **New**.

 :::image type="content" source="media/howto-configure-kafka-endpoint/create-event-hubs-endpoint.png" alt-text="Screenshot using operations experience to create an Azure Event Hubs data flow endpoint.":::

articles/iot-operations/connect-to-cloud/howto-create-dataflow.md (9 additions, 9 deletions)

@@ -66,7 +66,7 @@ Once you have the prerequisites, you can start to create a data flow.

 # [Portal](#tab/portal)

-To create a data flow in [operations experience](https://iotoperations.azure.com/), select **Dataflow** > **Create data flow**. Then, you see the page where you can configure the source, transformation, and destination for the data flow.
+To create a data flow in [operations experience](https://iotoperations.azure.com/), select **Data flow** > **Create data flow**. Then, you see the page where you can configure the source, transformation, and destination for the data flow.

 :::image type="content" source="media/howto-create-dataflow/create-dataflow.png" alt-text="Screenshot using operations experience to create a data flow.":::

@@ -185,7 +185,7 @@ If the default endpoint isn't used as the source, it must be used as the [destin
-|Dataflow endpoint | Select *default* to use the default MQTT message broker endpoint. |
+|Data flow endpoint | Select *default* to use the default MQTT message broker endpoint. |
 | Topic | The topic filter to subscribe to for incoming messages. See [Configure MQTT or Kafka topics](#configure-data-sources-mqtt-or-kafka-topics). |
 | Message schema | The schema to use to deserialize the incoming messages. See [Specify schema to deserialize data](#specify-source-schema). |

@@ -251,7 +251,7 @@ Configuring an asset as a source is only available in the operations experience.

 When using an asset as the source, the asset definition is used to infer the schema for the data flow. The asset definition includes the schema for the asset's datapoints. To learn more, see [Manage asset configurations remotely](../discover-manage-assets/howto-manage-assets-remotely.md).

-Once configured, the data from the asset reaches the dataflow via the local MQTT broker. So, when using an asset as the source, the data flow uses the local MQTT broker default endpoint as the source in actuality.
+Once configured, the data from the asset reaches the data flow via the local MQTT broker. So, when using an asset as the source, the data flow uses the local MQTT broker default endpoint as the source in actuality.

 ### Option 3: Use custom MQTT or Kafka data flow endpoint as source

@@ -267,7 +267,7 @@ If you created a custom MQTT or Kafka data flow endpoint (for example, to use wi
-| Dataflow endpoint | Use the **Reselect** button to select a custom MQTT or Kafka dataflow endpoint. For more information, see [Configure MQTT dataflow endpoints](howto-configure-mqtt-endpoint.md) or [Configure Azure Event Hubs and Kafka dataflow endpoints](howto-configure-kafka-endpoint.md).|
+| Data flow endpoint | Use the **Reselect** button to select a custom MQTT or Kafka data flow endpoint. For more information, see [Configure MQTT data flow endpoints](howto-configure-mqtt-endpoint.md) or [Configure Azure Event Hubs and Kafka data flow endpoints](howto-configure-kafka-endpoint.md).|
 | Topic | The topic filter to subscribe to for incoming messages. See [Configure MQTT or Kafka topics](#configure-data-sources-mqtt-or-kafka-topics). |
 | Message schema | The schema to use to deserialize the incoming messages. See [Specify schema to deserialize data](#specify-source-schema). |

@@ -399,7 +399,7 @@ sourceSettings:
 ---


-If the instance count in the [data flow profile](howto-configure-dataflow-profile.md) is greater than one, shared subscription is automatically enabled for all data flows that use a message broker source. In this case, the `$shared` prefix is added and the shared subscription group name automatically generated. For example, if you have a dataflow profile with an instance count of 3, and your dataflow uses a message broker endpoint as source configured with topics `topic1` and `topic2`, they are automatically converted to shared subscriptions as `$shared/<GENERATED_GROUP_NAME>/topic1` and `$shared/<GENERATED_GROUP_NAME>/topic2`.
+If the instance count in the [data flow profile](howto-configure-dataflow-profile.md) is greater than one, shared subscription is automatically enabled for all data flows that use a message broker source. In this case, the `$shared` prefix is added and the shared subscription group name automatically generated. For example, if you have a data flow profile with an instance count of 3, and your data flow uses a message broker endpoint as source configured with topics `topic1` and `topic2`, they are automatically converted to shared subscriptions as `$shared/<GENERATED_GROUP_NAME>/topic1` and `$shared/<GENERATED_GROUP_NAME>/topic2`.

 You can explicitly create a topic named `$shared/mygroup/topic` in your configuration. However, adding the `$shared` topic explicitly isn't recommended since the `$shared` prefix is automatically added when needed. Data flows can make optimizations with the group name if it isn't set. For example, `$share` isn't set and data flows only has to operate over the topic name.

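Context for this hunk (not part of the diff): a minimal sketch of the shared-subscription behavior the changed paragraph describes, assuming the YAML shape implied by the `sourceSettings:` hunk header. The field names `instanceCount`, `endpointRef`, and `dataSources` are illustrative assumptions, not text from this PR.

```yaml
# Hypothetical data flow profile (field name instanceCount assumed).
profile:
  instanceCount: 3

# Hypothetical data flow source (field names endpointRef and dataSources assumed).
sourceSettings:
  endpointRef: default        # default MQTT message broker endpoint
  dataSources:
    - topic1
    - topic2

# With an instance count greater than one, the broker subscriptions are created
# as shared subscriptions automatically, for example:
#   $shared/<GENERATED_GROUP_NAME>/topic1
#   $shared/<GENERATED_GROUP_NAME>/topic2
```
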
@@ -417,7 +417,7 @@ To configure the Kafka topics:

 # [Portal](#tab/portal)

-In the operations experience dataflow **Source details**, select **Message broker**, then use the **Topic** field to specify the Kafka topic filter to subscribe to for incoming messages.
+In the operations experience data flow **Source details**, select **Message broker**, then use the **Topic** field to specify the Kafka topic filter to subscribe to for incoming messages.

 > [!NOTE]
 > Only one topic filter can be specified in the operations experience. To use multiple topic filters, use Bicep or Kubernetes.

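Because the note limits the portal to a single topic filter, here's a hedged sketch of listing multiple Kafka topics in the Bicep/Kubernetes form instead. The `endpointRef` and `dataSources` field names are assumptions, and the endpoint and topic names are hypothetical:

```yaml
sourceSettings:
  endpointRef: my-kafka-endpoint   # hypothetical custom Kafka data flow endpoint
  dataSources:                     # multiple topic filters, not possible in the portal
    - thermostats-telemetry
    - humidifiers-telemetry
```
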
@@ -500,7 +500,7 @@ This section is an introduction to data flow transforms. For more detailed infor

 # [Portal](#tab/portal)

-In the operations experience, select **Dataflow** > **Add transform (optional)**.
+In the operations experience, select **Data flow** > **Add transform (optional)**.

 :::image type="content" source="media/howto-create-dataflow/dataflow-transform.png" alt-text="Screenshot using operations experience to add a transform to a data flow.":::

@@ -867,7 +867,7 @@ To send data to a destination other than the local MQTT broker, create a data fl

 :::image type="content" source="media/howto-create-dataflow/dataflow-destination.png" alt-text="Screenshot using operations experience to select Event Hubs destination endpoint.":::

-Storage endpoints require a [schema for serialization](./concept-schema-registry.md). If you choose a Microsoft Fabric OneLake, Azure Data Lake Storage, Azure Data Explorer, or Local Storage destination endpoint, you must [specify a schema reference](#serialize-data-according-to-a-schema). For example, to serialize the data to a Microsoft Fabric endpoint in Delta format, you need to upload a schema to the schema registry and reference it in the dataflow destination endpoint configuration.
+Storage endpoints require a [schema for serialization](./concept-schema-registry.md). If you choose a Microsoft Fabric OneLake, Azure Data Lake Storage, Azure Data Explorer, or Local Storage destination endpoint, you must [specify a schema reference](#serialize-data-according-to-a-schema). For example, to serialize the data to a Microsoft Fabric endpoint in Delta format, you need to upload a schema to the schema registry and reference it in the data flow destination endpoint configuration.

 :::image type="content" source="media/howto-create-dataflow/serialization-schema.png" alt-text="Screenshot using operations experience to choose output schema and serialization format.":::

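As a rough illustration of the schema reference the changed sentence mentions, a sketch of a destination serialized in Delta format. The endpoint name, schema URI, and field names (`serializationFormat`, `schemaRef`, `endpointRef`, `dataDestination`) are assumptions for illustration only, not taken from this PR:

```yaml
builtInTransformationSettings:
  serializationFormat: Delta                            # serialize output as Delta
  schemaRef: aio-sr://exampleNamespace/exampleSchema:1  # schema uploaded to the schema registry (URI format assumed)
destinationSettings:
  endpointRef: fabric-onelake        # hypothetical Microsoft Fabric OneLake endpoint
  dataDestination: sensorData        # hypothetical destination table
```
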
@@ -994,7 +994,7 @@ The following example is a data flow configuration that uses the MQTT endpoint f

 # [Portal](#tab/portal)

-:::image type="content" source="media/howto-create-dataflow/dataflow-example.png" alt-text="Screenshot showing the operations experience dataflow example with a source endpoint, transforms, and a destination endpoint." lightbox="media/howto-create-dataflow/dataflow-example.png":::
+:::image type="content" source="media/howto-create-dataflow/dataflow-example.png" alt-text="Screenshot showing the operations experience data flow example with a source endpoint, transforms, and a destination endpoint." lightbox="media/howto-create-dataflow/dataflow-example.png":::

articles/iot-operations/deploy-iot-ops/concept-production-examples.md (4 additions, 4 deletions)

@@ -43,7 +43,7 @@ The following table shows the MQTT broker configuration for the single node exam

 The end-to-end data flow in the example looks like this:

-`Assets -> PLC -> Connector for OPC UA -> MQTT broker -> Dataflows -> Event Hubs`
+`Assets -> PLC -> Connector for OPC UA -> MQTT broker -> Data flows -> Event Hubs`

 The data volumes in the example are:

@@ -52,7 +52,7 @@ The data volumes in the example are:
 - The connector for OPC UA sends 125 message/second to the MQTT broker.
 - One data flow pipeline pushes 6,250 tags to an Event Hubs endpoint.

-In this example, Microsoft recommends using Event Hubs because you can only create one dataflow instance with a 4-core CPU. If you choose Event Grid, it can only handle 100 messages/sec.
+In this example, Microsoft recommends using Event Hubs because you can only create one data flow instance with a 4-core CPU. If you choose Event Grid, it can only handle 100 messages/sec.

 ### Performance

@@ -94,7 +94,7 @@ In this example, an asset doesn't represent a real piece of equipment, but is a

 The first end-to-end data flow in the example looks like this:

-`Assets -> PLC -> Connector for OPC UA -> MQTT broker -> Dataflows -> Event Hubs`
+`Assets -> PLC -> Connector for OPC UA -> MQTT broker -> Data flows -> Event Hubs`

 The data volumes in the first data flow in the example are:

@@ -105,7 +105,7 @@ The data volumes in the first data flow in the example are:

 The second end-to-end data flow in the example looks like this: