You can use data flow conversions to transform data in Azure IoT Operations. The *conversion* element in a data flow is used to compute values for output fields. You can use input fields, available operations, data types, and type conversions in data flow conversions.
The data flow *conversion* element is used to compute values for output fields:
```yaml
- inputs:
```
Functions can be used in the conversion formula to perform various operations.
## Available operations
Data flows offer a wide range of out-of-the-box (OOTB) conversion functions that allow users to easily perform unit conversions without the need for complex calculations. These predefined functions cover common conversions such as temperature, pressure, length, weight, and volume. The following is a list of the available conversion functions, along with their corresponding formulas and function names:
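As an illustrative sketch only (these are not the actual built-in implementations, which you invoke by function name inside a conversion expression), the kinds of formulas behind such functions look like:

```python
# Hypothetical Python equivalents of a few common unit-conversion
# formulas; the built-in data flow functions are invoked by name in a
# conversion expression, so these only illustrate the underlying math.

def celsius_to_fahrenheit(celsius):
    """Temperature: F = C * 9/5 + 32."""
    return celsius * 9.0 / 5.0 + 32.0

def psi_to_pascal(psi):
    """Pressure: 1 psi is approximately 6894.757 Pa."""
    return psi * 6894.757

def feet_to_meters(feet):
    """Length: 1 ft = 0.3048 m exactly."""
    return feet * 0.3048
```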
Use the data flow mapping language to transform data in Azure IoT Operations. The syntax is a simple, yet powerful, way to define mappings that transform data from one format to another. This article provides an overview of the data flow mapping language and key concepts.
Mapping allows you to transform data from one format to another. Consider the following input record:
```json
{
  "Name": "Grace Owens",
  "Place of birth": "London, TX",
  "Birth Date": "19840202",
  "Start Date": "20180812"
}
```
Compare it with the output record:
```json
{
  "Employee": {
    "Name": "Grace Owens",
    "Date of Birth": "19840202"
  },
  "Employment": {
    "Start Date": "20180812"
  }
}
```
Dot-notation is widely used in computer science to reference fields, even recursively:

```yaml
- Person.Address.Street.Number
```
However, in a data flow, a path described by dot-notation might include strings and some special characters without needing escaping:
```yaml
- inputs:
```
However, in other cases, escaping is necessary.
The previous example, among other special characters, contains dots within the field name, which, without escaping, would serve as a separator in the dot-notation itself.
While a data flow parses a path, it treats only two characters as special:
* Dots ('.') act as field separators.
* Quotes, when placed at the beginning or the end of a segment, start an escaped section where dots aren't treated as field separators.
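The two rules above can be sketched as a small parser. This is an illustration only, not the actual Azure IoT Operations implementation, and the quoted segment name `Tag.10` is a hypothetical example:

```python
def split_path(path):
    """Split a dot-notation path into segments.

    Illustrative sketch of the two-character rule described above (not
    the actual Azure IoT Operations parser): dots separate segments,
    and a quote at the start of a segment opens an escaped section in
    which dots are kept literal until the closing quote.
    """
    segments = []
    i, n = 0, len(path)
    while i < n:
        if path[i] == '"':                      # escaped segment
            closing = path.index('"', i + 1)
            segments.append(path[i + 1:closing])
            i = closing + 1
            if i < n and path[i] == '.':        # skip the separator dot
                i += 1
        else:                                   # plain segment up to the next dot
            dot = path.find('.', i)
            if dot == -1:
                dot = n
            segments.append(path[i:dot])
            i = dot + 1
    return segments
```

For example, `split_path('Payload."Tag.10".Value')` yields the three segments `Payload`, `Tag.10`, and `Value`, because the dot inside the quoted section isn't treated as a separator.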
Consider a special case for the same fields to help decide the right action:
An empty `output` field in the second definition implies not writing the fields in the output record (effectively removing `Opacity`). This setup is more of a `Specialization` than a `Second Rule`.
Resolution of overlapping mappings by data flows:
* The evaluation progresses from the top rule in the mapping definition.
* If a new mapping resolves to the same fields as a previous rule, the following applies:
  * A `Rank` is calculated for each resolved input based on the number of segments the wildcard captures. For instance, if the `Captured Segments` are `Properties.Opacity`, the `Rank` is 2. If only `Opacity`, the `Rank` is 1. A mapping without wildcards has a `Rank` of 0.
  * If the `Rank` of the latter rule is equal to or higher than the previous rule, a data flow treats it as a `Second Rule`.
  * Otherwise, it treats the configuration as a `Specialization`.
For example, the mapping that directs `Opacity.Max` and `Opacity.Min` to an empty output has a `Rank` of zero. Since the second rule has a lower `Rank` than the previous, it's considered a specialization and overrides the previous rule, which would calculate a value for `Opacity`.
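The resolution steps above can be sketched in a few lines. The function names here are hypothetical, and this is an illustration of the stated rules rather than the actual implementation:

```python
def rank(captured_segments):
    """Rank = number of path segments the wildcard captured.

    Per the rules above: 'Properties.Opacity' has rank 2, 'Opacity'
    has rank 1, and a mapping without wildcards (nothing captured)
    has rank 0.
    """
    if not captured_segments:
        return 0
    return len(captured_segments.split('.'))

def resolve_overlap(previous_rank, new_rank):
    """Decide how a later rule resolving to the same fields is treated."""
    if new_rank >= previous_rank:
        return 'Second Rule'       # both rules apply
    return 'Specialization'        # the later rule overrides the earlier one
```

In the `Opacity` example, the later rule has rank 0 against the earlier rule's rank 1, so `resolve_overlap(1, 0)` returns `'Specialization'`.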
To get started with data flows, you need to configure endpoints. An endpoint is the connection point for the data flow. You can use an endpoint as a source or destination for the data flow. Some endpoint types can be used as [both sources and destinations](#endpoint-types-for-use-as-sources-and-destinations), while others are for [destinations only](#endpoint-types-for-destinations-only). A data flow needs at least one source endpoint and one destination endpoint.
The following example shows a custom resource definition with all of the configuration options. The required fields are dependent on the endpoint type. Review the sections for each endpoint type for configuration guidance.
```yaml
spec:
```
| Field | Description |
| - | - |
| mqttSettings.sharedSubscription | Shared subscription settings. No shared subscription if omitted. |
| mqttSettings.sharedSubscription.groupMinimumShareNumber | Number of clients to use for shared subscription. |
| mqttSettings.sharedSubscription.groupName | Shared subscription group name. |
| mqttSettings.clientIdPrefix | Client ID prefix. Client ID generated by the data flow is \<prefix\>-id. No prefix if omitted. |
| mqttSettings.retain | Whether or not to keep the retain setting. Values: *keep* or *never*. Defaults to *keep*. |
| mqttSettings.sessionExpirySeconds | Session expiry in seconds. Defaults to *3600*. |
| mqttSettings.qos | Quality of service. Values: *0* or *1*. Defaults to *1*. |
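Put together, the MQTT settings from the table might appear in an endpoint resource like the following sketch. All values here are hypothetical examples, not defaults or recommendations:

```yaml
mqttSettings:
  clientIdPrefix: factory-gateway    # generated client ID becomes factory-gateway-id
  retain: keep
  sessionExpirySeconds: 3600
  qos: 1
  sharedSubscription:
    groupMinimumShareNumber: 3
    groupName: dataflow-group
```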
> - Select **Certificate Subject Name** from the dropdown.
> - Set **Maximum client sessions per authentication name** to **3** or more.
>
> The alternative client authentication and maximum client sessions options allow data flows to use the client certificate subject name for authentication instead of the MQTT CONNECT username. This is important so that data flows can spawn multiple instances and still be able to connect. To learn more, see [Event Grid MQTT client certificate authentication](../../event-grid/mqtt-client-certificate-authentication.md) and [multi-session support](../../event-grid/mqtt-establishing-multiple-sessions-per-client.md).
By default, when you deploy Azure IoT Operations, a data flow profile is created with default settings. You can configure the data flow profile to suit your needs.
| Field | Description |
| - | - |
| `maxInstances` | Number of instances to spread the data flows across. Optional; automatically determined if not set. |
| `tolerations` | Node tolerations. Optional; see [Kubernetes Taints and Tolerations](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/). |
| `diagnostics` | Diagnostics settings. |
| `diagnostics.logFormat` | Format of the logs. For example, `text`. |
## Default settings
The default settings for a data flow profile are:
* Instances: (null)
* Log level: Info
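Expressed as a sketch of the profile resource, using only field names from the configuration table, a profile that relies on these defaults might look like:

```yaml
spec:
  # maxInstances omitted (null): the instance count is determined automatically
  diagnostics:
    logFormat: text
```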
## Scaling
To manually scale the data flow profile, specify the maximum number of instances you want to run.
```yaml
spec:
  maxInstances: 3
```
If not specified, Azure IoT Operations automatically scales the data flow profile based on the data flow configuration. The number of instances is determined by the number of data flows and the shared subscription configuration.
## Configure log level, node tolerations, diagnostic settings, and other deployment-wide settings