`articles/iot-operations/connect-to-cloud/concept-dataflow-conversions.md` (+18 −18)

@@ -95,7 +95,7 @@ The mapper is designed to be flexible by converting internal types into output t
### Explicit type conversions
-While the automatic conversions operate as you might expect based on common implementation practices, there are instances where the right conversion can't be determined automatically and results in an *unsupported* error. To address these situations, several conversion functions are available to explicitly define how data should be transformed. These functions provide more control over how data is converted and ensure that data integrity is maintained even when automatic methods fall short.
+Although the automatic conversions operate as you might expect based on common implementation practices, there are instances where the right conversion can't be determined automatically and results in an *unsupported* error. To address these situations, several conversion functions are available to explicitly define how data should be transformed. These functions provide more control over how data is converted and help maintain data integrity even when automatic methods fall short.
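
As a sketch of what such an explicit conversion can look like in a mapping, using the `cToF` function from the conversion table later in this article; the field names are illustrative, not from the source:

```yaml
# Illustrative only: explicitly convert a Celsius reading to Fahrenheit
# instead of relying on an automatic conversion.
- inputs:
  - TemperatureC # $1: a numeric Celsius value (hypothetical field)
  expression: cToF($1)
  output: TemperatureF
```
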
### Use a conversion formula with types
@@ -112,11 +112,11 @@ If a formula is specified, the data types available for use in formulas are limi
`Map` and `byte` can't participate in formulas.
-Types related to time (`datetime`, `time`, and `duration`) are converted into integer values representing time in seconds. After formula evaluation, results are stored in the internal representation and not converted back. For example, `datetime` converted to seconds remains an integer. If the value is to be used in `datetime` fields, an explicit conversion method must be applied. An example is converting the value into an ISO8601 string that's automatically converted to the `datetime` type of the output serialization format.
+Types related to time (`datetime`, `time`, and `duration`) are converted into integer values that represent time in seconds. After formula evaluation, results are stored in the internal representation and not converted back. For example, `datetime` converted to seconds remains an integer. If the value will be used in `datetime` fields, an explicit conversion method must be applied. An example is converting the value into an ISO8601 string that's automatically converted to the `datetime` type of the output serialization format.
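
The following is a hypothetical sketch of the round trip this paragraph describes; `toIso8601` is an invented helper name used for illustration, not a documented function:

```yaml
# Hypothetical sketch: the datetime input participates in the formula as
# integer seconds; an explicit conversion turns the result back into an
# ISO8601 string that the output serialization can map to datetime.
- inputs:
  - Timestamp # $1: a datetime, treated as integer seconds in the formula
  expression: toIso8601($1 + 3600) # add one hour, then convert explicitly
  output: AdjustedTimestamp
```
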
### Use irregular types
-Special considerations apply to types like arrays and *missing value*:
+Special considerations apply to types like arrays and *missing value*.
### Arrays
@@ -169,7 +169,7 @@ Missing value is a special type used in scenarios, such as:
* Handling missing fields in the input by providing an alternative value.
* Conditionally removing a field based on its presence.
-Example mapping using missing value:
+Example mapping that uses a missing value:
```json
{
```
@@ -191,7 +191,7 @@ The input record contains the `BaseSalary` field, but possibly that's optional.
```json
}
```
-A mapping can check if the field is present in the input record. If found, the output receives that existing value. Otherwise, the output receives the value from the context dataset. For example:
+A mapping can check if the field is present in the input record. If the field is found, the output receives that existing value. Otherwise, the output receives the value from the context dataset. For example:
```yaml
- inputs:
  # …
```
@@ -223,23 +223,23 @@ Dataflows offer a wide range of out-of-the-box conversion functions that allow u
| Conversion | Formula | Function name |
| --- | --- | --- |
-| Celsius to Fahrenheit | F = (C * 9/5) + 32 | cToF |
These functions are designed to simplify the conversion process. They allow users to input values in one unit and receive the corresponding value in another unit effortlessly.
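
Judging from the surviving table row, the inline formula and the named function should be interchangeable; here's a sketch with illustrative field names:

```yaml
# Spell the Celsius-to-Fahrenheit formula out by hand ...
- inputs:
  - Temperature # $1
  expression: ($1 * 9 / 5) + 32
  output: TemperatureF

# ... or call the equivalent named conversion function.
- inputs:
  - Temperature # $1
  expression: cToF($1)
  output: TemperatureF
```
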
@@ -285,7 +285,7 @@ Because `Exponentiation` has the highest precedence, it's executed first unless
`Addition` and `Subtraction` are considered weaker operations compared to the operations in the previous group:
-* `$1 + 2 * 3` results in `$1 + 6` because `2 * 3` is executed first owing to the higher precedence of `multiplication`.
+* `$1 + 2 * 3` results in `$1 + 6` because `2 * 3` is executed first because of the higher precedence of `multiplication`.
* `($1 + 2) * 3` prioritizes `Addition` before `Multiplication`.
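
To make the difference concrete: if `$1` is `4`, the first expression evaluates to `4 + 6 = 10`, while the parenthesized form evaluates to `6 * 3 = 18`.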

-You can enrich data by using the *contextualization datasets* function. When incoming records are processed, these datasets can be queried based on conditions that relate to the fields of the incoming record. This capability allows for dynamic interactions. Data from these datasets can be used to supplement information in the output fields and participate in complex calculations during the mapping process.
+You can enrich data by using the *contextualization datasets* function. When incoming records are processed, you can query these datasets based on conditions that relate to the fields of the incoming record. This capability allows for dynamic interactions. Data from these datasets can be used to supplement information in the output fields and participate in complex calculations during the mapping process.
For example, consider the following dataset with a few records, represented as JSON records:
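
The dataset records themselves fall outside this hunk. As an illustrative stand-in, consistent with the `position` key and `BaseSalary` field referenced elsewhere in the article (values invented for the sketch), a record might look like:

```json
{
  "Position": "Analyst",
  "BaseSalary": 70000,
  "WorksOnWeekend": false
}
```
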
@@ -86,7 +86,7 @@ Then use the references mixed:
```yaml
- $context(permission).NightShift # - - $2
```
-The input references use the key of the dataset like `position` or `permission`. If the key in DSS is inconvenient to use, an alias can be defined:
+The input references use the key of the dataset like `position` or `permission`. If the key in DSS is inconvenient to use, you can define an alias:
```yaml
datasets:
```
@@ -97,4 +97,4 @@ datasets:
```yaml
expression: $1 == $2
```
-Which configuration renames the dataset with key `datasets.parag10.rule42` to `position`.
+The configuration renames the dataset with the key `datasets.parag10.rule42` to `position`.
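
With the alias in place, input references can use the short form, such as `$context(position).BaseSalary`, instead of the full DSS key.
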
`articles/iot-operations/connect-to-cloud/concept-dataflow-mapping.md` (+16 −16)

@@ -47,11 +47,11 @@ Compare it with the output record:
In the output record, the following changes are made to the input record data:
-* **Fields renamed**: The **Birth Date** field is now **Date of Birth**.
-* **Fields restructured**: Both **Name** and **Date of Birth** are grouped under the new **Employee** category.
-* **Field deleted**: The **Place of birth** field is removed because it isn't present in the output.
-* **Field added**: The **Base Salary** field is a new field in the **Employment** category.
-* **Field values changed or merged**: The **Position** field in the output combines the **Position** and **Office** fields from the input.
+* **Fields renamed**: The `Birth Date` field is now `Date of Birth`.
+* **Fields restructured**: Both `Name` and `Date of Birth` are grouped under the new `Employee` category.
+* **Field deleted**: The `Place of birth` field is removed because it isn't present in the output.
+* **Field added**: The `Base Salary` field is a new field in the `Employment` category.
+* **Field values changed or merged**: The `Position` field in the output combines the `Position` and `Office` fields from the input.
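
The records themselves sit outside this hunk. An illustrative pair consistent with the bullets (all values invented for the sketch) might take this input:

```json
{
  "Name": "John Doe",
  "Birth Date": "1990-01-01",
  "Place of birth": "Seattle",
  "Position": "Analyst",
  "Office": "Redmond"
}
```

to this output:

```json
{
  "Employee": {
    "Name": "John Doe",
    "Date of Birth": "1990-01-01"
  },
  "Employment": {
    "Position": "Analyst, Redmond",
    "Base Salary": 75000
  }
}
```
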
The transformations are achieved through *mapping*, which typically involves:
@@ -144,7 +144,7 @@ The primary function of escaping in a dot-notated path is to accommodate the use
```yaml
- 'Payload."Tag.10".Value'
```
-In the previous example, the path consists of three segments: `Payload`, `Tag.10`, and `Value`. The outer single quotation marks (`'`) are necessary because of YAML syntax rules, which allows the inclusion of double quotation marks within the string.
+In the previous example, the path consists of three segments: `Payload`, `Tag.10`, and `Value`. The outer single quotation marks (`'`) are necessary because of YAML syntax rules, which allow the inclusion of double quotation marks within the string.
### Escaping rules in dot notation
@@ -178,7 +178,7 @@ In the previous example, the path consists of three segments: `Payload`, `Tag.10
## Wildcard
-In many scenarios, the output record closely resembles the input record, with only minor modifications required. When dealing with records that contain numerous fields, manually specifying mappings for each field can become tedious. Wildcards simplify this process by allowing for generalized mappings that can automatically apply to multiple fields.
+In many scenarios, the output record closely resembles the input record, with only minor modifications required. When you deal with records that contain numerous fields, manually specifying mappings for each field can become tedious. Wildcards simplify this process by allowing for generalized mappings that can automatically apply to multiple fields.
Let's consider a basic scenario to understand the use of asterisks in mappings:
@@ -218,7 +218,7 @@ Original JSON:
```json
}
```
-Mapping configuration using wildcards:
+Mapping configuration that uses wildcards:
```yaml
- inputs:
  # …
```
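
As a sketch of the mechanism described here, with invented field names: the asterisk captures whole path segments on the input side, and the captured segments replace the asterisk on the output side:

```yaml
# Illustrative only: map every field under Payload into ColorProperties
# with a single rule instead of one mapping per field.
- inputs:
  - 'Payload.*' # captures, for example, Red, Green, and Blue
  output: 'ColorProperties.*' # writes ColorProperties.Red, and so on
```
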
@@ -247,7 +247,7 @@ Resulting JSON:
### Wildcard placement
-When you place a wildcard, the following rules must be followed:
+When you place a wildcard, you must follow these rules:
* **Single asterisk per dataDestination:** Only one asterisk (`*`) is allowed within a single path.
* **Full segment matching:** The asterisk must always match an entire segment of the path. It can't be used to match only a part of a segment, such as `path1.partial*.path3`.
@@ -277,7 +277,7 @@ Original JSON:
```json
}
```
-Mapping configuration using wildcards:
+Mapping configuration that uses wildcards:
```yaml
- inputs:
  # …
```
@@ -334,7 +334,7 @@ Original JSON:
```json
}
```
-Initial mapping configuration using wildcards:
+Initial mapping configuration that uses wildcards:

-This revised mapping accurately captures the necessary fields by correctly specifying the paths to include the nested `Mid` object, ensuring that the asterisks work effectively across different levels of the JSON structure.
+This revised mapping accurately captures the necessary fields. It correctly specifies the paths to include the nested `Mid` object, which ensures that the asterisks work effectively across different levels of the JSON structure.
### Second rule vs. specialization
@@ -454,15 +454,15 @@ Resolution of overlapping mappings by dataflows:
* The evaluation progresses from the top rule in the mapping definition.
* If a new mapping resolves to the same fields as a previous rule, the following conditions apply:
-  * A `Rank` is calculated for each resolved input based on the number of segments the wildcard captures. For instance, if the `Captured Segments` are `Properties.Opacity`, the `Rank` is 2. If only `Opacity`, the `Rank` is 1. A mapping without wildcards has a `Rank` of 0.
+  * A `Rank` is calculated for each resolved input based on the number of segments the wildcard captures. For instance, if the `Captured Segments` are `Properties.Opacity`, the `Rank` is 2. If it's only `Opacity`, the `Rank` is 1. A mapping without wildcards has a `Rank` of 0.
  * If the `Rank` of the latter rule is equal to or higher than the previous rule, a dataflow treats it as a `Second Rule`.
-  * Otherwise, it treats the configuration as a `Specialization`.
+  * Otherwise, the dataflow treats the configuration as a `Specialization`.
For example, the mapping that directs `Opacity.Max` and `Opacity.Min` to an empty output has a `Rank` of 0. Because the second rule has a lower `Rank` than the previous one, it's considered a specialization and overrides the previous rule, which would calculate a value for `Opacity`.
### Wildcards in contextualization datasets
-While a detailed explanation of contextualization datasets is explained later, let's see now how they can be used with wildcards through an example. Consider a dataset named `position` that contains the following record:
+Now, let's see how contextualization datasets can be used with wildcards through an example. Consider a dataset named `position` that contains the following record:
```json
{
```
@@ -480,7 +480,7 @@ In an earlier example, we used a specific field from this dataset:
```yaml
output: Employment.BaseSalary
```
-This mapping copies `BaseSalary` from the context dataset directly into the `Employment` section of the output record. If you want to automate the process and include all fields from the `position` dataset into the `Employment` section, you can utilize wildcards:
+This mapping copies `BaseSalary` from the context dataset directly into the `Employment` section of the output record. If you want to automate the process and include all fields from the `position` dataset into the `Employment` section, you can use wildcards: