
Commit c3c3e6d

committed
edit pass: processing-and-routing-data-batch2
1 parent abdd890 commit c3c3e6d

File tree

3 files changed: +37 −37 lines changed


articles/iot-operations/connect-to-cloud/concept-dataflow-conversions.md

Lines changed: 18 additions & 18 deletions
@@ -95,7 +95,7 @@ The mapper is designed to be flexible by converting internal types into output t
 
 ### Explicit type conversions
 
-While the automatic conversions operate as you might expect based on common implementation practices, there are instances where the right conversion can't be determined automatically and results in an *unsupported* error. To address these situations, several conversion functions are available to explicitly define how data should be transformed. These functions provide more control over how data is converted and ensure that data integrity is maintained even when automatic methods fall short.
+Although the automatic conversions operate as you might expect based on common implementation practices, there are instances where the right conversion can't be determined automatically and results in an *unsupported* error. To address these situations, several conversion functions are available to explicitly define how data should be transformed. These functions provide more control over how data is converted and help maintain data integrity even when automatic methods fall short.
 
 ### Use a conversion formula with types
 
@@ -112,11 +112,11 @@ If a formula is specified, the data types available for use in formulas are limi
 
 `Map` and `byte` can't participate in formulas.
 
-Types related to time (`datetime`, `time`, and `duration`) are converted into integer values representing time in seconds. After formula evaluation, results are stored in the internal representation and not converted back. For example, `datetime` converted to seconds remains an integer. If the value is to be used in `datetime` fields, an explicit conversion method must be applied. An example is converting the value into an ISO8601 string that's automatically converted to the `datetime` type of the output serialization format.
+Types related to time (`datetime`, `time`, and `duration`) are converted into integer values that represent time in seconds. After formula evaluation, results are stored in the internal representation and not converted back. For example, `datetime` converted to seconds remains an integer. If the value will be used in `datetime` fields, an explicit conversion method must be applied. An example is converting the value into an ISO8601 string that's automatically converted to the `datetime` type of the output serialization format.
 
 ### Use irregular types
 
-Special considerations apply to types like arrays and *missing value*:
+Special considerations apply to types like arrays and *missing value*.
 
 ### Arrays
 
@@ -169,7 +169,7 @@ Missing value is a special type used in scenarios, such as:
 * Handling missing fields in the input by providing an alternative value.
 * Conditionally removing a field based on its presence.
 
-Example mapping using missing value:
+Example mapping that uses a missing value:
 
 ```json
 {
@@ -191,7 +191,7 @@ The input record contains the `BaseSalary` field, but possibly that's optional.
 }
 ```
 
-A mapping can check if the field is present in the input record. If found, the output receives that existing value. Otherwise, the output receives the value from the context dataset. For example:
+A mapping can check if the field is present in the input record. If the field is found, the output receives that existing value. Otherwise, the output receives the value from the context dataset. For example:
 
 ```yaml
 - inputs:
@@ -223,23 +223,23 @@ Dataflows offer a wide range of out-of-the-box conversion functions that allow u
 
 | Conversion | Formula | Function name |
 | --- | --- | --- |
-| Celsius to Fahrenheit | F = (C * 9/5) + 32 | cToF |
-| PSI to bar | Bar = PSI * 0.0689476 | psiToBar |
-| Inch to cm | Cm = inch * 2.54 | inToCm |
-| Foot to meter | Meter = foot * 0.3048 | ftToM |
-| Lbs to kg | Kg = lbs * 0.453592 | lbToKg |
-| Gallons to liters | Liters = gallons * 3.78541 | galToL |
+| Celsius to Fahrenheit | F = (C * 9/5) + 32 | `cToF` |
+| PSI to bar | Bar = PSI * 0.0689476 | `psiToBar` |
+| Inch to cm | Cm = inch * 2.54 | `inToCm` |
+| Foot to meter | Meter = foot * 0.3048 | `ftToM` |
+| Lbs to kg | Kg = lbs * 0.453592 | `lbToKg` |
+| Gallons to liters | Liters = gallons * 3.78541 | `galToL` |
 
 In addition to these unidirectional conversions, we also support the reverse calculations:
 
 | Conversion | Formula | Function name |
 | --- | --- | --- |
-| Fahrenheit to Celsius | C = (F - 32) * 5/9 | fToC |
-| Bar to PSI | PSI = bar / 0.0689476 | barToPsi |
-| Cm to inch | Inch = cm / 2.54 | cmToIn |
-| Meter to foot | Foot = meter / 0.3048 | mToFt |
-| Kg to lbs | Lbs = kg / 0.453592 | kgToLb |
-| Liters to gallons | Gallons = liters / 3.78541 | lToGal |
+| Fahrenheit to Celsius | C = (F - 32) * 5/9 | `fToC` |
+| Bar to PSI | PSI = bar / 0.0689476 | `barToPsi` |
+| Cm to inch | Inch = cm / 2.54 | `cmToIn` |
+| Meter to foot | Foot = meter / 0.3048 | `mToFt` |
+| Kg to lbs | Lbs = kg / 0.453592 | `kgToLb` |
+| Liters to gallons | Gallons = liters / 3.78541 | `lToGal` |
 
 These functions are designed to simplify the conversion process. They allow users to input values in one unit and receive the corresponding value in another unit effortlessly.
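As a minimal sketch of how one of these built-in functions might be applied in a mapping (the `Temperature` field names are hypothetical, and calling `cToF` inside `expression` is an assumption based on the expression syntax used in the other mapping examples in these articles):

```yaml
# Hypothetical mapping: apply the built-in cToF conversion to the
# first (and only) input, $1, to turn a Celsius reading into Fahrenheit.
- inputs:
    - Temperature.Celsius      # assumed input field name
  output: Temperature.Fahrenheit
  expression: cToF($1)
```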

@@ -285,7 +285,7 @@ Because `Exponentiation` has the highest precedence, it's executed first unless
 
 `Addition` and `Subtraction` are considered weaker operations compared to the operations in the previous group:
 
-* `$1 + 2 * 3` results in `$1 + 6` because `2 * 3` is executed first owing to the higher precedence of `multiplication`.
+* `$1 + 2 * 3` results in `$1 + 6` because `2 * 3` is executed first because of the higher precedence of `multiplication`.
 * `($1 + 2) * 3` prioritizes `Addition` before `Multiplication`.
 
 | Operator | Description |
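To make the precedence rules concrete, here's a minimal sketch of a mapping formula where they matter (the `Sensor` field names are hypothetical; the formula itself is the Celsius-to-Fahrenheit conversion from the table above):

```yaml
# Hypothetical mapping that writes the Celsius-to-Fahrenheit formula by
# hand. Because Multiplication has higher precedence than Addition,
# $1 * 9 / 5 + 32 evaluates the same way as the parenthesized form;
# the parentheses below only make the intent explicit.
- inputs:
    - Sensor.Celsius           # assumed input field name
  output: Sensor.Fahrenheit
  expression: ($1 * 9 / 5) + 32
```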

articles/iot-operations/connect-to-cloud/concept-dataflow-enrich.md

Lines changed: 3 additions & 3 deletions
@@ -14,7 +14,7 @@ ms.date: 08/13/2024
 
 [!INCLUDE [public-preview-note](../includes/public-preview-note.md)]
 
-You can enrich data by using the *contextualization datasets* function. When incoming records are processed, these datasets can be queried based on conditions that relate to the fields of the incoming record. This capability allows for dynamic interactions. Data from these datasets can be used to supplement information in the output fields and participate in complex calculations during the mapping process.
+You can enrich data by using the *contextualization datasets* function. When incoming records are processed, you can query these datasets based on conditions that relate to the fields of the incoming record. This capability allows for dynamic interactions. Data from these datasets can be used to supplement information in the output fields and participate in complex calculations during the mapping process.
 
 For example, consider the following dataset with a few records, represented as JSON records:
 
@@ -86,7 +86,7 @@ Then use the references mixed:
 - $context(permission).NightShift # - - $2
 ```
 
-The input references use the key of the dataset like `position` or `permission`. If the key in DSS is inconvenient to use, an alias can be defined:
+The input references use the key of the dataset like `position` or `permission`. If the key in DSS is inconvenient to use, you can define an alias:
 
 ```yaml
 datasets:
@@ -97,4 +97,4 @@ datasets:
 expression: $1 == $2
 ```
 
-Which configuration renames the dataset with key `datasets.parag10.rule42` to `position`.
+The configuration renames the dataset with the key `datasets.parag10.rule42` to `position`.
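As a sketch of how the alias might then be used (the `Employment.BaseSalary` output mirrors the example in the mapping article, and the reference syntax is assumed to follow the `$context(position).BaseSalary` form shown earlier):

```yaml
# After the alias maps datasets.parag10.rule42 to "position",
# the dataset can be referenced by the shorter alias in a mapping input.
- inputs:
    - $context(position).BaseSalary
  output: Employment.BaseSalary
```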

articles/iot-operations/connect-to-cloud/concept-dataflow-mapping.md

Lines changed: 16 additions & 16 deletions
@@ -47,11 +47,11 @@ Compare it with the output record:
 
 In the output record, the following changes are made to the input record data:
 
-* **Fields renamed**: The **Birth Date** field is now **Date of Birth**.
-* **Fields restructured**: Both **Name** and **Date of Birth** are grouped under the new **Employee** category.
-* **Field deleted**: The **Place of birth** field is removed because it isn't present in the output.
-* **Field added**: The **Base Salary** field is a new field in the **Employment** category.
-* **Field values changed or merged**: The **Position** field in the output combines the **Position** and **Office** fields from the input.
+* **Fields renamed**: The `Birth Date` field is now `Date of Birth`.
+* **Fields restructured**: Both `Name` and `Date of Birth` are grouped under the new `Employee` category.
+* **Field deleted**: The `Place of birth` field is removed because it isn't present in the output.
+* **Field added**: The `Base Salary` field is a new field in the `Employment` category.
+* **Field values changed or merged**: The `Position` field in the output combines the `Position` and `Office` fields from the input.
 
 The transformations are achieved through *mapping*, which typically involves:
 
@@ -144,7 +144,7 @@ The primary function of escaping in a dot-notated path is to accommodate the use
 - 'Payload."Tag.10".Value'
 ```
 
-In the previous example, the path consists of three segments: `Payload`, `Tag.10`, and `Value`. The outer single quotation marks (`'`) are necessary because of YAML syntax rules, which allows the inclusion of double quotation marks within the string.
+In the previous example, the path consists of three segments: `Payload`, `Tag.10`, and `Value`. The outer single quotation marks (`'`) are necessary because of YAML syntax rules, which allow the inclusion of double quotation marks within the string.
 
 ### Escaping rules in dot notation
 
@@ -178,7 +178,7 @@ In the previous example, the path consists of three segments: `Payload`, `Tag.10
 
 ## Wildcard
 
-In many scenarios, the output record closely resembles the input record, with only minor modifications required. When dealing with records that contain numerous fields, manually specifying mappings for each field can become tedious. Wildcards simplify this process by allowing for generalized mappings that can automatically apply to multiple fields.
+In many scenarios, the output record closely resembles the input record, with only minor modifications required. When you deal with records that contain numerous fields, manually specifying mappings for each field can become tedious. Wildcards simplify this process by allowing for generalized mappings that can automatically apply to multiple fields.
 
 Let's consider a basic scenario to understand the use of asterisks in mappings:
 
@@ -218,7 +218,7 @@ Original JSON:
 }
 ```
 
-Mapping configuration using wildcards:
+Mapping configuration that uses wildcards:
 
 ```yaml
 - inputs:
@@ -247,7 +247,7 @@ Resulting JSON:
 
 ### Wildcard placement
 
-When you place a wildcard, the following rules must be followed:
+When you place a wildcard, you must follow these rules:
 
 * **Single asterisk per dataDestination:** Only one asterisk (`*`) is allowed within a single path.
 * **Full segment matching:** The asterisk must always match an entire segment of the path. It can't be used to match only a part of a segment, such as `path1.partial*.path3`.
@@ -277,7 +277,7 @@ Original JSON:
 }
 ```
 
-Mapping configuration using wildcards:
+Mapping configuration that uses wildcards:
 
 ```yaml
 - inputs:
@@ -334,7 +334,7 @@ Original JSON:
 }
 ```
 
-Initial mapping configuration using wildcards:
+Initial mapping configuration that uses wildcards:
 
 ```yaml
 - inputs:
@@ -369,7 +369,7 @@ Corrected mapping configuration:
 expression: ($1, $2, $3, $4)
 ```
 
-This revised mapping accurately captures the necessary fields by correctly specifying the paths to include the nested `Mid` object, ensuring that the asterisks work effectively across different levels of the JSON structure.
+This revised mapping accurately captures the necessary fields. It correctly specifies the paths to include the nested `Mid` object, which ensures that the asterisks work effectively across different levels of the JSON structure.
 
 ### Second rule vs. specialization
 
@@ -454,15 +454,15 @@ Resolution of overlapping mappings by dataflows:
 
 * The evaluation progresses from the top rule in the mapping definition.
 * If a new mapping resolves to the same fields as a previous rule, the following conditions apply:
-* A `Rank` is calculated for each resolved input based on the number of segments the wildcard captures. For instance, if the `Captured Segments` are `Properties.Opacity`, the `Rank` is 2. If only `Opacity`, the `Rank` is 1. A mapping without wildcards has a `Rank` of 0.
+* A `Rank` is calculated for each resolved input based on the number of segments the wildcard captures. For instance, if the `Captured Segments` are `Properties.Opacity`, the `Rank` is 2. If it's only `Opacity`, the `Rank` is 1. A mapping without wildcards has a `Rank` of 0.
 * If the `Rank` of the latter rule is equal to or higher than the previous rule, a dataflow treats it as a `Second Rule`.
-* Otherwise, it treats the configuration as a `Specialization`.
+* Otherwise, the dataflow treats the configuration as a `Specialization`.
 
 For example, the mapping that directs `Opacity.Max` and `Opacity.Min` to an empty output has a `Rank` of 0. Because the second rule has a lower `Rank` than the previous one, it's considered a specialization and overrides the previous rule, which would calculate a value for `Opacity`.
 
 ### Wildcards in contextualization datasets
 
-While a detailed explanation of contextualization datasets is explained later, let's see now how they can be used with wildcards through an example. Consider a dataset named `position` that contains the following record:
+Now, let's see how contextualization datasets can be used with wildcards through an example. Consider a dataset named `position` that contains the following record:
 
 ```json
 {
@@ -480,7 +480,7 @@ In an earlier example, we used a specific field from this dataset:
 output: Employment.BaseSalary
 ```
 
-This mapping copies `BaseSalary` from the context dataset directly into the `Employment` section of the output record. If you want to automate the process and include all fields from the `position` dataset into the `Employment` section, you can utilize wildcards:
+This mapping copies `BaseSalary` from the context dataset directly into the `Employment` section of the output record. If you want to automate the process and include all fields from the `position` dataset into the `Employment` section, you can use wildcards:
 
 ```yaml
 - inputs:
