Observability provides visibility into every layer of your Azure IoT Operations configuration. It gives you insight into the actual behavior of your deployment and any issues that arise, which increases the effectiveness of site reliability engineering. Azure IoT Operations offers observability through custom curated Grafana dashboards that are hosted in Azure. These dashboards are powered by Azure Monitor managed service for Prometheus and by Container Insights. This guide shows you how to set up Azure Managed Prometheus and Grafana and enable monitoring for your Azure Arc cluster.
Complete the steps in this article *before* deploying Azure IoT Operations to your cluster.
## Prerequisites
* An Arc-enabled Kubernetes cluster.
* Azure CLI installed on your cluster machine. For instructions, see [How to install the Azure CLI](/cli/azure/install-azure-cli).
* Helm installed on your cluster machine. For instructions, see [Install Helm](https://helm.sh/docs/intro/install/).
* Kubectl installed on your cluster machine. For instructions, see [Install Kubernetes tools](https://kubernetes.io/docs/tasks/tools/).
You can use dataflow conversions to transform data in Azure IoT Operations. The *conversion* element in a dataflow is used to compute values for output fields. You can use input fields, available operations, data types, and type conversions in dataflow conversions.
# [Bicep](#tab/bicep)

```bicep
// ...
expression: '($1 + $2) / 2'
```

# [Kubernetes (preview)](#tab/kubernetes)

```yaml
- inputs:
  # ...
  expression: '($1 + $2) / 2'
```

---
# [Bicep](#tab/bicep)

```bicep
// ...
expression: '($1, $2, $3, $4)'
```

# [Kubernetes (preview)](#tab/kubernetes)

```yaml
- inputs:
  # ...
  expression: '($1, $2, $3, $4)'
```

---

In this example, the conversion results in an array containing the values of the four input fields (`$1` through `$4`).
## Data types
Different serialization formats support various data types. For instance, JSON offers a few primitive types: string, number, Boolean, and null. It also includes arrays of these primitive types.
When the mapper reads an input property, it converts it into an internal type. This conversion is necessary for holding the data in memory until it's written out into an output field. The conversion to an internal type happens regardless of whether the input and output serialization formats are the same.
### Input record fields
When an input record field is read, its underlying type is converted into one of these internal type variants. The internal representation is versatile enough to handle most input types with minimal or no conversion.
For some formats, surrogate types are used. For example, JSON doesn't have a `datetime` type and instead stores `datetime` values as strings formatted according to ISO8601. When the mapper reads such a field, the internal representation remains a string.
The mapper is designed to be flexible by converting internal types into output types. For example, a Boolean value is:

* Converted to `0`/`1` if the output field is numerical.
* Converted to `true`/`false` if the output field is a string.
### Use a conversion formula with types
In mappings, an optional formula can specify how data from the input is processed before being written to the output field. If no formula is specified, the mapper copies the input field to the output by using the internal type and conversion rules.
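For illustration, here's a minimal sketch of both cases in the Kubernetes (YAML) form. The `Temperature` and `TemperatureF` field names are hypothetical, not taken from the original examples:

```yaml
# No expression: the input value is copied to the output using the internal type rules
- inputs:
  - Temperature # - $1
  output: Temperature

# With a formula: the value is transformed before it's written to the output field
- inputs:
  - Temperature # - $1
  output: TemperatureF
  expression: '($1 * 9 / 5) + 32'
```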
# [Bicep](#tab/bicep)

```bicep
// ...
expression: 'min($1)'
```

# [Kubernetes (preview)](#tab/kubernetes)

```yaml
- inputs:
  # ...
  expression: min($1)
```

---
This configuration selects the smallest value from the `Measurements` array for the output field.
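Based on the fragment above and the pattern of the other examples, a complete Kubernetes (YAML) version of this mapping might look roughly like the following; the singular `Measurement` output name is an assumption:

```yaml
- inputs:
  - Measurements # - $1, the input array
  output: Measurement # assumed output field name
  expression: min($1)
```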
Arrays can also be created from multiple single values:
# [Bicep](#tab/bicep)
```bicep
// ...
output: 'stats'
expression: '($1, $2, $3, $4)'
```

# [Kubernetes (preview)](#tab/kubernetes)

```yaml
- inputs:
  # ...
  output: stats
  expression: '($1, $2, $3, $4)'
```

---
# [Bicep](#tab/bicep)

```bicep
// ...
output: 'BaseSalary'
expression: 'if($1 == (), $2, $1)'
```

# [Kubernetes (preview)](#tab/kubernetes)

```yaml
- inputs:
  # ...
  output: BaseSalary
  expression: if($1 == (), $2, $1)
```

---

The conversion uses the `if` function, which takes three parameters: a condition, the value returned when the condition is true, and the value returned when it's false.
## Available functions
Dataflows provide a set of built-in functions that can be used in conversion formulas. These functions can be used to perform common operations like arithmetic, comparison, and string manipulation. The available functions are:
| Function | Description | Examples |
|----------|-------------|----------|
| `min` | Return the minimum value from an array. | `min(2, 3, 1)` returns `1`, `min($1)` returns the minimum value from the array `$1` |
| `max` | Return the maximum value from an array. | `max(2, 3, 1)` returns `3`, `max($1)` returns the maximum value from the array `$1` |
| `if` | Return one of two values based on a condition. | `if($1 > 10, 'High', 'Low')` returns `'High'` if `$1` is greater than `10`, otherwise `'Low'` |
| `len` | Return the character length of a string or the number of elements in a tuple. | `len("Azure")` returns `5`, `len(1, 2, 3)` returns `3`, `len($1)` returns the number of elements in the array `$1` |
| `floor` | Return the largest integer less than or equal to a number. | `floor(2.9)` returns `2` |
| `round` | Return the nearest integer to a number, rounding half-way cases away from 0.0. | `round(2.5)` returns `3` |
| `ceil` | Return the smallest integer greater than or equal to a number. | `ceil(2.1)` returns `3` |
| `scale` | Scale a value from one range to another. | `scale($1, 0, 10, 0, 100)` scales the input value from the range 0 to 10 to the range 0 to 100 |
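As a rough illustration of how these functions appear in conversion formulas (the field names below are hypothetical):

```yaml
# Map a 10-bit sensor reading onto a 0-100 percentage
- inputs:
  - RawLevel # - $1
  output: LevelPercent
  expression: 'scale($1, 0, 1023, 0, 100)'

# Label a reading based on a threshold
- inputs:
  - Temperature # - $1
  output: TemperatureStatus
  expression: if($1 > 30, 'High', 'Low')
```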
### Conversion functions
Dataflows provide several built-in conversion functions for common unit conversions like temperature, pressure, length, weight, and volume. Here are some examples:
| Conversion | Formula | Function name |
| --- | --- | --- |
These functions are designed to simplify the conversion process. They allow users to input values in one unit and receive the corresponding value in another unit effortlessly.
Additionally, you can define your own conversion functions using basic mathematical formulas. The system supports operators like addition (`+`), subtraction (`-`), multiplication (`*`), and division (`/`). These operators follow standard rules of precedence, which can be adjusted using parentheses to ensure the correct order of operations. This allows you to customize unit conversions to meet specific needs.
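For example, a custom Fahrenheit-to-Celsius formula only needs these operators and parentheses; the field names are hypothetical:

```yaml
- inputs:
  - TemperatureF # - $1
  output: TemperatureC
  expression: '($1 - 32) * 5 / 9' # parentheses force the subtraction to run first
```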
You can enrich data by using the *contextualization datasets* function. When incoming records are processed, you can query these datasets based on conditions that relate to the fields of the incoming record. This capability allows for dynamic interactions. Data from these datasets can be used to supplement information in the output fields and participate in complex calculations during the mapping process.
To load sample data into the state store, use the [state store CLI](https://github.com/Azure-Samples/explore-iot-operations/tree/main/tools/state-store-cli).
For example, consider the following dataset with a few records, represented as JSON records:
```json
{
  "Position": "Analyst",
  "BaseSalary": 70000,
  "WorkingHours": "Regular"
}
```
The mapper accesses the reference dataset stored in the Azure IoT Operations [state store](../create-edge-apps/concept-about-state-store-protocol.md) by using a key value based on a *condition* specified in the mapping configuration. Key names in the state store correspond to a dataset in the dataflow configuration.
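As a rough sketch, a dataset keyed on `position` with a matching condition might be configured like this in the Kubernetes (YAML) form. The `key`, `inputs`, and `expression` property names and the `$source`/`$context` reference syntax are assumptions based on the description above:

```yaml
datasets:
- key: position # key used to look up the dataset in the state store (assumed schema)
  inputs:
  - $source.Position  # - $1, field from the incoming record (assumed reference form)
  - $context.Position # - $2, field from the stored dataset (assumed reference form)
  expression: $1 == $2 # condition that selects the matching dataset record
```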
# [Bicep](#tab/bicep)
```bicep
datasets: [
  // ...
]
```

# [Kubernetes (preview)](#tab/kubernetes)

```yaml
datasets:
  # ...
```

---
When a new record is being processed, the mapper performs the following steps:
* **Data request:** The mapper sends a request to the state store to retrieve the dataset stored under the key `Position`.
* **Record matching:** The mapper then queries this dataset to find the first record where the `Position` field in the dataset matches the `Position` field of the incoming record.
# [Bicep](#tab/bicep)
```bicep
{
  // ...
}
```

# [Kubernetes (preview)](#tab/kubernetes)

```yaml
- inputs:
  # ...
```
---
In this example, the `WorkingHours` field is added to the output record, while the `BaseSalary` is used conditionally only when the incoming record doesn't contain the `BaseSalary` field (or the value is `null` if it's a nullable field). The request for the contextualization data doesn't happen with every incoming record. The mapper requests the dataset and then it receives notifications from the state store about the changes, while it uses a cached version of the dataset.
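Putting this together, the conditional `BaseSalary` mapping described here might look roughly like the following in the Kubernetes (YAML) form; the `$context(position).BaseSalary` reference syntax is an assumption:

```yaml
- inputs:
  - BaseSalary                    # - $1, from the incoming record (may be missing)
  - $context(position).BaseSalary # - $2, from the contextualization dataset (assumed syntax)
  output: BaseSalary
  expression: if($1 == (), $2, $1) # fall back to the dataset value when the record has none
```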
It's possible to use multiple datasets:
# [Bicep](#tab/bicep)

```bicep
datasets: [
  // ...
]
```

# [Kubernetes (preview)](#tab/kubernetes)

```yaml
datasets:
  # ...
```

---
# [Bicep](#tab/bicep)

```bicep
inputs: [
  // ...
]
```

# [Kubernetes (preview)](#tab/kubernetes)

```yaml
- inputs:
  # ...
```
---
The input references use the key of the dataset like `position` or `permission`. If the key in the state store is inconvenient to use, you can define an alias:
0 commit comments