articles/iot-operations/connect-to-cloud/concept-schema-registry.md (20 additions & 4 deletions)
@@ -4,7 +4,7 @@ description: Learn how schema registry handles message schemas to work with Azur
 author: kgremban
 ms.author: kgremban
 ms.topic: conceptual
-ms.date: 10/30/2024
+ms.date: 11/14/2024

 #CustomerIntent: As an operator, I want to understand how I can use message schemas to filter and transform messages.
 ---
@@ -62,7 +62,7 @@ JSON:

 Delta:

-```delta
+```json
 {
   "$schema": "Delta/1.0",
   "type": "object",
@@ -87,7 +87,7 @@ Message schemas are used in all three phases of a dataflow: defining the source

 ### Input schema

-Each dataflow source can optionally specify a message schema. If a schema is defined for a dataflow source, any incoming messages that don't match the schema are dropped.
+Each dataflow source can optionally specify a message schema. Currently, dataflows don't perform runtime validation on source message schemas.

 Asset sources have a predefined message schema that was created by the connector for OPC UA.

@@ -101,10 +101,19 @@ The operations experience uses the input schema as a starting point for your dat

 ### Output schema

-Output schemas are associated with dataflow destinations are only used for dataflows that select local storage, Fabric, Azure Storage (ADLS Gen2), or Azure Data Explorer as the destination endpoint. Currently, Azure IoT Operations experience only supports Parquet output for output schemas.
+Output schemas are associated with dataflow destinations.
+
+In the operations experience portal, you can configure output schemas for the following destination endpoints that support Parquet output:
+
+* local storage
+* Fabric OneLake
+* Azure Storage (ADLS Gen2)
+* Azure Data Explorer

 Note: The Delta schema format is used for both Parquet and Delta output.

+If you use Bicep or Kubernetes, you can configure output schemas using JSON output for MQTT and Kafka destination endpoints. MQTT- and Kafka-based destinations don't support Delta format.
+
 For these dataflows, the operations experience applies any transformations to the input schema then creates a new schema in Delta format. When the dataflow custom resource (CR) is created, it includes a `schemaRef` value that points to the generated schema stored in the schema registry.

 To upload an output schema, see [Upload schema](#upload-schema).
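To make the `schemaRef` wiring concrete, here's a hypothetical sketch of a dataflow custom resource in Kubernetes YAML. The API version, property names (`builtInTransformationSettings`, `serializationFormat`), the `aio-sr://` reference format, and the endpoint names are assumptions for illustration, not confirmed by this diff:

```yaml
# Hypothetical sketch: property names, API version, and the aio-sr://
# reference format are assumptions, not confirmed by this diff.
apiVersion: connectivity.iotoperations.azure.com/v1beta1
kind: Dataflow
metadata:
  name: my-dataflow
spec:
  operations:
    - operationType: BuiltInTransformation
      builtInTransformationSettings:
        serializationFormat: Parquet                     # Delta-format schema backs Parquet output
        schemaRef: aio-sr://exampleNamespace/myschema:1  # points at the generated schema in the registry
    - operationType: Destination
      destinationSettings:
        endpointRef: adlsEndpoint                        # assumed ADLS Gen2 dataflow endpoint
        dataDestination: sensorData
```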
@@ -131,6 +140,13 @@ The following example creates a schema called `myschema` from inline content and
 Once the `create` command is completed, you should see a blob in your storage account container with the schema content. The name for the blob is in the format `schema-namespace/schema/version`.

 You can see more options with the helper command `az iot ops schema -h`.
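The inline-content example itself was elided by the diff view. As a sketch only, a create invocation could look like the following; every flag name here is an assumption to verify against `az iot ops schema create -h`:

```azurecli
# Sketch only: verify flag names with `az iot ops schema create -h`.
az iot ops schema create \
  --name myschema \
  --registry myregistry \
  --resource-group myresourcegroup \
  --format json \
  --type message \
  --version-content '{"hello": "world"}'
```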