
Commit 8764023

Merge pull request #290540 from kgremban/ga-schemascrub

Schema registry feedback from bash

2 parents e5964b3 + 3638738

File tree

1 file changed (+20 −4 lines changed)


articles/iot-operations/connect-to-cloud/concept-schema-registry.md

Lines changed: 20 additions & 4 deletions
@@ -4,7 +4,7 @@ description: Learn how schema registry handles message schemas to work with Azur
 author: kgremban
 ms.author: kgremban
 ms.topic: conceptual
-ms.date: 10/30/2024
+ms.date: 11/14/2024

 #CustomerIntent: As an operator, I want to understand how I can use message schemas to filter and transform messages.
 ---
@@ -62,7 +62,7 @@ JSON:

 Delta:

-```delta
+```json
 {
 "$schema": "Delta/1.0",
 "type": "object",
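For orientation, a complete Delta-format schema might continue along these lines. This is only a sketch: everything beyond the `$schema` and `type` lines shown in the diff (the `fields` array, its field names, and the type names) is an assumption for illustration, not taken from the source.

```json
{
  "$schema": "Delta/1.0",
  "type": "object",
  "fields": [
    { "name": "asset_id", "type": "utf8" },
    { "name": "temperature", "type": "float64" }
  ]
}
```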
@@ -87,7 +87,7 @@ Message schemas are used in all three phases of a dataflow: defining the source

 ### Input schema

-Each dataflow source can optionally specify a message schema. If a schema is defined for a dataflow source, any incoming messages that don't match the schema are dropped.
+Each dataflow source can optionally specify a message schema. Currently, dataflows don't perform runtime validation on source message schemas.

 Asset sources have a predefined message schema that was created by the connector for OPC UA.
@@ -101,10 +101,19 @@ The operations experience uses the input schema as a starting point for your dat

 ### Output schema

-Output schemas are associated with dataflow destinations are only used for dataflows that select local storage, Fabric, Azure Storage (ADLS Gen2), or Azure Data Explorer as the destination endpoint. Currently, Azure IoT Operations experience only supports Parquet output for output schemas.
+Output schemas are associated with dataflow destinations.
+
+In the operations experience portal, you can configure output schemas for the following destination endpoints that support Parquet output:
+
+* local storage
+* Fabric OneLake
+* Azure Storage (ADLS Gen2)
+* Azure Data Explorer

 Note: The Delta schema format is used for both Parquet and Delta output.

+If you use Bicep or Kubernetes, you can configure output schemas using JSON output for MQTT and Kafka destination endpoints. MQTT- and Kafka-based destinations don't support Delta format.
+
 For these dataflows, the operations experience applies any transformations to the input schema then creates a new schema in Delta format. When the dataflow custom resource (CR) is created, it includes a `schemaRef` value that points to the generated schema stored in the schema registry.

 To upload an output schema, see [Upload schema](#upload-schema).
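The `schemaRef` mentioned in the hunk above sits in the dataflow CR's destination configuration. The fragment below is a hypothetical sketch only: the surrounding field names (`destinationSettings`, `endpointRef`, `dataDestination`, `serializationFormat`) and the `aio-sr://` reference format are assumptions for illustration, not confirmed by the source.

```yaml
# Hypothetical sketch of a dataflow CR destination; only schemaRef is named in the source.
destinationSettings:
  endpointRef: myFabricEndpoint      # assumed endpoint reference name
  dataDestination: sensorData        # assumed table/path name
serializationFormat: Delta           # assumed field; Delta format per the doc
schemaRef: aio-sr://mynamespace/myschema:1   # assumed reference format to the registry-stored schema
```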
@@ -131,6 +140,13 @@ The following example creates a schema called `myschema` from inline content and
 az iot ops schema create -n myschema -g myresourcegroup --registry myregistry --format delta --type message --version-content '{\"hello\": \"world\"}' --ver 14
 ```

+>[!TIP]
+>If you don't know your registry name, use the `schema registry list` command to query for it. For example:
+>
+>```azurecli
+>az iot ops schema registry list -g myresourcegroup --query "[].{Name:name}" -o tsv
+>```
+
 Once the `create` command is completed, you should see a blob in your storage account container with the schema content. The name for the blob is in the format `schema-namespace/schema/version`.

 You can see more options with the helper command `az iot ops schema -h`.
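To illustrate the blob naming convention stated above, this locally runnable snippet composes the `schema-namespace/schema/version` path for the values used in the `create` example. The namespace value is hypothetical; the schema name and version come from the command in the hunk.

```shell
# Compose the storage blob path for a registered schema version.
# Format from the doc: schema-namespace/schema/version
namespace="mynamespace"   # hypothetical schema namespace
schema="myschema"         # schema name from the create example
version="14"              # version from the create example

blob_path="${namespace}/${schema}/${version}"
echo "${blob_path}"   # → mynamespace/myschema/14
```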
