Commit 99db8a5

contd
1 parent b70b4b4 commit 99db8a5

File tree: 1 file changed (+12 −3 lines)

articles/iot-operations/connect-to-cloud/concept-schema-registry.md

Lines changed: 12 additions & 3 deletions
@@ -11,7 +11,7 @@ ms.date: 10/30/2024
 
 # Understand message schemas
 
-Schema registry (preview), a feature provided by Azure Device Registry, is a synchronized repository in the cloud and at the edge. The schema registry stores the definitions of messages coming from edge assets, and then exposes an API to access those schemas at the edge.
+Schema registry, a feature provided by Azure Device Registry, is a synchronized repository in the cloud and at the edge. The schema registry stores the definitions of messages coming from edge assets, and then exposes an API to access those schemas at the edge.
 
 The connector for OPC UA can create message schemas and add them to the schema registry, or customers can upload schemas in the operations experience web UI or by using ARM/Bicep templates.
 
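The hunk above describes message schemas stored in the schema registry. As an illustration (not part of this commit), a message schema for an edge asset could be a JSON Schema document along these lines; the property names (`assetId`, `temperature`, `timestamp`) are hypothetical:

```json
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "type": "object",
  "properties": {
    "assetId": { "type": "string" },
    "temperature": { "type": "number" },
    "timestamp": { "type": "string", "format": "date-time" }
  }
}
```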
@@ -62,7 +62,7 @@ JSON:
 
 Delta:
 
-```delta
+```json
 {
   "$schema": "Delta/1.0",
   "type": "object",
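The diff context cuts the Delta example off after its first lines. For context only, a complete Delta-format schema typically continues with a `properties` array; the field names below are hypothetical, and the exact type keywords (`utf8`, `float64`) are an assumption about the Delta/1.0 format rather than something shown in this commit:

```json
{
  "$schema": "Delta/1.0",
  "type": "object",
  "properties": [
    {
      "name": "assetId",
      "type": { "type": "utf8" }
    },
    {
      "name": "temperature",
      "type": { "type": "float64" }
    }
  ]
}
```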
@@ -101,10 +101,19 @@ The operations experience uses the input schema as a starting point for your dat
 
 ### Output schema
 
-Output schemas are associated with dataflow destinations and are only used for dataflows that select local storage, Fabric, Azure Storage (ADLS Gen2), or Azure Data Explorer as the destination endpoint. Currently, Azure IoT Operations experience only supports Parquet output for output schemas.
+Output schemas are associated with dataflow destinations.
+
+In the operations experience portal, you can configure output schemas for the following destination endpoints that support Parquet output:
+
+* local storage
+* Fabric
+* Azure Storage (ADLS Gen2)
+* Azure Data Explorer
 
 Note: The Delta schema format is used for both Parquet and Delta output.
 
+If you use Bicep or Kubernetes, you can configure output schemas using JSON output for MQTT and Kafka destination endpoints. MQTT and Kafka destinations don't support Delta format.
+
 For these dataflows, the operations experience applies any transformations to the input schema, and then creates a new schema in Delta format. When the dataflow custom resource (CR) is created, it includes a `schemaRef` value that points to the generated schema stored in the schema registry.
 
 To upload an output schema, see [Upload schema](#upload-schema).
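The new text mentions a `schemaRef` value in the dataflow custom resource. A minimal sketch of where such a reference might sit in a Kubernetes dataflow CR is below; the field names, endpoint/table names, and the `aio-sr://` URI shape are all assumptions made for illustration, not details taken from this commit:

```yaml
# Hypothetical fragment of an Azure IoT Operations dataflow CR.
# Field names and the aio-sr:// reference format are assumed for illustration.
spec:
  operations:
    - operationType: BuiltInTransformation
      builtInTransformationSettings:
        serializationFormat: Parquet                      # Delta schema format backs Parquet output
        schemaRef: aio-sr://my-namespace/my-output-schema:1  # generated schema in the schema registry
    - operationType: Destination
      destinationSettings:
        endpointRef: adls-endpoint        # endpoint name assumed
        dataDestination: telemetry-table  # table name assumed
```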
