Commit e42f529: "Another pass" (parent: b7a5683)

File tree: 6 files changed (+710, -431 lines)


articles/iot-operations/connect-to-cloud/concept-schema-registry.md

Lines changed: 168 additions & 0 deletions
@@ -106,3 +106,171 @@ Output schemas are associated with dataflow destinations are only used for dataf
Note: The Delta schema format is used for both Parquet and Delta output.

For these dataflows, the operations experience applies any transformations to the input schema and then creates a new schema in Delta format. When the dataflow custom resource (CR) is created, it includes a `schemaRef` value that points to the generated schema stored in the schema registry.
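
To illustrate what such a reference might look like, here is a hedged Bicep sketch. The `builtInTransformationSettings` placement and the `aio-sr://` reference format are assumptions drawn from related dataflow examples, not something this article defines, so confirm the exact shape against the dataflow how-to for your API version.

```bicep
// Illustrative sketch only: property placement and the 'aio-sr://' reference
// format are assumptions; check the dataflow how-to for the exact shape.
builtInTransformationSettings: {
  serializationFormat: 'Delta'
  schemaRef: 'aio-sr://<SCHEMA_NAMESPACE>/<SCHEMA_NAME>:<VERSION>'
}
```
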
To upload an output schema, see [Upload schema](#upload-schema).

## Upload schema

An input schema can be uploaded in the operations experience portal as [mentioned previously](#input-schema). You can also upload a schema by using a Bicep template.

### Example with Bicep template

Create a `.bicep` file, and add the schema content to it at the top as a variable. This example is a Delta schema that corresponds to the OPC UA data from the [quickstart](../get-started-end-to-end-sample/quickstart-add-assets.md).
```bicep
// Delta schema content matching OPC UA data from quickstart
// For ADLS Gen2, ADX, and Fabric destinations
var opcuaSchemaContent = '''
{
  "$schema": "Delta/1.0",
  "type": "object",
  "properties": {
    "type": "struct",
    "fields": [
      {
        "name": "temperature",
        "type": {
          "type": "struct",
          "fields": [
            {
              "name": "SourceTimestamp",
              "type": "string",
              "nullable": true,
              "metadata": {}
            },
            {
              "name": "Value",
              "type": "integer",
              "nullable": true,
              "metadata": {}
            },
            {
              "name": "StatusCode",
              "type": {
                "type": "struct",
                "fields": [
                  {
                    "name": "Code",
                    "type": "integer",
                    "nullable": true,
                    "metadata": {}
                  },
                  {
                    "name": "Symbol",
                    "type": "string",
                    "nullable": true,
                    "metadata": {}
                  }
                ]
              },
              "nullable": true,
              "metadata": {}
            }
          ]
        },
        "nullable": true,
        "metadata": {}
      },
      {
        "name": "Tag 10",
        "type": {
          "type": "struct",
          "fields": [
            {
              "name": "SourceTimestamp",
              "type": "string",
              "nullable": true,
              "metadata": {}
            },
            {
              "name": "Value",
              "type": "integer",
              "nullable": true,
              "metadata": {}
            },
            {
              "name": "StatusCode",
              "type": {
                "type": "struct",
                "fields": [
                  {
                    "name": "Code",
                    "type": "integer",
                    "nullable": true,
                    "metadata": {}
                  },
                  {
                    "name": "Symbol",
                    "type": "string",
                    "nullable": true,
                    "metadata": {}
                  }
                ]
              },
              "nullable": true,
              "metadata": {}
            }
          ]
        },
        "nullable": true,
        "metadata": {}
      }
    ]
  }
}
'''
```

Then, define the schema resource along with pointers to the existing Azure IoT Operations instance, custom location, and schema registry resources that you have from deploying Azure IoT Operations.

```bicep
// Replace placeholder values with your actual resource names
param customLocationName string = '<CUSTOM_LOCATION_NAME>'
param aioInstanceName string = '<AIO_INSTANCE_NAME>'
param schemaRegistryName string = '<SCHEMA_REGISTRY_NAME>'

// Pointers to existing resources from AIO deployment
resource customLocation 'Microsoft.ExtendedLocation/customLocations@2021-08-31-preview' existing = {
  name: customLocationName
}
resource aioInstance 'Microsoft.IoTOperations/instances@2024-08-15-preview' existing = {
  name: aioInstanceName
}
resource schemaRegistry 'Microsoft.DeviceRegistry/schemaRegistries@2024-09-01-preview' existing = {
  name: schemaRegistryName
}

// Name and version of the schema
param opcuaSchemaName string = 'opcua-output-delta'
param opcuaSchemaVer string = '1'

// Define the schema resource to be created and instantiate a version
resource opcSchema 'Microsoft.DeviceRegistry/schemaRegistries/schemas@2024-09-01-preview' = {
  parent: schemaRegistry
  name: opcuaSchemaName
  properties: {
    displayName: 'OPC UA Delta Schema'
    description: 'This is an OPC UA Delta schema'
    format: 'Delta/1.0'
    schemaType: 'MessageSchema'
  }
}
resource opcuaSchemaVersion 'Microsoft.DeviceRegistry/schemaRegistries/schemas/schemaVersions@2024-09-01-preview' = {
  parent: opcSchema
  name: opcuaSchemaVer
  properties: {
    description: 'Schema version'
    schemaContent: opcuaSchemaContent
  }
}
```
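
Optionally (this is an addition, not part of the original sample), you can expose the new schema version's resource ID as a template output so that later deployments or scripts can reference the uploaded schema.

```bicep
// Optional addition: surface the schema version's full resource ID as an output.
output opcuaSchemaVersionId string = opcuaSchemaVersion.id
```
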
After you've defined the schema content and resources, you can deploy the Bicep template to create the schema in the schema registry.

```azurecli
az stack group create --name <DEPLOYMENT_NAME> --resource-group <RESOURCE_GROUP> --template-file <FILE>.bicep
```

## Next steps

- [Create a dataflow](howto-create-dataflow.md)

articles/iot-operations/connect-to-cloud/howto-configure-fabric-endpoint.md

Lines changed: 23 additions & 0 deletions
@@ -219,6 +219,29 @@ fabricOneLakeSettings: {
You can set advanced settings for the Fabric OneLake endpoint, such as the batching latency and message count, in the dataflow endpoint **Advanced** portal tab or within the dataflow endpoint custom resource.

### OneLake path type

The `oneLakePathType` setting determines where data is written in the OneLake lakehouse. The default value, `Tables`, stores data as tables in the lakehouse and is recommended for most use cases. Set it to `Files` to store data as files instead, which is useful when you want a format that the `Tables` path type doesn't support.

# [Kubernetes](#tab/kubernetes)

```yaml
fabricOneLakeSettings:
  oneLakePathType: Tables # Or Files
```

# [Bicep](#tab/bicep)

```bicep
fabricOneLakeSettings: {
  oneLakePathType: 'Tables'
}
```

---

### Batching

Use the `batching` settings to configure the maximum number of messages and the maximum latency before the messages are sent to the destination. This setting is useful when you want to optimize for network bandwidth and reduce the number of requests to the destination. A minimal Bicep sketch of these settings follows the field table.

| Field | Description | Required |
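
The following Bicep fragment is a minimal sketch of batching settings on the Fabric OneLake endpoint. The field names `latencySeconds` and `maxMessages` and the example values are assumptions (the field table's rows aren't shown in this diff), so verify them against the full article.

```bicep
fabricOneLakeSettings: {
  // Assumed field names and example values; verify against the batching
  // field table in the full article before using.
  batching: {
    latencySeconds: 60   // maximum seconds to wait before sending a batch
    maxMessages: 100000  // maximum number of messages per batch
  }
}
```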
