Use the _Azure Blob Storage_ destination to write unstructured data to Azure Blob Storage for storage and analysis.
## Prerequisites
To configure and use this Azure Blob Storage destination pipeline stage, you need:
- A deployed instance of Data Processor.
- An Azure Blob Storage account.
## Configure the destination stage
The _Azure Blob Storage_ destination stage JSON configuration defines the details of the stage. To author the stage, you can either interact with the form-based UI, or provide the JSON configuration on the **Advanced** tab:
| Field | Type | Description | Required? | Default | Example |
|--|--|--|--|--|--|
|`accountName`| string | The name of the Azure Blob Storage account. | Yes ||`myBlobStorageAccount`|
|`containerName`| string | The name of the container created in the storage account to store the blobs. | Yes ||`mycontainer`|
|`authentication`| string | The authentication information to connect to the storage account. One of `servicePrincipal`, `systemAssignedManagedIdentity`, or `accessKey`. | Yes || See the [sample configuration](#sample-configuration). |
|`format`| Object | Formatting information for the data. All types are supported. | Yes ||`{"type": "json"}`|
|`blobPath`|[Templates](../process-data/concept-configuration-patterns.md#templates)| Template string that identifies the path to write files to. All the template components shown in the default are required. | No |`{{{instanceId}}}/{{{pipelineId}}}/{{{partitionId}}}/{{{YYYY}}}/{{{MM}}}/{{{DD}}}/{{{HH}}}/{{{mm}}}/{{{fileNumber}}}`|`{{{instanceId}}}/{{{pipelineId}}}/{{{partitionId}}}/{{{YYYY}}}/{{{MM}}}/{{{DD}}}/{{{HH}}}/{{{mm}}}/{{{fileNumber}}}.xyz`|
|`batch`|[Batch](../process-data/concept-configuration-patterns.md#batch)| How to batch data before writing it to Blob Storage. | No |`{"time": "60s"}`|`{"time": "60s"}`|
|`retry`|[Retry](../process-data/concept-configuration-patterns.md#retry)| The retry mechanism to use when a Blob Storage operation fails. | No | (empty) |`{"type": "fixed"}`|
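To illustrate how the triple-brace components in a `blobPath` template combine into a blob path, here's a minimal sketch. The helper function and the sample values are hypothetical, for illustration only; they aren't the Data Processor template renderer:

```python
def render_blob_path(template, values):
    """Illustrative expansion of {{{placeholder}}} components (not the real renderer)."""
    path = template
    for name, value in values.items():
        path = path.replace("{{{" + name + "}}}", str(value))
    return path

# Hypothetical sample values for the template components.
values = {
    "instanceId": "myInstance",
    "pipelineId": "myPipeline",
    "partitionId": 0,
    "YYYY": "2024", "MM": "05", "DD": "01",
    "HH": "13", "mm": "07",
    "fileNumber": 1,
}
template = ("{{{instanceId}}}/{{{pipelineId}}}/{{{partitionId}}}/"
            "{{{YYYY}}}/{{{MM}}}/{{{DD}}}/{{{HH}}}/{{{mm}}}/{{{fileNumber}}}")
print(render_blob_path(template, values))
# myInstance/myPipeline/0/2024/05/01/13/07/1
```

With the default template, each pipeline instance, partition, and minute therefore writes to its own folder hierarchy, with `fileNumber` distinguishing files within a minute.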
## Sample configuration
The following JSON shows a sample configuration for the _Azure Blob Storage_ destination stage:
```json
{
    "displayName": "Sample blobstorage output",
    "description": "An example blobstorage output stage",
    "accountName": "myBlobStorageAccount",
    "containerName": "mycontainer",
    "authentication": {
        "type": "systemAssignedManagedIdentity"
    },
    "format": {
        "type": "json"
    },
    "batch": {
        "time": "60s"
    },
    "retry": {
        "type": "fixed"
    }
}
```
---

<!-- Source: articles/iot-operations/connect-to-cloud/howto-configure-destination-data-explorer.md -->
## Set up Azure Data Explorer

Before you can write to Azure Data Explorer from a data pipeline, you need to grant the pipeline access to the database. You can use either a service principal or a managed identity to authenticate the pipeline to the database. The advantage of using a managed identity is that you don't need to manage the lifecycle of the service principal: Azure manages the identity automatically and ties it to the lifecycle of the resource it's assigned to.

# [Service principal](#tab/serviceprincipal)

To create a service principal with a client secret:
For the destination stage to connect to Azure Data Explorer, it needs access to a secret that contains the authentication details. To create a secret:

1. Use the following command to add a secret to your Azure Key Vault that contains the client secret you made a note of when you created the service principal:

    ```azurecli
    az keyvault secret set --vault-name <your-key-vault-name> --name AccessADXSecret --value <client-secret>
    ```

1. Add the secret reference to your Kubernetes cluster by following the steps in [Manage secrets for your Azure IoT Operations deployment](../deploy-iot-ops/howto-manage-secrets.md).
# [Managed identity](#tab/managedidentity)

To add the managed identity to the database, navigate to the Azure Data Explorer portal and run the following query on your database. Replace the placeholders with the values you made a note of in the previous step:
Data Processor writes to Azure Data Explorer in batches. Although you batch data in Data Processor before sending it, Azure Data Explorer has its own default [ingestion batching policy](/azure/data-explorer/kusto/management/batchingpolicy). Therefore, you might not see your data in Azure Data Explorer immediately after Data Processor writes it to the Azure Data Explorer destination.
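The Data Processor side of this behavior, buffering records and flushing at most once per configured `time` window, can be sketched as follows. This is a hypothetical illustration, not the actual Data Processor implementation; the class and names are invented:

```python
import time

class TimeBatcher:
    """Hypothetical sketch of time-window batching (not the Data Processor code)."""

    def __init__(self, window_seconds, sink):
        self.window = window_seconds
        self.sink = sink            # called with each completed batch
        self.buffer = []
        self.deadline = time.monotonic() + window_seconds

    def add(self, record):
        self.buffer.append(record)
        # Flush once the time window has elapsed, then start a new window.
        if time.monotonic() >= self.deadline:
            self.flush()

    def flush(self):
        if self.buffer:
            self.sink(self.buffer)
            self.buffer = []
        self.deadline = time.monotonic() + self.window

batches = []
b = TimeBatcher(0.05, batches.append)
for record in ["a", "b", "c"]:
    b.add(record)               # within the window: buffered, not yet written
time.sleep(0.1)                 # window elapses
b.add("d")                      # this add triggers the flush
print(batches)                  # [['a', 'b', 'c', 'd']]
```

The same kind of delay then repeats on the Azure Data Explorer side because of its own ingestion batching policy.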
To view data in Azure Data Explorer as soon as the pipeline sends it, you can set the ingestion batching policy count to `1`. To edit the ingestion batching policy, run the following command in your database query tab:
## Configure the destination stage
The Azure Data Explorer destination stage JSON configuration defines the details of the stage. To author the stage, you can either interact with the form-based UI, or provide the JSON configuration on the **Advanced** tab:
| Field | Type | Description | Required? | Default | Example |
|--|--|--|--|--|--|
| Table | String | The name of the table to write to. | Yes | - ||
| Batch |[Batch](../process-data/concept-configuration-patterns.md#batch)| How to [batch](../process-data/concept-configuration-patterns.md#batch) data. | No |`60s`|`10s`|
| Retry |[Retry](../process-data/concept-configuration-patterns.md#retry)| The retry policy to use. | No |`default`|`fixed`|
| Authentication<sup>1</sup> | String | The authentication details to connect to Azure Data Explorer: `Service principal` or `Managed identity`. | Yes | `Service principal` ||
| Columns > Name | string | The name of the column. | Yes ||`temperature`|
| Columns > Path |[Path](../process-data/concept-configuration-patterns.md#path)| The location within each record of the data where the value of the column should be read from. | No |`.{{name}}`|`.temperature`|

<sup>1</sup>Authentication: Currently, the destination stage supports service principal based authentication or managed identity when it connects to Azure Data Explorer.

To configure service principal based authentication, provide the following values. You made a note of these values when you created the service principal and added the secret reference to your cluster.

| Field | Description | Required |
| --- | --- | --- |
The following JSON shows the closing portion of a complete Azure Data Explorer destination stage configuration, including the `retry` settings:

```json
      "name": "IsSpare",
      "path": ".IsSpare"
    }
  ],
  "retry": {
    "type": "fixed",
    "interval": "20s",
    "maxRetries": 4
  }
}
```
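A `fixed` retry policy of the kind shown above (a constant interval and a bounded number of attempts) behaves roughly like this sketch. The function and the simulated write are illustrative only, not part of Data Processor:

```python
import time

def retry_fixed(operation, interval_seconds, max_retries):
    """Hypothetical sketch of a 'fixed' retry policy: constant delay, bounded attempts."""
    for attempt in range(max_retries + 1):
        try:
            return operation()
        except Exception:
            if attempt == max_retries:
                raise               # retries exhausted: surface the error
            time.sleep(interval_seconds)

calls = {"count": 0}

def flaky_write():
    """Simulated destination write that fails twice before succeeding."""
    calls["count"] += 1
    if calls["count"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

print(retry_fixed(flaky_write, 0.001, 4))   # ok
```

With `maxRetries: 4` and `interval: 20s`, a failing write is retried up to four times, 20 seconds apart, before the error is surfaced.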
## Related content

- [Send data to Microsoft Fabric](howto-configure-destination-fabric.md)
- [Send data to Azure Blob Storage](howto-configure-destination-blob.md)
- [Send data to a gRPC endpoint](../process-data/howto-configure-destination-grpc.md)
- [Send data to an HTTP endpoint](../process-data/howto-configure-destination-http.md)
- [Publish data to an MQTT broker](../process-data/howto-configure-destination-mq-broker.md)
- [Send data to the reference data store](../process-data/howto-configure-destination-reference-store.md)