
Commit a13260a

Authored by Venkat Yalla
close
1 parent a1aa636 · commit a13260a

File tree: 1 file changed (+3, -333 lines)

articles/iot-operations/connect-to-cloud/howto-configure-fabric-rti.md

Lines changed: 3 additions & 333 deletions
@@ -72,347 +72,17 @@ Azure Key Vault is the recommended way to sync the connection string to the Kube
 
 # [Bicep](#tab/bicep)
 
-Follow [SASL instructions for the Event Hubs endpoint](../connect-to-cloud/howto-configure-kafka-endpoint?tabs=bicep#sasl).
+Identical to [SASL instructions for the Event Hubs endpoint](../connect-to-cloud/howto-configure-kafka-endpoint?tabs=bicep#sasl).
 
 # [Kubernetes](#tab/kubernetes)
 
-Create a Kubernetes manifest `.yaml` file with the following content.
-
-```yaml
-apiVersion: connectivity.iotoperations.azure.com/v1beta1
-kind: DataflowEndpoint
-metadata:
-  name: <ENDPOINT_NAME>
-  namespace: azure-iot-operations
-spec:
-  endpointType: DataLakeStorage
-  dataLakeStorageSettings:
-    host: https://<ACCOUNT>.blob.core.windows.net
-    authentication:
-      method: SystemAssignedManagedIdentity
-      systemAssignedManagedIdentitySettings: {}
-```
-
-Then apply the manifest file to the Kubernetes cluster.
-
-```bash
-kubectl apply -f <FILE>.yaml
-```
-
----
-
-If you need to override the system-assigned managed identity audience, see the [System-assigned managed identity](#system-assigned-managed-identity) section.
-
-### Use access token authentication
-
-Follow the steps in the [access token](#access-token) section to get a SAS token for the storage account and store it in a Kubernetes secret.
-
-Then, create the *DataflowEndpoint* resource and specify the access token authentication method. Here, replace `<SAS_SECRET_NAME>` with name of the secret containing the SAS token and other placeholder values.
-
-# [Portal](#tab/portal)
-
-1. In the IoT Operations portal, select the **Dataflow endpoints** tab.
-1. Under **Create new dataflow endpoint**, select **Azure Data Lake Storage (2nd generation)** > **New**.
-1. Enter the following settings for the endpoint:
-
-| Setting | Description |
-| --------------------- | ------------------------------------------------------------------------------------------------- |
-| Name | The name of the dataflow endpoint. |
-| Host | The hostname of the Azure Data Lake Storage Gen2 endpoint in the format `<account>.blob.core.windows.net`. Replace the account placeholder with the endpoint account name. |
-| Authentication method | The method used for authentication. Choose *Access token*. |
-| Synced secret name | The name of the Kubernetes secret that is synchronized with the ADLSv2 endpoint. |
-| Access token secret name | The name of the Kubernetes secret containing the SAS token. |
-
-1. Select **Apply** to provision the endpoint.
-
-# [Bicep](#tab/bicep)
-
-Create a Bicep `.bicep` file with the following content.
-
-```bicep
-param aioInstanceName string = '<AIO_INSTANCE_NAME>'
-param customLocationName string = '<CUSTOM_LOCATION_NAME>'
-param endpointName string = '<ENDPOINT_NAME>'
-param host string = 'https://<ACCOUNT>.blob.core.windows.net'
-
-resource aioInstance 'Microsoft.IoTOperations/instances@2024-09-15-preview' existing = {
-  name: aioInstanceName
-}
-resource customLocation 'Microsoft.ExtendedLocation/customLocations@2021-08-31-preview' existing = {
-  name: customLocationName
-}
-resource adlsGen2Endpoint 'Microsoft.IoTOperations/instances/dataflowEndpoints@2024-09-15-preview' = {
-  parent: aioInstance
-  name: endpointName
-  extendedLocation: {
-    name: customLocation.id
-    type: 'CustomLocation'
-  }
-  properties: {
-    endpointType: 'DataLakeStorage'
-    dataLakeStorageSettings: {
-      host: host
-      authentication: {
-        method: 'AccessToken'
-        accessTokenSettings: {
-          secretRef: '<SAS_SECRET_NAME>'
-        }
-      }
-    }
-  }
-}
-```
-
-Then, deploy via Azure CLI.
-
-```azurecli
-az deployment group create --resource-group <RESOURCE_GROUP> --template-file <FILE>.bicep
-```
-
-# [Kubernetes](#tab/kubernetes)
-
-Create a Kubernetes manifest `.yaml` file with the following content.
-
-```yaml
-apiVersion: connectivity.iotoperations.azure.com/v1beta1
-kind: DataflowEndpoint
-metadata:
-  name: <ENDPOINT_NAME>
-  namespace: azure-iot-operations
-spec:
-  endpointType: DataLakeStorage
-  dataLakeStorageSettings:
-    host: https://<ACCOUNT>.blob.core.windows.net
-    authentication:
-      method: AccessToken
-      accessTokenSettings:
-        secretRef: <SAS_SECRET_NAME>
-```
-
-Then apply the manifest file to the Kubernetes cluster.
-
-```bash
-kubectl apply -f <FILE>.yaml
-```
+Identical to [SASL instructions for the Event Hubs endpoint](../connect-to-cloud/howto-configure-kafka-endpoint?tabs=bicep#kubernetes).
 
 ---
 
-## Available authentication methods
-
-The following authentication methods are available for Azure Data Lake Storage Gen2 endpoints.
-
-For more information about enabling secure settings by configuring an Azure Key Vault and enabling workload identities, see [Enable secure settings in Azure IoT Operations deployment](../deploy-iot-ops/howto-enable-secure-settings.md).
-
-### System-assigned managed identity
-
-Using the system-assigned managed identity is the recommended authentication method for Azure IoT Operations. Azure IoT Operations creates the managed identity automatically and assigns it to the Azure Arc-enabled Kubernetes cluster. It eliminates the need for secret management and allows for seamless authentication.
-
-Before creating the dataflow endpoint, assign a role to the managed identity that has write permission to the storage account. For example, you can assign the *Storage Blob Data Contributor* role. To learn more about assigning roles to blobs, see [Authorize access to blobs using Microsoft Entra ID](../../storage/blobs/authorize-access-azure-active-directory.md).
-
-1. In Azure portal, go to your Azure IoT Operations instance and select **Overview**.
-1. Copy the name of the extension listed after **Azure IoT Operations Arc extension**. For example, *azure-iot-operations-xxxx7*.
-1. Search for the managed identity in the Azure portal by using the name of the extension. For example, search for *azure-iot-operations-xxxx7*.
-1. Assign a role to the Azure IoT Operations Arc extension managed identity that grants permission to write to the storage account, such as *Storage Blob Data Contributor*. To learn more, see [Authorize access to blobs using Microsoft Entra ID](../../storage/blobs/authorize-access-azure-active-directory.md).
-1. Create the *DataflowEndpoint* resource and specify the managed identity authentication method.
-
-# [Portal](#tab/portal)
-
-In the operations experience dataflow endpoint settings page, select the **Basic** tab then choose **Authentication method** > **System assigned managed identity**.
-
-In most cases, you don't need to specify a service audience. Not specifying an audience creates a managed identity with the default audience scoped to your storage account.
-
-# [Bicep](#tab/bicep)
-
-```bicep
-dataLakeStorageSettings: {
-  authentication: {
-    method: 'SystemAssignedManagedIdentity'
-    systemAssignedManagedIdentitySettings: {}
-  }
-}
-```
-
-# [Kubernetes](#tab/kubernetes)
-
-```yaml
-dataLakeStorageSettings:
-  authentication:
-    method: SystemAssignedManagedIdentity
-    systemAssignedManagedIdentitySettings: {}
-```
-
----
-
-If you need to override the system-assigned managed identity audience, you can specify the `audience` setting.
-
-# [Portal](#tab/portal)
-
-In most cases, you don't need to specify a service audience. Not specifying an audience creates a managed identity with the default audience scoped to your storage account.
-
-# [Bicep](#tab/bicep)
-
-```bicep
-dataLakeStorageSettings: {
-  authentication: {
-    method: 'SystemAssignedManagedIdentity'
-    systemAssignedManagedIdentitySettings: {
-      audience: 'https://<ACCOUNT>.blob.core.windows.net'
-    }
-  }
-}
-```
-
-# [Kubernetes](#tab/kubernetes)
-
-```yaml
-dataLakeStorageSettings:
-  authentication:
-    method: SystemAssignedManagedIdentity
-    systemAssignedManagedIdentitySettings:
-      audience: https://<ACCOUNT>.blob.core.windows.net
-```
-
----
-
-### Access token
-
-Using an access token is an alternative authentication method. This method requires you to create a Kubernetes secret with the SAS token and reference the secret in the *DataflowEndpoint* resource.
-
-Get a [SAS token](../../storage/common/storage-sas-overview.md) for an Azure Data Lake Storage Gen2 (ADLSv2) account. For example, use the Azure portal to browse to your storage account. On the left menu, choose **Security + networking** > **Shared access signature**. Use the following table to set the required permissions.
-
-| Parameter | Enabled setting |
-| ---------------------- | --------------------------- |
-| Allowed services | Blob |
-| Allowed resource types | Object, Container |
-| Allowed permissions | Read, Write, Delete, List, Create |
-
-To enhance security and follow the principle of least privilege, you can generate a SAS token for a specific container. To prevent authentication errors, ensure that the container specified in the SAS token matches the dataflow destination setting in the configuration.
-
-# [Portal](#tab/portal)
-
-In the operations experience dataflow endpoint settings page, select the **Basic** tab then choose **Authentication method** > **Access token**.
-
-Enter the access token secret name you created in **Access token secret name**.
-
-To learn more about secrets, see [Create and manage secrets in Azure IoT Operations Preview](../secure-iot-ops/howto-manage-secrets.md).
-
-# [Bicep](#tab/bicep)
-
-```bicep
-dataLakeStorageSettings: {
-  authentication: {
-    method: 'AccessToken'
-    accessTokenSettings: {
-      secretRef: '<SAS_SECRET_NAME>'
-    }
-  }
-}
-```
-
-# [Kubernetes](#tab/kubernetes)
-
-Create a Kubernetes secret with the SAS token.
-
-```bash
-kubectl create secret generic <SAS_SECRET_NAME> -n azure-iot-operations \
-  --from-literal=accessToken='sv=2022-11-02&ss=b&srt=c&sp=rwdlax&se=2023-07-22T05:47:40Z&st=2023-07-21T21:47:40Z&spr=https&sig=<signature>'
-```
-
-```yaml
-dataLakeStorageSettings:
-  authentication:
-    method: AccessToken
-    accessTokenSettings:
-      secretRef: <SAS_SECRET_NAME>
-```
-
----
-
-### User-assigned managed identity
-
-To use user-managed identity for authentication, you must first deploy Azure IoT Operations with secure settings enabled. To learn more, see [Enable secure settings in Azure IoT Operations deployment](../deploy-iot-ops/howto-enable-secure-settings.md).
-
-Then, specify the user-assigned managed identity authentication method along with the client ID, tenant ID, and scope of the managed identity.
-
-# [Portal](#tab/portal)
-
-In the operations experience dataflow endpoint settings page, select the **Basic** tab then choose **Authentication method** > **User assigned managed identity**.
-
-Enter the user assigned managed identity client ID and tenant ID in the appropriate fields.
-
-# [Bicep](#tab/bicep)
-
-```bicep
-dataLakeStorageSettings: {
-  authentication: {
-    method: 'UserAssignedManagedIdentity'
-    userAssignedManagedIdentitySettings: {
-      cliendId: '<ID>'
-      tenantId: '<ID>'
-      // Optional, defaults to 'https://storage.azure.com/.default'
-      // scope: 'https://<SCOPE_URL>'
-    }
-  }
-}
-```
-
-# [Kubernetes](#tab/kubernetes)
-
-```yaml
-dataLakeStorageSettings:
-  authentication:
-    method: UserAssignedManagedIdentity
-    userAssignedManagedIdentitySettings:
-      clientId: <ID>
-      tenantId: <ID>
-      # Optional, defaults to 'https://storage.azure.com/.default'
-      # scope: https://<SCOPE_URL>
-```
-
----
-
-Here, the scope is optional and defaults to `https://storage.azure.com/.default`. If you need to override the default scope, specify the `scope` setting via the Bicep or Kubernetes manifest.
-
 ## Advanced settings
 
-You can set advanced settings for the Azure Data Lake Storage Gen2 endpoint, such as the batching latency and message count.
-
-Use the `batching` settings to configure the maximum number of messages and the maximum latency before the messages are sent to the destination. This setting is useful when you want to optimize for network bandwidth and reduce the number of requests to the destination.
-
-| Field | Description | Required |
-| ----- | ----------- | -------- |
-| `latencySeconds` | The maximum number of seconds to wait before sending the messages to the destination. The default value is 60 seconds. | No |
-| `maxMessages` | The maximum number of messages to send to the destination. The default value is 100000 messages. | No |
-
-For example, to configure the maximum number of messages to 1000 and the maximum latency to 100 seconds, use the following settings:
-
-# [Portal](#tab/portal)
-
-In the operations experience, select the **Advanced** tab for the dataflow endpoint.
-
-:::image type="content" source="media/howto-configure-adlsv2-endpoint/adls-advanced.png" alt-text="Screenshot using operations experience to set ADLS V2 advanced settings.":::
-
-# [Bicep](#tab/bicep)
-
-```bicep
-dataLakeStorageSettings: {
-  batching: {
-    latencySeconds: 100
-    maxMessages: 1000
-  }
-}
-```
-
-# [Kubernetes](#tab/kubernetes)
-
-```yaml
-dataLakeStorageSettings:
-  batching:
-    latencySeconds: 100
-    maxMessages: 1000
-```
-
----
+The advanced settings for this endpoint are identical to the [advanced settings for Azure Event Hubs endpoints](../connect-to-cloud/howto-configure-kafka-endpoint#advanced-settings).
 
 ## Next steps
 