
Commit fd6fc37

Merge pull request #272584 from MicrosoftDocs/release-aio-april-updates
PR for AIO April release
2 parents 1d62d3d + dab0fee commit fd6fc37

File tree

54 files changed (+570, −463 lines)


articles/iot-operations/.openpublishing.redirection.iot-operations.json

Lines changed: 5 additions & 0 deletions
```diff
@@ -34,6 +34,11 @@
       "source_path_from_root": "/articles/iot-operations/send-view-analyze-data/tutorial-event-driven-with-dapr.md",
       "redirect_url": "/azure/iot-operations/develop/tutorial-event-driven-with-dapr",
       "redirect_document_id": false
+    },
+    {
+      "source_path_from_root": "/articles/iot-operations/get-started/quickstart-process-telemetry.md",
+      "redirect_url": "/azure/iot-operations/get-started/quickstart-upload-telemetry-to-cloud",
+      "redirect_document_id": false
     }
   ]
 }
```

articles/iot-operations/connect-to-cloud/howto-configure-data-lake.md

Lines changed: 165 additions & 9 deletions
```diff
@@ -7,7 +7,7 @@ ms.author: patricka
 ms.topic: how-to
 ms.custom:
   - ignite-2023
-ms.date: 04/15/2024
+ms.date: 05/01/2024
 
 #CustomerIntent: As an operator, I want to understand how to configure Azure IoT MQ so that I can send data from Azure IoT MQ to Data Lake Storage.
 ---
```
````diff
@@ -45,14 +45,16 @@ You can use the data lake connector to send data from Azure IoT MQ Preview broke
 
 - An IoT MQ MQTT broker. For more information on how to deploy an IoT MQ MQTT broker, see [Quickstart: Deploy Azure IoT Operations Preview to an Arc-enabled Kubernetes cluster](../get-started/quickstart-deploy.md).
 
-## Configure the data lake connector to send data to Microsoft Fabric OneLake using managed identity
+## Configure to send data to Microsoft Fabric OneLake using managed identity
 
 Configure a data lake connector to connect to Microsoft Fabric OneLake using managed identity.
 
 1. Ensure that the steps in prerequisites are met, including a Microsoft Fabric workspace and lakehouse. The default *my workspace* can't be used.
 
 1. Ensure that IoT MQ Arc extension is installed and configured with managed identity.
 
+1. In Azure portal, go to the Arc-connected Kubernetes cluster and select **Settings** > **Extensions**. In the extension list, look for your IoT MQ extension name. The name begins with `mq-` followed by five random characters. For example, *mq-4jgjs*.
+
 1. Get the *app ID* associated to the IoT MQ Arc extension managed identity, and note down the GUID value. The *app ID* is different than the object or principal ID. You can use the Azure CLI by finding the object ID of the managed identity and then querying the app ID of the service principal associated to the managed identity. For example:
 
     ```bash
````
```diff
@@ -90,7 +92,7 @@ Configure a data lake connector to connect to Microsoft Fabric OneLake using man
   protocol: v5
   image:
     repository: mcr.microsoft.com/azureiotoperations/datalake
-    tag: 0.1.0-preview
+    tag: 0.4.0-preview
     pullPolicy: IfNotPresent
   instances: 2
   logLevel: info
```
```diff
@@ -139,7 +141,7 @@ If your data shows in the *Unidentified* table:
 
 The cause might be unsupported characters in the table name. The table name must be a valid Azure Storage container name that means it can contain any English letter, upper or lower case, and underbar `_`, with length up to 256 characters. No dashes `-` or space characters are allowed.
 
-## Configure the data lake connector to send data to Azure Data Lake Storage Gen2 using SAS token
+## Configure to send data to Azure Data Lake Storage Gen2 using SAS token
 
 Configure a data lake connector to connect to an Azure Data Lake Storage Gen2 (ADLS Gen2) account using a shared access signature (SAS) token.
```
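The table-name rule described above (English letters and underscore only, up to 256 characters, no dashes or spaces) can be expressed as a quick check. This is an illustrative Python sketch of the stated rule, not code from the docs change:

```python
import re

# Illustrative: the table name must be letters (either case) or underscore,
# 1 to 256 characters; dashes and spaces are rejected.
TABLE_NAME_RE = re.compile(r"[A-Za-z_]{1,256}")

def is_valid_table_name(name: str) -> bool:
    return TABLE_NAME_RE.fullmatch(name) is not None

print(is_valid_table_name("thermostat_data"))  # True
print(is_valid_table_name("thermostat-data"))  # False: dash not allowed
print(is_valid_table_name("a" * 300))          # False: too long
```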
```diff
@@ -176,7 +178,7 @@ Configure a data lake connector to connect to an Azure Data Lake Storage Gen2 (A
   protocol: v5
   image:
     repository: mcr.microsoft.com/azureiotoperations/datalake
-    tag: 0.1.0-preview
+    tag: 0.4.0-preview
     pullPolicy: IfNotPresent
   instances: 2
   logLevel: "debug"
```
````diff
@@ -219,6 +221,157 @@
       audience: https://my-account.blob.core.windows.net
 ```
 
+## Configure to send data to Azure Data Explorer using managed identity
+
+Configure the data lake connector to send data to an Azure Data Explorer endpoint using managed identity.
+
+1. To deploy an Azure Data Explorer cluster, follow the **Full cluster** steps in the [Quickstart: Create an Azure Data Explorer cluster and database](/azure/data-explorer/create-cluster-and-database?tabs=full).
+
+1. After the cluster is created, create a database to store your data.
+
+1. You can create a table for given data via the Azure portal and create columns manually, or you can use [KQL](/azure/data-explorer/kusto/management/create-table-command) in the query tab. For example:
+
+    ```kql
+    .create table thermostat (
+        externalAssetId: string,
+        assetName: string,
+        CurrentTemperature: real,
+        Pressure: real,
+        MqttTopic: string,
+        Timestamp: datetime
+    )
+    ```
+
+### Enable streaming ingestion
+
+Enable streaming ingestion on your table and database. In the query tab, run the following command, substituting `<DATABASE_NAME>` with your database name:
+
+```kql
+.alter database <DATABASE_NAME> policy streamingingestion enable
+```
+
+For example:
+
+```kql
+.alter database TestDatabase policy streamingingestion enable
+```
+
+### Add the managed identity to the Azure Data Explorer cluster
+
+In order for the connector to authenticate to Azure Data Explorer, you must add the managed identity to the Azure Data Explorer cluster.
+
+1. In Azure portal, go to the Arc-connected Kubernetes cluster and select **Settings** > **Extensions**. In the extension list, look for the name of your IoT MQ extension. The name begins with `mq-` followed by five random characters. For example, *mq-4jgjs*. The IoT MQ extension name is the same as the MQ managed identity name.
+1. In your Azure Data Explorer database, select **Permissions** > **Add** > **Ingestor**. Search for the MQ managed identity name and add it.
+
+For more information on adding permissions, see [Manage Azure Data Explorer cluster permissions](/azure/data-explorer/manage-cluster-permissions).
+
+Now, you're ready to deploy the connector and send data to Azure Data Explorer.
+
+### Example deployment file
+
+Example deployment file for the Azure Data Explorer connector. Comments that begin with `TODO` require you to replace placeholder settings with your information.
+
+```yaml
+apiVersion: mq.iotoperations.azure.com/v1beta1
+kind: DataLakeConnector
+metadata:
+  name: my-adx-connector
+  namespace: azure-iot-operations
+spec:
+  protocol: v5
+  image:
+    repository: mcr.microsoft.com/azureiotoperations/datalake
+    tag: 0.4.0-preview
+    pullPolicy: IfNotPresent
+  instances: 2
+  logLevel: info
+  databaseFormat: "adx"
+  target:
+    adx:
+      # TODO: insert the ADX cluster endpoint formatted as <cluster>.<region>.kusto.windows.net
+      endpoint: https://<cluster>.<region>.kusto.windows.net
+      authentication:
+        systemAssignedManagedIdentity:
+          audience: "https://api.kusto.windows.net"
+  localBrokerConnection:
+    endpoint: aio-mq-dmqtt-frontend:8883
+    tls:
+      tlsEnabled: true
+      trustedCaCertificateConfigMap: aio-ca-trust-bundle-test-only
+    authentication:
+      kubernetes: {}
+---
+apiVersion: mq.iotoperations.azure.com/v1beta1
+kind: DataLakeConnectorTopicMap
+metadata:
+  name: adx-topicmap
+  namespace: azure-iot-operations
+spec:
+  dataLakeConnectorRef: my-adx-connector
+  mapping:
+    mqttSourceTopic: "azure-iot-operations/data/thermostat"
+    table:
+      tableName: thermostat
+      schema:
+        - name: externalAssetId
+          format: utf8
+          optional: false
+          mapping: $property.externalAssetId
+        - name: assetName
+          format: utf8
+          optional: false
+          mapping: DataSetWriterName
+        - name: CurrentTemperature
+          format: float32
+          optional: false
+          mapping: Payload.temperature.Value
+        - name: Pressure
+          format: float32
+          optional: true
+          mapping: "Payload.Tag 10.Value"
+        - name: MqttTopic
+          format: utf8
+          optional: false
+          mapping: $topic
+        - name: Timestamp
+          format: timestamp
+          optional: false
+          mapping: $received_time
+```
+
+This example accepts data from the `dlc` topic with messages in JSON format such as the following:
+
+```json
+{
+  "data": {
+    "externalAssetID": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
+    "assetName": "thermostat-de",
+    "currentTemperature": 5506,
+    "pressure": 5506,
+    "mqttTopic": "dlc",
+    "timestamp": "2024-04-02T22:36:03.1827681Z"
+  }
+}
+```
+
 ## DataLakeConnector
 
 A *DataLakeConnector* is a Kubernetes custom resource that defines the configuration and properties of a data lake connector instance. A data lake connector ingests data from MQTT topics into Delta tables in a Data Lake Storage account.
````
```diff
@@ -233,7 +386,7 @@ The spec field of a *DataLakeConnector* resource contains the following subfield
 - `instances`: The number of replicas of the data lake connector to run.
 - `logLevel`: The log level for the data lake connector module. It can be one of `trace`, `debug`, `info`, `warn`, `error`, or `fatal`.
 - `databaseFormat`: The format of the data to ingest into the Data Lake Storage. It can be one of `delta` or `parquet`.
-- `target`: The target field specifies the destination of the data ingestion. It can be `datalakeStorage`, `fabricOneLake`, or `localStorage`.
+- `target`: The target field specifies the destination of the data ingestion. It can be `datalakeStorage`, `fabricOneLake`, `adx`, or `localStorage`.
 - `datalakeStorage`: Specifies the configuration and properties of the local storage Storage account. It has the following subfields:
   - `endpoint`: The URL of the Data Lake Storage account endpoint. Don't include any trailing slash `/`.
   - `authentication`: The authentication field specifies the type and credentials for accessing the Data Lake Storage account. It can be one of the following.
```
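As a quick illustration of the enumerated spec fields listed above, here's a hypothetical validator. It's not product code, and for simplicity it treats each field as a plain string value:

```python
# Illustrative sketch (not connector code): allowed values for the
# enum-like DataLakeConnector spec fields described in the doc text.
ALLOWED_VALUES = {
    "logLevel": {"trace", "debug", "info", "warn", "error", "fatal"},
    "databaseFormat": {"delta", "parquet"},
    "target": {"datalakeStorage", "fabricOneLake", "adx", "localStorage"},
}

def check_spec(spec: dict) -> list:
    """Return human-readable errors for out-of-range field values."""
    errors = []
    for field, allowed in ALLOWED_VALUES.items():
        value = spec.get(field)
        if value is not None and value not in allowed:
            errors.append(f"{field}: unexpected value {value!r}")
    return errors

print(check_spec({"logLevel": "info", "databaseFormat": "delta", "target": "adx"}))  # []
print(check_spec({"logLevel": "verbose"}))  # ["logLevel: unexpected value 'verbose'"]
```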
```diff
@@ -276,7 +429,8 @@ The specification field of a DataLakeConnectorTopicMap resource contains the fol
   - `name`: The name of the column in the Delta table.
   - `format`: The data type of the column in the Delta table. It can be one of `boolean`, `int8`, `int16`, `int32`, `int64`, `uInt8`, `uInt16`, `uInt32`, `uInt64`, `float16`, `float32`, `float64`, `date32`, `timestamp`, `binary`, or `utf8`. Unsigned types, like `uInt8`, aren't fully supported, and are treated as signed types if specified here.
   - `optional`: A boolean value that indicates whether the column is optional or required. This field is optional and defaults to false.
-  - `mapping`: JSON path expression that defines how to extract the value of the column from the MQTT message payload. Built-in mappings `$client_id`, `$topic`, and `$received_time` are available to use as columns to enrich the JSON in MQTT message body. This field is required.
+  - `mapping`: JSON path expression that defines how to extract the value of the column from the MQTT message payload. Built-in mappings `$client_id`, `$topic`, `$properties`, and `$received_time` are available to use as columns to enrich the JSON in MQTT message body. This field is required.
+    Use `$properties` for MQTT user properties. For example, `$properties.assetId` represents the value of the `assetId` property from the MQTT message.
 
 Here's an example of a *DataLakeConnectorTopicMap* resource:
```
````diff
@@ -320,7 +474,9 @@ spec:
     mapping: $received_time
 ```
 
-Stringified JSON like `"{\"SequenceNumber\": 4697, \"Timestamp\": \"2024-04-02T22:36:03.1827681Z\", \"DataSetWriterName\": \"thermostat-de\", \"MessageType\": \"ua-deltaframe\", \"Payload\": {\"temperature\": {\"SourceTimestamp\": \"2024-04-02T22:36:02.6949717Z\", \"Value\": 5506}, \"Tag 10\": {\"SourceTimestamp\": \"2024-04-02T22:36:02.6949888Z\", \"Value\": 5506}}}"` isn't supported and causes the connector to throw a *convertor found a null value* error. An example message for the `dlc` topic that works with this schema:
+Stringified JSON like `"{\"SequenceNumber\": 4697, \"Timestamp\": \"2024-04-02T22:36:03.1827681Z\", \"DataSetWriterName\": \"thermostat-de\", \"MessageType\": \"ua-deltaframe\", \"Payload\": {\"temperature\": {\"SourceTimestamp\": \"2024-04-02T22:36:02.6949717Z\", \"Value\": 5506}, \"Tag 10\": {\"SourceTimestamp\": \"2024-04-02T22:36:02.6949888Z\", \"Value\": 5506}}}"` isn't supported and causes the connector to throw a *convertor found a null value* error.
+
+An example message for the `dlc` topic that works with this schema:
 
 ```json
 {
````
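The stringified-JSON limitation described above can be demonstrated in a few lines. This is an illustrative Python sketch, not part of the docs change: a double-encoded payload arrives as a JSON *string* containing escaped JSON, and decoding it once restores the object shape the connector expects.

```python
import json

# A double-encoded (stringified) payload: a JSON string, not a JSON object.
stringified = "{\"DataSetWriterName\": \"thermostat-de\", \"Payload\": {\"temperature\": {\"Value\": 5506}}}"

# Decoding once on the producer side restores a proper JSON object.
decoded = json.loads(stringified)
print(type(decoded).__name__)                      # dict
print(decoded["Payload"]["temperature"]["Value"])  # 5506
```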
```diff
@@ -345,7 +501,7 @@ Which maps to:
 
 | externalAssetId | assetName | CurrentTemperature | Pressure | mqttTopic | timestamp |
 | ------------------------------------ | --------------- | ------------------ | -------- | ----------------------------- | ------------------------------ |
-| 59ad3b8b-c840-43b5-b79d-7804c6f42172 | thermostat-de   | 5506               | 5506     | dlc                           | 2024-04-02T22:36:03.1827681Z   |
+| xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx | thermostat-de   | 5506               | 5506     | dlc                           | 2024-04-02T22:36:03.1827681Z   |
 
 > [!IMPORTANT]
 > If the data schema is updated, for example a data type is changed or a name is changed, transformation of incoming data might stop working. You need to change the data table name if a schema change occurs.
```
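The column mappings used in the topic-map examples above (dotted JSON paths plus the `$topic`, `$client_id`, `$received_time`, and `$properties` built-ins) can be sketched as a toy resolver. This is illustrative Python only; the function below is hypothetical and is not the connector's actual implementation:

```python
# Hypothetical resolver: shows how a mapping expression could select a
# column value from an MQTT message, per the doc's description.
def resolve_mapping(expr, payload, topic="", client_id="", properties=None, received_time=None):
    builtins = {"$topic": topic, "$client_id": client_id, "$received_time": received_time}
    if expr in builtins:
        return builtins[expr]
    if expr.startswith("$properties."):
        # MQTT user properties, e.g. $properties.assetId
        return (properties or {}).get(expr[len("$properties."):])
    value = payload
    for key in expr.split("."):  # dotted JSON path, e.g. Payload.temperature.Value
        value = value[key]
    return value

msg = {"Payload": {"temperature": {"Value": 5506}, "Tag 10": {"Value": 5506}}}
print(resolve_mapping("Payload.temperature.Value", msg))                          # 5506
print(resolve_mapping("$topic", msg, topic="dlc"))                                # dlc
print(resolve_mapping("$properties.assetId", msg, properties={"assetId": "a1"}))  # a1
```

Note that splitting on `.` keeps keys with spaces (such as `Tag 10`) intact, matching the quoted `"Payload.Tag 10.Value"` mapping in the example.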

articles/iot-operations/connect-to-cloud/howto-configure-kafka.md

Lines changed: 2 additions & 2 deletions
```diff
@@ -7,7 +7,7 @@ ms.subservice: mq
 ms.topic: how-to
 ms.custom:
   - ignite-2023
-ms.date: 01/16/2024
+ms.date: 04/22/2024
 
 #CustomerIntent: As an operator, I want to understand how to configure Azure IoT MQ to send and receive messages between Azure IoT MQ and Kafka.
 ---
@@ -127,7 +127,7 @@ spec:
   image:
     pullPolicy: IfNotPresent
     repository: mcr.microsoft.com/azureiotoperations/kafka
-    tag: 0.1.0-preview
+    tag: 0.4.0-preview
   instances: 2
   clientIdPrefix: my-prefix
   kafkaConnection:
```

articles/iot-operations/connect-to-cloud/howto-configure-mqtt-bridge.md

Lines changed: 2 additions & 2 deletions
```diff
@@ -7,7 +7,7 @@ ms.subservice: mq
 ms.topic: how-to
 ms.custom:
   - ignite-2023
-ms.date: 11/15/2023
+ms.date: 04/22/2024
 
 #CustomerIntent: As an operator, I want to bridge Azure IoT MQ to another MQTT broker so that I can integrate Azure IoT MQ with other messaging systems.
 ---
@@ -58,7 +58,7 @@ metadata:
 spec:
   image:
     repository: mcr.microsoft.com/azureiotoperations/mqttbridge
-    tag: 0.1.0-preview
+    tag: 0.4.0-preview
     pullPolicy: IfNotPresent
   protocol: v5
   bridgeInstances: 1
```

articles/iot-operations/connect-to-cloud/tutorial-connect-event-grid.md

Lines changed: 2 additions & 2 deletions
```diff
@@ -6,7 +6,7 @@ ms.subservice: mq
 ms.custom: devx-track-azurecli
 ms.author: patricka
 ms.topic: tutorial
-ms.date: 02/28/2024
+ms.date: 04/22/2024
 
 #CustomerIntent: As an operator, I want to configure IoT MQ to bridge to Azure Event Grid MQTT broker PaaS so that I can process my IoT data at the edge and in the cloud.
 ---
@@ -159,7 +159,7 @@ metadata:
 spec:
   image:
     repository: mcr.microsoft.com/azureiotoperations/mqttbridge
-    tag: 0.1.0-preview
+    tag: 0.4.0-preview
     pullPolicy: IfNotPresent
   protocol: v5
   bridgeInstances: 2
```
