Commit 9b88bd2

Merge pull request #2724 from MicrosoftDocs/main638924473708687664sync_temp
For protected branch, push strategy should use PR and merge to target branch method to work around git push error
2 parents a37934d + b73f3cf commit 9b88bd2

3 files changed (+55, -44 lines)


data-explorer/connect-odbc.md

Lines changed: 15 additions & 15 deletions
@@ -3,7 +3,7 @@ title: Connect to Azure Data Explorer with ODBC
 description: In this article, you learn how to set up an Open Database Connectivity (ODBC) connection to Azure Data Explorer.
 ms.reviewer: gabil
 ms.topic: how-to
-ms.date: 05/26/2024
+ms.date: 09/02/2025
 ---
 
 # Connect to Azure Data Explorer with ODBC
@@ -12,18 +12,18 @@ Open Database Connectivity ([ODBC](/sql/odbc/reference/odbc-overview)) is a wide
 
 Consequently, you can establish a connection to Azure Data Explorer from any application that is equipped with support for the ODBC driver for SQL Server.
 
-Watch the following video to learn to create an ODBC connection.
+Watch the following video to learn how to create an ODBC connection.
 
 > [!VIDEO https://www.youtube.com/embed/qA5wxhrOwog]
 
 Alternatively, follow the steps to [connect to your cluster with ODBC](#connect-to-your-cluster-with-odbc).
 
 > [!NOTE]
-> We recommend using dedicated connectors whenever possible. For a list of available connectors, see [Connectors overview](integrate-data-overview.md).
+> Use dedicated connectors when possible. For a list of available connectors, see [Connectors overview](integrate-data-overview.md).
 
 ## Prerequisites
 
-* [Microsoft ODBC Driver for SQL Server version 17.2.0.1 or later](/sql/connect/odbc/download-odbc-driver-for-sql-server) for your operating system.
+* [Microsoft ODBC Driver for SQL Server](/sql/connect/odbc/download-odbc-driver-for-sql-server) version 17.2.0.1 or later for your operating system.
 
 ## Connect to your cluster with ODBC
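Before working through the DSN steps in the next hunk, you can confirm the prerequisite driver is installed programmatically. A minimal sketch, assuming the third-party `pyodbc` package (not part of the article itself):

```python
# Minimal sketch, assuming the pyodbc package: verify that the prerequisite
# "ODBC Driver 17 for SQL Server" (17.2.0.1 or later) is installed.
import pyodbc

installed = pyodbc.drivers()  # names of the ODBC drivers registered on this machine
print(installed)
if not any("ODBC Driver 17 for SQL Server" in name for name in installed):
    raise RuntimeError("Install the Microsoft ODBC Driver for SQL Server first.")
```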

@@ -45,41 +45,41 @@ To configure an ODBC data source using the ODBC driver for SQL Server:
 
 1. Select **Add**.
 
-    :::image type="content" source="media/connect-odbc/add-data-source.png" alt-text="Add data source.":::
+    :::image type="content" source="media/connect-odbc/add-data-source.png" alt-text="Screenshot of the ODBC Data Sources dialog showing the Add Data Source option and fields for creating a new DSN.":::
 
 1. Select **ODBC Driver 17 for SQL Server** then **Finish**.
 
-    :::image type="content" source="media/connect-odbc/select-driver.png" alt-text="Select driver.":::
+    :::image type="content" source="media/connect-odbc/select-driver.png" alt-text="Screenshot of the ODBC driver selection dialog showing ODBC Driver 17 for SQL Server selected.":::
 
-1. Enter a name and description for the connection and the cluster you want to connect to, then select **Next**. The cluster URL should be in the form *\<ClusterName\>.\<Region\>.kusto.windows.net*.
+1. Enter a name and description for the connection and the cluster you want to connect to, then select **Next**. The cluster URL should be in the form `\<ClusterName\>.\<Region\>.kusto.windows.net`.
 
     >[!NOTE]
-    > When entering the cluster URL, do not include the prefix "https://".
+    > When entering the cluster URL, don't include the prefix `https://`.
 
-    :::image type="content" source="media/connect-odbc/select-server.png" alt-text="Select server.":::
+    :::image type="content" source="media/connect-odbc/select-server.png" alt-text="Screenshot of the Data Source Configuration window showing the Server field and an example cluster URL format.":::
 
 1. Select **Active Directory Integrated** then **Next**.
 
-    :::image type="content" source="media/connect-odbc/active-directory-integrated.png" alt-text="Active directory integrated.":::
+    :::image type="content" source="media/connect-odbc/active-directory-integrated.png" alt-text="Screenshot of the authentication method dropdown showing Active Directory Integrated selected.":::
 
 1. Select the database with the sample data then **Next**.
 
-    :::image type="content" source="media/connect-odbc/change-default-database.png" alt-text="Cahnge default database.":::
+    :::image type="content" source="media/connect-odbc/change-default-database.png" alt-text="Screenshot of the default database selection dialog showing the sample data database chosen.":::
 
 1. On the next screen, leave all options as defaults then select **Finish**.
 
 1. Select **Test Data Source**.
 
-    :::image type="content" source="media/connect-odbc/test-data-source.png" alt-text="Test data source.":::
+    :::image type="content" source="media/connect-odbc/test-data-source.png" alt-text="Screenshot of the Test Data Source dialog showing the Test Data Source button and connection status fields.":::
 
 1. Verify that the test succeeded then select **OK**. If the test didn't succeed, check the values that you specified in previous steps, and ensure you have sufficient permissions to connect to the cluster.
 
-    :::image type="content" source="media/connect-odbc/test-succeeded.png" alt-text="Test succeeded.":::
+    :::image type="content" source="media/connect-odbc/test-succeeded.png" alt-text="Screenshot of the Test Data Source results showing a successful connection confirmation message.":::
 
 ---
 
 > [!NOTE]
-> Azure Data Explorer considers string values as `NVARCHAR(MAX)`, which may not work well with some ODBC applications. Cast the data to `NVARCHAR(`*n*`)` using the `Language` parameter in the connection string. For example, `Language=any@MaxStringSize:5000` will encode strings as `NVARCHAR(5000)`. For more information, see [tuning options](sql-server-emulation-overview.md#tuning-options).
+> Azure Data Explorer treats string values as `NVARCHAR(MAX)`, which can cause issues with some ODBC applications. Cast strings to `NVARCHAR(\<n\>)` by using the `Language` parameter in the connection string. For example, `Language=any@MaxStringSize:5000` encodes strings as `NVARCHAR(5000)`. For more information, see [tuning options](sql-server-emulation-overview.md#tuning-options).
 
 ## Application authentication
 
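The DSN wizard steps above map directly to a DSN-less connection string. A hedged sketch, again assuming `pyodbc`; the cluster and database names are hypothetical placeholders, and `Language=any@MaxStringSize:5000` applies the `NVARCHAR` tuning from the preceding note:

```python
# Hedged sketch: DSN-less connection to the cluster's SQL (TDS) endpoint.
# The Server value follows the <ClusterName>.<Region>.kusto.windows.net form
# from the steps above, without the https:// prefix.
import pyodbc

conn_str = (
    "Driver={ODBC Driver 17 for SQL Server};"
    "Server=mycluster.westus.kusto.windows.net;"  # hypothetical cluster URL
    "Database=MyDatabase;"                        # hypothetical database
    "Authentication=ActiveDirectoryIntegrated;"   # matches the DSN wizard choice
    "Language=any@MaxStringSize:5000;"            # NVARCHAR(5000) tuning from the note
)
with pyodbc.connect(conn_str) as connection:
    print(connection.cursor().execute("SELECT 1").fetchval())
```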
@@ -120,4 +120,4 @@ Language = any@AadAuthority:<aad_tenant_id>
 ## Related content
 
 * [SQL Server emulation in Azure Data Explorer](sql-server-emulation-overview.md)
-* [Run KQL queries and call stored functions](sql-kql-queries-and-stored-functions.md)
+* [Run Kusto Query Language (KQL) queries and call stored functions](sql-kql-queries-and-stored-functions.md)
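The `@@ -120,4` hunk above carries the `Language = any@AadAuthority:<aad_tenant_id>` context line from the article's application authentication section. A speculative sketch of that flow; the `ActiveDirectoryServicePrincipal` keyword is an assumption about the Microsoft ODBC driver, and the app ID, secret, and tenant values are placeholders left unfilled:

```python
# Speculative sketch: application (service principal) authentication over ODBC.
# Authentication=ActiveDirectoryServicePrincipal is an assumed driver keyword;
# <app_id>, <app_secret>, and <aad_tenant_id> are placeholders.
import pyodbc

conn_str = (
    "Driver={ODBC Driver 17 for SQL Server};"
    "Server=mycluster.westus.kusto.windows.net;"       # hypothetical cluster URL
    "Database=MyDatabase;"                             # hypothetical database
    "Authentication=ActiveDirectoryServicePrincipal;"  # assumed keyword
    "UID=<app_id>;PWD=<app_secret>;"
    "Language=any@AadAuthority:<aad_tenant_id>;"       # tenant, per the hunk context above
)
connection = pyodbc.connect(conn_str)
```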

data-explorer/ingest-json-formats.md

Lines changed: 25 additions & 19 deletions
@@ -1,14 +1,20 @@
 ---
-title: Ingest JSON formatted data into Azure Data Explorer
-description: Learn about how to ingest JSON formatted data into Azure Data Explorer.
+title: Ingest JSON Data Into Azure Data Explorer
+description: Ingest JSON to Azure Data Explorer with step-by-step KQL, C#, and Python examples for raw, mapped, multiline, and array records. Follow best practices.
+#customer intent: As a data engineer, I want to ingest line-separated JSON into Azure Data Explorer so that I can capture raw telemetry in a dynamic column.
 ms.reviewer: kerend
 ms.topic: how-to
-ms.date: 09/14/2022
+ms.date: 09/02/2025
+ms.custom:
+  - ai-gen-docs-bap
+  - ai-gen-title
+  - ai-seo-date:09/02/2025
+  - ai-gen-description
 ---
 
 # Ingest JSON formatted sample data into Azure Data Explorer
 
-This article shows you how to ingest JSON formatted data into an Azure Data Explorer database. You'll start with simple examples of raw and mapped JSON, continue to multi-lined JSON, and then tackle more complex JSON schemas containing arrays and dictionaries. The examples detail the process of ingesting JSON formatted data using Kusto Query Language (KQL), C#, or Python.
+This article shows you how to ingest JSON formatted data into an Azure Data Explorer database. You start with simple examples of raw and mapped JSON, continue to multi-lined JSON, and then tackle more complex JSON schemas containing arrays and dictionaries. The examples detail the process of ingesting JSON formatted data using Kusto Query Language (KQL), C#, or Python.
 
 > [!NOTE]
 > We don't recommend using `.ingest` management commands in production scenarios. Instead, use a [data connector](integrate-data-overview.md) or programmatically ingest data using one of the [Kusto client libraries](/kusto/api/client-libraries?view=azure-data-explorer&preserve-view=true).
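Since that note steers production scenarios toward the client libraries, here is a minimal sketch of the Python client setup that the article's Python tabs build on, assuming the `azure-kusto-data` package; the cluster URI and database name are hypothetical placeholders:

```python
# Minimal sketch, assuming the azure-kusto-data package: create the client used
# to run the article's management commands. Values are hypothetical placeholders.
from azure.kusto.data import KustoClient, KustoConnectionStringBuilder

KUSTO_URI = "https://mycluster.westus.kusto.windows.net"
DATABASE = "MyDatabase"

kcsb = KustoConnectionStringBuilder.with_aad_device_authentication(KUSTO_URI)
client = KustoClient(kcsb)
client.execute(DATABASE, ".show tables")  # smoke test the connection
```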
@@ -26,13 +32,13 @@ Azure Data Explorer supports two JSON file formats:
 * `multijson`: Multi-lined JSON. The parser ignores the line separators and reads a record from the previous position to the end of a valid JSON.
 
 > [!NOTE]
-> When ingesting using the [get data experience](ingest-data-overview.md), the default format is `multijson`. The format can handle multiline JSON records and arrays of JSON records. When a parsing error is encountered, the entire file is discarded. To ignore invalid JSON records, select the option to "Ignore data format errors.", which will switch the format to `json` (JSON Lines).
+> When ingesting using the [Get data experience](ingest-data-overview.md), the default format is `multijson`. The format can handle multiline JSON records and arrays of JSON records. When a parsing error is encountered, the entire file is discarded. To ignore invalid JSON records, select the option to "Ignore data format errors.", which switches the format to `json` (JSON Lines).
 >
-> If you're using the JSON Line format (`json`), lines that don't represent a valid JSON records are skipped during parsing.
+> If you're using the JSON Line format (`json`), lines that don't represent valid JSON records are skipped during parsing.
 
 ### Ingest and map JSON formatted data
 
-Ingestion of JSON formatted data requires you to specify the *format* using [ingestion property](/kusto/ingestion-properties?view=azure-data-explorer&preserve-view=true). Ingestion of JSON data requires [mapping](/kusto/management/mappings?view=azure-data-explorer&preserve-view=true), which maps a JSON source entry to its target column. When ingesting data, use the `IngestionMapping` property with its `ingestionMappingReference` (for a pre-defined mapping) ingestion property or its `IngestionMappings` property. This article will use the `ingestionMappingReference` ingestion property, which is pre-defined on the table used for ingestion. In the examples below, we'll start by ingesting JSON records as raw data to a single column table. Then we'll use the mapping to ingest each property to its mapped column.
+Ingestion of JSON formatted data requires you to specify the *format* using [ingestion property](/kusto/ingestion-properties?view=azure-data-explorer&preserve-view=true). Ingestion of JSON data requires [mapping](/kusto/management/mappings?view=azure-data-explorer&preserve-view=true), which maps a JSON source entry to its target column. When ingesting data, use the `IngestionMapping` property with its `ingestionMappingReference` (for a predefined mapping) ingestion property or its `IngestionMappings` property. This article uses the `ingestionMappingReference` ingestion property, which is predefined on the table used for ingestion. In the following examples, we start by ingesting JSON records as raw data to a single column table. Then we use the mapping to ingest each property to its mapped column.
 
 ### Simple JSON example
 
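The `json` versus `multijson` choice described above surfaces as the `data_format` ingestion property in the client libraries. A hedged sketch, assuming the `azure-kusto-ingest` package:

```python
# Hedged sketch, assuming azure-kusto-ingest: choose the JSON flavor per the
# formats described above. JSON = line-separated records; MULTIJSON = multiline
# records or arrays of records (the default in the Get data experience).
from azure.kusto.data.data_format import DataFormat
from azure.kusto.ingest import IngestionProperties

line_separated_props = IngestionProperties(
    database="MyDatabase", table="RawEvents", data_format=DataFormat.JSON
)
multiline_props = IngestionProperties(
    database="MyDatabase", table="RawEvents", data_format=DataFormat.MULTIJSON
)
```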
@@ -206,7 +212,7 @@ In this example, you ingest JSON records data. Each JSON property is mapped to a
 
 ### [KQL](#tab/kusto-query-language)
 
-1. Create a new table, with a similar schema to the JSON input data. We'll use this table for all the following examples and ingest commands.
+1. Create a new table, with a similar schema to the JSON input data. We use this table for all the following examples and ingest commands.
 
     ```kusto
     .create table Events (Time: datetime, Device: string, MessageId: string, Temperature: double, Humidity: double)
@@ -218,7 +224,7 @@ In this example, you ingest JSON records data. Each JSON property is mapped to a
     .create table Events ingestion json mapping 'FlatEventMapping' '[{"column":"Time","Properties":{"path":"$.timestamp"}},{"column":"Device","Properties":{"path":"$.deviceId"}},{"column":"MessageId","Properties":{"path":"$.messageId"}},{"column":"Temperature","Properties":{"path":"$.temperature"}},{"column":"Humidity","Properties":{"path":"$.humidity"}}]'
     ```
 
-    In this mapping, as defined by the table schema, the `timestamp` entries will be ingested to the column `Time` as `datetime` data types.
+    In this mapping, as defined by the table schema, the `timestamp` entries are ingested to the column `Time` as `datetime` data types.
 
 1. Ingest data into the `Events` table.
 
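For reference, a hedged Python counterpart to this ingest step, assuming the `azure-kusto-ingest` package: it queues a blob for ingestion into `Events` through the predefined `FlatEventMapping` mapping; the ingestion endpoint and blob URL are hypothetical placeholders:

```python
# Hedged sketch, assuming azure-kusto-ingest: ingest a JSON blob into Events
# using the predefined 'FlatEventMapping' (maps $.timestamp -> Time, and so on).
from azure.kusto.data import KustoConnectionStringBuilder
from azure.kusto.data.data_format import DataFormat
from azure.kusto.ingest import BlobDescriptor, IngestionProperties, QueuedIngestClient

INGEST_URI = "https://ingest-mycluster.westus.kusto.windows.net"  # hypothetical
ingest_client = QueuedIngestClient(
    KustoConnectionStringBuilder.with_aad_device_authentication(INGEST_URI)
)
props = IngestionProperties(
    database="MyDatabase",
    table="Events",
    data_format=DataFormat.JSON,
    ingestion_mapping_reference="FlatEventMapping",
)
blob = BlobDescriptor("https://<storage-account>/simple.json", size=1024)  # placeholder
ingest_client.ingest_from_blob(blob, ingestion_properties=props)
```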
@@ -230,7 +236,7 @@ In this example, you ingest JSON records data. Each JSON property is mapped to a
 
 ### [C#](#tab/c-sharp)
 
-1. Create a new table, with a similar schema to the JSON input data. We'll use this table for all the following examples and ingest commands.
+1. Create a new table, with a similar schema to the JSON input data. We use this table for all the following examples and ingest commands.
 
     ```csharp
     var tableName = "Events";
@@ -268,7 +274,7 @@ In this example, you ingest JSON records data. Each JSON property is mapped to a
     await kustoClient.ExecuteControlCommandAsync(command);
     ```
 
-    In this mapping, as defined by the table schema, the `timestamp` entries will be ingested to the column `Time` as `datetime` data types.
+    In this mapping, as defined by the table schema, the `timestamp` entries are ingested to the column `Time` as `datetime` data types.
 
 1. Ingest data into the `Events` table.
 
@@ -286,7 +292,7 @@ In this example, you ingest JSON records data. Each JSON property is mapped to a
 
 ### [Python](#tab/python)
 
-1. Create a new table, with a similar schema to the JSON input data. We'll use this table for all the following examples and ingest commands.
+1. Create a new table, with a similar schema to the JSON input data. We use this table for all the following examples and ingest commands.
 
     ```python
     TABLE = "Events"
@@ -363,7 +369,7 @@ INGESTION_CLIENT.ingest_from_blob(
 
 ## Ingest JSON records containing arrays
 
-Array data types are an ordered collection of values. Ingestion of a JSON array is done by an [update policy](/kusto/management/show-table-update-policy-command?view=azure-data-explorer&preserve-view=true). The JSON is ingested as-is to an intermediate table. An update policy runs a pre-defined function on the `RawEvents` table, reingesting the results to the target table. We'll ingest data with the following structure:
+Array data types are an ordered collection of values. Ingestion of a JSON array is done by an [update policy](/kusto/management/show-table-update-policy-command?view=azure-data-explorer&preserve-view=true). The JSON is ingested as-is to an intermediate table. An update policy runs a predefined function on the `RawEvents` table, reingesting the results to the target table. We ingest data with the following structure:
 
 ```json
 {
@@ -389,7 +395,7 @@ Array data types are an ordered collection of values. Ingestion of a JSON array
 
 ### [KQL](#tab/kusto-query-language)
 
-1. Create an `update policy` function that expands the collection of `records` so that each value in the collection receives a separate row, using the `mv-expand` operator. We'll use table `RawEvents` as a source table and `Events` as a target table.
+1. Create an `update policy` function that expands the collection of `records` so that each value in the collection receives a separate row, using the `mv-expand` operator. We use the table `RawEvents` as a source table and `Events` as a target table.
 
     ```kusto
     .create function EventRecordsExpand() {
@@ -410,7 +416,7 @@ Array data types are an ordered collection of values. Ingestion of a JSON array
     EventRecordsExpand() | getschema
     ```
 
-1. Add the update policy to the target table. This policy will automatically run the query on any newly ingested data in the `RawEvents` intermediate table and ingest the results into the `Events` table. Define a zero-retention policy to avoid persisting the intermediate table.
+1. Add the update policy to the target table. This policy automatically runs the query on any newly ingested data in the `RawEvents` intermediate table and ingests the results into the `Events` table. Define a zero-retention policy to avoid persisting the intermediate table.
 
     ```kusto
     .alter table Events policy update @'[{"Source": "RawEvents", "Query": "EventRecordsExpand()", "IsEnabled": "True"}]'
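The step above mentions a zero-retention policy, but the hunk ends before showing it. A hedged Python sketch that runs both commands through the client library; the `.alter table ... policy update` string is the one shown in the KQL tab, while the retention command's exact syntax is an assumption based on the prose:

```python
# Hedged sketch, assuming azure-kusto-data: attach the update policy shown above,
# then set zero retention on RawEvents so the intermediate data isn't persisted.
from azure.kusto.data import KustoClient, KustoConnectionStringBuilder

DATABASE = "MyDatabase"  # hypothetical database
client = KustoClient(KustoConnectionStringBuilder.with_aad_device_authentication(
    "https://mycluster.westus.kusto.windows.net"))  # hypothetical cluster URI

UPDATE_POLICY = (
    ".alter table Events policy update "
    "@'[{\"Source\": \"RawEvents\", \"Query\": \"EventRecordsExpand()\", \"IsEnabled\": \"True\"}]'"
)
# Assumed syntax for the zero-retention policy described in the step:
ZERO_RETENTION = ".alter-merge table RawEvents policy retention softdelete = 0s recoverability = disabled"

client.execute(DATABASE, UPDATE_POLICY)
client.execute(DATABASE, ZERO_RETENTION)
```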
@@ -430,7 +436,7 @@ Array data types are an ordered collection of values. Ingestion of a JSON array
 
 ### [C#](#tab/c-sharp)
 
-1. Create an update function that expands the collection of `records` so that each value in the collection receives a separate row, using the `mv-expand` operator. We'll use table `RawEvents` as a source table and `Events` as a target table.
+1. Create an update function that expands the collection of `records` so that each value in the collection receives a separate row, using the `mv-expand` operator. We use table `RawEvents` as a source table and `Events` as a target table.
 
     ```csharp
     var command = CslCommandGenerator.GenerateCreateFunctionCommand(
@@ -454,7 +460,7 @@ Array data types are an ordered collection of values. Ingestion of a JSON array
     > [!NOTE]
     > The schema received by the function must match the schema of the target table.
 
-1. Add the update policy to the target table. This policy will automatically run the query on any newly ingested data in the `RawEvents` intermediate table and ingest its results into the `Events` table. Define a zero-retention policy to avoid persisting the intermediate table.
+1. Add the update policy to the target table. This policy automatically runs the query on any newly ingested data in the `RawEvents` intermediate table and ingests its results into the `Events` table. Define a zero-retention policy to avoid persisting the intermediate table.
 
     ```csharp
     command = ".alter table Events policy update @'[{'Source': 'RawEvents', 'Query': 'EventRecordsExpand()', 'IsEnabled': 'True'}]";
@@ -479,7 +485,7 @@ Array data types are an ordered collection of values. Ingestion of a JSON array
 
 ### [Python](#tab/python)
 
-1. Create an update function that expands the collection of `records` so that each value in the collection receives a separate row, using the `mv-expand` operator. We'll use table `RawEvents` as a source table and `Events` as a target table.
+1. Create an update function that expands the collection of `records` so that each value in the collection receives a separate row, using the `mv-expand` operator. We use the table `RawEvents` as a source table and `Events` as a target table.
 
     ```python
     CREATE_FUNCTION_COMMAND =
@@ -500,7 +506,7 @@ Array data types are an ordered collection of values. Ingestion of a JSON array
     > [!NOTE]
     > The schema received by the function has to match the schema of the target table.
 
-1. Add the update policy to the target table. This policy will automatically run the query on any newly ingested data in the `RawEvents` intermediate table and ingest its results into the `Events` table. Define a zero-retention policy to avoid persisting the intermediate table.
+1. Add the update policy to the target table. This policy automatically runs the query on any newly ingested data in the `RawEvents` intermediate table and ingests its results into the `Events` table. Define a zero-retention policy to avoid persisting the intermediate table.
 
     ```python
     CREATE_UPDATE_POLICY_COMMAND =
