data-explorer/connect-odbc.md (15 additions, 15 deletions)
@@ -3,7 +3,7 @@ title: Connect to Azure Data Explorer with ODBC
description: In this article, you learn how to set up an Open Database Connectivity (ODBC) connection to Azure Data Explorer.
ms.reviewer: gabil
ms.topic: how-to
- ms.date: 05/26/2024
+ ms.date: 09/02/2025
---
# Connect to Azure Data Explorer with ODBC
@@ -12,18 +12,18 @@ Open Database Connectivity ([ODBC](/sql/odbc/reference/odbc-overview)) is a wide
Consequently, you can connect to Azure Data Explorer from any application that supports the ODBC driver for SQL Server.
- Watch the following video to learn to create an ODBC connection.
+ Watch the following video to learn how to create an ODBC connection.
Alternatively, follow the steps to [connect to your cluster with ODBC](#connect-to-your-cluster-with-odbc).
> [!NOTE]
- > We recommend using dedicated connectors whenever possible. For a list of available connectors, see [Connectors overview](integrate-data-overview.md).
+ > Use dedicated connectors when possible. For a list of available connectors, see [Connectors overview](integrate-data-overview.md).
## Prerequisites
- * [Microsoft ODBC Driver for SQL Server version 17.2.0.1 or later](/sql/connect/odbc/download-odbc-driver-for-sql-server) for your operating system.
+ * [Microsoft ODBC Driver for SQL Server](/sql/connect/odbc/download-odbc-driver-for-sql-server) version 17.2.0.1 or later for your operating system.
## Connect to your cluster with ODBC
@@ -45,41 +45,41 @@ To configure an ODBC data source using the ODBC driver for SQL Server:
1. Select **Add**.
- :::image type="content" source="media/connect-odbc/add-data-source.png" alt-text="Add data source.":::
+ :::image type="content" source="media/connect-odbc/add-data-source.png" alt-text="Screenshot of the ODBC Data Sources dialog showing the Add Data Source option and fields for creating a new DSN.":::
1. Select **ODBC Driver 17 for SQL Server** then **Finish**.
:::image type="content" source="media/connect-odbc/select-driver.png" alt-text="Screenshot of the ODBC driver selection dialog showing ODBC Driver 17 for SQL Server selected.":::
- 1. Enter a name and description for the connection and the cluster you want to connect to, then select **Next**. The cluster URL should be in the form *\<ClusterName\>.\<Region\>.kusto.windows.net*.
+ 1. Enter a name and description for the connection and the cluster you want to connect to, then select **Next**. The cluster URL should be in the form `<ClusterName>.<Region>.kusto.windows.net`.
> [!NOTE]
- > When entering the cluster URL, do not include the prefix "https://".
+ > When entering the cluster URL, don't include the prefix `https://`.
:::image type="content" source="media/connect-odbc/select-server.png" alt-text="Screenshot of the Data Source Configuration window showing the Server field and an example cluster URL format.":::
1. Select **Active Directory Integrated** then **Next**.
:::image type="content" source="media/connect-odbc/active-directory-integrated.png" alt-text="Screenshot of the authentication method dropdown showing Active Directory Integrated selected.":::
1. Select the database with the sample data then **Next**.
:::image type="content" source="media/connect-odbc/change-default-database.png" alt-text="Screenshot of the default database selection dialog showing the sample data database chosen.":::
1. On the next screen, leave all options as defaults then select **Finish**.
1. Select **Test Data Source**.
- :::image type="content" source="media/connect-odbc/test-data-source.png" alt-text="Test data source.":::
+ :::image type="content" source="media/connect-odbc/test-data-source.png" alt-text="Screenshot of the Test Data Source dialog showing the Test Data Source button and connection status fields.":::
1. Verify that the test succeeded then select **OK**. If the test didn't succeed, check the values that you specified in previous steps, and ensure you have sufficient permissions to connect to the cluster.
:::image type="content" source="media/connect-odbc/test-succeeded.png" alt-text="Screenshot of the Test Data Source results showing a successful connection confirmation message.":::
---
> [!NOTE]
- > Azure Data Explorer considers string values as `NVARCHAR(MAX)`, which may not work well with some ODBC applications. Cast the data to `NVARCHAR(`*n*`)` using the `Language` parameter in the connection string. For example, `Language=any@MaxStringSize:5000` will encode strings as `NVARCHAR(5000)`. For more information, see [tuning options](sql-server-emulation-overview.md#tuning-options).
+ > Azure Data Explorer treats string values as `NVARCHAR(MAX)`, which can cause issues with some ODBC applications. Cast strings to `NVARCHAR(<n>)` by using the `Language` parameter in the connection string. For example, `Language=any@MaxStringSize:5000` encodes strings as `NVARCHAR(5000)`. For more information, see [tuning options](sql-server-emulation-overview.md#tuning-options).
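As a hedged illustration of how the `Language` tuning option fits into a working connection, here's a minimal Python sketch using the `pyodbc` package. The cluster, database, and table names are placeholders, and `Authentication=ActiveDirectoryIntegrated` assumes a Windows client already signed in to the tenant:

```python
import pyodbc

# Every value in angle brackets is a placeholder; replace with your own.
# The Language option caps string columns at NVARCHAR(5000) instead of NVARCHAR(MAX).
connection_string = (
    "Driver={ODBC Driver 17 for SQL Server};"
    "Server=<ClusterName>.<Region>.kusto.windows.net;"
    "Database=<DatabaseName>;"
    "Authentication=ActiveDirectoryIntegrated;"
    "Language=any@MaxStringSize:5000;"
)

connection = pyodbc.connect(connection_string)
cursor = connection.cursor()
# The SQL endpoint accepts T-SQL queries; <TableName> is a placeholder.
cursor.execute("SELECT TOP 10 * FROM <TableName>")
for row in cursor.fetchall():
    print(row)
```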
## Application authentication
@@ -120,4 +120,4 @@ Language = any@AadAuthority:<aad_tenant_id>
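The body of this section is truncated in the diff, but based on the `AadAuthority` tuning option visible in the hunk header, a service principal (application) connection might look like the following sketch. It assumes a recent ODBC Driver 17 release that supports `Authentication=ActiveDirectoryServicePrincipal`; every identifier is a placeholder:

```python
import pyodbc

# Hypothetical application (service principal) connection; all values are placeholders.
connection_string = (
    "Driver={ODBC Driver 17 for SQL Server};"
    "Server=<ClusterName>.<Region>.kusto.windows.net;"
    "Database=<DatabaseName>;"
    "Authentication=ActiveDirectoryServicePrincipal;"
    "UID=<application_client_id>;"
    "PWD=<application_client_secret>;"
    # The AadAuthority tuning option from the hunk header points at the app's tenant.
    "Language=any@AadAuthority:<aad_tenant_id>;"
)

connection = pyodbc.connect(connection_string)
```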
## Related content
* [SQL Server emulation in Azure Data Explorer](sql-server-emulation-overview.md)
- * [Run KQL queries and call stored functions](sql-kql-queries-and-stored-functions.md)
+ * [Run Kusto Query Language (KQL) queries and call stored functions](sql-kql-queries-and-stored-functions.md)
data-explorer/ingest-json-formats.md (25 additions, 19 deletions)
@@ -1,14 +1,20 @@
---
- title: Ingest JSON formatted data into Azure Data Explorer
- description: Learn about how to ingest JSON formatted data into Azure Data Explorer.
+ title: Ingest JSON Data Into Azure Data Explorer
+ description: Ingest JSON to Azure Data Explorer with step-by-step KQL, C#, and Python examples for raw, mapped, multiline, and array records. Follow best practices.
+ #customer intent: As a data engineer, I want to ingest line-separated JSON into Azure Data Explorer so that I can capture raw telemetry in a dynamic column.
ms.reviewer: kerend
ms.topic: how-to
- ms.date: 09/14/2022
+ ms.date: 09/02/2025
+ ms.custom:
+   - ai-gen-docs-bap
+   - ai-gen-title
+   - ai-seo-date:09/02/2025
+   - ai-gen-description
---
# Ingest JSON formatted sample data into Azure Data Explorer
- This article shows you how to ingest JSON formatted data into an Azure Data Explorer database. You'll start with simple examples of raw and mapped JSON, continue to multi-lined JSON, and then tackle more complex JSON schemas containing arrays and dictionaries. The examples detail the process of ingesting JSON formatted data using Kusto Query Language (KQL), C#, or Python.
+ This article shows you how to ingest JSON formatted data into an Azure Data Explorer database. You start with simple examples of raw and mapped JSON, continue to multi-lined JSON, and then tackle more complex JSON schemas containing arrays and dictionaries. The examples detail the process of ingesting JSON formatted data using Kusto Query Language (KQL), C#, or Python.
> [!NOTE]
> We don't recommend using `.ingest` management commands in production scenarios. Instead, use a [data connector](integrate-data-overview.md) or programmatically ingest data using one of the [Kusto client libraries](/kusto/api/client-libraries?view=azure-data-explorer&preserve-view=true).
@@ -26,13 +32,13 @@ Azure Data Explorer supports two JSON file formats:
* `multijson`: Multi-lined JSON. The parser ignores the line separators and reads a record from the previous position to the end of a valid JSON.
> [!NOTE]
- > When ingesting using the [get data experience](ingest-data-overview.md), the default format is `multijson`. The format can handle multiline JSON records and arrays of JSON records. When a parsing error is encountered, the entire file is discarded. To ignore invalid JSON records, select the option to "Ignore data format errors.", which will switch the format to `json` (JSON Lines).
+ > When ingesting using the [Get data experience](ingest-data-overview.md), the default format is `multijson`. The format can handle multiline JSON records and arrays of JSON records. When a parsing error is encountered, the entire file is discarded. To ignore invalid JSON records, select the "Ignore data format errors" option, which switches the format to `json` (JSON Lines).
>
- > If you're using the JSON Line format (`json`), lines that don't represent a valid JSON records are skipped during parsing.
+ > If you're using the JSON Line format (`json`), lines that don't represent valid JSON records are skipped during parsing.
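To make the distinction concrete, here's a small Python illustration of the two shapes; the record fields are invented for the example. The `json` format expects one complete record per line, while `multijson` lets a single record span several lines:

```python
import json

# JSON Lines (`json` format): each line is a complete, self-contained record.
json_lines = '{"id": 1, "event": "start"}\n{"id": 2, "event": "stop"}'
records = [json.loads(line) for line in json_lines.splitlines()]
assert len(records) == 2

# Multi-lined JSON (`multijson` format): a record can span several lines, so the
# parser reads to the end of a valid JSON value instead of going line by line.
multijson = """{
    "id": 1,
    "event": "start"
}"""
record = json.loads(multijson)
assert record["event"] == "start"
```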
### Ingest and map JSON formatted data
- Ingestion of JSON formatted data requires you to specify the *format* using [ingestion property](/kusto/ingestion-properties?view=azure-data-explorer&preserve-view=true). Ingestion of JSON data requires [mapping](/kusto/management/mappings?view=azure-data-explorer&preserve-view=true), which maps a JSON source entry to its target column. When ingesting data, use the `IngestionMapping` property with its `ingestionMappingReference` (for a pre-defined mapping) ingestion property or its `IngestionMappings` property. This article will use the `ingestionMappingReference` ingestion property, which is pre-defined on the table used for ingestion. In the examples below, we'll start by ingesting JSON records as raw data to a single column table. Then we'll use the mapping to ingest each property to its mapped column.
+ Ingestion of JSON formatted data requires you to specify the *format* using an [ingestion property](/kusto/ingestion-properties?view=azure-data-explorer&preserve-view=true). Ingestion of JSON data requires [mapping](/kusto/management/mappings?view=azure-data-explorer&preserve-view=true), which maps a JSON source entry to its target column. When ingesting data, use the `IngestionMapping` property with its `ingestionMappingReference` (for a predefined mapping) ingestion property or its `IngestionMappings` property. This article uses the `ingestionMappingReference` ingestion property, which is predefined on the table used for ingestion. In the following examples, we start by ingesting JSON records as raw data to a single column table. Then we use the mapping to ingest each property to its mapped column.
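As a sketch of how these properties come together, here's how the format and mapping reference might be passed through the Python client, assuming recent versions of the `azure-kusto-data` and `azure-kusto-ingest` packages; the cluster URL, database, table, and mapping names are placeholders:

```python
from azure.kusto.data import KustoConnectionStringBuilder
from azure.kusto.data.data_format import DataFormat
from azure.kusto.ingest import IngestionProperties, QueuedIngestClient

# Queued ingestion goes through the cluster's ingest endpoint; values are placeholders.
kcsb = KustoConnectionStringBuilder.with_az_cli_authentication(
    "https://ingest-<ClusterName>.<Region>.kusto.windows.net"
)
client = QueuedIngestClient(kcsb)

# `ingestion_mapping_reference` names a mapping predefined on the target table.
properties = IngestionProperties(
    database="<DatabaseName>",
    table="<TableName>",
    data_format=DataFormat.MULTIJSON,
    ingestion_mapping_reference="<MappingName>",
)
client.ingest_from_file("events.json", ingestion_properties=properties)
```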
### Simple JSON example
@@ -206,7 +212,7 @@ In this example, you ingest JSON records data. Each JSON property is mapped to a
### [KQL](#tab/kusto-query-language)
- 1. Create a new table, with a similar schema to the JSON input data. We'll use this table for all the following examples and ingest commands.
+ 1. Create a new table, with a similar schema to the JSON input data. We use this table for all the following examples and ingest commands.
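The command itself is elided in this diff. As a hedged sketch only, a table-creation command along these lines would fit this step, sent through the Python client; the column list is an assumption made for illustration and must match your actual JSON properties:

```python
from azure.kusto.data import KustoClient, KustoConnectionStringBuilder

# Placeholder cluster and database; authenticate however your cluster requires.
kcsb = KustoConnectionStringBuilder.with_az_cli_authentication(
    "https://<ClusterName>.<Region>.kusto.windows.net"
)
client = KustoClient(kcsb)

# Assumed schema for illustration, not the article's actual table definition.
CREATE_TABLE_COMMAND = (
    ".create table Events (Time: datetime, Device: string, "
    "MessageId: string, Temperature: double, Humidity: double)"
)
client.execute_mgmt("<DatabaseName>", CREATE_TABLE_COMMAND)
```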
- Array data types are an ordered collection of values. Ingestion of a JSON array is done by an [update policy](/kusto/management/show-table-update-policy-command?view=azure-data-explorer&preserve-view=true). The JSON is ingested as-is to an intermediate table. An update policy runs a pre-defined function on the `RawEvents` table, reingesting the results to the target table. We'll ingest data with the following structure:
+ Array data types are an ordered collection of values. Ingestion of a JSON array is done by an [update policy](/kusto/management/show-table-update-policy-command?view=azure-data-explorer&preserve-view=true). The JSON is ingested as-is to an intermediate table. An update policy runs a predefined function on the `RawEvents` table, reingesting the results to the target table. We ingest data with the following structure:
```json
{
@@ -389,7 +395,7 @@ Array data types are an ordered collection of values. Ingestion of a JSON array
### [KQL](#tab/kusto-query-language)
- 1. Create an `update policy` function that expands the collection of `records` so that each value in the collection receives a separate row, using the `mv-expand` operator. We'll use table `RawEvents` as a source table and `Events` as a target table.
+ 1. Create an `update policy` function that expands the collection of `records` so that each value in the collection receives a separate row, using the `mv-expand` operator. We use the table `RawEvents` as a source table and `Events` as a target table.
```kusto
.create function EventRecordsExpand() {
@@ -410,7 +416,7 @@ Array data types are an ordered collection of values. Ingestion of a JSON array
EventRecordsExpand() | getschema
```
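The function body is truncated in the diff above. As a hedged sketch only, an expand function along these lines would match the step's description, written as a KQL string in the style of the article's own Python tab; the `Event.records` path and the projected column names are assumptions, not the article's actual schema:

```python
# Sketch of the truncated function. mv-expand gives each element of the
# `records` collection its own row; column names are illustrative assumptions.
EXPAND_FUNCTION_COMMAND = """
.create function EventRecordsExpand() {
    RawEvents
    | mv-expand record = Event.records
    | project Time = todatetime(record["timestamp"]),
              Device = tostring(record["deviceId"]),
              MessageId = tostring(record["messageId"]),
              Temperature = todouble(record["temperature"]),
              Humidity = todouble(record["humidity"])
}
"""
```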
- 1. Add the update policy to the target table. This policy will automatically run the query on any newly ingested data in the `RawEvents` intermediate table and ingest the results into the `Events` table. Define a zero-retention policy to avoid persisting the intermediate table.
+ 1. Add the update policy to the target table. This policy automatically runs the query on any newly ingested data in the `RawEvents` intermediate table and ingests the results into the `Events` table. Define a zero-retention policy to avoid persisting the intermediate table.
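A hedged sketch of the two management commands this step describes, again as strings for the Python client; the table and function names follow the surrounding text:

```python
# Attach the update policy: whenever RawEvents receives data, run
# EventRecordsExpand() and ingest its output into Events transactionally.
UPDATE_POLICY_COMMAND = (
    ".alter table Events policy update "
    "@'[{\"IsEnabled\": true, \"Source\": \"RawEvents\", "
    "\"Query\": \"EventRecordsExpand()\", \"IsTransactional\": true}]'"
)

# Zero retention on the intermediate table so raw records aren't persisted.
ZERO_RETENTION_COMMAND = (
    ".alter-merge table RawEvents policy retention softdelete = 0d"
)
```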
@@ -430,7 +436,7 @@ Array data types are an ordered collection of values. Ingestion of a JSON array
### [C#](#tab/c-sharp)
- 1. Create an update function that expands the collection of `records` so that each value in the collection receives a separate row, using the `mv-expand` operator. We'll use table `RawEvents` as a source table and `Events` as a target table.
+ 1. Create an update function that expands the collection of `records` so that each value in the collection receives a separate row, using the `mv-expand` operator. We use table `RawEvents` as a source table and `Events` as a target table.
```csharp
var command = CslCommandGenerator.GenerateCreateFunctionCommand(
@@ -454,7 +460,7 @@ Array data types are an ordered collection of values. Ingestion of a JSON array
> [!NOTE]
> The schema received by the function must match the schema of the target table.
- 1. Add the update policy to the target table. This policy will automatically run the query on any newly ingested data in the `RawEvents` intermediate table and ingest its results into the `Events` table. Define a zero-retention policy to avoid persisting the intermediate table.
+ 1. Add the update policy to the target table. This policy automatically runs the query on any newly ingested data in the `RawEvents` intermediate table and ingests its results into the `Events` table. Define a zero-retention policy to avoid persisting the intermediate table.
@@ -479,7 +485,7 @@ Array data types are an ordered collection of values. Ingestion of a JSON array
### [Python](#tab/python)
- 1. Create an update function that expands the collection of `records` so that each value in the collection receives a separate row, using the `mv-expand` operator. We'll use table `RawEvents` as a source table and `Events` as a target table.
+ 1. Create an update function that expands the collection of `records` so that each value in the collection receives a separate row, using the `mv-expand` operator. We use the table `RawEvents` as a source table and `Events` as a target table.
```python
CREATE_FUNCTION_COMMAND =
@@ -500,7 +506,7 @@ Array data types are an ordered collection of values. Ingestion of a JSON array
> [!NOTE]
> The schema received by the function has to match the schema of the target table.
- 1. Add the update policy to the target table. This policy will automatically run the query on any newly ingested data in the `RawEvents` intermediate table and ingest its results into the `Events` table. Define a zero-retention policy to avoid persisting the intermediate table.
+ 1. Add the update policy to the target table. This policy automatically runs the query on any newly ingested data in the `RawEvents` intermediate table and ingests its results into the `Events` table. Define a zero-retention policy to avoid persisting the intermediate table.