diff --git a/data-explorer/kusto/ingestion-properties.md b/data-explorer/kusto/ingestion-properties.md
index 1601e85323..52ac723512 100644
--- a/data-explorer/kusto/ingestion-properties.md
+++ b/data-explorer/kusto/ingestion-properties.md
@@ -1,22 +1,25 @@
---
title: Data ingestion properties
-description: Learn about the various data ingestion properties.
+description: Optimize data ingestion by configuring properties that align with your data formats.
ms.reviewer: tzgitlin
ms.topic: conceptual
-ms.date: 08/11/2024
+ms.date: 09/25/2025
monikerRange: "azure-data-explorer || microsoft-fabric"
---
# Data ingestion properties
> [!INCLUDE [applies](includes/applies-to-version/applies.md)] [!INCLUDE [fabric](includes/applies-to-version/fabric.md)] [!INCLUDE [azure-data-explorer](includes/applies-to-version/azure-data-explorer.md)]
-Data ingestion is the process by which data is added to a table and is made available for query. You add properties to the ingestion command after the `with` keyword.
+Data ingestion adds data to a table and makes it available for query. Add properties to the ingestion command after the `with` keyword.
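+
+For example, the following sketch passes the `format` and `ignoreFirstRecord` properties after `with` on the `.ingest into` command. The table name and blob URL are illustrative placeholders; use the ingestion command and properties that fit your scenario.
+
+```kusto
+// Queue a single CSV blob for ingestion and skip its header row.
+// MyTable and the blob URL are placeholders.
+.ingest into table MyTable (
+    h'https://mystorageaccount.blob.core.windows.net/container/MyData.csv?...'
+) with (
+    format='csv',
+    ignoreFirstRecord=true
+)
+```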
[!INCLUDE [ingestion-properties](includes/ingestion-properties.md)]
## Related content
-* Learn more about [supported data formats](ingestion-supported-formats.md)
+::: moniker range="microsoft-fabric"
+* [Supported data formats](ingestion-supported-formats.md)
+::: moniker-end
::: moniker range="azure-data-explorer"
-* Learn more about [data ingestion](/azure/data-explorer/ingest-data-overview)
+* [Supported data formats](ingestion-supported-formats.md)
+* [Data ingestion](/azure/data-explorer/ingest-data-overview)
::: moniker-end
diff --git a/data-explorer/kusto/ingestion-supported-formats.md b/data-explorer/kusto/ingestion-supported-formats.md
index 9ec1aa5860..bb3cdcbfb1 100644
--- a/data-explorer/kusto/ingestion-supported-formats.md
+++ b/data-explorer/kusto/ingestion-supported-formats.md
@@ -1,16 +1,16 @@
---
-title: Data formats supported for ingestion
-description: Learn about the various data and compression formats supported for ingestion.
+title: Data Ingestion - Supported Formats and Compression
+description: Explore the various data formats like CSV, JSON, Parquet, and more, supported for ingestion. Understand compression options and best practices for data preparation.
ms.reviewer: tzgitlin
ms.topic: conceptual
-ms.date: 08/11/2024
+ms.date: 09/21/2025
monikerRange: "azure-data-explorer || microsoft-fabric"
---
# Data formats supported for ingestion
> [!INCLUDE [applies](includes/applies-to-version/applies.md)] [!INCLUDE [fabric](includes/applies-to-version/fabric.md)] [!INCLUDE [azure-data-explorer](includes/applies-to-version/azure-data-explorer.md)]
-Data ingestion is the process by which data is added to a table and is made available for query. For all ingestion methods, other than ingest-from-query, the data must be in one of the supported formats. The following table lists and describes the formats that is supported for data ingestion.
+Data ingestion adds data to a table and makes it available for query. For all ingestion methods, other than ingest-from-query, the data must be in one of the supported formats. The following table lists and describes the formats that are supported for data ingestion.
> [!NOTE]
> Before you ingest data, make sure that your data is properly formatted and defines the expected fields. We recommend using your preferred validator to confirm the format is valid. For example, you may find the following validators useful to check CSV or JSON files:
@@ -18,22 +18,22 @@ Data ingestion is the process by which data is added to a table and is made avai
> * CSV: http://csvlint.io/
> * JSON: https://jsonlint.com/
-For more information about why ingestion might fail, see [Ingestion failures](management/ingestion-failures.md)
+To learn why ingestion might fail, see [Ingestion failures](management/ingestion-failures.md).
::: moniker range="azure-data-explorer"
-and [Ingestion error codes in Azure Data Explorer](/azure/data-explorer/error-codes).
+and [Ingestion error codes in Azure Data Explorer](/azure/data-explorer/error-codes).
::: moniker-end
|Format |Extension |Description|
|---------|------------|-----------|
-|ApacheAvro|`.avro` |An [AVRO](https://avro.apache.org/docs/current/) format with support for [logical types](https://avro.apache.org/docs/++version++/specification/#Logical+Types). The following compression codecs are supported: `null`, `deflate`, and `snappy`. Reader implementation of the `apacheavro` format is based on the official [Apache Avro library](https://github.com/apache/avro). For information about ingesting Event Hub Capture Avro files, see [Ingesting Event Hub Capture Avro files](/azure/data-explorer/ingest-data-event-hub-overview#schema-mapping-for-event-hub-capture-avro-files). |
-|Avro |`.avro` |A legacy implementation for [AVRO](https://avro.apache.org/docs/current/) format based on [.NET library](https://www.nuget.org/packages/Microsoft.Hadoop.Avro). The following compression codecs are supported: `null`, `deflate` (for `snappy` - use `ApacheAvro` data format). |
+|ApacheAvro|`.avro` |An [Avro](https://avro.apache.org/docs/current/) format that supports [logical types](https://avro.apache.org/docs/++version++/specification/#Logical+Types). Supported compression codecs: `null`, `deflate`, and `snappy`. The reader implementation of the `apacheavro` format is based on the official [Apache Avro library](https://github.com/apache/avro). For details on ingesting Event Hubs Capture Avro files, see [Ingesting Event Hubs Capture Avro files](/azure/data-explorer/ingest-data-event-hub-overview#schema-mapping-for-event-hub-capture-avro-files). |
+|Avro |`.avro` |A legacy implementation of the [Avro](https://avro.apache.org/docs/current/) format based on the [.NET library](https://www.nuget.org/packages/Microsoft.Hadoop.Avro). Supported compression codecs: `null` and `deflate`. To use `snappy`, use the `ApacheAvro` data format. |
|CSV |`.csv` |A text file with comma-separated values (`,`). See [RFC 4180: _Common Format and MIME Type for Comma-Separated Values (CSV) Files_](https://www.ietf.org/rfc/rfc4180.txt).|
|JSON |`.json` |A text file with JSON objects delimited by `\n` or `\r\n`. See [JSON Lines (JSONL)](http://jsonlines.org/).|
-|MultiJSON|`.multijson`|A text file with a JSON array of property bags (each representing a record), or any number of property bags delimited by whitespace, `\n` or `\r\n`. Each property bag can be spread on multiple lines.|
+|MultiJSON|`.multijson`|A text file with a JSON array of property bags (each representing a record), or any number of property bags delimited by whitespace, `\n`, or `\r\n`. Each property bag can span multiple lines.|
|ORC |`.orc` |An [ORC file](https://en.wikipedia.org/wiki/Apache_ORC).|
|Parquet |`.parquet` |A [Parquet file](https://en.wikipedia.org/wiki/Apache_Parquet). |
|PSV |`.psv` |A text file with pipe-separated values (`\|`). |
-|RAW |`.raw` |A text file whose entire contents is a single string value. |
+|RAW |`.raw` |A text file whose entire contents are a single string value. |
|SCsv |`.scsv` |A text file with semicolon-separated values (`;`).|
|SOHsv |`.sohsv` |A text file with SOH-separated values. (SOH is ASCII codepoint 1; this format is used by Hive on HDInsight.)|
|TSV |`.tsv` |A text file with tab-separated values (`\t`).|
@@ -43,42 +43,42 @@ and [Ingestion error codes in Azure Data Explorer](/azure/data-explorer/error-c
> [!NOTE]
>
-> * Ingestion from data storage systems that provide ACID functionality on top of regular Parquet format files (e.g. Apache Iceberg, Apache Hudi, Delta Lake) is not supported.
-> * Schema-less Avro is not supported.
+> * Ingestion from data storage systems that provide ACID functionality on top of regular Parquet format files (for example, Apache Iceberg, Apache Hudi, and Delta Lake) isn't supported.
+> * Schemaless Avro isn't supported.
::: moniker range="azure-data-explorer"
-For more info on ingesting data using `json` or `multijson` formats, see [ingest json formats](/azure/data-explorer/ingest-json-formats).
+For more information about ingesting data by using the `json` or `multijson` formats, see [Ingest JSON formats](/azure/data-explorer/ingest-json-formats).
::: moniker-end
## Supported data compression formats
-Blobs and files can be compressed through any of the following compression algorithms:
+Compress blobs and files with these algorithms:
|Compression|Extension|
|-----------|---------|
|gzip |.gz |
|zip |.zip |
-Indicate compression by appending the extension to the name of the blob or file.
+Indicate compression by appending the extension to the blob or file name.
For example:
-* `MyData.csv.zip` indicates a blob or a file formatted as CSV, compressed with zip (archive or a single file)
-* `MyData.json.gz` indicates a blob or a file formatted as JSON, compressed with gzip.
+* `MyData.csv.zip` indicates a blob or file formatted as CSV, compressed with zip (archive or single file).
+* `MyData.json.gz` indicates a blob or file formatted as JSON, compressed with gzip.
-Blob or file names that don't include the format extensions but just compression (for example, `MyData.zip`) is also supported. In this case, the file format
-must be specified as an ingestion property because it cannot be inferred.
+Blob or file names that include only the compression extension (for example, `MyData.zip`) are also supported. In this case, specify the file format
+as an ingestion property because it can't be inferred.
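+
+For example, here's a minimal sketch that states the format explicitly because the blob name (`MyData.zip`) reveals only the compression. The table name and URL are placeholders, and `.ingest into` stands in for whichever ingestion method you use.
+
+```kusto
+// The name indicates only zip compression, so pass the data format explicitly.
+.ingest into table MyTable (
+    h'https://mystorageaccount.blob.core.windows.net/container/MyData.zip?...'
+) with (format='csv')
+```
+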
> [!NOTE]
>
-> * Some compression formats keep track of the original file extension as part of the compressed stream. This extension is generally ignored for determining the file format. If the file format can't be determined from the (compressed) blob or file name, it must be specified through the `format` ingestion property.
-> * Not to be confused with internal (chunk level) compression codec used by `Parquet`, `AVRO` and `ORC` formats. Internal compression name is usually added to a file name before file format extension, for example: `file1.gz.parquet`, `file1.snappy.avro`, etc.
-> * [Deflate64/Enhanced Deflate](https://en.wikipedia.org/wiki/Deflate#Deflate64/Enhanced_Deflate) zip compression method is not supported. Please note that Windows built-in zip compressor may choose to use this compression method on files of size over 2GB.
+> * Some compression formats store the original file extension in the compressed stream. Ignore this extension when you determine the file format. If you can't determine the file format from the compressed blob or file name, specify it with the `format` ingestion property.
+> * Don't confuse these with internal chunk-level compression codecs used by `Parquet`, `AVRO`, and `ORC` formats. The internal compression name is usually added before the file format extension (for example, `file1.gz.parquet`, `file1.snappy.avro`).
+> * The [Deflate64/Enhanced Deflate](https://en.wikipedia.org/wiki/Deflate#Deflate64/Enhanced_Deflate) zip compression method isn't supported. The built-in Windows zip compressor might use this method on files larger than 2 GB.
## Related content
-* Learn more about [supported data formats](ingestion-supported-formats.md)
-* Learn more about [Data ingestion properties](ingestion-properties.md)
+* [Supported data formats](ingestion-supported-formats.md)
+* [Data ingestion properties](ingestion-properties.md)
::: moniker range="azure-data-explorer"
-* Learn more about [data ingestion](/azure/data-explorer/ingest-data-overview)
+* [Data ingestion](/azure/data-explorer/ingest-data-overview)
::: moniker-end
diff --git a/data-explorer/kusto/management/data-ingestion/cancel-queued-ingestion-operation-command.md b/data-explorer/kusto/management/data-ingestion/cancel-queued-ingestion-operation-command.md
index a6e4d51939..ff80d5385a 100644
--- a/data-explorer/kusto/management/data-ingestion/cancel-queued-ingestion-operation-command.md
+++ b/data-explorer/kusto/management/data-ingestion/cancel-queued-ingestion-operation-command.md
@@ -3,23 +3,23 @@ title: .cancel queued ingestion operation command
description: Learn how to use the `.cancel queued operation` command to cancel a long-running operation.
ms.reviewer: vplauzon
ms.topic: reference
-ms.date: 03/19/2025
+ms.date: 09/30/2025
---
# .cancel queued ingestion operation command (preview)
-> [!INCLUDE [applies](../../includes/applies-to-version/applies.md)] [!INCLUDE [azure-data-explorer](../../includes/applies-to-version/azure-data-explorer.md)]
+> [!INCLUDE [applies](../../includes/applies-to-version/applies.md)] [!INCLUDE [fabric](../../includes/applies-to-version/fabric.md)] [!INCLUDE [azure-data-explorer](../../includes/applies-to-version/azure-data-explorer.md)]
-The `.cancel queued ingestion operation` command cancels an ingestion operation. This command is useful for aborting an ingestion operation that is taking too long to complete.
+The `.cancel queued ingestion operation` command cancels an ingestion operation. Use this command to stop an ingestion operation that takes too long to finish.
-The cancel operation command is done on a best effort basis. For example, ongoing ingestion processes or in-flight ingestion, might not get canceled.
+The cancel operation command works on a best-effort basis. For example, ongoing ingestion processes or in-flight ingestion might not be canceled.
> [!NOTE]
>
-> Queued ingestion commands are run on the data ingestion URI endpoint `https://ingest-.kusto.windows.net`.
+> Queued ingestion commands run on the data ingestion URI endpoint `https://ingest-.kusto.windows.net`.
## Permissions
-You must have at least [Table Ingestor](../../access-control/role-based-access-control.md) permissions to run this command.
+You need at least [Table Ingestor](../../access-control/role-based-access-control.md) permissions to run this command.
## Syntax
@@ -31,42 +31,39 @@ You must have at least [Table Ingestor](../../access-control/role-based-access-c
| Name | Type | Required | Description |
|--|--|--|--|
-| *IngestionOperationId* | `string` | :heavy_check_mark: | The unique ingestion operation ID returned from the running command.|
+| *IngestionOperationId* | `string` | :heavy_check_mark: | The unique ingestion operation ID returned by the running command.|
## Returns
|Output parameter |Type |Description|
|---|---|---|
|IngestionOperationId | `string` |The unique operation identifier.|
-|StartedOn | `datetime` |Date/time, in UTC, at which the `.ingest-from-storage-queued` was executed.|
-|LastUpdatedOn | `datetime` |Date/time, in UTC, when the status was updated.|
+|StartedOn | `datetime` |Date and time, in UTC, when the `.ingest-from-storage-queued` operation started.|
+|LastUpdatedOn | `datetime` |Date and time, in UTC, when the status was last updated.|
|State | `string` |The state of the operation.|
|Discovered | `long` |Count of the blobs that were listed from storage and queued for ingestion.|
|Pending | `long` |Count of the blobs to be ingested.|
-|Canceled | `long` |Count of the blobs that were canceled due to a call to the [.cancel queued ingestion operation](cancel-queued-ingestion-operation-command.md) command.|
+|Canceled | `long` |Count of the blobs that are canceled due to a call to the [.cancel queued ingestion operation](cancel-queued-ingestion-operation-command.md) command.|
|Ingested | `long` |Count of the blobs that have been ingested.|
-|Failed | `long` |Count of the blobs that failed **permanently**.|
-|SampleFailedReasons | `string` |A sample of reasons for blob ingestion failures.|
+|Failed | `long` |Count of the blobs that fail **permanently**.|
+|SampleFailedReasons | `string` |A sample of reasons for blob ingestion failure.|
|Database | `string` |The database where the ingestion process is occurring.|
|Table | `string` | The table where the ingestion process is occurring.|
->[!NOTE]
-> If the ingestion operation was initiated with tracking disabled, cancellation commands execute on a best‑effort basis. The returned state may indicate: "Cancellation request received – service will attempt best effort cancellation (tracking isn't enabled on operation)"
-
## Example
-The following example cancels the ingestion of operation `00001111;11112222;00001111-aaaa-2222-bbbb-3333cccc4444`.
+This example cancels the ingestion operation `00001111;11112222;00001111-aaaa-2222-bbbb-3333cccc4444`.
```Kusto
.cancel queued ingestion operation '00001111;11112222;00001111-aaaa-2222-bbbb-3333cccc4444'
```
-|IngestionOperationId|Started On |Last Updated On |State |Discovered |Pending| Canceled | Ingested |Failed|SampleFailedReasons|Database|Table|
+|IngestionOperationId|StartedOn|LastUpdatedOn|State|Discovered|Pending|Canceled|Ingested|Failed|SampleFailedReasons|Database|Table|
|--|--|--|--|--|--|--|--|--|--|--|--|
-|00001111;11112222;00001111-aaaa-2222-bbbb-3333cccc4444 |2025-03-20 15:03:11.0000000 ||Canceled | 10 |10 |0 |0 |0 | |TestDatabase|Logs|
+|00001111;11112222;00001111-aaaa-2222-bbbb-3333cccc4444|2025-03-20 15:03:11.0000000||Canceled|10|10|0|0|0||TestDatabase|Logs|
## Related content
* [Queued ingestion overview](queued-ingestion-overview.md)
* [Data formats supported for ingestion](../../ingestion-supported-formats.md)
-* [.ingest-from-storage-queued command](ingest-from-storage-queued.md)
+* [.ingest-from-storage-queued into command](ingest-from-storage-queued.md)
diff --git a/data-explorer/kusto/management/data-ingestion/ingest-from-storage-queued.md b/data-explorer/kusto/management/data-ingestion/ingest-from-storage-queued.md
index 2486dd3057..1ced66d0d2 100644
--- a/data-explorer/kusto/management/data-ingestion/ingest-from-storage-queued.md
+++ b/data-explorer/kusto/management/data-ingestion/ingest-from-storage-queued.md
@@ -1,15 +1,16 @@
---
-title: .ingest-from-storage-queued command
+title: .ingest-from-storage-queued into command
description: This article describes the `.ingest-from-storage-queued` `into` command used to ingest a storage folder in Azure Data Explorer.
ms.reviewer: vplauzon
ms.topic: reference
-ms.date: 03/02/2025
+ms.date: 09/30/2025
---
# .ingest-from-storage-queued command (preview)
-> [!INCLUDE [applies](../../includes/applies-to-version/applies.md)] [!INCLUDE [azure-data-explorer](../../includes/applies-to-version/azure-data-explorer.md)]
+> [!INCLUDE [applies](../../includes/applies-to-version/applies.md)] [!INCLUDE [fabric](../../includes/applies-to-version/fabric.md)] [!INCLUDE [azure-data-explorer](../../includes/applies-to-version/azure-data-explorer.md)]
-The `.ingest-from-storage-queued` command is used with the [.list blobs](list-blobs.md) command to queue blobs for ingestion into a table. This command supports bulk ingestion of an entire storage container, a specific folder within a container, or all blobs that match a given prefix and suffix.
+The `.ingest-from-storage-queued` command queues blobs for ingestion into a table. It supports ingestion of individual blobs by URL, multiple blobs from a source file, specific folders, or an entire storage container.
+This command is useful for testing and managing ingestion scenarios, helping ensure that schema, partitioning, and other configurations are correctly applied.
[!INCLUDE [direct-ingestion-note](../../includes/direct-ingestion-note.md)]
@@ -31,13 +32,13 @@ You must have at least [Table Ingestor](../../access-control/role-based-access-c
|Name|Type|Required|Description|
|--|--|--|--|
-|*DatabaseName*| `string` | |The name of the database into which to ingest data. If no database name is provided, the request's context database is used.|
+|*DatabaseName*| `string` | :heavy_check_mark: |The name of the database into which to ingest data.|
|*TableName*| `string` | :heavy_check_mark: |The name of the table into which to ingest data.|
|*EnableTracking*| `boolean` | | Determines whether to track the blob ingestion. For more information, see [.show queued ingestion operations command](show-queued-ingestion-operations.md). The default is `false`.|
|*SkipBatching*| `boolean` | | If set to `true`, the blobs are ingested individually rather than batched together with other blobs. The default value is `false`.|
|*CompressionFactor*| `real` | |The compression factor (ratio) between the original size and the compressed size of blobs. Compression factor is used to estimate the original size of the data for batching purposes, when blobs are provided in a compressed format.|
|*IngestionPropertyName*, *IngestionPropertyValue* | `string` | |Optional ingestion properties. For more information about ingestion properties, see [Data ingestion properties](../../ingestion-properties.md).|
-|*IngestionSource* | table | :heavy_check_mark: | The ingestion source. The source is a list of blobs returned using the [.list blobs](list-blobs.md) command. |
+|*IngestionSource* | `string` | :heavy_check_mark: | The ingestion source. The source can be a URL, source file, or a list of blobs returned using the [.list blobs](list-blobs.md) command. |
> [!NOTE]
> The `.list blobs` command can be used with the `.ingest-from-storage-queued` command to return the blobs you want to ingest. For detailed information about the command and a full list of its parameters, see [.list blobs command](list-blobs.md).
@@ -51,14 +52,32 @@ The result of the command is a table with one row and one column.
| IngestionOperationId | `string` | A unique ID used to track the set of blobs, whether or not tracking is enabled. |
| ClientRequestId | `string` | The client request ID of the command. |
| OperationInfo | `string` | Displays the command to run to retrieve the current status of the operation. |
-| CancelationInfo | `string` | Displays the command to run to cancel the operation. |
>[!NOTE]
> This command doesn't modify the schema of the target table. If necessary, the data is converted to fit the table's schema during ingestion. Extra columns are ignored and missing columns are treated as null values.
-## Example
+## Examples
-The example in this section shows how to use the syntax to help you get started.
+The examples in this section show how to use the syntax to help you get started.
+
+### Ingest blobs from files
+
+The following example queues individual blobs, specified by URL, for ingestion.
+
+```kusto
+.ingest-from-storage-queued into table database ('MyDatabase').mytable
+EnableTracking=true
+with (
+ format='csv',
+ ingestionMappingReference='MyMapping'
+)
+<| 'https://sample.blob.core.windows.net/sample/test_1.csv?...'
+   'https://sample.blob.core.windows.net/sample/test_2.csv?...'
+   'https://sample.blob.core.windows.net/sample/test_3.csv?...'
+```
+
+>[!NOTE]
+> Make sure to include a SAS token or use a managed identity to grant the service permission to access and download the blob.
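+
+As a sketch of the managed identity option, the following variant references the blob with the `;managed_identity=system` suffix described in [storage connection strings](../../api/connection-strings/storage-connection-strings.md) instead of a SAS token. The database, table, mapping, and URL are placeholders, and the identity is assumed to have read access to the blob.
+
+```kusto
+.ingest-from-storage-queued into table database('MyDatabase').mytable
+EnableTracking=true
+with (
+    format='csv',
+    ingestionMappingReference='MyMapping'
+)
+<| 'https://sample.blob.core.windows.net/sample/test_1.csv;managed_identity=system'
+```
+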
### Ingest all blobs in a folder
diff --git a/data-explorer/kusto/management/data-ingestion/list-blobs.md b/data-explorer/kusto/management/data-ingestion/list-blobs.md
index 116cfe6b9d..0f0433d476 100644
--- a/data-explorer/kusto/management/data-ingestion/list-blobs.md
+++ b/data-explorer/kusto/management/data-ingestion/list-blobs.md
@@ -3,11 +3,11 @@ title: .list blobs command (list blobs from storage)
description: Learn how to use the list blobs from storage command.
ms.reviewer: vplauzon
ms.topic: reference
-ms.date: 07/07/2025
+ms.date: 09/30/2025
---
# .list blobs command (preview)
-> [!INCLUDE [applies](../../includes/applies-to-version/applies.md)] [!INCLUDE [azure-data-explorer](../../includes/applies-to-version/azure-data-explorer.md)]
+> [!INCLUDE [applies](../../includes/applies-to-version/applies.md)] [!INCLUDE [fabric](../../includes/applies-to-version/fabric.md)] [!INCLUDE [azure-data-explorer](../../includes/applies-to-version/azure-data-explorer.md)]
The `.list blobs` command lists blobs under a specified container path.
@@ -36,11 +36,11 @@ You must have at least [Table Ingestor](../../access-control/role-based-access-c
|*SourceDataLocators*| `string` | :heavy_check_mark:|One or many [storage connection strings](../../api/connection-strings/storage-connection-strings.md) separated by a comma character. Each connection string can refer to a storage container or a file prefix within a container. Currently, only one storage connection string is supported. |
|*SuffixValue*| `string` | |The suffix that enables blob filtering.|
|*MaxFilesValue*| `integer` | | The maximum number of blobs to return. |
-|*PathFormatValue*| `string` | | The pattern in the blob’s path that can be used to retrieve the creation time as an output field. For more information, see [Path format](#path-format). |
+|*PathFormatValue*| `string` | | The pattern in the blob's path that can be used to retrieve the creation time as an output field. For more information, see [Path format](#path-format). |
> [!NOTE]
>
-> * To safeguard sensitive information, use [obfuscated string literals](../../query/scalar-data-types/string.md#obfuscated-string-literals) for *SourceDataLocators*.
+> * We recommend using [obfuscated string literals](../../query/scalar-data-types/string.md#obfuscated-string-literals) for *SourceDataLocators* to scrub credentials in internal traces and error messages.
>
> * When used alone, `.list blob` returns up to 1,000 files, regardless of any larger value specified in *MaxFiles*.
@@ -58,10 +58,6 @@ The following table lists the supported authentication methods and the permissio
|[Storage account access key](../../api/connection-strings/storage-connection-strings.md#storage-account-access-key)||This authentication method isn't supported in Gen1.|
|[Managed identity](../../api/connection-strings/storage-connection-strings.md#managed-identity)|Storage Blob Data Reader|Reader|
-> [!NOTE]
->
-> To safeguard sensitive information when using SAS tokens or storage account access keys, use [obfuscated string literals](../../query/scalar-data-types/string.md#obfuscated-string-literals). See the [example](#list-blobs-with-obfuscated-sas-token) for more information.
-
The primary use of `.list blobs` is for queued ingestion which is done asynchronously with no user context. Therefore, [Impersonation](../../api/connection-strings/storage-connection-strings.md#impersonation) isn't supported.
### Path format
@@ -110,17 +106,6 @@ The following command lists a maximum of 20 blobs from the `myfolder` folder usi
MaxFiles=20
```
-### List blobs with obfuscated SAS token
-
-The following command lists a maximum of 20 blobs from the `myfolder` folder using a [Shared Access Signature (SAS) token](../../api/connection-strings/storage-connection-strings.md#shared-access-sas-token) for authentication. The SAS token is obfuscated to safeguard sensitive information.
-
-```kusto
-.list blobs (
- h"https://mystorageaccount.blob.core.windows.net/datasets/myfolder?sv=..."
-)
-MaxFiles=20
-```
-
### List Parquet blobs
The following command lists a maximum of 10 blobs of type `.parquet` from a folder, using [system-assigned managed identity](../../api/connection-strings/storage-connection-strings.md#managed-identity) authentication.
@@ -139,33 +124,22 @@ The following command lists a maximum of 10 blobs of type `.parquet` from a fold
```kusto
.list blobs (
- "https://mystorageaccount.blob.core.windows.net/spark/myfolder;managed_identity=system"
+ "https://mystorageaccount.blob.core.windows.net/datasets/myfolder;managed_identity=system"
)
Suffix=".parquet"
MaxFiles=10
PathFormat=("myfolder/year=" datetime_pattern("yyyy'/month='MM'/day='dd", creationTime) "/")
```
-The `PathFormat` parameter can extract dates from various folder hierarchies, such as:
-
-* *Spark* folder paths, for example: `https://mystorageaccount.blob.core.windows.net/spark/myfolder/year=2024/month=03/day=16/myblob.parquet`
-
-* Common folder paths, for example: `https://mystorageaccount.blob.core.windows.net/datasets/export/2024/03/16/03/myblob.parquet` where the hour `03` is included in the path.
+The `PathFormat` in the example can extract dates from a path such as the following:
-You can extract the creation time with the following command:
-
-```kusto
-.list blobs (
- "https://mystorageaccount.blob.core.windows.net/datasets/export;managed_identity=system"
-)
-Suffix=".parquet"
-MaxFiles=10
-PathFormat=("datasets/export/" datetime_pattern("yyyy'/'MM'/'dd'/'HH", creationTime) "/")
+```
+https://mystorageaccount.blob.core.windows.net/datasets/myfolder/year=2024/month=03/day=16/myblob.parquet
```
## Related content
* [Queued ingestion overview](queued-ingestion-overview.md)
* [Data formats supported for ingestion](../../ingestion-supported-formats.md)
-* [.ingest-from-storage-queued](ingest-from-storage-queued.md)
+* [.ingest-from-storage-queued into](ingest-from-storage-queued.md)
* [.show queued ingestion operations command](show-queued-ingestion-operations.md)
diff --git a/data-explorer/kusto/management/data-ingestion/queued-ingest-use-http.md b/data-explorer/kusto/management/data-ingestion/queued-ingest-use-http.md
new file mode 100644
index 0000000000..b7d8e4dab6
--- /dev/null
+++ b/data-explorer/kusto/management/data-ingestion/queued-ingest-use-http.md
@@ -0,0 +1,132 @@
+---
+title: Queued Ingestion via REST API
+description: Learn how to use the REST API to submit blobs for ingestion into Azure Data Explorer tables.
+ms.reviewer:
+ms.topic: reference
+ms.date: 09/15/2025
+---
+
+# Queued ingestion via REST API
+
+The queued ingestion REST API allows you to programmatically submit one or more blobs for ingestion into a specified database and table. This method is ideal for automated workflows and external systems that need to trigger ingestion dynamically.
+
+## Permissions
+
+To use the REST API for queued ingestion, you need:
+
+- **Table Ingestor** role to ingest data into an existing table.
+- **Database User** role to access the target database.
+- **Storage Blob Data Reader** role on the blob storage container.
+
+For more information, see [Role-based access control](../../access-control/role-based-access-control.md).
+
+## HTTP endpoint
+
+```http
+URL: /v1/rest/ingestion/queued/{database}/{table}
+Method: POST
+```
+
+|Parameter|Type|Required|Description|
+|--|--|--|--|
+|`database`|`string`|:heavy_check_mark:|The name of the target database.|
+|`table`|`string`|:heavy_check_mark:|The name of the target table.|
+
+## Request body parameters
+
+The request must be a JSON object with the following structure.
+
+### Top-level fields
+
+|Field|Type|Required|Description|
+|--|--|--|--|
+|`blobs`|`array`|:heavy_check_mark:|A list of blob objects to be ingested. See [Blob object](#blob-object) for details.|
+|`properties`|`object`|:heavy_check_mark:|An object containing ingestion properties. See [Supported ingestion properties](#supported-ingestion-properties).|
+|`timestamp`|`datetime`|No|Optional timestamp indicating when the ingestion request was created.|
+
+### Blob object
+
+Each item in the `blobs` array must follow this structure:
+
+|Field|Type|Required|Description|
+|--|--|--|--|
+|`url`|`string`|:heavy_check_mark:|The URL of the blob to ingest. The service performs light validation on this field.|
+|`sourceId`|`Guid`|No|An identifier for the source blob.|
+|`rawSize`|`integer`|No|The size of the blob before compression (nullable).|
+
+### Supported ingestion properties
+
+|Property|Type|Description|
+|--|--|--|
+|`format`|`string`|Data format (for example, `csv`, `json`).|
+|`enableTracking`|`bool`|If `true`, returns an `ingestionOperationId` for status tracking.|
+|`tags`|`array`|List of tags to associate with the ingested data.|
+|`skipBatching`|`bool`|If `true`, disables batching of blobs.|
+|`deleteAfterDownload`|`bool`|If `true`, deletes the blob after ingestion.|
+|`ingestionMappingReference`|`string`|Reference to a predefined ingestion mapping.|
+|`creationTime`|`string`|ISO8601 timestamp for the ingested data extents.|
+|`ingestIfNotExists`|`array`|Prevents ingestion if data with matching tags already exists.|
+|`ignoreFirstRecord`|`bool`|If `true`, skips the first record (for example, header row).|
+|`validationPolicy`|`string`|JSON string defining validation behavior.|
+|`zipPattern`|`string`|Regex pattern for extracting files from zipped blobs.|
+
+## Example
+
+```http
+POST /v1/rest/ingestion/queued/MyDatabase/MyTable
+Content-Type: application/json
+Authorization: Bearer <access token>
+```
+
+```json
+{
+ "timestamp": "2025-10-01T12:00:00Z",
+ "blobs": [
+ {
+ "url": "https://example.com/blob1.csv.gz",
+ "sourceId": "123a6999-411e-4226-a333-w79992dd9b95",
+ "rawSize": 1048576
+ }
+ ],
+ "properties": {
+ "format": "csv",
+ "enableTracking": true,
+ "tags": ["ingest-by:rest"],
+ "ingestionMappingReference": "csv_mapping",
+ "creationTime": "2025-10-01T11:00:00Z"
+ }
+}
+```
+
+> [!NOTE]
+> Setting `"enableTracking": true` will return a non-empty `ingestionOperationId` in the response, which can be used to monitor ingestion status via the rest-api-status.md.
+
+## Response
+
+|Condition|Response|
+|--|--|
+|Tracking enabled (`enableTracking: true`)|Returns a nonempty `ingestionOperationId`.|
+|Tracking disabled or omitted|Returns an empty `ingestionOperationId`.|
+
+### Tracking enabled
+
+```json
+{
+ "ingestionOperationId": "ingest_op_12345"
+}
+```
+
+### Tracking disabled
+
+```json
+{
+ "ingestionOperationId": ""
+}
+```
+
+## Performance tips
+
+- Submit up to **20 blobs** per request; more than 20 blobs in a single request isn't supported.
+- Use `enableTracking` to monitor ingestion status via the status endpoint. A status-check sketch follows this list.
+- Avoid setting `skipBatching` unless ingestion latency is critical.
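+
+If you submit blobs with `"enableTracking": true`, one way to check progress is to run the `.show queued ingestion operations` management command against the same ingestion endpoint. This is a sketch; replace the operation ID with the `ingestionOperationId` value returned in the REST response.
+
+```kusto
+// Check the status of a tracked queued ingestion operation.
+.show queued ingestion operations "ingest_op_12345"
+```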
diff --git a/data-explorer/kusto/management/data-ingestion/queued-ingestion-overview.md b/data-explorer/kusto/management/data-ingestion/queued-ingestion-overview.md
index b4e9c7ca1b..eea211dd60 100644
--- a/data-explorer/kusto/management/data-ingestion/queued-ingestion-overview.md
+++ b/data-explorer/kusto/management/data-ingestion/queued-ingestion-overview.md
@@ -3,11 +3,11 @@ title: Queued ingestion overview commands
description: Learn about queued ingestion and its commands.
ms.reviewer: vplauzon
ms.topic: reference
-ms.date: 04/25/2025
+ms.date: 09/30/2025
---
# Queued ingestion commands overview (preview)
-> [!INCLUDE [applies](../../includes/applies-to-version/applies.md)] [!INCLUDE [azure-data-explorer](../../includes/applies-to-version/azure-data-explorer.md)]
+> [!INCLUDE [applies](../../includes/applies-to-version/applies.md)] [!INCLUDE [fabric](../../includes/applies-to-version/fabric.md)] [!INCLUDE [azure-data-explorer](../../includes/applies-to-version/azure-data-explorer.md)]
Queued ingestion commands allow you to ingest specific folders, or an entire container and manage the operations related to queued ingestion. You can also ingest multiple or individual blobs by URL and from a source file. The ingestion commands are useful for preparing and testing distinct ingestion scenarios before the final ingestion. Using them helps to ensure that fields, columns, partitioning, and other needs are handled properly during ingestion.
@@ -15,8 +15,8 @@ Queued ingestion commands allow you to ingest specific folders, or an entire con
Queued storage commands include:
+* [.ingest-from-storage-queued into command](ingest-from-storage-queued.md) queues blobs for ingestion into a table.
* [.show queued ingestion operations command](show-queued-ingestion-operations.md) shows the queued ingestion operations.
-* [.ingest-from-storage-queued command](ingest-from-storage-queued.md) queues blobs for ingestion into a table.
* [.cancel queued ingestion operation command](cancel-queued-ingestion-operation-command.md)
cancels a queued ingestion operation.
* [.list blobs command](list-blobs.md) lists the blobs for ingestion.
diff --git a/data-explorer/kusto/management/data-ingestion/queued-ingestion-use-case.md b/data-explorer/kusto/management/data-ingestion/queued-ingestion-use-case.md
index a6c702b691..c1f7b26ae9 100644
--- a/data-explorer/kusto/management/data-ingestion/queued-ingestion-use-case.md
+++ b/data-explorer/kusto/management/data-ingestion/queued-ingestion-use-case.md
@@ -3,29 +3,48 @@ title: Queued ingestion commands use case
description: Learn how to ingest historical data using the queued ingestion commands.
ms.reviewer: vplauzon
ms.topic: how-to
-ms.date: 07/22/2025
+ms.date: 09/30/2025
---
# Queued ingestion commands use case (preview)
-> [!INCLUDE [applies](../../includes/applies-to-version/applies.md)] [!INCLUDE [azure-data-explorer](../../includes/applies-to-version/azure-data-explorer.md)]
+> [!INCLUDE [applies](../../includes/applies-to-version/applies.md)] [!INCLUDE [fabric](../../includes/applies-to-version/fabric.md)] [!INCLUDE [azure-data-explorer](../../includes/applies-to-version/azure-data-explorer.md)]
-Queued ingestion commands let you test how historical data is ingested and fix problems before you ingest that data. This article describes how to use queued ingestion commands to fine-tune historical data ingestion. Complete the following tasks to fine-tune historical data queued ingestion:
+The queued ingestion commands allow you to ingest individual blobs by URL or ingest batches of data by listing folders or containers. This article walks through a common use case: fine-tuning the ingestion of historical data. You can use these commands to test how historical data is ingested and resolve any issues before performing full ingestion. The following tasks demonstrate how to use queued ingestion commands effectively:
-1. [List blobs in a folder](#list-blobs-in-a-folder)
-1. [Ingest folder](#ingest-folder)
-1. [Track ingestion status](#track-ingestion-status)
-1. [Filter queued files for ingestion](#filter-queued-files-for-ingestion)
-1. [Capture the creation time](#capture-the-creation-time)
-1. [Ingest 20 files](#ingest-20-files)
-1. [Track follow up ingestion status](#track-follow-up-ingestion-status)
-1. [Perform your full ingestion](#perform-your-full-ingestion)
-1. [Cancel ingestion](#cancel-ingestion)
+* [Ingest single blobs](#ingest-single-blobs)
+* [List blobs in a folder](#list-blobs-in-a-folder)
+* [Ingest folder](#ingest-folder)
+* [Track ingestion status](#track-ingestion-status)
+* [Filter queued files for ingestion](#filter-queued-files-for-ingestion)
+* [Capture the creation time](#capture-the-creation-time)
+* [Ingest 20 files](#ingest-20-files)
+* [Track follow up ingestion status](#track-follow-up-ingestion-status)
+* [Perform your full ingestion](#perform-your-full-ingestion)
+* [Cancel ingestion](#cancel-ingestion)
> [!NOTE]
>
> Queued ingestion commands are run on the data ingestion URI endpoint `https://ingest-.kusto.windows.net`.
+### Ingest single blobs
+You can start by ingesting a single blob directly using its URL.
+Make sure to include a SAS token or use a managed identity to grant the service permission to access and download the blob.
+
+```kusto
+.ingest-from-storage-queued into table database('TestDatabase').Logs
+EnableTracking=true
+with (format='csv')
+<|
+'https://sample.blob.core.windows.net/sample/test_1.csv?...'
+```
+
+**Output**
+
+| IngestionOperationId | ClientRequestId | OperationInfo |
+|----------------------|-----------------|---------------|
+|00001111;11112222;00001111-aaaa-2222-bbbb-3333cccc4444|Kusto.Web.KWE,Query;11112222;11112222;22223333-bbbb-3333-cccc-4444cccc5555|.show queued ingestion operations "00001111;11112222;00001111-aaaa-2222-bbbb-3333cccc4444" |
+
### List blobs in a folder
To understand the historical data better, you list a maximum of 10 blobs from the Azure blob storage container.
@@ -77,8 +96,6 @@ with (format='parquet')
The `OperationInfo`, which includes the `IngestionOperationId`, is then used to [track the ingestion status](#track-ingestion-status).
-The `CancelationInfo`, which includes the `IngestionOperationId`, is then used to [cancel the ingestion operation](#cancel-ingestion).
-
### Track ingestion status
You run the `.show queued ingestion operations` command to check whether the ingestion is complete or if there are any errors.
@@ -171,9 +188,9 @@ with (format='parquet')
**Output**
-| IngestionOperationId | ClientRequestId | OperationInfo | CancelationInfo |
-|----------------------|-----------------|---------------|---------------|
-|22223333;22223333;11110000-bbbb-2222-cccc-4444dddd5555|Kusto.Web.KWE,Query;22223333;22223333;33334444-dddd-4444-eeee-5555eeee5555|.show queued ingestion operations "22223333;22223333;11110000-bbbb-2222-cccc-4444dddd5555" |.cancel queued ingestion operations "22223333;22223333;11110000-bbbb-2222-cccc-4444dddd5555" |
+| IngestionOperationId | ClientRequestId | OperationInfo |
+|----------------------|-----------------|---------------|
+|22223333;22223333;11110000-bbbb-2222-cccc-4444dddd5555|Kusto.Web.KWE,Query;22223333;22223333;33334444-dddd-4444-eeee-5555eeee5555|.show queued ingestion operations "22223333;22223333;11110000-bbbb-2222-cccc-4444dddd5555" |
The `OperationInfo` is then used to [track the ingestion status](#track-ingestion-status).
diff --git a/data-explorer/kusto/management/data-ingestion/show-queued-ingestion-operations.md b/data-explorer/kusto/management/data-ingestion/show-queued-ingestion-operations.md
index 20d06bb70c..8ceda831ac 100644
--- a/data-explorer/kusto/management/data-ingestion/show-queued-ingestion-operations.md
+++ b/data-explorer/kusto/management/data-ingestion/show-queued-ingestion-operations.md
@@ -3,28 +3,30 @@ title: .show queued ingestion operations command
description: Learn how to use the `.show queued ingestion operations` command to view a log of the queued ingestion operations that are currently running or completed.
ms.reviewer: vplauzon
ms.topic: reference
-ms.date: 06/10/2025
+ms.date: 09/30/2025
---
-# .show queued ingestion operations command (preview)
+# .show queued ingestion operations command (preview)
-> [!INCLUDE [applies](../../includes/applies-to-version/applies.md)] [!INCLUDE [azure-data-explorer](../../includes/applies-to-version/azure-data-explorer.md)]
+> [!INCLUDE [applies](../../includes/applies-to-version/applies.md)] [!INCLUDE [fabric](../../includes/applies-to-version/fabric.md)] [!INCLUDE [azure-data-explorer](../../includes/applies-to-version/azure-data-explorer.md)]
-Displays the queued ingestion operations. Ingestion operations are tracked once the [.ingest-from-storage-queued](ingest-from-storage-queued.md) command begins.
+Shows the queued ingestion operations. Ingestion operations are tracked after the [.ingest-from-storage-queued](ingest-from-storage-queued.md) command starts.
> [!NOTE]
>
-> Queued ingestion commands are run on the data ingestion URI endpoint `https://ingest-.kusto.windows.net`.
+> Queued ingestion commands run on the data ingestion URI endpoint `https://ingest-.kusto.windows.net`.
## Permissions
-You must have at least [Table Ingestor](../../access-control/role-based-access-control.md) permissions on the table that the `IngestionOperationId` or IDs belong to.
+You need [Table Ingestor](../../access-control/role-based-access-control.md) permissions on the table associated with the `IngestionOperationId` or IDs.
## Syntax
`.show queued ingestion operations` `"`*IngestionOperationId*`"`
-`.show queued ingestion operations` `(` `"`*IngestionOperationId*`"` [`,` ... ] `)`
+`.show queued ingestion operations` `(` `"`*IngestionOperationId*`"` [`,` ... ] `)`
+
+`.show queued ingestion operations` `"`*IngestionOperationId*`"` `details`
## Parameters
@@ -34,6 +36,19 @@ You must have at least [Table Ingestor](../../access-control/role-based-access-c
## Returns
+### Returns for queued ingestion operations with details
+
+The command returns a table with details about the ingestion status for each blob ingested in the operation.
+
+|Output parameter |Type |Description|
+|---|---|---|
+|IngestionOperationId | `string` |The unique operation identifier.|
+|BlobURL| `string` | The URL of the blob file, when the ingestion operation includes blob files. |
+|IngestionStatus|`string` |The status of the ingestion.|
+|StartedAt | `datetime` |Date/time, in UTC, when ingestion of the blob started.|
+|CompletedAt | `datetime` |Date/time, in UTC, when ingestion of the blob completed.|
+|FailedReasons | `string` | Reasons for the ingestion failure.|
+
### Returns for queued ingestion operations
The command returns a table with the latest update information for each ID.
@@ -45,7 +60,6 @@ The command returns a table with the latest update information for each ID.
|LastUpdatedOn | `datetime` |Date/time, in UTC, when the status was updated.|
|State | `string` |The state of the operation.|
|Discovered | `long` |Count of the blobs that were listed from storage and queued for ingestion.|
-|DiscoveredSize | `long` |The total data size in bytes of all blobs that were listed from storage and queued for ingestion.|
|Pending | `long` |Count of the blobs to be ingested.|
|Canceled | `long` |Count of the blobs that were canceled due to a call to the [.cancel queued ingestion operation](cancel-queued-ingestion-operation-command.md) command.|
|Ingested | `long` |Count of the blobs that have been ingested.|
@@ -79,9 +93,9 @@ The following example shows the queued ingestion operations for a specific opera
**Output**
-|IngestionOperationId|Started On |Last Updated On |State |Discovered |DiscoveredSize |InProgress|Ingested |Failed|Canceled |SampleFailedReasons|Database|Table|
+|IngestionOperationId|Started On |Last Updated On |State |Discovered |InProgress|Ingested |Failed|Canceled |SampleFailedReasons|Database|Table|
|--|--|--|--|--|--|--|--|--|--|--|--|
-|00001111;11112222;00001111-aaaa-2222-bbbb-3333cccc4444 |2025-01-10 14:57:41.0000000 |2025-01-10 14:57:41.0000000|InProgress | 10387 | 100547 |9391 |995 |1 |0 | Stream with ID '*****.csv' has a malformed CSV format*|MyDatabase|MyTable|
+|00001111;11112222;00001111-aaaa-2222-bbbb-3333cccc4444 |2025-01-10 14:57:41.0000000 |2025-01-10 14:57:41.0000000|InProgress | 10387 |9391 |995 |1 |0 | Stream with ID '*****.csv' has a malformed CSV format*|MyDatabase|MyTable|
### Multiple operation IDs
@@ -93,15 +107,31 @@ The following example shows the queued ingestion operations for multiple operati
**Output**
-|IngestionOperationId|Started On |Last Updated On |State |Discovered |DiscoveredSize |InProgress|Ingested |Failed|Canceled |SampleFailedReasons|Database|Table|
+|IngestionOperationId|Started On |Last Updated On |State |Discovered |InProgress|Ingested |Failed|Canceled |SampleFailedReasons|Database|Table|
|--|--|--|--|--|--|--|--|--|--|--|--|
-|00001111;11112222;00001111-aaaa-2222-bbbb-3333cccc4444 |2025-01-10 14:57:41.0000000 |2025-01-10 15:15:04.0000000|InProgress | 10387 | 100547 |9391 |995 |1 |0 | Stream with ID '*****.csv' has a malformed CSV format*|MyDatabase|MyTable|
-|11112222;22223333;11110000-bbbb-2222-cccc-3333dddd4444 |2025-01-10 15:12:23.0000000 |2025-01-10 15:15:16.0000000|InProgress | 25635 | 3545613 |25489 |145 |1 |0 | Unknown error occurred: Exception of type 'System.Exception' was thrown|MyDatabase|MyOtherTable|
+|00001111;11112222;00001111-aaaa-2222-bbbb-3333cccc4444 |2025-01-10 14:57:41.0000000 |2025-01-10 15:15:04.0000000|InProgress | 10387 |9391 |995 |1 |0 | Stream with ID '*****.csv' has a malformed CSV format*|MyDatabase|MyTable|
+|11112222;22223333;11110000-bbbb-2222-cccc-3333dddd4444 |2025-01-10 15:12:23.0000000 |2025-01-10 15:15:16.0000000|InProgress | 25635 |25489 |145 |1 |0 | Unknown error occurred: Exception of type 'System.Exception' was thrown|MyDatabase|MyOtherTable|
+
+### Show details
+
+The following example shows details for each blob in the queued ingestion operation.
+
+```kusto
+.show queued ingestion operations '00001111;11112222;00001111-aaaa-2222-bbbb-3333cccc4444' details
+```
+
+**Output**
+
+| IngestionOperationId | BlobURL | IngestionStatus | StartedAt | CompletedAt | FailedReasons |
+|--|--|--|--|--|--|
+| 00001111;11112222;00001111-aaaa-2222-bbbb-3333cccc4444 | https://\/100.csv.gz | Pending | 2025-02-09T14:56:08.8708746Z | | |
+| 00001111;11112222;00001111-aaaa-2222-bbbb-3333cccc4444 | https://\/102.csv.gz | Succeeded | 2025-02-09T14:56:09.0800631Z | 2025-02-09T15:02:06.5529901Z | |
+| 00001111;11112222;00001111-aaaa-2222-bbbb-3333cccc4444 | https://\/103.csv.gz | Failed | 2025-02-09T14:56:09.3026602Z | | Failed to download |
## Related content
* [Queued ingestion overview](queued-ingestion-overview.md)
* [Data formats supported for ingestion](../../ingestion-supported-formats.md)
-* [.ingest-from-storage-queued](ingest-from-storage-queued.md)
+* [.ingest-from-storage-queued into](ingest-from-storage-queued.md)
* [.cancel queued ingestion operation command](cancel-queued-ingestion-operation-command.md)
* [.list blobs command](list-blobs.md)
diff --git a/data-explorer/kusto/management/toc.yml b/data-explorer/kusto/management/toc.yml
index 6c44e4ea33..1c79e167e2 100644
--- a/data-explorer/kusto/management/toc.yml
+++ b/data-explorer/kusto/management/toc.yml
@@ -131,8 +131,6 @@ items:
href: external-tables-azure-storage.md
- name: Create or alter delta external table
href: external-tables-delta-lake.md
- - name: .show external table details
- href: show-external-tables-details.md
- name: Manage external table mappings
items:
- name: .alter external table mapping command
@@ -224,32 +222,6 @@ items:
- name: .show materialized-view(s) details
displayName: show materialized view details
href: materialized-views/materialized-view-show-details-command.md
- - name: Graphs
- items:
- - name: Persistent graph overview
- href: graph/graph-persistent-overview.md
- - name: Graph models overview
- href: graph/graph-model-overview.md
- - name: .create-or-alter graph_model
- href: graph/graph-model-create-or-alter.md
- - name: .drop graph_model
- href: graph/graph-model-drop.md
- - name: .show graph_model
- href: graph/graph-model-show.md
- - name: .show graph_models
- href: graph/graph-models-show.md
- - name: Graph snapshot overview
- href: graph/graph-snapshot-overview.md
- - name: .make graph_snapshot
- href: graph/graph-snapshot-make.md
- - name: .drop graph_snapshot
- href: graph/graph-snapshot-drop.md
- - name: .drop graph_snapshots
- href: graph/graph-snapshots-drop.md
- - name : .show graph_snapshot
- href: graph/graph-snapshot-show.md
- - name: .show graph_snapshots
- href: graph/graph-snapshots-show.md
- name: Stored query results
items:
- name: Stored query results
@@ -542,11 +514,9 @@ items:
- name: .show table policy mirroring command
href: show-table-mirroring-policy-command.md
displayName: .show table mirroring policy
- - name: .show database operations mirroring-statistics
- href: show-database-operations-mirroring-statistics.md
- - name: .show table operations mirroring-statistics
- href: show-table-operations-mirroring-statistics.md
-
+ - name: .show table operations mirroring-status command
+ href: show-table-operations-mirroring-status-command.md
+ displayName: .show table mirroring status operations
- name: .show table operations mirroring-exported-artifacts command
href: show-table-operations-mirroring-exported-artifacts-command.md
displayName: .show table mirroring operations exported artifacts
@@ -872,23 +842,25 @@ items:
- name: .ingest inline command
displayName: .ingest inline
href: data-ingestion/ingest-inline.md
- - name: Queued ingestion commands
+ - name: Queued ingestion
items:
- name: Queued ingestion commands overview
displayName: Queued ingestion commands, ingest from folders, ingest from files, ingest from storage
href: data-ingestion/queued-ingestion-overview.md
+ - name: .ingest-from-storage-queued into command
+ displayName: .ingest-from-storage-queued, ingest from storage command
+ href: data-ingestion/ingest-from-storage-queued.md
- name: Queued ingestion commands use case
displayName: Queued ingestion historical data ingestion
href: data-ingestion/queued-ingestion-use-case.md
- - name: .ingest-from-storage-queued command
- displayName: .ingest-from-storage-queued, ingest from storage command
- href: data-ingestion/ingest-from-storage-queued.md
- name: .list blobs command
href: data-ingestion/list-blobs.md
- name: .cancel queued ingestion operation command
href: data-ingestion/cancel-queued-ingestion-operation-command.md
- name: .show queued ingestion operation command
- href: data-ingestion/show-queued-ingestion-operations.md
+ href: data-ingestion/show-queued-ingestion-operations.md
+ - name: Queued ingestion via REST API
+ href: data-ingestion/queued-ingest-use-http.md
- name: Streaming ingestion
items:
- name: Streaming ingestion and schema changes
diff --git a/data-explorer/kusto/query/anomaly-detection.md b/data-explorer/kusto/query/anomaly-detection.md
index f8e9a9756d..19f7b0b35a 100644
--- a/data-explorer/kusto/query/anomaly-detection.md
+++ b/data-explorer/kusto/query/anomaly-detection.md
@@ -1,32 +1,32 @@
---
-title: Time series anomaly detection & forecasting
-description: Learn how to analyze time series data for anomaly detection and forecasting.
+title: Detect and Forecast Anomalies Using KQL Time Series
+description: Learn how to analyze time series data for anomaly detection and forecasting using KQL. Explore decomposition models for trend, seasonal, and residual analysis.
ms.reviewer: adieldar
ms.topic: how-to
-ms.date: 08/11/2024
+ms.date: 09/25/2025
---
# Anomaly detection and forecasting
> [!INCLUDE [applies](../includes/applies-to-version/applies.md)] [!INCLUDE [fabric](../includes/applies-to-version/fabric.md)] [!INCLUDE [azure-data-explorer](../includes/applies-to-version/azure-data-explorer.md)] [!INCLUDE [monitor](../includes/applies-to-version/monitor.md)] [!INCLUDE [sentinel](../includes/applies-to-version/sentinel.md)]
-Cloud services and IoT devices generate telemetry data that can be used to gain insights such as monitoring service health, physical production processes, and usage trends. Performing time series analysis is one way to identify deviations in the pattern of these metrics compared to their typical baseline pattern.
+Cloud services and IoT devices generate telemetry that you can use to monitor service health, production processes, and usage trends. Time series analysis helps you spot deviations from each metric's baseline pattern.
-Kusto Query Language (KQL) contains native support for creation, manipulation, and analysis of multiple time series. With KQL, you can create and analyze thousands of time series in seconds, enabling near real time monitoring solutions and workflows.
+Kusto Query Language (KQL) includes native support for creating, manipulating, and analyzing multiple time series. Use KQL to create and analyze thousands of time series in seconds for near real-time monitoring.
-This article details time series anomaly detection and forecasting capabilities of KQL. The applicable time series functions are based on a robust well-known decomposition model, where each original time series is decomposed into seasonal, trend, and residual components. Anomalies are detected by outliers on the residual component, while forecasting is done by extrapolating the seasonal and trend components. The KQL implementation significantly enhances the basic decomposition model by automatic seasonality detection, robust outlier analysis, and vectorized implementation to process thousands of time series in seconds.
+This article describes KQL time series anomaly detection and forecasting capabilities. The functions use a robust, well-known decomposition model that splits each time series into seasonal, trend, and residual components. Detect anomalies by finding outliers in the residual component. Forecast by extrapolating the seasonal and trend components. KQL adds automatic seasonality detection, robust outlier analysis, and a vectorized implementation that processes thousands of time series in seconds.
## Prerequisites
-* A Microsoft account or a Microsoft Entra user identity. An Azure subscription isn't required.
-* Read [Time series analysis](time-series-analysis.md) for an overview of time series capabilities.
+* Use a Microsoft account or a Microsoft Entra user identity. You don't need an Azure subscription.
+* Read about time series capabilities in [Time series analysis](time-series-analysis.md).
## Time series decomposition model
-The KQL native implementation for time series prediction and anomaly detection uses a well-known decomposition model. This model is applied to time series of metrics expected to manifest periodic and trend behavior, such as service traffic, component heartbeats, and IoT periodic measurements to forecast future metric values and detect anomalous ones. The assumption of this regression process is that other than the previously known seasonal and trend behavior, the time series is randomly distributed. You can then forecast future metric values from the seasonal and trend components, collectively named baseline, and ignore the residual part. You can also detect anomalous values based on outlier analysis using only the residual portion.
-To create a decomposition model, use the function [`series_decompose()`](series-decompose-function.md). The `series_decompose()` function takes a set of time series and automatically decomposes each time series to its seasonal, trend, residual, and baseline components.
+The KQL native implementation for time series prediction and anomaly detection uses a well-known decomposition model. Use this model for time series of metrics with periodic and trend behavior, such as service traffic, component heartbeats, and periodic IoT measurements, to forecast future values and detect anomalies. The regression assumes the remainder is random after removing the seasonal and trend components. Forecast future values from the seasonal and trend components (the baseline) and ignore the residual. Detect anomalies by running outlier analysis on the residual component.
+Use the [`series_decompose()`](series-decompose-function.md) function to create a decomposition model. It decomposes each time series into seasonal, trend, residual, and baseline components.
-For example, you can decompose traffic of an internal web service by using the following query:
+For example, decompose the traffic of an internal web service with the following query:
:::moniker range="azure-data-explorer"
> [!div class="nextstepaction"]
@@ -39,18 +39,18 @@ let max_t = datetime(2017-02-03 22:00);
let dt = 2h;
demo_make_series2
| make-series num=avg(num) on TimeStamp from min_t to max_t step dt by sid
-| where sid == 'TS1' // select a single time series for a cleaner visualization
-| extend (baseline, seasonal, trend, residual) = series_decompose(num, -1, 'linefit') // decomposition of a set of time series to seasonal, trend, residual, and baseline (seasonal+trend)
-| render timechart with(title='Web app. traffic of a month, decomposition', ysplit=panels)
+| where sid == 'TS1' // Select a single time series for cleaner visualization
+| extend (baseline, seasonal, trend, residual) = series_decompose(num, -1, 'linefit') // Decompose each time series into seasonal, trend, residual, and baseline (seasonal + trend)
+| render timechart with(title='Web app traffic for one month, decomposition', ysplit=panels)
```
-
+
* The original time series is labeled **num** (in red).
-* The process starts by auto detection of the seasonality by using the function [`series_periods_detect()`](series-periods-detect-function.md) and extracts the **seasonal** pattern (in purple).
-* The seasonal pattern is subtracted from the original time series and a linear regression is run using the function [`series_fit_line()`](series-fit-line-function.md) to find the **trend** component (in light blue).
-* The function subtracts the trend and the remainder is the **residual** component (in green).
-* Finally, the function adds the seasonal and trend components to generate the **baseline** (in blue).
+* The process autodetects seasonality using the [`series_periods_detect()`](series-periods-detect-function.md) function and extracts the **seasonal** pattern (purple).
+* The seasonal pattern is subtracted from the original time series, and a linear regression with the [`series_fit_line()`](series-fit-line-function.md) function finds the **trend** component (light blue).
+* The function subtracts the trend, and the remainder is the **residual** component (green).
+* Finally, the function adds the seasonal and trend components to generate the **baseline** (blue).
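+
+`series_decompose()` performs all of these steps for you. For illustration only, the following hedged sketch approximates the same pipeline manually with `series_seasonal()`, `series_subtract()`, `series_add()`, and [`series_fit_line()`](series-fit-line-function.md). The time range is computed from the sample table and the seasonal period is auto-detected, so the output can differ slightly from the built-in decomposition.
+
+```kusto
+// Hedged sketch: manual approximation of the decomposition steps (series_decompose() already does this).
+let min_t = toscalar(demo_make_series2 | summarize min(TimeStamp));
+let max_t = toscalar(demo_make_series2 | summarize max(TimeStamp));
+let dt = 2h;
+demo_make_series2
+| make-series num=avg(num) on TimeStamp from min_t to max_t step dt by sid
+| where sid == 'TS1'
+| extend seasonal = series_seasonal(num)                 // auto-detect the period and extract the seasonal pattern
+| extend deseasonal = series_subtract(num, seasonal)     // remove the seasonal pattern
+| extend (rsquare, slope, variance, rvariance, interception, trend) = series_fit_line(deseasonal)  // linear trend of the deseasonalized series
+| extend residual = series_subtract(deseasonal, trend)   // what's left after removing seasonal and trend
+| extend baseline = series_add(seasonal, trend)          // seasonal + trend
+| project TimeStamp, num, seasonal, trend, residual, baseline
+| render timechart with(title='Manual decomposition sketch', ysplit=panels)
+```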
## Time series anomaly detection
@@ -139,7 +139,7 @@ demo_make_series2
## Summary
-This document details native KQL functions for time series anomaly detection and forecasting. Each original time series is decomposed into seasonal, trend and residual components for detecting anomalies and/or forecasting. These functionalities can be used for near real-time monitoring scenarios, such as fault detection, predictive maintenance, and demand and load forecasting.
+This article describes the native KQL functions for time series anomaly detection and forecasting. Each time series is decomposed into seasonal, trend, and residual components for anomaly detection and forecasting. Use these capabilities in near real-time monitoring scenarios, such as fault detection, predictive maintenance, and demand and load forecasting.
## Related content
diff --git a/data-explorer/kusto/query/media/anomaly-detection/series-anomaly-detection.png b/data-explorer/kusto/query/media/anomaly-detection/series-anomaly-detection.png
index f1c090c6ff..90a6fa5ca5 100644
Binary files a/data-explorer/kusto/query/media/anomaly-detection/series-anomaly-detection.png and b/data-explorer/kusto/query/media/anomaly-detection/series-anomaly-detection.png differ
diff --git a/data-explorer/kusto/query/media/anomaly-detection/series-forecasting.png b/data-explorer/kusto/query/media/anomaly-detection/series-forecasting.png
index c75b1fae7d..fd75852beb 100644
Binary files a/data-explorer/kusto/query/media/anomaly-detection/series-forecasting.png and b/data-explorer/kusto/query/media/anomaly-detection/series-forecasting.png differ
diff --git a/data-explorer/kusto/query/media/anomaly-detection/series-scalability.png b/data-explorer/kusto/query/media/anomaly-detection/series-scalability.png
index b524e6106a..9a8727ec70 100644
Binary files a/data-explorer/kusto/query/media/anomaly-detection/series-scalability.png and b/data-explorer/kusto/query/media/anomaly-detection/series-scalability.png differ
diff --git a/data-explorer/kusto/query/media/kql-tutorials/geospatial-anomaly-chart.png b/data-explorer/kusto/query/media/kql-tutorials/geospatial-anomaly-chart.png
index 7c42191374..bef2c2bf7c 100644
Binary files a/data-explorer/kusto/query/media/kql-tutorials/geospatial-anomaly-chart.png and b/data-explorer/kusto/query/media/kql-tutorials/geospatial-anomaly-chart.png differ
diff --git a/data-explorer/kusto/query/media/kql-tutorials/geospatial-distance-from-linestring.png b/data-explorer/kusto/query/media/kql-tutorials/geospatial-distance-from-linestring.png
index 901d5c3d4c..d46488fbc4 100644
Binary files a/data-explorer/kusto/query/media/kql-tutorials/geospatial-distance-from-linestring.png and b/data-explorer/kusto/query/media/kql-tutorials/geospatial-distance-from-linestring.png differ
diff --git a/data-explorer/kusto/query/media/kql-tutorials/geospatial-distance-from-polygon.png b/data-explorer/kusto/query/media/kql-tutorials/geospatial-distance-from-polygon.png
index 04bab02c60..7fe085365a 100644
Binary files a/data-explorer/kusto/query/media/kql-tutorials/geospatial-distance-from-polygon.png and b/data-explorer/kusto/query/media/kql-tutorials/geospatial-distance-from-polygon.png differ
diff --git a/data-explorer/kusto/query/media/kql-tutorials/geospatial-southern-california-polygon.png b/data-explorer/kusto/query/media/kql-tutorials/geospatial-southern-california-polygon.png
index 48eb1d1c4e..db53cdbef2 100644
Binary files a/data-explorer/kusto/query/media/kql-tutorials/geospatial-southern-california-polygon.png and b/data-explorer/kusto/query/media/kql-tutorials/geospatial-southern-california-polygon.png differ
diff --git a/data-explorer/kusto/query/media/kql-tutorials/geospatial-storm-events-by-type.png b/data-explorer/kusto/query/media/kql-tutorials/geospatial-storm-events-by-type.png
index 635b27f069..a678153b22 100644
Binary files a/data-explorer/kusto/query/media/kql-tutorials/geospatial-storm-events-by-type.png and b/data-explorer/kusto/query/media/kql-tutorials/geospatial-storm-events-by-type.png differ
diff --git a/data-explorer/kusto/query/media/kql-tutorials/geospatial-storm-events-centered.png b/data-explorer/kusto/query/media/kql-tutorials/geospatial-storm-events-centered.png
index d5b3a834ae..0f70c403ff 100644
Binary files a/data-explorer/kusto/query/media/kql-tutorials/geospatial-storm-events-centered.png and b/data-explorer/kusto/query/media/kql-tutorials/geospatial-storm-events-centered.png differ
diff --git a/data-explorer/kusto/query/media/kql-tutorials/geospatial-storm-events-scatterchart.png b/data-explorer/kusto/query/media/kql-tutorials/geospatial-storm-events-scatterchart.png
index b7e7a211ac..5db0e662ba 100644
Binary files a/data-explorer/kusto/query/media/kql-tutorials/geospatial-storm-events-scatterchart.png and b/data-explorer/kusto/query/media/kql-tutorials/geospatial-storm-events-scatterchart.png differ
diff --git a/data-explorer/kusto/query/media/kql-tutorials/tornado-geospatial-map.png b/data-explorer/kusto/query/media/kql-tutorials/tornado-geospatial-map.png
index 87b0fc92e3..23f5cacdc0 100644
Binary files a/data-explorer/kusto/query/media/kql-tutorials/tornado-geospatial-map.png and b/data-explorer/kusto/query/media/kql-tutorials/tornado-geospatial-map.png differ
diff --git a/data-explorer/kusto/query/media/series-decompose-anomaliesfunction/weekly-seasonality-higher-threshold.png b/data-explorer/kusto/query/media/series-decompose-anomaliesfunction/weekly-seasonality-higher-threshold.png
index f67818a020..b2b0f77aa9 100644
Binary files a/data-explorer/kusto/query/media/series-decompose-anomaliesfunction/weekly-seasonality-higher-threshold.png and b/data-explorer/kusto/query/media/series-decompose-anomaliesfunction/weekly-seasonality-higher-threshold.png differ
diff --git a/data-explorer/kusto/query/media/time-series-analysis/time-series-at-scale.png b/data-explorer/kusto/query/media/time-series-analysis/time-series-at-scale.png
index ab46c7505c..81d47f815f 100644
Binary files a/data-explorer/kusto/query/media/time-series-analysis/time-series-at-scale.png and b/data-explorer/kusto/query/media/time-series-analysis/time-series-at-scale.png differ
diff --git a/data-explorer/kusto/query/media/time-series-analysis/time-series-filtering.png b/data-explorer/kusto/query/media/time-series-analysis/time-series-filtering.png
index 2f209c6113..5039a8adac 100644
Binary files a/data-explorer/kusto/query/media/time-series-analysis/time-series-filtering.png and b/data-explorer/kusto/query/media/time-series-analysis/time-series-filtering.png differ
diff --git a/data-explorer/kusto/query/media/time-series-analysis/time-series-operations.png b/data-explorer/kusto/query/media/time-series-analysis/time-series-operations.png
index 5ee5377a74..129eb6ac05 100644
Binary files a/data-explorer/kusto/query/media/time-series-analysis/time-series-operations.png and b/data-explorer/kusto/query/media/time-series-analysis/time-series-operations.png differ
diff --git a/data-explorer/kusto/query/media/time-series-analysis/time-series-partition.png b/data-explorer/kusto/query/media/time-series-analysis/time-series-partition.png
index 3bca30eb63..b89b5e874d 100644
Binary files a/data-explorer/kusto/query/media/time-series-analysis/time-series-partition.png and b/data-explorer/kusto/query/media/time-series-analysis/time-series-partition.png differ
diff --git a/data-explorer/kusto/query/media/time-series-analysis/time-series-regression.png b/data-explorer/kusto/query/media/time-series-analysis/time-series-regression.png
index 2303927317..5102bea3f0 100644
Binary files a/data-explorer/kusto/query/media/time-series-analysis/time-series-regression.png and b/data-explorer/kusto/query/media/time-series-analysis/time-series-regression.png differ
diff --git a/data-explorer/kusto/query/media/time-series-analysis/time-series-seasonality.png b/data-explorer/kusto/query/media/time-series-analysis/time-series-seasonality.png
index 024321dcd7..d8a319af75 100644
Binary files a/data-explorer/kusto/query/media/time-series-analysis/time-series-seasonality.png and b/data-explorer/kusto/query/media/time-series-analysis/time-series-seasonality.png differ
diff --git a/data-explorer/kusto/query/media/time-series-analysis/time-series-top-2.png b/data-explorer/kusto/query/media/time-series-analysis/time-series-top-2.png
index fcc1e1761b..6ba37f32bc 100644
Binary files a/data-explorer/kusto/query/media/time-series-analysis/time-series-top-2.png and b/data-explorer/kusto/query/media/time-series-analysis/time-series-top-2.png differ
diff --git a/data-explorer/kusto/query/time-series-analysis.md b/data-explorer/kusto/query/time-series-analysis.md
index 5765cc0a5c..7778297c85 100644
--- a/data-explorer/kusto/query/time-series-analysis.md
+++ b/data-explorer/kusto/query/time-series-analysis.md
@@ -1,22 +1,22 @@
---
-title: Analyze time series data
-description: Learn how to analyze time series data.
+title: Time Series Analysis - Trends, Anomalies, and Monitoring
+description: Gain insights into time series analysis with KQL, from creating time series to advanced anomaly detection and trend analysis for monitoring solutions.
ms.reviewer: adieldar
ms.topic: how-to
-ms.date: 08/11/2024
+ms.date: 09/25/2025
---
# Time series analysis
> [!INCLUDE [applies](../includes/applies-to-version/applies.md)] [!INCLUDE [fabric](../includes/applies-to-version/fabric.md)] [!INCLUDE [azure-data-explorer](../includes/applies-to-version/azure-data-explorer.md)] [!INCLUDE [monitor](../includes/applies-to-version/monitor.md)] [!INCLUDE [sentinel](../includes/applies-to-version/sentinel.md)]
-Cloud services and IoT devices generate telemetry data that can be used to gain insights such as monitoring service health, physical production processes, and usage trends. Performing time series analysis is one way to identify deviations in the pattern of these metrics compared to their typical baseline pattern.
+Cloud services and IoT devices generate telemetry you can use to gain insights into service health, production processes, and usage trends. Time series analysis helps you identify deviations from typical baseline patterns.
-Kusto Query Language (KQL) contains native support for creation, manipulation, and analysis of multiple time series. In this article, learn how KQL is used to create and analyze thousands of time series in seconds, enabling near real-time monitoring solutions and workflows.
+Kusto Query Language (KQL) has native support for creating, manipulating, and analyzing multiple time series. This article shows how to use KQL to create and analyze thousands of time series in seconds to enable near real-time monitoring solutions and workflows.
## Time series creation
-In this section, we'll create a large set of regular time series simply and intuitively using the `make-series` operator, and fill-in missing values as needed.
-The first step in time series analysis is to partition and transform the original telemetry table to a set of time series. The table usually contains a timestamp column, contextual dimensions, and optional metrics. The dimensions are used to partition the data. The goal is to create thousands of time series per partition at regular time intervals.
+Create a large set of regular time series using the `make-series` operator and fill in missing values as needed.
+Partition and transform the telemetry table into a set of time series. The table usually contains a timestamp column, contextual dimensions, and optional metrics. The dimensions are used to partition the data. The goal is to create thousands of time series per partition at regular time intervals.
The input table *demo_make_series1* contains 600K records of arbitrary web service traffic. Use the following command to sample 10 records:
@@ -29,7 +29,7 @@ The input table *demo_make_series1* contains 600K records of arbitrary web servi
demo_make_series1 | take 10
```
-The resulting table contains a timestamp column, three contextual dimensions columns, and no metrics:
+The resulting table contains a timestamp column, three contextual dimension columns, and no metrics:
| TimeStamp | BrowserVer | OsVer | Country/Region |
| --- | --- | --- | --- |
@@ -44,7 +44,7 @@ The resulting table contains a timestamp column, three contextual dimensions col
| 2016-08-25 09:12:56.4240000 | Chrome 52.0 | Windows 10 | United Kingdom |
| 2016-08-25 09:13:08.7230000 | Chrome 52.0 | Windows 10 | India |
-Since there are no metrics, we can only build a set of time series representing the traffic count itself, partitioned by OS using the following query:
+Because there are no metrics, build time series representing the traffic count, partitioned by OS:
:::moniker range="azure-data-explorer"
> [!div class="nextstepaction"]
@@ -59,16 +59,16 @@ demo_make_series1
| render timechart
```
-- Use the [`make-series`](make-series-operator.md) operator to create a set of three time series, where:
- - `num=count()`: time series of traffic
- - `from min_t to max_t step 1h`: time series is created in 1-hour bins in the time range (oldest and newest timestamps of table records)
- - `default=0`: specify fill method for missing bins to create regular time series. Alternatively use [`series_fill_const()`](series-fill-const-function.md), [`series_fill_forward()`](series-fill-forward-function.md), [`series_fill_backward()`](series-fill-backward-function.md) and [`series_fill_linear()`](series-fill-linear-function.md) for changes
- - `by OsVer`: partition by OS
-- The actual time series data structure is a numeric array of the aggregated value per each time bin. We use `render timechart` for visualization.
+- Use the [`make-series`](make-series-operator.md) operator to create three time series, where:
+ - `num=count()`: traffic count.
+  - `from min_t to max_t step 1h`: creates the time series in 1-hour bins from the table's oldest to newest timestamp.
+ - `default=0`: specifies the fill method for missing bins to create regular time series. Alternatively, use [`series_fill_const()`](series-fill-const-function.md), [`series_fill_forward()`](series-fill-forward-function.md), [`series_fill_backward()`](series-fill-backward-function.md), and [`series_fill_linear()`](series-fill-linear-function.md) for different fill behavior.
+ - `by OsVer`: partitions by OS.
+- The time series data structure is a numeric array of aggregated values for each time bin. Use `render timechart` for visualization.
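+
+For example, here's a minimal, hedged sketch of using one of the fill functions instead of `default=0`. It assumes that a null default marks the missing bins so that `series_fill_linear()` can interpolate them:
+
+```kusto
+// Hedged sketch: mark missing bins with null, then interpolate them instead of filling with 0.
+let min_t = toscalar(demo_make_series1 | summarize min(TimeStamp));
+let max_t = toscalar(demo_make_series1 | summarize max(TimeStamp));
+demo_make_series1
+| make-series num=count() default=long(null) on TimeStamp from min_t to max_t step 1h by OsVer
+| extend num=series_fill_linear(num)   // linear interpolation of the missing bins
+| render timechart
+```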
-In the table above, we have three partitions. We can create a separate time series: Windows 10 (red), 7 (blue) and 8.1 (green) for each OS version as seen in the graph:
+The table above has three partitions (Windows 10, Windows 7, and Windows 8.1). The chart shows a separate time series for each OS version:
-:::image type="content" source="media/time-series-analysis/time-series-partition.png" alt-text="Time series partition.":::
+:::image type="content" source="media/time-series-analysis/time-series-partition.png" alt-text="Screenshot of a time series chart with separate lines for Windows 10, Windows 7, and Windows 8.1." lightbox="media/time-series-analysis/time-series-partition.png":::
## Time series analysis functions
@@ -97,7 +97,7 @@ demo_make_series1
| render timechart
```
-:::image type="content" source="media/time-series-analysis/time-series-filtering.png" alt-text="Time series filtering.":::
+:::image type="content" source="media/time-series-analysis/time-series-filtering.png" alt-text="Time series filtering."lightbox="media/time-series-analysis/time-series-filtering.png":::
### Regression analysis
@@ -119,7 +119,7 @@ demo_series2
| render linechart with(xcolumn=x)
```
-:::image type="content" source="media/time-series-analysis/time-series-regression.png" alt-text="Time series regression.":::
+:::image type="content" source="media/time-series-analysis/time-series-regression.png" alt-text="Time series regression."lightbox="media/time-series-analysis/time-series-regression.png":::
- Blue: original time series
- Green: fitted line
@@ -144,14 +144,14 @@ demo_series3
| render timechart
```
-:::image type="content" source="media/time-series-analysis/time-series-seasonality.png" alt-text="Time series seasonality.":::
+:::image type="content" source="media/time-series-analysis/time-series-seasonality.png" alt-text="Time series seasonality."lightbox="media/time-series-analysis/time-series-seasonality.png":::
- Use [series_periods_detect()](series-periods-detect-function.md) to automatically detect the periods in the time series, where:
- `num`: the time series to analyze
- `0.`: the minimum period length in days (0 means no minimum)
- `14d/2h`: the maximum period length in days, which is 14 days divided into 2-hour bins
- `2`: the number of periods to detect
-- Use [series_periods_validate()](series-periods-validate-function.md) if we know that a metric should have specific distinct period(s) and we want to verify that they exist.
+- Use [series_periods_validate()](series-periods-validate-function.md) to verify that a metric has specific distinct periods that you expect (see the sketch after the following note).
> [!NOTE]
> It's an anomaly if specific distinct periods don't exist
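+
+For example, here's a minimal sketch of validating the expected daily and weekly periods, assuming the same `num` series and 2-hour bins as above:
+
+```kusto
+// Hedged sketch: check that daily (1d) and weekly (7d) periods exist, expressed in 2-hour bins.
+demo_series3
+| extend (periods, scores) = series_periods_validate(num, 1d/2h, 7d/2h)
+| mv-expand periods, scores
+| project period_in_days = 2h * todouble(periods) / 1d, score = todouble(scores)
+```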
@@ -195,7 +195,7 @@ demo_make_series1
| render timechart
```
-:::image type="content" source="media/time-series-analysis/time-series-operations.png" alt-text="Time series operations.":::
+:::image type="content" source="media/time-series-analysis/time-series-operations.png" alt-text="Time series operations." lightbox="media/time-series-analysis/time-series-operations.png":::
- Blue: original time series
- Red: smoothed time series
@@ -203,7 +203,7 @@ demo_make_series1
## Time series workflow at scale
-The example below shows how these functions can run at scale on thousands of time series in seconds for anomaly detection. To see a few sample telemetry records of a DB service's read count metric over four days run the following query:
+This example shows anomaly detection running at scale on thousands of time series in seconds. To see sample telemetry records for a DB service read count metric over four days, run the following query:
:::moniker range="azure-data-explorer"
> [!div class="nextstepaction"]
@@ -222,7 +222,7 @@ demo_many_series1
| 2016-09-11 21:00:00.0000000 | Loc 9 | -865998331941149874 | 262 | 279862 |
| 2016-09-11 21:00:00.0000000 | Loc 9 | 371921734563783410 | 255 | 0 |
-And simple statistics:
+View simple statistics:
:::moniker range="azure-data-explorer"
> [!div class="nextstepaction"]
@@ -234,11 +234,11 @@ demo_many_series1
| summarize num=count(), min_t=min(TIMESTAMP), max_t=max(TIMESTAMP)
```
-| num | min\_t | max\_t |
+| num | min_t | max_t |
| --- | --- | --- |
| 2177472 | 2016-09-08 00:00:00.0000000 | 2016-09-11 23:00:00.0000000 |
-Building a time series in 1-hour bins of the read metric (total four days * 24 hours = 96 points), results in normal pattern fluctuation:
+A time series in 1-hour bins of the read metric (four days × 24 hours = 96 points) shows normal hourly fluctuation:
:::moniker range="azure-data-explorer"
> [!div class="nextstepaction"]
@@ -253,11 +253,11 @@ demo_many_series1
| render timechart with(ymin=0)
```
-:::image type="content" source="media/time-series-analysis/time-series-at-scale.png" alt-text="Time series at scale.":::
+:::image type="content" source="media/time-series-analysis/time-series-at-scale.png" alt-text="Screenshot of a time series chart showing average reads over four days with normal hourly fluctuations." lightbox="media/time-series-analysis/time-series-at-scale.png":::
-The above behavior is misleading, since the single normal time series is aggregated from thousands of different instances that may have abnormal patterns. Therefore, we create a time series per instance. An instance is defined by Loc (location), Op (operation), and DB (specific machine).
+This behavior is misleading because the single normal time series is aggregated from thousands of instances that can have abnormal patterns. Create a time series per instance defined by Loc (location), Op (operation), and DB (specific machine).
-How many time series can we create?
+How many time series can you create?
:::moniker range="azure-data-explorer"
> [!div class="nextstepaction"]
@@ -274,7 +274,7 @@ demo_many_series1
| --- |
| 18339 |
-Now, we're going to create a set of 18339 time series of the read count metric. We add the `by` clause to the make-series statement, apply linear regression, and select the top two time series that had the most significant decreasing trend:
+Create 18,339 time series for the read count metric. Add the `by` clause to the make-series statement, apply linear regression, and select the top two time series with the most significant decreasing trend:
:::moniker range="azure-data-explorer"
> [!div class="nextstepaction"]
@@ -291,7 +291,7 @@ demo_many_series1
| render timechart with(title='Service Traffic Outage for 2 instances (out of 18339)')
```
-:::image type="content" source="media/time-series-analysis/time-series-top-2.png" alt-text="Time series top two.":::
+:::image type="content" source="media/time-series-analysis/time-series-top-2.png" alt-text="Screenshot of two time series with sharply declining read counts compared to normal traffic.":::
Display the instances:
@@ -312,14 +312,14 @@ demo_many_series1
| Loc | Op | DB | slope |
| --- | --- | --- | --- |
-| Loc 15 | 37 | 1151 | -102743.910227889 |
-| Loc 13 | 37 | 1249 | -86303.2334644601 |
+| Loc 15 | 37 | 1151 | -104498.46510358342 |
+| Loc 13 | 37 | 1249 | -86614.02919932814 |
-In less than two minutes, close to 20,000 time series were analyzed and two abnormal time series in which the read count suddenly dropped were detected.
+In under two minutes, the query analyzes nearly 20,000 time series and detects two with a sudden read count drop.
-These advanced capabilities combined with fast performance supply a unique and powerful solution for time series analysis.
+These capabilities and the platform performance provide a powerful solution for time series analysis.
## Related content
-- Learn about [Anomaly detection and forecasting](anomaly-detection.md) with KQL.
-- Learn about [Machine learning capabilities](anomaly-diagnosis.md) with KQL.
+- [Anomaly detection and forecasting](anomaly-detection.md) with KQL.
+- [Machine learning capabilities](anomaly-diagnosis.md) with KQL.
diff --git a/data-explorer/kusto/query/tutorials/create-geospatial-visualizations.md b/data-explorer/kusto/query/tutorials/create-geospatial-visualizations.md
index f0bc415155..4a73aa08b4 100644
--- a/data-explorer/kusto/query/tutorials/create-geospatial-visualizations.md
+++ b/data-explorer/kusto/query/tutorials/create-geospatial-visualizations.md
@@ -1,8 +1,8 @@
---
-title: 'Tutorial: Create geospatial visualizations'
-description: This tutorial gives examples of geospatial visualizations in the Kusto Query Language.
+title: 'Tutorial: Create Geospatial Visualizations'
+description: Discover how to represent geospatial data with maps, bubbles, and polygons using KQL's powerful visualization tools.
ms.topic: tutorial
-ms.date: 08/11/2024
+ms.date: 09/25/2025
monikerRange: "microsoft-fabric || azure-data-explorer || azure-monitor || microsoft-sentinel"
---
@@ -10,9 +10,9 @@ monikerRange: "microsoft-fabric || azure-data-explorer || azure-monitor || micro
> [!INCLUDE [applies](../../includes/applies-to-version/applies.md)] [!INCLUDE [fabric](../../includes/applies-to-version/fabric.md)] [!INCLUDE [azure-data-explorer](../../includes/applies-to-version/azure-data-explorer.md)] [!INCLUDE [monitor](../../includes/applies-to-version/monitor.md)] [!INCLUDE [sentinel](../../includes/applies-to-version/sentinel.md)]
-This tutorial is for those who want to use [Kusto Query Language (KQL)](../index.md) for geospatial visualization. Geospatial clustering is a way to organize and analyze data based on geographical location. KQL offers multiple methods for performing [geospatial clustering](../geospatial-grid-systems.md) and tools for [geospatial visualizations](../geospatial-visualizations.md).
+Use the [Kusto Query Language (KQL)](../index.md) to create geospatial visualizations. Geospatial clustering organizes data by location. KQL provides multiple [geospatial clustering](../geospatial-grid-systems.md) methods and [geospatial visualization](../geospatial-visualizations.md) tools.
-In this tutorial, you'll learn how to:
+In this tutorial, you learn how to:
> [!div class="checklist"]
>
@@ -27,19 +27,25 @@ In this tutorial, you'll learn how to:
## Prerequisites
-To run the following queries, you need a query environment with access to the sample data. You can use one of the following:
+To run the queries, you need a query environment that has access to the sample data. Use one of the following:
:::moniker range="azure-data-explorer"
-* A Microsoft account or Microsoft Entra user identity to sign in to the [help cluster](https://dataexplorer.azure.com/clusters/help)
-::: moniker-end
+* Microsoft account or Microsoft Entra user identity to sign in to the [help cluster](https://dataexplorer.azure.com/clusters/help)
+
+::: moniker-end
:::moniker range="microsoft-fabric"
-* A Microsoft account or Microsoft Entra user identity
-* A [Fabric workspace](/fabric/get-started/create-workspaces) with a Microsoft Fabric-enabled [capacity](/fabric/enterprise/licenses#capacity)
+
+* A Microsoft account or Microsoft Entra user identity
+* [Fabric workspace](/fabric/get-started/create-workspaces) with a Microsoft Fabric-enabled [capacity](/fabric/enterprise/licenses#capacity)
+
::: moniker-end
## Plot points on a map
-To visualize points on a map, use [project](../project-operator.md) to select the column containing the longitude and then the column containing the latitude. Then, use [render](../render-operator.md) to see your results in a scatter chart with `kind` set to `map`.
+Use [project](../project-operator.md) to select the longitude column, then the latitude column. Use [render](../render-operator.md) to show the points on a map (scatter chart with `kind` set to `map`).
:::moniker range="azure-data-explorer"
> [!div class="nextstepaction"]
@@ -53,13 +59,13 @@ StormEvents
| render scatterchart with (kind = map)
```
-:::image type="content" source="../media/kql-tutorials/geospatial-storm-events-scatterchart.png" alt-text="Screenshot of sample storm events on a map.":::
+:::image type="content" source="../media/kql-tutorials/geospatial-storm-events-scatterchart.png" alt-text="Screenshot of sample storm events on a map." lightbox="../media/kql-tutorials/geospatial-storm-events-scatterchart.png":::
## Plot multiple series of points
-To visualize multiple series of points, use [project](../project-operator.md) to select the longitude and latitude along with a third column, which defines the series.
+To visualize multiple point series, use [project](../project-operator.md) to select the longitude, latitude, and a third column that defines the series.
-In the following query, the series is `EventType`. The points are colored differently according to their `EventType`, and when selected display the content of the `EventType` column.
+In the following query, the series is `EventType`. The points use different colors by `EventType` and, when selected, display the `EventType` value.
:::moniker range="azure-data-explorer"
> [!div class="nextstepaction"]
@@ -73,9 +79,9 @@ StormEvents
| render scatterchart with (kind = map)
```
-:::image type="content" source="../media/kql-tutorials/geospatial-storm-events-by-type.png" alt-text="Screenshot of sample storm events on a map by type.":::
+:::image type="content" source="../media/kql-tutorials/geospatial-storm-events-by-type.png" alt-text="Screenshot of sample storm events on a map by type." lightbox="../media/kql-tutorials/geospatial-storm-events-by-type.png":::
-You may also explicitly specify the `xcolumn` (Longitude), `ycolumn` (Latitude), and `series` when performing the `render`. This specification is necessary when there are more columns in the result than just the longitude, latitude, and series columns.
+When the result has more columns than the longitude, latitude, and series columns, you can also explicitly specify the `xcolumn` (longitude), `ycolumn` (latitude), and `series` in the `render` operator.
:::moniker range="azure-data-explorer"
> [!div class="nextstepaction"]
@@ -90,9 +96,9 @@ StormEvents
## Use GeoJSON values to plot points on a map
-A dynamic GeoJSON value can change or be updated and are often used for real-time mapping applications. Mapping points using dynamic GeoJSON values allows for more flexibility and control over the representation of the data on the map that may not be possible with plain latitude and longitude values.
+Dynamic GeoJSON values update frequently and are used in real-time mapping. Mapping points with dynamic GeoJSON values gives you flexibility and control that plain latitude and longitude can't provide.
-The following query uses the [geo_point_to_s2cell](../geo-point-to-s2cell-function.md) and [geo_s2cell_to_central_point](../geo-s2cell-to-central-point-function.md) to map storm events in a scatter chart.
+The following query uses the [geo_point_to_s2cell](../geo-point-to-s2cell-function.md) and [geo_s2cell_to_central_point](../geo-s2cell-to-central-point-function.md) functions to map storm events on a scatter chart.
:::moniker range="azure-data-explorer"
> [!div class="nextstepaction"]
@@ -108,13 +114,13 @@ StormEvents
| render scatterchart with (kind = map)
```
-:::image type="content" source="../media/kql-tutorials/geospatial-storm-events-centered.png" alt-text="Screenshot of sample storm events displayed using geojson.":::
+:::image type="content" source="../media/kql-tutorials/geospatial-storm-events-centered.png" alt-text="Screenshot of sample storm events displayed using GeoJSON." lightbox="../media/kql-tutorials/geospatial-storm-events-centered.png":::
-## Represent data points with variable-sized bubbles
+## Represent data points with variable-sized bubbles
-Visualize the distribution of data points by performing an aggregation in each cluster and then plotting the central point of the cluster.
+Visualize data distribution by aggregating each cluster and plotting its central point.
-For example, the following query filters for all storm events of the "Tornado" event type. It then groups the events into clusters based on their longitude and latitude, counts the number of events in each cluster, and projects the central point of the cluster, and renders a map to visualize the result. The regions with the most tornados become clearly detected based on their large bubble size.
+For example, the following query filters storm events where EventType is `Tornado`. It groups events into longitude and latitude clusters, counts events in each cluster, projects each cluster's central point, and renders a map. Regions with the most tornadoes stand out by their larger bubble size.
:::moniker range="azure-data-explorer"
> [!div class="nextstepaction"]
@@ -132,11 +138,10 @@ StormEvents
| render piechart with (kind = map)
```
-:::image type="content" source="../media/kql-tutorials/tornado-geospatial-map.png" alt-text="Screenshot of Azure Data Explorer web UI showing a geospatial map of tornado storms.":::
+:::image type="content" source="../media/kql-tutorials/tornado-geospatial-map.png" alt-text="Screenshot of Azure Data Explorer showing a geospatial map of tornado events." lightbox="../media/kql-tutorials/tornado-geospatial-map.png":::
## Display points within a specific area
-
Use a polygon to define the region and the [geo_point_in_polygon](../geo-point-in-polygon-function.md) function to filter for events that occur within that region.
The following query defines a polygon representing the southern California region and filters for storm events within this region. It then groups the events into clusters, counts the number of events in each cluster, projects the central point of the cluster, and renders a map to visualize the clusters.
@@ -160,7 +165,7 @@ StormEvents
| render piechart with (kind = map)
```
-:::image type="content" source="../media/kql-tutorials/geospatial-southern-california-polygon.png" alt-text="Screenshot of Azure Data Explorer web UI showing a geospatial map of southern California storms.":::
+:::image type="content" source="../media/kql-tutorials/geospatial-southern-california-polygon.png" alt-text="Screenshot of Azure Data Explorer web UI showing a geospatial map of southern California storms." lightbox="../media/kql-tutorials/geospatial-southern-california-polygon.png":::
## Show nearby points on a LineString
diff --git a/data-explorer/kusto/set-timeout-limits.md b/data-explorer/kusto/set-timeout-limits.md
index ba092af2a6..47d459fc30 100644
--- a/data-explorer/kusto/set-timeout-limits.md
+++ b/data-explorer/kusto/set-timeout-limits.md
@@ -1,90 +1,90 @@
---
-title: Set timeouts
-description: Learn how to set the query timeout length in various tools, such as Kusto.Explorer and the Azure Data Explorer web UI.
+title: Set Timeout Limits
+description: Learn how to set the query timeout and the admin command timeout in various tools, such as Kusto.Explorer and the Azure Data Explorer web UI.
ms.topic: how-to
-ms.date: 08/11/2024
+ms.date: 09/25/2025
monikerRange: "azure-data-explorer"
---
# Set timeout limits
> [!INCLUDE [applies](includes/applies-to-version/applies.md)] [!INCLUDE [azure-data-explorer](includes/applies-to-version/azure-data-explorer.md)]
-It's possible to customize the timeout length for your queries and [management commands](management/index.md). In this article, you'll learn how to set a custom timeout in various tools such as the [Azure Data Explorer web UI](/azure/data-explorer/web-query-data), [Kusto.Explorer](tools/kusto-explorer.md), [Kusto.Cli](tools/kusto-cli.md), [Power BI](/azure/data-explorer/power-bi-data-connector), and when using an [SDK](#sdks). Certain tools have their own default timeout values, but it may be helpful to adjust these values based on the complexity and expected runtime of your queries.
+Customize timeouts for queries and [management commands](management/index.md). This article shows how to set custom timeouts in the [Azure Data Explorer web UI](/azure/data-explorer/web-query-data), [Kusto.Explorer](tools/kusto-explorer.md), [Kusto.Cli](tools/kusto-cli.md), [Power BI](/azure/data-explorer/power-bi-data-connector), and the [SDKs](#sdks). Each tool has a default timeout, but adjust it based on query complexity and expected runtime.
> [!NOTE]
> Server side policies, such as the [request limits policy](management/request-limits-policy.md), can override the timeout specified by the client.
## Azure Data Explorer web UI
-This section describes how to configure a custom query timeout and admin command timeout in the Azure Data Explorer web UI.
+Configure custom query and admin command timeouts in the Azure Data Explorer web UI.
### Prerequisites
* A Microsoft account or a Microsoft Entra user identity. An Azure subscription isn't required.
* An Azure Data Explorer cluster and database. [Create a cluster and database](/azure/data-explorer/create-cluster-and-database).
-### Set timeout length
+### Set the timeout length
1. Sign in to the [Azure Data Explorer web UI](https://dataexplorer.azure.com/home) with your Microsoft account or Microsoft Entra user identity credentials.
1. In the top menu, select the **Settings** icon.
-1. From the left menu, select **Connection**.
+1. From Settings, select the **Connection** tab.
-1. Under the **Query timeout (in minutes)** setting, use the slider to choose the desired query timeout length.
+1. Under **Query timeout (in minutes)**, move the slider to set the query timeout length.
-1. Under the **Admin command timeout (in minutes)** setting, use the slider to choose the desired admin command timeout length.
+1. Under **Admin command timeout (in minutes)**, move the slider to set the admin command timeout length.
:::image type="content" source="media/set-timeouts/web-ui-set-timeouts.png" alt-text="Screenshot of the settings in the Azure Data Explorer web UI that control timeout length.":::
-1. Close the settings window, and the changes will be saved automatically.
+1. Close the settings window to save your changes.
## Kusto.Explorer
-This section describes how to configure a custom query timeout and admin command timeout in the Kusto.Explorer.
+Set custom query and admin command timeouts in Kusto.Explorer.
### Prerequisites
-* Download and install the [Kusto.Explorer tool](tools/kusto-explorer.md#install-kustoexplorer).
+* Install [Kusto.Explorer](tools/kusto-explorer.md#install-kustoexplorer).
* An Azure Data Explorer cluster and database. [Create a cluster and database](/azure/data-explorer/create-cluster-and-database).
### Set timeout length
-1. Open the Kusto.Explorer tool.
+1. Open Kusto.Explorer.
1. In the top menu, select the **Tools** tab.
-1. On the right-hand side, select **Options**.
+1. In the **Tools** tab, select **Options**.
- :::image type="content" source="media/set-timeouts/kusto-explorer-options-widget.png" alt-text="Screenshot showing the options widget in the Kusto.Explorer tool.":::
+ :::image type="content" source="media/set-timeouts/kusto-explorer-options-widget.png" alt-text="Screenshot of the Options dialog in Kusto.Explorer.":::
-1. In the left menu, select **Connections**.
+1. In the **Options** dialog, select **Connections**.
-1. In the **Query Server Timeout** setting, enter the desired timeout length. The maximum is 1 hour.
+1. For **Query Server Timeout**, enter the timeout length (maximum 1 hour).
-1. Under the **Admin Command Server Timeout** setting, enter the desired timeout length. The maximum is 1 hour.
+1. For **Admin Command Server Timeout**, enter the timeout length (maximum 1 hour).
- :::image type="content" source="media/set-timeouts/kusto-explorer-set-timeouts.png" alt-text="Screenshot showing settings that control the timeout length in Kusto.Explorer.":::
+ :::image type="content" source="media/set-timeouts/kusto-explorer-set-timeouts.png" alt-text="Screenshot of settings for query and admin command timeouts in Kusto.Explorer.":::
-1. Select **OK** to save the changes.
+1. Select **OK** to save.
## Kusto.Cli
-This section describes how to configure a custom server timeout in the Kusto.Cli.
+Configure a custom server timeout in Kusto.Cli.
### Prerequisites
-* Install the [Kusto.Cli](tools/kusto-cli.md) by downloading the package [Microsoft.Azure.Kusto.Tools](https://www.nuget.org/packages/Microsoft.Azure.Kusto.Tools/).
+* Install [Kusto.Cli](tools/kusto-cli.md) by downloading the [Microsoft.Azure.Kusto.Tools](https://www.nuget.org/packages/Microsoft.Azure.Kusto.Tools/) package.
### Set timeout length
-Run the following command to set the *servertimeout* [client request property](api/netfx/client-request-properties.md) with the desired timeout length as a valid [timespan](query/scalar-data-types/timespan.md) value up to 1 hour.
+Run this command to set the *servertimeout* [client request property](api/netfx/client-request-properties.md) to a valid [timespan](query/scalar-data-types/timespan.md) value (up to 1 hour). Provide your connection string and the timespan value in the command.
```dotnet
Kusto.Cli.exe -execute:"#crp servertimeout=" -execute:"…"
```
-Alternatively, use the following command to set the *norequesttimeout* [client request property](api/netfx/client-request-properties.md), which will set the timeout to the maximum value of 1 hour.
+Or run this command to set the *norequesttimeout* [client request property](api/netfx/client-request-properties.md), which sets the timeout to the maximum of 1 hour. Provide your connection string in the command.
```dotnet
Kusto.Cli.exe -execute:"#crp norequesttimeout=true" -execute:"…"
@@ -98,7 +98,7 @@ Kusto.Cli.exe -execute:"#crp servertimeout"
## Power BI
-This section describes how to configure a custom server timeout in Power BI.
+Set a custom server timeout in Power BI Desktop.
### Prerequisites
@@ -106,13 +106,13 @@ This section describes how to configure a custom server timeout in Power BI.
### Set timeout length
-1. [Connect to your Azure Data Explorer cluster from Power BI desktop](/azure/data-explorer/power-bi-data-connector).
+1. [Connect to your Azure Data Explorer cluster from Power BI Desktop](/azure/data-explorer/power-bi-data-connector).
-1. In the top menu, select **Transform Data**.
+1. On the ribbon, select **Transform Data**.
:::image type="content" source="media/set-timeouts/power-bi-transform-data.png" alt-text="Screenshot of the transform data option in Power BI Desktop.":::
-1. In the top menu, select **Advanced Query Editor**.
+1. In the query menu, select **Advanced Editor**.
:::image type="content" source="media/set-timeouts/power-bi-advanced-editor.png" alt-text="Screenshot of the Power BI advanced query editor option in Power BI Desktop.":::
@@ -129,7 +129,7 @@ This section describes how to configure a custom server timeout in Power BI.
## SDKs
-To learn how to set timeouts with the SDKs, see [Customize query behavior with client request properties](api/get-started/app-basic-query.md#customize-query-behavior-with-client-request-properties).
+Set SDK timeouts in [Customize query behavior with client request properties](api/get-started/app-basic-query.md#customize-query-behavior-with-client-request-properties).
## Related content