data-explorer/kusto/management/data-export/continuous-data-export.md (3 additions, 1 deletion)
@@ -3,7 +3,7 @@ title: Continuous data export
 description: This article describes Continuous data export.
 ms.reviewer: yifats
 ms.topic: reference
-ms.date: 12/08/2024
+ms.date: 07/30/2025
 ---
 # Continuous data export overview
@@ -101,12 +101,14 @@ Followed by:
 <| T | where cursor_before_or_at("636751928823156645")
 ```

+::: moniker range="azure-data-explorer"
 ## Continuous export from a table with Row Level Security

 To create a continuous export job with a query that references a table with [Row Level Security policy](../../management/row-level-security-policy.md), you must:

 * Provide a managed identity as part of the continuous export configuration. For more information, see [Use a managed identity to run a continuous export job](continuous-export-with-managed-identity.md).
 * Use [impersonation](../../api/connection-strings/storage-connection-strings.md#impersonation) authentication for the external table to which the data is exported.
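To make the two requirements listed above concrete, here is a minimal sketch of such a continuous export. It assumes a hypothetical external table `SecuredExternalBlob` that was already created with an impersonation-based connection string; the export name, query, interval, and exact property quoting are illustrative only, not the documented example:

```kusto
// Assumes SecuredExternalBlob was defined over a storage connection string that
// ends with ';impersonate', so the export authenticates as the managed identity.
.create-or-alter continuous-export RLSExport
over (T)
to table SecuredExternalBlob
with
(
    intervalBetweenRuns=1h,
    managedIdentity="system"  // or the object ID of a user-assigned managed identity
)
<| T
```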
data-explorer/kusto/management/data-export/create-alter-continuous.md (19 additions, 1 deletion)
@@ -3,7 +3,7 @@ title: .create or alter continuous-export
 description: This article describes how to create or alter continuous data export.
 ms.reviewer: yifats
 ms.topic: reference
-ms.date: 12/08/2024
+ms.date: 07/30/2025
 ---
 # .create or alter continuous-export
@@ -31,11 +31,15 @@ You must have at least [Database Admin](../../access-control/role-based-access-c
 |*T1*, *T2*|`string`|| A comma-separated list of fact tables in the query. If not specified, all tables referenced in the query are assumed to be fact tables. If specified, tables *not* in this list are treated as dimension tables and aren't scoped, so all records participate in all exports. See [continuous data export overview](continuous-data-export.md) for details. |
 |*propertyName*, *propertyValue*|`string`|| A comma-separated list of optional [properties](#supported-properties).|

+::: moniker range="azure-data-explorer"
 > [!NOTE]
 > If the target external table uses [impersonation](../../api/connection-strings/storage-connection-strings.md#impersonation) authentication, you must specify a managed identity to run the continuous export. For more information, see [Use a managed identity to run a continuous export job](continuous-export-with-managed-identity.md).
+::: moniker-end

 ## Supported properties

+::: moniker range="azure-data-explorer"
+
 | Property | Type | Description |
 |--|--|--|
 |`intervalBetweenRuns`|`Timespan`| The time span between continuous export executions. Must be greater than 1 minute. |
@@ -46,6 +50,20 @@ You must have at least [Database Admin](../../access-control/role-based-access-c
 |`managedIdentity`|`string`| The managed identity for which the continuous export job runs. The managed identity can be an object ID, or the `system` reserved word. For more information, see [Use a managed identity to run a continuous export job](continuous-export-with-managed-identity.md#use-a-managed-identity-to-run-a-continuous-export-job). |
 |`isDisabled`|`bool`| Disable or enable the continuous export. Default is false. |

+::: moniker-end
+::: moniker range="microsoft-fabric"
+
+| Property | Type | Description |
+|--|--|--|
+|`intervalBetweenRuns`|`Timespan`| The time span between continuous export executions. Must be greater than 1 minute. |
+|`forcedLatency`|`Timespan`| An optional period of time to limit the query to records ingested before a specified period relative to the current time. This property is useful if, for example, the query performs some aggregations or joins, and you want to make sure all relevant records have been ingested before running the export. |
+|`sizeLimit`|`long`| The size limit in bytes of a single storage artifact written before compression. Valid range: 100 MB (default) to 1 GB. |
+|`distributed`|`bool`| Disable or enable distributed export. Setting to false is equivalent to `single` distribution hint. Default is true. |
+|`parquetRowGroupSize`|`int`| Relevant only when data format is Parquet. Controls the row group size in the exported files. Default row group size is 100,000 records. |
+|`isDisabled`|`bool`| Disable or enable the continuous export. Default is false. |
+
+::: moniker-end
+
 ## Example

 The following example creates or alters a continuous export `MyExport` that exports data from the `T` table to `ExternalBlob`. The data exports occur every hour, and have a defined forced latency and size limit per storage artifact.
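The example body itself falls outside this diff; a sketch of what it plausibly looks like, with the hourly interval, 10-minute forced latency, and 100-MB size limit chosen only to match the description above:

```kusto
.create-or-alter continuous-export MyExport
over (T)
to table ExternalBlob
with
(
    intervalBetweenRuns=1h,   // run the export every hour
    forcedLatency=10m,        // only export records ingested at least 10 minutes earlier
    sizeLimit=104857600       // ~100 MB per storage artifact, before compression
)
<| T
```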
data-explorer/kusto/management/external-tables-azure-storage.md (36 additions, 13 deletions)
@@ -3,7 +3,7 @@ title: Create and alter Azure Storage external tables
 description: This article describes how to create and alter external tables based on Azure Blob Storage or Azure Data Lake
 ms.reviewer: orspodek
 ms.topic: reference
-ms.date: 08/11/2024
+ms.date: 07/30/2025
 ---

 # Create and alter Azure Storage external tables
@@ -13,13 +13,15 @@ ms.date: 08/11/2024
 The commands in this article can be used to create or alter an Azure Storage [external table](../query/schema-entities/external-tables.md) in the database from which the command is executed. An Azure Storage external table references data located in Azure Blob Storage, Azure Data Lake Store Gen1, or Azure Data Lake Store Gen2.

 > [!NOTE]
-> If the table exists, the `.create` command will fail with an error. Use `.create-or-alter` or `.alter` to modify existing tables.
+> If the table exists, the `.create` command fails with an error. Use `.create-or-alter` or `.alter` to modify existing tables.

 ## Permissions

 To `.create` requires at least [Database User](../access-control/role-based-access-control.md) permissions, and to `.alter` requires at least [Table Admin](../access-control/role-based-access-control.md) permissions.

+:::moniker range="azure-data-explorer"
 To `.create-or-alter` an external table using managed identity authentication requires [AllDatabasesAdmin](../access-control/role-based-access-control.md) permissions.
+:::moniker-end

 ## Syntax
@@ -38,22 +40,24 @@ To `.create-or-alter` an external table using managed identity authentication re
 |*Schema*|`string`|:heavy_check_mark:|The external data schema is a comma-separated list of one or more column names and [data types](../query/scalar-data-types/index.md), where each item follows the format: *ColumnName*`:`*ColumnType*. If the schema is unknown, use [infer\_storage\_schema](../query/infer-storage-schema-plugin.md) to infer the schema based on external file contents.|
 |*Partitions*|`string`|| A comma-separated list of columns by which the external table is partitioned. Partition column can exist in the data file itself, or as part of the file path. See [partitions formatting](#partitions-formatting) to learn how this value should look.|
 |*PathFormat*|`string`||An external data folder URI path format to use with partitions. See [path format](#path-format).|
-|*DataFormat*|`string`|:heavy_check_mark:|The data format, which can be any of the [ingestion formats](../ingestion-supported-formats.md). We recommend using the `Parquet` format for external tables to improve query and export performance, unless you use `JSON` paths mapping. When using an external table for [export scenario](data-export/export-data-to-an-external-table.md), you're limited to the following formats: `CSV`, `TSV`, `JSON` and `Parquet`.|
-|*StorageConnectionString*|`string`|:heavy_check_mark:|One or more comma-separated paths to Azure Blob Storage blob containers, Azure Data Lake Gen 2 file systems or Azure Data Lake Gen 1 containers, including credentials. The external table storage type is determined by the provided connection strings. See [storage connection strings](../api/connection-strings/storage-connection-strings.md).|
+|*DataFormat*|`string`|:heavy_check_mark:|The data format, which can be any of the [ingestion formats](../ingestion-supported-formats.md). We recommend using the `Parquet` format for external tables to improve query and export performance, unless you use `JSON` paths mapping. When using an external table for [export scenario](data-export/export-data-to-an-external-table.md), you're limited to the following formats: `CSV`, `TSV`, `JSON`, and `Parquet`.|
+|*StorageConnectionString*|`string`|:heavy_check_mark:|One or more comma-separated paths to Azure Blob Storage blob containers, Azure Data Lake Gen 2 file systems or Azure Data Lake Gen 1 containers, including credentials. The provided connection string determines the external table storage type. See [storage connection strings](../api/connection-strings/storage-connection-strings.md).|
 |*Property*|`string`||A key-value property pair in the format *PropertyName*`=`*PropertyValue*. See [optional properties](#optional-properties).|

 > [!NOTE]
-> CSV files with non-identical schema might result in data appearing shifted or missing. We recommend separating CSV files with distinct schemas to separate storage containers and defining an external table for each storage container with the proper schema.
+> CSV files with nonidentical schema might result in data appearing shifted or missing. We recommend separating CSV files with distinct schemas to separate storage containers and defining an external table for each storage container with the proper schema.

 > [!TIP]
-> Provide more than a single storage account to avoid storage throttling while [exporting](data-export/export-data-to-an-external-table.md) large amounts of data to the external table. Export will distribute the writes between all accounts provided.
+> Provide more than a single storage account to avoid storage throttling while [exporting](data-export/export-data-to-an-external-table.md) large amounts of data to the external table. Export distributes the writes between all accounts provided.

 ## Authentication and authorization

 The authentication method to access an external table is based on the connection string provided during its creation, and the permissions required to access the table vary depending on the authentication method.

 The following table lists the supported authentication methods for Azure Storage external tables and the permissions needed to read or write to the table.

+::: moniker range="azure-data-explorer"
+
 | Authentication method | Azure Blob Storage / Data Lake Storage Gen2 | Data Lake Storage Gen1 |
 |--|--|--|
 |[Impersonation](../api/connection-strings/storage-connection-strings.md#impersonation)|**Read permissions:** Storage Blob Data Reader<br/>**Write permissions:** Storage Blob Data Contributor|**Read permissions:** Reader<br/>**Write permissions:** Contributor|
@@ -62,6 +66,18 @@ The following table lists the supported authentication methods for Azure Storage
@@ … @@
 |`compressed`|`bool`| Only relevant for the [export scenario](data-export/export-data-to-an-external-table.md).<br>If set to true, the data is exported in the format specified by the `compressionType` property. For the read path, compression is automatically detected. |
 |`compressionType`|`string`| Only relevant for the [export scenario](data-export/export-data-to-an-external-table.md).<br>The compression type of exported files. For non-Parquet files, only `gzip` is allowed. For Parquet files, possible values include `gzip`, `snappy`, `lz4_raw`, `brotli`, and `zstd`. Default is `gzip`. For the read path, compression type is automatically detected. |
 |`includeHeaders`|`string`| For delimited text formats (CSV, TSV, ...), specifies whether files contain a header. Possible values are: `All` (all files contain a header), `FirstFile` (first file in a folder contains a header), `None` (no files contain a header). |
-|`namePrefix`|`string`| If set, specifies the prefix of the files. On write operations, all files will be written with this prefix. On read operations, only files with this prefix are read. |
-|`fileExtension`|`string`| If set, specifies the extension of the files. On write, files names will end with this suffix. On read, only files with this file extension will be read. |
+|`namePrefix`|`string`| If set, specifies the prefix of the files. On write operations, all files are written with this prefix. On read operations, only files with this prefix are read. |
+|`fileExtension`|`string`| If set, specifies the extension of the files. On write, file names end with this suffix. On read, only files with this file extension are read. |
 |`encoding`|`string`| Specifies how the text is encoded: `UTF8NoBOM` (default) or `UTF8BOM`. |
 |`sampleUris`|`bool`| If set, the command result provides several examples of simulated external data files URI as they're expected by the external table definition. This option helps validate whether the *Partitions* and *PathFormat* parameters are defined properly. |
 |`filesPreview`|`bool`| If set, one of the command result tables contains a preview of [.show external table artifacts](show-external-table-artifacts.md) command. Like `sampleUri`, the option helps validate the *Partitions* and *PathFormat* parameters of external table definition. |
-|`validateNotEmpty`|`bool`| If set, the connection strings are validated for having content in them. The command will fail if the specified URI location doesn't exist, or if there are insufficient permissions to access it. |
+|`validateNotEmpty`|`bool`| If set, the connection strings are validated for having content in them. The command fails if the specified URI location doesn't exist, or if there are insufficient permissions to access it. |
 |`dryRun`|`bool`| If set, the external table definition isn't persisted. This option is useful for validating the external table definition, especially in conjunction with the `filesPreview` or `sampleUris` parameter. |
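To illustrate how several of these optional properties might be combined, here is a hedged sketch of an external table definition; the table name, schema, storage account, and container are placeholders, and `secretKey` stands in for real credentials:

```kusto
.create external table MyCsvExport (Timestamp:datetime, EventId:long, Message:string)
kind=storage
dataformat=csv
(
    h@'https://mystorageaccount.blob.core.windows.net/csv-container;secretKey'
)
with (
    includeHeaders="All",     // every file carries a header row
    namePrefix="export",      // written files get this prefix; reads filter on it
    fileExtension=".csv",     // written files end with this suffix; reads filter on it
    compressed=true,
    compressionType="gzip",   // exported artifacts are gzip-compressed
    encoding="UTF8NoBOM"
)
```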
@@ … @@
-When querying an external table, performance is improved by filtering out irrelevant external storage files. The process of iterating files and deciding whether a file should be processed is as follows:
+When you query an external table, performance is improved by filtering out irrelevant external storage files. The process of iterating files and deciding whether a file should be processed is as follows:

 1. Build a URI pattern that represents a place where files are found. Initially, the URI pattern equals a connection string provided as part of the external table definition. If there are any partitions defined, they're rendered using *PathFormat*, then appended to the URI pattern.
@@ -158,9 +174,9 @@ Once all the conditions are met, the file is fetched and processed.

 ## Examples

-### Non-partitioned external table
+### Nonpartitioned external table

-In the following non-partitioned external table, the files are expected to be placed directly under the container(s) defined:
+In the following nonpartitioned external table, the files are expected to be placed directly under the container(s) defined:
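The definition that follows this sentence isn't included in the diff; a sketch of the kind of nonpartitioned external table it refers to, with placeholder account, container, and `secretKey` credential, might look like:

```kusto
.create external table ExternalTable (x:long, s:string)
kind=storage
dataformat=csv
(
    h@'https://storageaccount.blob.core.windows.net/container1;secretKey'
)
```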