data-explorer/kusto/management/data-export/export-data-to-storage.md (+20 −20)
@@ -23,32 +23,32 @@ You must have at least [Database Viewer](../../access-control/role-based-access-
## Parameters

-| Name | Type | Required | Description |
-|--|--|--|--|
-|`async`|`string`|| If specified, the command runs in asynchronous mode. See [asynchronous mode](#asynchronous-mode). |
-|`compressed`|`string`|| If specified, the output storage artifacts are compressed as `.gz` files. See the `compressionType` [supported property](#supported-properties) for compressing Parquet files as snappy. |
-|*OutputDataFormat*|`string`|:heavy_check_mark:|Indicates the data format of the storage artifacts written by the command. Supported values are: `csv`, `tsv`, `json`, and `parquet`. |
-|*StorageConnectionString*|`string`|| One or more [storage connection strings](../../api/connection-strings/storage-connection-strings.md) that indicate which storage to write the data to. More than one storage connection string might be specified for scalable writes. Each such connection string must indicate the credentials to use when writing to storage. For example, when writing to Azure Blob Storage, the credentials can be the storage account key, or a shared access key (SAS) with the permissions to read, write, and list blobs. |
-|*PropertyName*, *PropertyValue*|`string`|| A comma-separated list of key-value property pairs. See [supported properties](#supported-properties).|
+| Name | Type | Required | Description |
+|--|--|--|--|
+|*async*|`string`|| If specified, the command runs in asynchronous mode. See [asynchronous mode](#asynchronous-mode). |
+|*compressed*|`bool`|| If specified, the output storage artifacts are compressed in the format specified by the `compressionType` [supported property](#supported-properties). |
+|*OutputDataFormat*|`string`|:heavy_check_mark:|The data format of the storage artifacts written by the command. Supported values are: `csv`, `tsv`, `json`, and `parquet`. |
+|*StorageConnectionString*|`string`|| One or more [storage connection strings](../../api/connection-strings/storage-connection-strings.md) that specify which storage to write the data to. More than one storage connection string might be specified for scalable writes. Each such connection string must specify the credentials to use when writing to storage. For example, when writing to Azure Blob Storage, the credentials can be the storage account key, or a shared access key (SAS) with the permissions to read, write, and list blobs. |
+|*PropertyName*, *PropertyValue*|`string`|| A comma-separated list of key-value property pairs. See [supported properties](#supported-properties).|
> [!NOTE]
> We highly recommend exporting data to storage that is colocated in the same region as the database itself. This includes data that is exported so that it can be transferred to another cloud service in other regions. Writes should be done locally, while reads can happen remotely.
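The parameters above combine into a single command. The following is a minimal sketch, not a definitive invocation: the table name, storage URI, and SAS token are placeholders, and `async` is assumed to return an operation ID that can be checked with `.show operations`:

```kusto
// Asynchronous, compressed CSV export with headers in every artifact.
// The storage URI and SAS token below are placeholders, not credentials.
.export async compressed to csv (
    h@"https://mystorageaccount.blob.core.windows.net/exports;...mySasToken..."
) with (
    includeHeaders = "all"
)
<| MyTable
| where Timestamp > ago(1d)
```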
## Supported properties
-| Property | Type | Description |
-|--|--|--|
-|`includeHeaders`|`string`| For `csv`/`tsv` output, controls the generation of column headers. Can be one of `none` (default; no header lines emitted), `all` (emit a header line into every storage artifact), or `firstFile` (emit a header line into the first storage artifact only). |
-|`fileExtension`|`string`|Indicates the "extension" part of the storage artifact (for example, `.csv` or `.tsv`). If compression is used, `.gz` is appended as well. |
-|`namePrefix`|`string`|Indicates a prefix to add to each generated storage artifact name. A random prefix is used if left unspecified. |
-|`encoding`|`string`|Indicates how to encode the text: `UTF8NoBOM` (default) or `UTF8BOM`. |
-|`compressionType`|`string`|Indicates the type of compression to use. Possible values are `gzip` or `snappy`. Default is `gzip`. `snappy` can (optionally) be used for `parquet` format. |
-|`distribution`|`string`| Distribution hint (`single`, `per_node`, `per_shard`). If value equals `single`, a single thread writes to storage. Otherwise, export writes from all nodes executing the query in parallel. See [evaluate plugin operator](../../query/evaluate-operator.md). Defaults to `per_shard`. |
-|`persistDetails`|`bool`| Indicates that the command should persist its results (see `async` flag). Defaults to `true` in async runs, but can be turned off if the caller doesn't require the results. Defaults to `false` in synchronous executions, but can be turned on in those as well. |
-|`sizeLimit`|`long`| The size limit in bytes of a single storage artifact written before compression. Valid range: 100 MB (default) to 4 GB. |
-|`parquetRowGroupSize`|`int`| Relevant only when data format is Parquet. Controls the row group size in the exported files. Default row group size is 100,000 records. |
-|`distributed`|`bool`| Disable or enable distributed export. Setting to false is equivalent to `single` distribution hint. Default is true. |
-|`parquetDatetimePrecision`|`string`|Specifies the precision to use when exporting `datetime` values to Parquet. Possible values are millisecond and microsecond. Default is millisecond. |
+| Property | Type | Description |
+|--|--|--|
+|`includeHeaders`|`string`| For `csv`/`tsv` output, controls the generation of column headers. Can be one of `none` (default; no header lines emitted), `all` (emit a header line into every storage artifact), or `firstFile` (emit a header line into the first storage artifact only). |
+|`fileExtension`|`string`|The "extension" part of the storage artifact (for example, `.csv` or `.tsv`). If compression is used, `.gz` is appended as well. |
+|`namePrefix`|`string`|The prefix to add to each generated storage artifact name. A random prefix is used if left unspecified. |
+|`encoding`|`string`|The encoding for text. Possible values include: `UTF8NoBOM` (default) or `UTF8BOM`. |
+|`compressionType`|`string`|The type of compression to use. For non-Parquet files, only `gzip` is allowed. For Parquet files, possible values include `gzip`, `snappy`, `lz4_raw`, `brotli`, and `zstd`. Default is `gzip`. |
+|`distribution`|`string`| Distribution hint (`single`, `per_node`, `per_shard`). If value equals `single`, a single thread writes to storage. Otherwise, export writes from all nodes executing the query in parallel. See [evaluate plugin operator](../../query/evaluate-operator.md). Defaults to `per_shard`. |
+|`persistDetails`|`bool`| If `true`, the command persists its results (see `async` flag). Defaults to `true` in async runs, but can be turned off if the caller doesn't require the results. Defaults to `false` in synchronous executions, but can be turned on in those as well. |
+|`sizeLimit`|`long`| The size limit in bytes of a single storage artifact written before compression. Valid range: 100 MB (default) to 4 GB. |
+|`parquetRowGroupSize`|`int`| Relevant only when data format is Parquet. Controls the row group size in the exported files. Default row group size is 100,000 records. |
+|`distributed`|`bool`| Disable or enable distributed export. Setting to false is equivalent to `single` distribution hint. Default is true. |
+|`parquetDatetimePrecision`|`string`|The precision to use when exporting `datetime` values to Parquet. Possible values are millisecond and microsecond. Default is millisecond. |
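A sketch of how several of these properties combine for a Parquet export; the table name, storage URI, and SAS token are placeholders:

```kusto
// Parquet export with snappy compression, a fixed artifact name prefix,
// per-node write parallelism, and a 1-GB pre-compression size limit.
.export compressed to parquet (
    h@"https://mystorageaccount.blob.core.windows.net/exports;...mySasToken..."
) with (
    compressionType = "snappy",
    namePrefix = "daily_export",
    distribution = "per_node",
    sizeLimit = 1073741824
)
<| MyTable
```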
|`docString`|`string`| String documenting the table |
-|`compressed`|`bool`| If set, indicates whether the files are compressed as `.gz` files (used in [export scenario](data-export/export-data-to-an-external-table.md) only) |
-|`includeHeaders`|`string`| For delimited text formats (CSV, TSV, ...), indicates whether files contain a header. Possible values are: `All` (all files contain a header), `FirstFile` (first file in a folder contains a header), `None` (no files contain a header). |
-|`namePrefix`|`string`| If set, indicates the prefix of the files. On write operations, all files will be written with this prefix. On read operations, only files with this prefix are read. |
-|`fileExtension`|`string`| If set, indicates the file extension of the files. On write, file names will end with this suffix. On read, only files with this file extension will be read. |
-|`encoding`|`string`| Indicates how the text is encoded: `UTF8NoBOM` (default) or `UTF8BOM`. |
-|`sampleUris`|`bool`| If set, the command result provides several examples of simulated external data file URIs as they're expected by the external table definition. This option helps validate whether the *Partitions* and *PathFormat* parameters are defined properly. |
-|`filesPreview`|`bool`| If set, one of the command result tables contains a preview of the [.show external table artifacts](show-external-table-artifacts.md) command. Like `sampleUris`, the option helps validate the *Partitions* and *PathFormat* parameters of the external table definition. |
+|`compressed`|`bool`| Only relevant for the [export scenario](data-export/export-data-to-an-external-table.md).<br>If set to true, the data is exported in the format specified by the `compressionType` property. For the read path, compression is automatically detected. |
+|`compressionType`|`string`| Only relevant for the [export scenario](data-export/export-data-to-an-external-table.md).<br>The compression type of exported files. For non-Parquet files, only `gzip` is allowed. For Parquet files, possible values include `gzip`, `snappy`, `lz4_raw`, `brotli`, and `zstd`. Default is `gzip`. For the read path, compression type is automatically detected. |
+|`includeHeaders`|`string`| For delimited text formats (CSV, TSV, ...), specifies whether files contain a header. Possible values are: `All` (all files contain a header), `FirstFile` (first file in a folder contains a header), `None` (no files contain a header). |
+|`namePrefix`|`string`| If set, specifies the prefix of the files. On write operations, all files will be written with this prefix. On read operations, only files with this prefix are read. |
+|`fileExtension`|`string`| If set, specifies the extension of the files. On write, file names will end with this suffix. On read, only files with this file extension will be read. |
+|`encoding`|`string`| Specifies how the text is encoded: `UTF8NoBOM` (default) or `UTF8BOM`. |
+|`sampleUris`|`bool`| If set, the command result provides several examples of simulated external data file URIs as they're expected by the external table definition. This option helps validate whether the *Partitions* and *PathFormat* parameters are defined properly. |
+|`filesPreview`|`bool`| If set, one of the command result tables contains a preview of the [.show external table artifacts](show-external-table-artifacts.md) command. Like `sampleUris`, the option helps validate the *Partitions* and *PathFormat* parameters of the external table definition. |
|`validateNotEmpty`|`bool`| If set, the connection strings are validated for having content in them. The command will fail if the specified URI location doesn't exist, or if there are insufficient permissions to access it. |
-|`dryRun`|`bool`| If set, the external table definition isn't persisted. This option is useful for validating the external table definition, especially in conjunction with the `filesPreview` or `sampleUris` parameter. |
+|`dryRun`|`bool`| If set, the external table definition isn't persisted. This option is useful for validating the external table definition, especially in conjunction with the `filesPreview` or `sampleUris` parameter. |
> [!NOTE]
> The external table isn't accessed during creation, only during query and export. Use the `validateNotEmpty` optional property during creation to make sure the table definition is valid and the storage is accessible.
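As a sketch only, a definition that exercises several of these properties; the table name, schema, storage URI, and SAS token are placeholders, and the `with` clause mirrors the export-oriented properties described above:

```kusto
// External table over Azure Blob Storage that exports snappy-compressed
// Parquet. validateNotEmpty makes creation fail if the storage location
// doesn't exist, has no content, or is inaccessible.
.create external table MyExternalLogs (Timestamp: datetime, Message: string)
kind = storage
dataformat = parquet
(
    h@"https://mystorageaccount.blob.core.windows.net/logs;...mySasToken..."
)
with (
    docString = "Exported application logs",
    compressed = true,
    compressionType = "snappy",
    validateNotEmpty = true
)
```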
|`docString`|`string`| String documenting the table |
-|`namePrefix`|`string`| If set, indicates the prefix of the files. On write operations, all files will be written with this prefix. On read operations, only files with this prefix are read. |
-|`fileExtension`|`string`| If set, indicates the file extension of the files. On write, file names will end with this suffix. On read, only files with this file extension will be read. |
-|`encoding`|`string`| Indicates how the text is encoded: `UTF8NoBOM` (default) or `UTF8BOM`. |
-|`dryRun`|`bool`| If set, the external table definition isn't persisted. This option is useful for validating the external table definition, especially in conjunction with the `filesPreview` or `sampleUris` parameter. |
+|`compressed`|`bool`| Only relevant for the [export scenario](data-export/export-data-to-an-external-table.md).<br>If set to true, the data is exported in the format specified by the `compressionType` property. For the read path, compression is automatically detected. |
+|`compressionType`|`string`| Only relevant for the [export scenario](data-export/export-data-to-an-external-table.md).<br>The compression type of exported files. For non-Parquet files, only `gzip` is allowed. For Parquet files, possible values include `gzip`, `snappy`, `lz4_raw`, `brotli`, and `zstd`. Default is `gzip`. For the read path, compression type is automatically detected. |
+|`namePrefix`|`string`| If set, specifies the prefix of the files. On write operations, all files will be written with this prefix. On read operations, only files with this prefix are read. |
+|`fileExtension`|`string`| If set, specifies the extension of the files. On write, file names will end with this suffix. On read, only files with this file extension will be read. |
+|`encoding`|`string`| Specifies how the text is encoded: `UTF8NoBOM` (default) or `UTF8BOM`. |
+|`dryRun`|`bool`| If set, the external table definition isn't persisted. This option is useful for validating the external table definition, especially in conjunction with the `filesPreview` or `sampleUris` parameter. |
> [!NOTE]
> The external delta table is accessed during creation, to infer the partitioning information and, optionally, the schema. Make sure that the table definition is valid and that the storage is accessible.
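A sketch of a delta definition using these properties; the storage URI and SAS token are placeholders, and the schema is assumed to be omittable so it's inferred from the delta log:

```kusto
// Delta external table; dryRun validates the definition (including the
// inferred partitioning and schema) without persisting it.
.create external table MyDeltaTable
kind = delta
(
    h@"https://mystorageaccount.blob.core.windows.net/delta-logs;...mySasToken..."
)
with (
    docString = "Delta Lake table over exported logs",
    dryRun = true
)
```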
data-explorer/kusto/management/show-external-tables.md (+1 −1)
@@ -39,7 +39,7 @@ You must have at least Database User, Database Viewer, Database Monitor to run t
| TableType |`string`| Type of external table |
| Folder |`string`| Table's folder |
| DocString |`string`| String documenting the table |
-| Properties |`string`| Table's JSON serialized properties (specific to the type of table)|
+| Properties |`string`| Table's JSON serialized properties (specific to the type of table). For more information, see [Create and alter Azure Storage external tables](external-tables-azure-storage.md) or [Create and alter delta external tables on Azure Storage](external-tables-delta-lake.md).|
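A quick way to inspect that column, sketched under two assumptions not confirmed above: that `.show` command output can be piped to query operators, and that the output schema's leading column is named `TableName`:

```kusto
// List external tables and inspect the JSON-serialized Properties column.
.show external tables
| project TableName, TableType, Properties
```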
data-explorer/kusto/query/project-rename-operator.md (+7 −5)
@@ -3,7 +3,7 @@ title: project-rename operator
description: Learn how to use the project-rename operator to rename columns in the output table.
ms.reviewer: alexans
ms.topic: reference
-ms.date: 08/11/2024
+ms.date: 01/20/2025
---
# project-rename operator
@@ -29,7 +29,9 @@ Renames columns in the output table.
A table that has the columns in the same order as in an existing table, with columns renamed.

-## Examples
+## Example
+
+If you have a table with columns a, b, and c, and you want to rename a to new_a and b to new_b while keeping the same order, the query would look like this:
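A minimal sketch consistent with that description, using `datatable` to stand in for the table:

```kusto
// Rename a -> new_a and b -> new_b; column order (a, b, c) is preserved.
datatable(a: int, b: int, c: int) [1, 2, 3]
| project-rename new_a = a, new_b = b
```

The result has columns `new_a`, `new_b`, and `c`, in that order.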