Commit 86cd802

Merge pull request #2543 from MicrosoftDocs/main638747859075178455sync_temp
For protected branch, push strategy should use PR and merge to target branch method to work around git push error
2 parents: cc88443 + 4bef368 · commit 86cd802

13 files changed (+110 −65 lines)

data-explorer/kusto/management/data-export/export-data-to-storage.md

Lines changed: 20 additions & 20 deletions
Lines changed: 20 additions & 20 deletions

```diff
@@ -23,32 +23,32 @@ You must have at least [Database Viewer](../../access-control/role-based-access-

 ## Parameters

-| Name | Type | Required | Description |
-|--|--|--|--|
-| `async` | `string` | | If specified, the command runs in asynchronous mode. See [asynchronous mode](#asynchronous-mode). |
-| `compressed` | `string` | | If specified, the output storage artifacts are compressed as `.gz` files. See the `compressionType` [supported property](#supported-properties) for compressing Parquet files as snappy. |
-| *OutputDataFormat* | `string` | :heavy_check_mark: | Indicates the data format of the storage artifacts written by the command. Supported values are: `csv`, `tsv`, `json`, and `parquet`. |
-| *StorageConnectionString* | `string` | | One or more [storage connection strings](../../api/connection-strings/storage-connection-strings.md) that indicate which storage to write the data to. More than one storage connection string might be specified for scalable writes. Each such connection string must indicate the credentials to use when writing to storage. For example, when writing to Azure Blob Storage, the credentials can be the storage account key, or a shared access key (SAS) with the permissions to read, write, and list blobs. |
-| *PropertyName*, *PropertyValue* | `string` | | A comma-separated list of key-value property pairs. See [supported properties](#supported-properties). |
+| Name | Type | Required | Description |
+|-- |-- |-- |-- |
+| *async* | `string` | | If specified, the command runs in asynchronous mode. See [asynchronous mode](#asynchronous-mode). |
+| *compressed* | `bool` | | If specified, the output storage artifacts are compressed in the format specified by the `compressionType` [supported property](#supported-properties). |
+| *OutputDataFormat* | `string` | :heavy_check_mark: | The data format of the storage artifacts written by the command. Supported values are: `csv`, `tsv`, `json`, and `parquet`. |
+| *StorageConnectionString* | `string` | | One or more [storage connection strings](../../api/connection-strings/storage-connection-strings.md) that specify which storage to write the data to. More than one storage connection string might be specified for scalable writes. Each such connection string must specify the credentials to use when writing to storage. For example, when writing to Azure Blob Storage, the credentials can be the storage account key, or a shared access key (SAS) with the permissions to read, write, and list blobs. |
+| *PropertyName*, *PropertyValue* | `string` | | A comma-separated list of key-value property pairs. See [supported properties](#supported-properties). |

 > [!NOTE]
 > We highly recommend exporting data to storage that is colocated in the same region as the database itself. This includes data that is exported so it can be transferred to another cloud service in other regions. Writes should be done locally, while reads can happen remotely.

 ## Supported properties

-| Property | Type | Description |
-|--|--|--|
-| `includeHeaders` | `string` | For `csv`/`tsv` output, controls the generation of column headers. Can be one of `none` (default; no header lines emitted), `all` (emit a header line into every storage artifact), or `firstFile` (emit a header line into the first storage artifact only). |
-| `fileExtension` | `string` | Indicates the "extension" part of the storage artifact (for example, `.csv` or `.tsv`). If compression is used, `.gz` is appended as well. |
-| `namePrefix` | `string` | Indicates a prefix to add to each generated storage artifact name. A random prefix is used if left unspecified. |
-| `encoding` | `string` | Indicates how to encode the text: `UTF8NoBOM` (default) or `UTF8BOM`. |
-| `compressionType` | `string` | Indicates the type of compression to use. Possible values are `gzip` or `snappy`. Default is `gzip`. `snappy` can (optionally) be used for `parquet` format. |
-| `distribution` | `string` | Distribution hint (`single`, `per_node`, `per_shard`). If value equals `single`, a single thread writes to storage. Otherwise, export writes from all nodes executing the query in parallel. See [evaluate plugin operator](../../query/evaluate-operator.md). Defaults to `per_shard`. |
-| `persistDetails` | `bool` | Indicates that the command should persist its results (see `async` flag). Defaults to `true` in async runs, but can be turned off if the caller doesn't require the results. Defaults to `false` in synchronous executions, but can be turned on in those as well. |
-| `sizeLimit` | `long` | The size limit in bytes of a single storage artifact written before compression. Valid range: 100 MB (default) to 4 GB. |
-| `parquetRowGroupSize` | `int` | Relevant only when data format is Parquet. Controls the row group size in the exported files. Default row group size is 100,000 records. |
-| `distributed` | `bool` | Disable or enable distributed export. Setting to false is equivalent to `single` distribution hint. Default is true. |
-| `parquetDatetimePrecision` | `string` | Specifies the precision to use when exporting `datetime` values to Parquet. Possible values are millisecond and microsecond. Default is millisecond. |
+| Property | Type | Description |
+|-- |-- |-- |
+| `includeHeaders` | `string` | For `csv`/`tsv` output, controls the generation of column headers. Can be one of `none` (default; no header lines emitted), `all` (emit a header line into every storage artifact), or `firstFile` (emit a header line into the first storage artifact only). |
+| `fileExtension` | `string` | The "extension" part of the storage artifact (for example, `.csv` or `.tsv`). If compression is used, `.gz` is appended as well. |
+| `namePrefix` | `string` | The prefix to add to each generated storage artifact name. A random prefix is used if left unspecified. |
+| `encoding` | `string` | The encoding for text. Possible values include: `UTF8NoBOM` (default) or `UTF8BOM`. |
+| `compressionType` | `string` | The type of compression to use. For non-Parquet files, only `gzip` is allowed. For Parquet files, possible values include `gzip`, `snappy`, `lz4_raw`, `brotli`, and `zstd`. Default is `gzip`. |
+| `distribution` | `string` | Distribution hint (`single`, `per_node`, `per_shard`). If value equals `single`, a single thread writes to storage. Otherwise, export writes from all nodes executing the query in parallel. See [evaluate plugin operator](../../query/evaluate-operator.md). Defaults to `per_shard`. |
+| `persistDetails` | `bool` | If `true`, the command persists its results (see `async` flag). Defaults to `true` in async runs, but can be turned off if the caller doesn't require the results. Defaults to `false` in synchronous executions, but can be turned on in those as well. |
+| `sizeLimit` | `long` | The size limit in bytes of a single storage artifact written before compression. Valid range: 100 MB (default) to 4 GB. |
+| `parquetRowGroupSize` | `int` | Relevant only when data format is Parquet. Controls the row group size in the exported files. Default row group size is 100,000 records. |
+| `distributed` | `bool` | Disable or enable distributed export. Setting to false is equivalent to `single` distribution hint. Default is true. |
+| `parquetDatetimePrecision` | `string` | The precision to use when exporting `datetime` values to Parquet. Possible values are millisecond and microsecond. Default is millisecond. |

 ## Authentication and authorization
```
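For context, here's a minimal sketch of an `.export` command that exercises the Parquet compression options documented above. The storage URI, secret, and query are placeholders, not part of the commit:

```kusto
.export async compressed to parquet (
    h@"https://storageaccount.blob.core.windows.net/container;secretKey"
) with (
    compressionType="snappy",   // Parquet also accepts gzip (default), lz4_raw, brotli, zstd
    namePrefix="export",
    sizeLimit=1073741824        // 1 GB per artifact, measured before compression
)
<| StormEvents
| where StartTime > ago(7d)
```

Because `async` is specified, the command returns an operation ID to poll rather than blocking until the export completes.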
data-explorer/kusto/management/external-tables-azure-storage.md

Lines changed: 13 additions & 12 deletions
```diff
@@ -118,19 +118,20 @@ external_table("ExternalTable")

 ## Optional properties

-| Property | Type | Description |
-|------------------|----------|-------------------------------------------------------------------------------------|
-| `folder` | `string` | Table's folder |
-| `docString` | `string` | String documenting the table |
-| `compressed` | `bool` | If set, indicates whether the files are compressed as `.gz` files (used in [export scenario](data-export/export-data-to-an-external-table.md) only) |
-| `includeHeaders` | `string` | For delimited text formats (CSV, TSV, ...), indicates whether files contain a header. Possible values are: `All` (all files contain a header), `FirstFile` (first file in a folder contains a header), `None` (no files contain a header). |
-| `namePrefix` | `string` | If set, indicates the prefix of the files. On write operations, all files will be written with this prefix. On read operations, only files with this prefix are read. |
-| `fileExtension` | `string` | If set, indicates file extensions of the files. On write, files names will end with this suffix. On read, only files with this file extension will be read. |
-| `encoding` | `string` | Indicates how the text is encoded: `UTF8NoBOM` (default) or `UTF8BOM`. |
-| `sampleUris` | `bool` | If set, the command result provides several examples of simulated external data files URI as they're expected by the external table definition. This option helps validate whether the *Partitions* and *PathFormat* parameters are defined properly. |
-| `filesPreview` | `bool` | If set, one of the command result tables contains a preview of [.show external table artifacts](show-external-table-artifacts.md) command. Like `sampleUri`, the option helps validate the *Partitions* and *PathFormat* parameters of external table definition. |
+| Property | Type | Description |
+|------------------ |----------|-------------------------------------------------------------------------------------|
+| `folder` | `string` | Table's folder |
+| `docString` | `string` | String documenting the table |
+| `compressed` | `bool` | Only relevant for the [export scenario](data-export/export-data-to-an-external-table.md).<br>If set to true, the data is exported in the format specified by the `compressionType` property. For the read path, compression is automatically detected. |
+| `compressionType` | `string` | Only relevant for the [export scenario](data-export/export-data-to-an-external-table.md).<br>The compression type of exported files. For non-Parquet files, only `gzip` is allowed. For Parquet files, possible values include `gzip`, `snappy`, `lz4_raw`, `brotli`, and `zstd`. Default is `gzip`. For the read path, the compression type is automatically detected. |
+| `includeHeaders` | `string` | For delimited text formats (CSV, TSV, ...), specifies whether files contain a header. Possible values are: `All` (all files contain a header), `FirstFile` (first file in a folder contains a header), `None` (no files contain a header). |
+| `namePrefix` | `string` | If set, specifies the prefix of the files. On write operations, all files are written with this prefix. On read operations, only files with this prefix are read. |
+| `fileExtension` | `string` | If set, specifies the extension of the files. On write, file names end with this suffix. On read, only files with this file extension are read. |
+| `encoding` | `string` | Specifies how the text is encoded: `UTF8NoBOM` (default) or `UTF8BOM`. |
+| `sampleUris` | `bool` | If set, the command result provides several examples of simulated external data file URIs as they're expected by the external table definition. This option helps validate whether the *Partitions* and *PathFormat* parameters are defined properly. |
+| `filesPreview` | `bool` | If set, one of the command result tables contains a preview of the [.show external table artifacts](show-external-table-artifacts.md) command. Like `sampleUris`, the option helps validate the *Partitions* and *PathFormat* parameters of the external table definition. |
 | `validateNotEmpty` | `bool` | If set, the connection strings are validated for having content in them. The command will fail if the specified URI location doesn't exist, or if there are insufficient permissions to access it. |
-| `dryRun` | `bool` | If set, the external table definition isn't persisted. This option is useful for validating the external table definition, especially in conjunction with the `filesPreview` or `sampleUris` parameter. |
+| `dryRun` | `bool` | If set, the external table definition isn't persisted. This option is useful for validating the external table definition, especially in conjunction with the `filesPreview` or `sampleUris` parameter. |

 > [!NOTE]
 > The external table isn't accessed during creation, only during query and export. Use the `validateNotEmpty` optional property during creation to make sure the table definition is valid and the storage is accessible.
```
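As a rough sketch, an external table definition that opts into the new export-compression properties might look like the following; the table name, schema, and connection string are hypothetical:

```kusto
.create external table ExternalParquetTable (Timestamp:datetime, EventId:long, Payload:string)
kind=storage
dataformat=parquet
(
    h@"https://storageaccount.blob.core.windows.net/container;secretKey"
)
with (
    compressed=true,            // applies to the export path only; reads auto-detect compression
    compressionType="snappy"    // gzip is the default; lz4_raw, brotli, zstd also apply to Parquet
)
```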

data-explorer/kusto/management/external-tables-delta-lake.md

Lines changed: 10 additions & 8 deletions
```diff
@@ -54,14 +54,16 @@ The supported authentication methods are the same as those supported by [Azure S

 ## Optional properties

-| Property | Type | Description |
-|------------------|----------|------------------------------------------------------------------------------------|
-| `folder` | `string` | Table's folder |
-| `docString` | `string` | String documenting the table |
-| `namePrefix` | `string` | If set, indicates the prefix of the files. On write operations, all files will be written with this prefix. On read operations, only files with this prefix are read. |
-| `fileExtension` | `string` | If set, indicates file extensions of the files. On write, files names will end with this suffix. On read, only files with this file extension will be read. |
-| `encoding` | `string` | Indicates how the text is encoded: `UTF8NoBOM` (default) or `UTF8BOM`. |
-| `dryRun` | `bool` | If set, the external table definition isn't persisted. This option is useful for validating the external table definition, especially in conjunction with the `filesPreview` or `sampleUris` parameter. |
+| Property | Type | Description |
+|------------------ |---------- |------------------------------------------------------------------------------------|
+| `folder` | `string` | Table's folder |
+| `docString` | `string` | String documenting the table |
+| `compressed` | `bool` | Only relevant for the [export scenario](data-export/export-data-to-an-external-table.md).<br>If set to true, the data is exported in the format specified by the `compressionType` property. For the read path, compression is automatically detected. |
+| `compressionType` | `string` | Only relevant for the [export scenario](data-export/export-data-to-an-external-table.md).<br>The compression type of exported files. For non-Parquet files, only `gzip` is allowed. For Parquet files, possible values include `gzip`, `snappy`, `lz4_raw`, `brotli`, and `zstd`. Default is `gzip`. For the read path, the compression type is automatically detected. |
+| `namePrefix` | `string` | If set, specifies the prefix of the files. On write operations, all files are written with this prefix. On read operations, only files with this prefix are read. |
+| `fileExtension` | `string` | If set, specifies the extension of the files. On write, file names end with this suffix. On read, only files with this file extension are read. |
+| `encoding` | `string` | Specifies how the text is encoded: `UTF8NoBOM` (default) or `UTF8BOM`. |
+| `dryRun` | `bool` | If set, the external table definition isn't persisted. This option is useful for validating the external table definition, especially in conjunction with the `filesPreview` or `sampleUris` parameter. |

 > [!NOTE]
 > The external delta table is accessed during creation, to infer the partitioning information and, optionally, the schema. Make sure that the table definition is valid and that the storage is accessible.
```
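For orientation, a minimal delta external table definition might look like this sketch; the URI is a placeholder, and per the note above the schema and partitioning are inferred from the delta log at creation:

```kusto
.create external table MyDeltaTable kind=delta
(
    h@"abfss://container@storageaccount.dfs.core.windows.net/path;secretKey"
)
with (docString="Schema and partitions are inferred from the delta log")
```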

data-explorer/kusto/management/show-external-tables.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -39,7 +39,7 @@ You must have at least Database User, Database Viewer, Database Monitor to run t
 | TableType | `string` | Type of external table |
 | Folder | `string` | Table's folder |
 | DocString | `string` | String documenting the table |
-| Properties | `string` | Table's JSON serialized properties (specific to the type of table) |
+| Properties | `string` | Table's JSON serialized properties (specific to the type of table). For more information, see [Create and alter Azure Storage external tables](external-tables-azure-storage.md) or [Create and alter delta external tables on Azure Storage](external-tables-delta-lake.md). |

 ## Example
```
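To see what lands in that `Properties` column, the command takes no arguments:

```kusto
.show external tables
```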
data-explorer/kusto/query/let-statement.md

Lines changed: 2 additions & 0 deletions
```diff
@@ -65,6 +65,8 @@ To optimize multiple uses of the `let` statement within a single query, see [Opt

 ## Examples

+The examples in this section show how to use the syntax to help you get started.
+
 [!INCLUDE [help-cluster](../includes/help-cluster-note.md)]

 The query examples show the syntax and example usage of the operator, statement, or function.
```
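As a reminder of what those examples cover, a `let` statement binds a name to a scalar or tabular expression for reuse later in the same query. A minimal sketch against the help cluster's `StormEvents` table:

```kusto
let cutoff = datetime(2007-06-01);   // scalar value bound to a name
let florida = StormEvents            // tabular expression bound to a name
    | where State == "FLORIDA";
florida
| where StartTime > cutoff
| count
```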

data-explorer/kusto/query/project-operator.md

Lines changed: 10 additions & 4 deletions
````diff
@@ -47,6 +47,10 @@ A table with columns that were named as arguments. Contains same number of rows

 ## Examples

+The examples in this section show how to use the syntax to help you get started.
+
+[!INCLUDE [help-cluster](../includes/help-cluster-note.md)]
+
 ### Only show specific columns

 Only show the `EventId`, `State`, `EventType` of the `StormEvents` table.
@@ -61,7 +65,9 @@ StormEvents
 | project EventId, State, EventType
 ```

-The following results table shows only the top 10 results.
+**Output**
+
+The table shows the first 10 results.

 |EventId|State|EventType|
 |--|--|--|
@@ -92,7 +98,9 @@ StormEvents
 | where TotalInjuries > 5
 ```

-The following table shows only the first 10 results.
+**Output**
+
+The table shows the first 10 results.

 |StartLocation| TotalInjuries|
 |--|--|
@@ -108,9 +116,7 @@ The following table shows only the first 10 results.
 |COLLIERVILLE| 6|
 |...|...|

-::: moniker range="microsoft-fabric || azure-data-explorer || azure-monitor || microsoft-sentinel"
 ## Related content

 * [`extend`](extend-operator.md)
 * [series_stats](series-stats-function.md)
-::: moniker-end
````

data-explorer/kusto/query/project-rename-operator.md

Lines changed: 7 additions & 5 deletions
```diff
@@ -3,7 +3,7 @@ title: project-rename operator
 description: Learn how to use the project-rename operator to rename columns in the output table.
 ms.reviewer: alexans
 ms.topic: reference
-ms.date: 08/11/2024
+ms.date: 01/20/2025
 ---

 # project-rename operator

@@ -29,7 +29,9 @@ Renames columns in the output table.

 A table that has the columns in the same order as in an existing table, with columns renamed.

-## Examples
+## Example
+
+If you have a table with columns `a`, `b`, and `c`, and you want to rename `a` to `new_a` and `b` to `new_b` while keeping the same order, the query would look like this:

 :::moniker range="azure-data-explorer"
 > [!div class="nextstepaction"]
@@ -43,6 +45,6 @@ print a='a', b='b', c='c'

 **Output**

-|new_a|new_b|c|
-|---|---|---|
-|a|b|c|
+| new_a | new_b | c |
+|--|--|--|
+| a | b | c |
```
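The hunk elides the query between the `print` line and its output; judging from the surrounding context, the complete example presumably reads:

```kusto
print a='a', b='b', c='c'
| project-rename new_a=a, new_b=b
```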
