# Thank you for contributing to Kusto documentation

Please add a brief comment outlining the purpose of this PR. Add links to any relevant references, such as DevOps work items.

## Make sure you've done the following:

1. **Acrolinx:** Make sure your Acrolinx score is **at least 80** (higher is better) with **0** spelling issues.
1. **Successful build:** Review the build status to make sure **all files are green** (Succeeded) and there are no errors, warnings, or suggestions.
1. **Preview the pages:** Click each **Preview URL** link and scan the entire page for formatting issues, particularly in the parts you edited.
1. **Check the Table of Contents:** If you are adding a new markdown file, make sure it is linked from the table of contents.
1. **Sign off:** Once the PR is finalized, add a comment with `#sign-off`. If you need to cancel the sign-off, add a comment with `#hold-off`.

**NOTE**: *Signing off means the document can be published at any time.*

## Next steps

- All PRs to this repository are reviewed and merged by a human. Automatic merge is disabled on this repository for PRs, even with the `qualifies-for-auto-merge` label.
- Once all feedback on the PR is addressed, the PR will be merged into the main branch.

[Learn more about how to contribute](https://review.learn.microsoft.com/en-us/help/platform/?branch=main).

> The examples in this article use publicly available tables in the [help cluster](https://dataexplorer.azure.com/clusters/help/), such as the `StormEvents` table in the *Samples* database.

::: moniker-end

:::moniker range="microsoft-fabric"

> The examples in this article use publicly available tables, such as the `StormEvents` table in the Weather analytics [sample data](/fabric/real-time-intelligence/sample-gallery).

|Name|Type|Required|Description|
|--|--|--|--|
|*DatabaseAliasName*|`string`|:heavy_check_mark:| An existing name or new database alias name. You can escape the name with brackets. For example, ["Name with spaces"]. |
|*QueryURI*|`string`|:heavy_check_mark:| The URI that can be used to run queries or management commands. |
|*DatabaseName*|`string`|:heavy_check_mark:| The name of the database to give an alias. |

:::moniker range="azure-data-explorer"
43
31
> [!NOTE]
44
-
> The mapped cluster-uri and the mapped database-name must appear inside double-quotes(") or single-quotes(').
32
+
>
33
+
> - To get your Query URI, in the Azure portal, go to your cluster's overview page, and then copy the URI.
34
+
> - The mapped Query and the mapped database-name must appear inside double-quotes(") or single-quotes(').
45
35
::: moniker-end
46
-
47
36
:::moniker range="microsoft-fabric"
48
37
> [!NOTE]
49
-
> The mapped Eventhouse-uri and the mapped database-name must appear inside double-quotes(") or single-quotes(').
38
+
>
39
+
> - To get your Query URI, see [Copy a KQL database URI](/fabric/real-time-intelligence/access-database-copy-uri#copy-uri).
40
+
> - The mapped Query and the mapped database-name must appear inside double-quotes(") or single-quotes(').
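
The alias command itself isn't reproduced in this excerpt. As a general illustration of the quoting convention the notes describe, here's a minimal sketch using the `cluster()` and `database()` functions against the public help cluster, where both the cluster URI and the database name appear as quoted string literals:

```kusto
// Illustration only: the query URI and the database name appear inside quotes.
cluster("https://help.kusto.windows.net").database("Samples").StormEvents
| count
```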

Binds a name to the operator's input tabular expression. This operator allows the query to reference the value of the tabular expression multiple times without breaking the query and binding a name through the [let statement](let-statement.md).

To optimize multiple uses of the `as` operator within a single query, see [Named expressions](named-expressions.md).

## Parameters

|Name|Type|Required|Description|
|--|--|--|--|
|*T*|`string`|:heavy_check_mark:| The tabular expression to rename.|
|*Name*|`string`|:heavy_check_mark:| The temporary name for the tabular expression.|
|*`hint.materialized`*|`bool`|| If *Materialized* is set to `true`, the value of the tabular expression output is wrapped by a [materialize()](materialize-function.md) function call. Otherwise, the value is recalculated on every reference.|

> [!NOTE]
>
> * The name given by `as` is used in the `withsource=` column of [union](union-operator.md), the `source_` column of [find](find-operator.md), and the `$table` column of [search](search-operator.md).
> * The tabular expression named using the operator in a [join](join-operator.md)'s outer tabular input (`$left`) can also be used in the join's tabular inner input (`$right`).
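
As a quick sketch of the `hint.materialized` parameter and of reusing the bound name as a join's inner input (this example isn't from the original article):

```kusto
// Name the generated table with a materialize hint so it isn't recalculated,
// then reuse the name as the join's inner ($right) input.
range x from 1 to 10 step 1
| as hint.materialized=true T
| where x % 2 == 0
| join kind=inner (T | where x > 5) on x
```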

## Examples

In the following two examples, the generated TableName column consists of 'T1' and 'T2'.

:::moniker range="azure-data-explorer"

> [!div class="nextstepaction"]
> <a href="https://dataexplorer.azure.com/clusters/help/databases/Samples?query=H4sIAAAAAAAAAytKzEtPVahQSCvKz1UwVCjJVzBVKC5JLQCyuWoUEosVQsCM0rzM%2FDyF8sySjOL80qLkVNuQxKScVL%2FE3FQFjSLcRkBMMNIEALyiibJmAAAA" target="_blank">Run the query</a>

::: moniker-end

```kusto
range x from 1 to 5 step 1
| as T1
| union withsource=TableName (range x from 1 to 5 step 1 | as T2)
```

Alternatively, you can write the same example as follows:

:::moniker range="azure-data-explorer"

> [!div class="nextstepaction"]
> <a href="https://dataexplorer.azure.com/clusters/help/databases/Samples?query=H4sIAAAAAAAAAyvNy8zPUyjPLMkozi8tSk61DUlMykn1S8xNVdAoSsxLT1WoUEgrys9VMFQoyVcwVSguSS0AsmsUEosVQgw1dYhQZaQJAJuYIo9lAAAA" target="_blank">Run the query</a>

::: moniker-end

```kusto
union withsource=TableName (range x from 1 to 5 step 1 | as T1), (range x from 1 to 5 step 1 | as T2)
```

**Output**

| TableName | x |
|--|---|
| T1 | 1 |
| T1 | 2 |
| T1 | 3 |
| T1 | 4 |
| T1 | 5 |
| T2 | 1 |
| T2 | 2 |
| T2 | 3 |
| T2 | 4 |
| T2 | 5 |

In the following example, the 'left side' of the join is:

`MyLogTable` filtered by `type == "Event"` and `Name == "Start"`

and the 'right side' of the join is:

`MyLogTable` filtered by `type == "Event"` and `Name == "Stop"`
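
The query for this example isn't included in the excerpt above. A minimal sketch of what it could look like follows, assuming `MyLogTable` has `type`, `Name`, and `ActivityId` columns (`ActivityId` as the join key is an assumption):

```kusto
MyLogTable
| where type == "Event"
| as T                       // bind the filtered table so it can be reused below
| where Name == "Start"      // 'left side' of the join
| join (
    T
    | where Name == "Stop"   // 'right side' of the join reuses the bound name
) on ActivityId
```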

When you use the `count` operator with a table name, such as `StormEvents`, it returns the total number of records in that table.

:::moniker range="azure-data-explorer"

> [!div class="nextstepaction"]
> <a href="https://dataexplorer.azure.com/clusters/help/databases/Samples?query=H4sIAAAAAAAAAwsuyS/KdS1LzSspVqhRSM4vzSsBALU2eHsTAAAA" target="_blank">Run the query</a>

::: moniker-end

```kusto
StormEvents | count
```

**Output**

| Count |
|-------|
| 59066 |
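
For comparison, a short sketch (not part of the original article) that uses the `count()` aggregation function referenced under Related content to count rows per group rather than a single total, assuming the `State` column of `StormEvents`:

```kusto
// Count storm events per state and show the five largest groups.
StormEvents
| summarize EventCount = count() by State
| top 5 by EventCount
```
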
## Related content
For information about the count() aggregation function, see [count() (aggregation function)](count-aggregation-function.md).

`data-explorer/kusto/query/datatable-operator.md`

title: datatable operator
description: Learn how to use the datatable operator to define a table with given schema and data.
ms.reviewer: alexans
ms.topic: reference
ms.date: 01/21/2025
---

# datatable operator

## Example

This example creates a table with *Date*, *Event*, and *MoreData* columns, filters rows with Event descriptions longer than 4 characters, and adds a new column *key2* to each row from the MoreData dynamic object.

:::moniker range="azure-data-explorer"

> [!div class="nextstepaction"]
> <a href="https://dataexplorer.azure.com/clusters/help/databases/Samples?query=H4sIAAAAAAAAA3XRS4vCMBAA4Lu/YsiphbiY1upa0IPYo8velz2kZtRgTCCNL1z/uxNZd6HYJAQyj++QUTLQrg0mCxmwVHQFvUcO1RFtKJvgtd1wWDqPVCBLdbFyr1cpfPWA1rM+ERMx6A9GfSFSDmzuvGUcfouTK9vhRbCSHaU5oKBMDGTPQMZuKW9zOXGCTuQqG9A3UK2cQfiQ1ISdet7Wh6/0Iv/XPw+10c0WFay1bwLUzu06+aLNj17xk3H8i6yI/EKj6uTGbe79wX33fuC0RY9AAzBok8c0UpjBkDJ4DmgVxDaY/o3mLb7vp72pd88BAAA=" target="_blank">Run the query</a>

`data-explorer/kusto/query/externaldata-operator.md`

title: externaldata operator
description: Learn how to use the externaldata operator to return a data table of the given schema whose data was parsed from the specified storage artifact.

The `externaldata` operator returns a table whose schema is defined in the query itself, and whose data is read from an external storage artifact, such as a blob in Azure Blob Storage or a file in Azure Data Lake Storage.

> [!NOTE]
> The `externaldata` operator supports:
>
> * A specific set of storage services, as listed under [Storage connection strings](../api/connection-strings/storage-connection-strings.md).
> * Shared Access Signature (SAS) key, Access key, and Microsoft Entra Token authentication methods. For more information, see [Storage authentication methods](../api/connection-strings/storage-connection-strings.md#storage-authentication-methods).

| Property | Type | Description |
|--|--|--|
| format |`string`| The data format. If unspecified, an attempt is made to detect the data format from file extension. The default is `CSV`. All [ingestion data formats](../ingestion-supported-formats.md) are supported. |
| ignoreFirstRecord |`bool`| If set to `true`, the first record in every file is ignored. This property is useful when querying CSV files with headers. |
| ingestionMapping |`string`| Indicates how to map data from the source file to the actual columns in the operator result set. See [data mappings](../management/mappings.md). |
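
As a sketch of how these properties are passed, the optional `with(...)` clause follows the list of storage connection strings; the storage URL below is a placeholder, not a real artifact:

```kusto
// Placeholder URL: substitute a real blob URI plus a SAS token or other credential.
externaldata(UserID:string) [
    @"https://contosostorage.blob.core.windows.net/container/users.csv"
]
with(format="csv", ignoreFirstRecord=true)
```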

## Examples

The examples query data in an external storage file.

### Fetch a list of user IDs stored in Azure Blob Storage

The following example shows how to find all records in a table whose `UserID` column falls into a known set of IDs, held (one per line) in an external storage file. Since the data format isn't specified, the detected data format is `TXT`.
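
The query itself isn't included in this excerpt; here's a minimal sketch of the pattern, assuming a hypothetical `Users` table and a placeholder blob URL (in practice the URL would carry a SAS token or other credential):

```kusto
Users
| where UserID in ((
    externaldata(UserID:string) [
        // Placeholder URL: the file holds one user ID per line and is detected as TXT.
        @"https://contosostorage.blob.core.windows.net/container/userids.txt"
    ]
))
```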