diff --git a/data-explorer/kusto/management/callout-policy.md b/data-explorer/kusto/management/callout-policy.md
index e4c417ea38..3d51c1dc7d 100644
--- a/data-explorer/kusto/management/callout-policy.md
+++ b/data-explorer/kusto/management/callout-policy.md
@@ -38,7 +38,7 @@ Callout policies are managed at cluster-level and are classified into the follow
| sandbox_artifacts | Controls sandboxed plugins ([python](../query/python-plugin.md) and [R](../query/r-plugin.md)). |
| external_data | Controls access to external data through [external tables](../query/schema-entities/external-tables.md) or [externaldata](../query/externaldata-operator.md) operator. |
| webapi | Controls access to http endpoints. |
-| ai_embed_text | Controls the [ai_embed_text plugin)](../query/ai-embed-text-plugin.md). |
+| azure_openai | Controls calls to Azure OpenAI plugins, such as the [ai_embed_text plugin](../query/ai-embed-text-plugin.md) for embeddings. |
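+
+For example, a hedged sketch of adding a callout policy for this type at the cluster level (the URI regex here is illustrative, not a recommended value):
+
+```kusto
+// Allow azure_openai callouts to a specific Azure OpenAI endpoint (illustrative regex).
+.alter-merge cluster policy callout
+@'[{"CalloutType": "azure_openai", "CalloutUriRegex": "my-openai-resource\\.openai\\.azure\\.com", "CanCall": true}]'
+```
+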
## Predefined callout policies
diff --git a/data-explorer/kusto/query/parse-kv-operator.md b/data-explorer/kusto/query/parse-kv-operator.md
index 28a6e544a0..cb13e0dc7b 100644
--- a/data-explorer/kusto/query/parse-kv-operator.md
+++ b/data-explorer/kusto/query/parse-kv-operator.md
@@ -3,7 +3,7 @@ title: parse-kv operator
description: Learn how to use the parse-kv operator to represent structured information extracted from a string expression in a key/value form.
ms.reviewer: alexans
ms.topic: reference
-ms.date: 08/11/2024
+ms.date: 02/06/2025
---
# parse-kv operator
@@ -14,13 +14,13 @@ Extracts structured information from a string expression and represents the info
The following extraction modes are supported:
-* [**Specified delimeter**](#specified-delimeter): Extraction based on specified delimiters that dictate how keys/values and pairs are separated from each other.
-* [**Non-specified delimeter**](#nonspecified-delimiter): Extraction with no need to specify delimiters. Any nonalphanumeric character is considered a delimiter.
+* [**Specified delimiter**](#specified-delimiter): Extraction based on specified delimiters that dictate how keys/values and pairs are separated from each other.
+* [**Non-specified delimiter**](#nonspecified-delimiter): Extraction with no need to specify delimiters. Any nonalphanumeric character is considered a delimiter.
* [**Regex**](#regex): Extraction based on [regular expressions](regex.md).
## Syntax
-### Specified delimeter
+### Specified delimiter
*T* `|` `parse-kv` *Expression* `as` `(` *KeysList* `)` `with` `(` `pair_delimiter` `=` *PairDelimiter* `,` `kv_delimiter` `=` *KvDelimiter* [`,` `quote` `=` *QuoteChars* ... [`,` `escape` `=` *EscapeChar* ...]] [`,` `greedy` `=` `true`] `)`
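+
+For example, a minimal sketch of this form (the input string and key names are illustrative):
+
+```kusto
+// Split pairs on ',' and split keys from values on ':'.
+print str="ThreadId:458745723, Machine:Node001, Level:Info"
+| parse-kv str as (ThreadId: long, Machine: string) with (pair_delimiter=',', kv_delimiter=':')
+```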
@@ -52,16 +52,17 @@ The original input tabular expression *T*, extended with columns per specified k
> [!NOTE]
>
-> * If a key doesn't appear in a record, the corresponding column value will either be `null` or an empty string, depending on the column type.
+> * If a key doesn't appear in a record, the corresponding column value is either `null` or an empty string, depending on the column type.
> * Only keys that are listed in the operator are extracted.
> * The first appearance of a key is extracted, and subsequent values are ignored.
-> * When extracting keys and values, leading and trailing white spaces are ignored.
+> * When you extract keys and values, leading and trailing white spaces are ignored.
## Examples
+The examples in this section show how to use the syntax to help you get started.
### Extraction with well-defined delimiters
-In the following example, keys and values are separated by well defined delimiters. These delimeters are comma and colon characters.
+In this query, keys and values are separated by well-defined delimiters. These delimiters are comma and colon characters.
:::moniker range="azure-data-explorer"
> [!div class="nextstepaction"]
@@ -101,7 +102,7 @@ print str='src=10.1.1.123 dst=10.1.1.124 bytes=125 failure="connection aborted"
|--|--|--|--|--|
|2021-01-01 10:00:54.0000000| 10.1.1.123| 10.1.1.124| 125| connection aborted|
-The following example uses different opening and closing quotes:
+This query uses different opening and closing quotes:
:::moniker range="azure-data-explorer"
> [!div class="nextstepaction"]
@@ -221,7 +222,7 @@ print str="2021-01-01T10:00:34 [INFO] ThreadId:458745723, Machine:Node001, Text:
### Extraction using regex
-When no delimiters define text structure well enough, regular expression-based extraction can be useful.
+When no delimiters define the text structure well enough, regular expression-based extraction can be useful.
:::moniker range="azure-data-explorer"
> [!div class="nextstepaction"]
diff --git a/data-explorer/kusto/query/parse-operator.md b/data-explorer/kusto/query/parse-operator.md
index efade40e5f..2a89a34523 100644
--- a/data-explorer/kusto/query/parse-operator.md
+++ b/data-explorer/kusto/query/parse-operator.md
@@ -3,7 +3,7 @@ title: parse operator
description: Learn how to use the parse operator to parse the value of a string expression into one or more calculated columns.
ms.reviewer: alexans
ms.topic: reference
-ms.date: 08/11/2024
+ms.date: 01/22/2025
monikerRange: "microsoft-fabric || azure-data-explorer || azure-monitor || microsoft-sentinel "
---
# parse operator
@@ -37,7 +37,7 @@ Evaluates a string expression and parses its value into one or more calculated c
> * If the parsed *expression* isn't of type `string`, it will be converted to type `string`.
> * Use [`project`](project-operator.md) if you also want to drop or rename some columns.
-### Supported kind values
+### Supported `kind` values
|Text|Description|
|--|--|
@@ -67,9 +67,14 @@ The input table extended according to the list of columns that are provided to t
## Examples
+The examples in this section show how to use the syntax to help you get started.
+
+[!INCLUDE [help-cluster](../includes/help-cluster-note.md)]
+
The `parse` operator provides a streamlined way to `extend` a table by using multiple `extract` applications on the same `string` expression. This result is useful, when the table has a `string` column that contains several values that you want to break into individual columns. For example, a column that's produced by a developer trace ("`printf`"/"`Console.WriteLine`") statement.
### Parse and extend results
+
In the following example, the column `EventText` of table `Traces` contains
strings of the form `Event: NotifySliceRelease (resourceName={0}, totalSlices={1}, sliceNumber={2}, lockTime={3}, releaseTime={4}, previousLockTime={5})`.
The operation extends the table with six columns: `resourceName`, `totalSlices`, `sliceNumber`, `lockTime`, `releaseTime`, and `previousLockTime`.
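+
+An abbreviated sketch of the kind of parse expression this describes (only two of the six columns are shown here):
+
+```kusto
+// Extract two of the values embedded in EventText; the remaining columns follow the same pattern.
+Traces
+| parse EventText with * "resourceName=" resourceName ", totalSlices=" totalSlices: long *
+```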
diff --git a/data-explorer/kusto/query/parse-where-operator.md b/data-explorer/kusto/query/parse-where-operator.md
index dce21ca740..1ca66e3ed3 100644
--- a/data-explorer/kusto/query/parse-where-operator.md
+++ b/data-explorer/kusto/query/parse-where-operator.md
@@ -3,7 +3,7 @@ title: parse-where operator
description: Learn how to use the parse-where operator to parse the value of a string expression into one or more calculated columns.
ms.reviewer: alexans
ms.topic: reference
-ms.date: 08/11/2024
+ms.date: 01/20/2025
---
# parse-where operator
@@ -70,6 +70,10 @@ The input table, which is extended according to the list of columns that are pro
## Examples
+The examples in this section show how to use the syntax to help you get started.
+
+[!INCLUDE [help-cluster](../includes/help-cluster-note.md)]
+
The `parse-where` operator provides a streamlined way to `extend` a table by using multiple `extract` applications on the same `string` expression. This is most useful when the table has a `string` column that contains several values that you want to break into individual columns. For example, you can break up a column that was produced by a developer trace ("`printf`"/"`Console.WriteLine`") statement.
### Using `parse`
diff --git a/data-explorer/kusto/query/partition-operator.md b/data-explorer/kusto/query/partition-operator.md
index dc4b38805d..9e88f8006a 100644
--- a/data-explorer/kusto/query/partition-operator.md
+++ b/data-explorer/kusto/query/partition-operator.md
@@ -3,7 +3,7 @@ title: partition operator
description: Learn how to use the partition operator to partition the records of the input table into multiple subtables.
ms.reviewer: alexans
ms.topic: reference
-ms.date: 08/11/2024
+ms.date: 01/22/2025
---
# partition operator
@@ -11,7 +11,7 @@ ms.date: 08/11/2024
The partition operator partitions the records of its input table into multiple subtables according to values in a key column. The operator runs a subquery on each subtable, and produces a single output table that is the union of the results of all subqueries.
-This operator is useful when you need to perform a subquery only on a subset of rows that belongs to the same partition key, and not query the whole dataset. These subqueries could include aggregate functions, window functions, top *N* and others.
+The partition operator is useful when you need to perform a subquery only on a subset of rows that belong to the same partition key, rather than querying the whole dataset. These subqueries can include aggregate functions, window functions, top *N*, and others.
The partition operator supports several strategies of subquery operation:
@@ -36,16 +36,16 @@ The partition operator supports several strategies of subquery operation:
| *Column*| `string` | :heavy_check_mark: | The name of a column in *T* whose values determine how to partition the input tabular source.|
| *TransformationSubQuery*| `string` | :heavy_check_mark: | A tabular transformation expression. The source is implicitly the subtables produced by partitioning the records of *T*. Each subtable is homogenous on the value of *Column*. The expression must provide only one tabular result and shouldn't have other types of statements, such as `let` statements.|
| *SubQueryWithSource*| `string` | :heavy_check_mark: | A tabular expression that includes its own tabular source, such as a table reference. This syntax is only supported with the [legacy strategy](#legacy-strategy). The subquery can only reference the key column, *Column*, from *T*. To reference the column, use the syntax `toscalar(`*Column*`)`. The expression must provide only one tabular result and shouldn't have other types of statements, such as `let` statements.|
-| *Hints*| `string` | | Zero or more space-separated parameters in the form of: *HintName* `=` *Value* that control the behavior of the operator. See the [supported hints](#supported-hints) per strategy type.
+| *Hints*| `string` | | Zero or more space-separated parameters in the form of: *HintName* `=` *Value* that control the behavior of the operator. See the [supported hints](#supported-hints) per strategy type.|
### Supported hints
|Hint name|Type|Strategy|Description|
|--|--|--|--|
|`hint.shufflekey`| `string` | [shuffle](#shuffle-strategy) | The partition key used to run the partition operator with the `shuffle` strategy. |
-|`hint.materialized`| `bool` | [legacy](#legacy-strategy) | If set to `true`, will materialize the source of the `partition` operator. The default value is `false`. |
+|`hint.materialized`| `bool` | [legacy](#legacy-strategy) | If set to `true`, materializes the source of the `partition` operator. The default value is `false`. |
|`hint.concurrency`| `int` | [legacy](#legacy-strategy) | Determines how many partitions to run in parallel. The default value is `16`.|
-|`hint.spread`| `int` | [legacy](#legacy-strategy) | Determines how to distribute the partitions among cluster nodes. The default value is `1`. For example, if there are *N* partitions and the spread hint is set to *P*, then the *N* partitions will be processed by *P* different cluster nodes equally in parallel/sequentially depending on the concurrency hint.|
+|`hint.spread`| `int` | [legacy](#legacy-strategy) | Determines how to distribute the partitions among cluster nodes. The default value is `1`. For example, if there are *N* partitions and the spread hint is set to *P*, then the *N* partitions are processed by *P* different cluster nodes equally, either in parallel or sequentially, depending on the concurrency hint.|
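+
+For example, a sketch of passing legacy-strategy hints (assuming the help cluster's `StormEvents` table; `EventType` is used as the key because the legacy strategy is limited to 64 distinct partition values):
+
+```kusto
+// Run up to 10 partition subqueries in parallel with the legacy strategy.
+StormEvents
+| partition hint.strategy=legacy hint.concurrency=10 by EventType
+  (
+  summarize Events = count() by EventType
+  )
+```
+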
## Returns
@@ -120,10 +120,14 @@ If the subquery is a tabular transformation without a tabular source, the source
To use this strategy, specify `hint.strategy=legacy` or omit any other strategy indication.
> [!NOTE]
-> An error will occur if the partition column, *Column*, contains more than 64 distinct values.
+> An error occurs if the partition column, *Column*, contains more than 64 distinct values.
## Examples
+The examples in this section show how to use the syntax to help you get started.
+
+[!INCLUDE [help-cluster](../includes/help-cluster-note.md)]
+
### Find top values
@@ -144,7 +148,7 @@ StormEvents
)
```
-**Output**
+**Output**
|EventType|State|Events|Injuries|
|---|---|---|---|
@@ -180,7 +184,7 @@ StormEvents
)
```
-**Output**
+**Output**
|EventType|TotalInjueries|
|---|---|
@@ -212,7 +216,7 @@ StormEvents
| count
```
-**Output**
+**Output**
|Count|
|---|
@@ -238,7 +242,7 @@ range x from 1 to 2 step 1
| count
```
-**Output**
+**Output**
|Count|
|---|
diff --git a/data-explorer/kusto/query/pattern-statement.md b/data-explorer/kusto/query/pattern-statement.md
index b50d176806..57aec2b373 100644
--- a/data-explorer/kusto/query/pattern-statement.md
+++ b/data-explorer/kusto/query/pattern-statement.md
@@ -56,13 +56,14 @@ For more information, see [Working with middle-tier applications](#work-with-mid
| *PathArgType* | `string` | | The type of the *PathArgType* argument. Possible values: `string` |
| *ArgValue* | `string` | :heavy_check_mark: | The *ArgName* and optional *PathName* tuple values to be mapped to an *expression*. |
| *PathValue* | `string` | | The value to map for *PathName*. |
-| *expression* | `string` | :heavy_check_mark: | A tabular or lambda expression that references a function returning tabular data. For example: `Logs | where Timestamp > ago(1h)` |
+| *expression* | `string` | :heavy_check_mark: | A tabular or lambda expression that references a function returning tabular data. For example: `Logs | where Timestamp > ago(1h)`|
## Examples
+The examples in this section show how to use the syntax to help you get started.
+
[!INCLUDE [help-cluster](../includes/help-cluster-note.md)]
-In these examples, a pattern is defined.
### Define a simple pattern
diff --git a/data-explorer/kusto/query/print-operator.md b/data-explorer/kusto/query/print-operator.md
index 1107e88c82..6c1b280ecc 100644
--- a/data-explorer/kusto/query/print-operator.md
+++ b/data-explorer/kusto/query/print-operator.md
@@ -1,9 +1,9 @@
---
-title: print operator
+title: print operator
description: Learn how to use the print operator to output a single row with one or more scalar expression results as columns.
ms.reviewer: alexans
ms.topic: reference
-ms.date: 11/20/2024
+ms.date: 01/20/2025
---
# print operator
@@ -30,6 +30,10 @@ A table with one or more columns and a single row. Each column returns the corre
## Examples
+The examples in this section show how to use the syntax to help you get started.
+
+[!INCLUDE [help-cluster](../includes/help-cluster-note.md)]
+
### Print sum and variable value
The following example outputs a row with two columns. One column contains the sum of a series of numbers and the other column contains the value of the variable, `x`.
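+
+A minimal sketch matching this description (the column names are illustrative):
+
+```kusto
+// One row, two columns: a computed sum and the value of x.
+let x = "Hello";
+print Sum = 1 + 2 + 3 + 4, Variable = x
+```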
diff --git a/data-explorer/kusto/query/query-parameters-statement.md b/data-explorer/kusto/query/query-parameters-statement.md
index 694f38fbc3..7f58a57b53 100644
--- a/data-explorer/kusto/query/query-parameters-statement.md
+++ b/data-explorer/kusto/query/query-parameters-statement.md
@@ -34,7 +34,7 @@ To reference query parameters, the query text, or functions it uses, must first
|Name|Type|Required|Description|
|--|--|--|--|
|*Name1*| `string` | :heavy_check_mark:|The name of a query parameter used in the query.|
-|*Type1*| `string` | :heavy_check_mark:|The corresponding type, such as `string` or `datetime`. The values provided by the user are encoded as strings. The appropriate parse method is applied to the query parameter to get a strongly-typed value.|
+|*Type1*| `string` | :heavy_check_mark:|The corresponding type, such as `string` or `datetime`. The values provided by the user are encoded as strings. The appropriate parse method is applied to the query parameter to get a strongly typed value.|
|*DefaultValue1*| `string` ||A default value for the parameter. This value must be a literal of the appropriate scalar type.|
> [!NOTE]
@@ -44,8 +44,14 @@ To reference query parameters, the query text, or functions it uses, must first
## Example
+The example in this section shows how to use the syntax to help you get started.
+
[!INCLUDE [help-cluster](../includes/help-cluster-note.md)]
+### Declare query parameters
+
+This query retrieves storm events from the *StormEvents* table where the total number of direct and indirect injuries exceeds a specified threshold (default is 90). It then projects the *EpisodeId*, *EventType*, and the total number of injuries for each of these events.
+
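+A sketch of such a query (the parameter name and column alias are illustrative):
+
+```kusto
+// injuryThreshold can be set by the calling application; it defaults to 90.
+declare query_parameters(injuryThreshold:long = 90);
+StormEvents
+| where InjuriesDirect + InjuriesIndirect > injuryThreshold
+| project EpisodeId, EventType, TotalInjuries = InjuriesDirect + InjuriesIndirect
+```
+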
:::moniker range="azure-data-explorer"
> [!div class="nextstepaction"]
> Run the query
diff --git a/data-explorer/kusto/query/range-operator.md b/data-explorer/kusto/query/range-operator.md
index 0f5418dcf4..4a8af54716 100644
--- a/data-explorer/kusto/query/range-operator.md
+++ b/data-explorer/kusto/query/range-operator.md
@@ -3,7 +3,7 @@ title: range operator
description: Learn how to use the range operator to generate a single-column table of values.
ms.reviewer: alexans
ms.topic: reference
-ms.date: 01/07/2025
+ms.date: 01/22/2025
---
# range operator
@@ -26,7 +26,7 @@ Generates a single-column table of values.
|--|--|--|--|
|*columnName*| `string` | :heavy_check_mark:| The name of the single column in the output table.|
|*start*|int, long, real, datetime, or timespan| :heavy_check_mark:| The smallest value in the output.|
-|*stop*|int, long, real, datetime, or timespan| :heavy_check_mark:| The highest value being generated in the output or a bound on the highest value if *step* steps over this value.|
+|*stop*|int, long, real, datetime, or timespan| :heavy_check_mark:| The highest value being generated in the output or a bound on the highest value if *step* steps over this value.|
|*step*|int, long, real, datetime, or timespan| :heavy_check_mark:| The difference between two consecutive values.|
> [!NOTE]
@@ -39,6 +39,10 @@ whose values are *start*, *start* `+` *step*, ... up to and until *stop*.
## Examples
+The examples in this section show how to use the syntax to help you get started.
+
+[!INCLUDE [help-cluster](../includes/help-cluster-note.md)]
+
### Range over the past seven days
The following example creates a table with entries for the current time stamp extended over the past seven days, once a day.
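+
+A sketch of such a query (the column name is illustrative):
+
+```kusto
+// One row per day, from seven days ago up to now.
+range LastWeek from ago(7d) to now() step 1d
+```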
@@ -114,13 +118,13 @@ let MyTimeline = range MyMonthHour from MyMonthStart to now() step StepBy
**Output**
-| MyMonthHour | MyMonthHourinUnixTime | DateOnly | TimeOnly |
-|--------------|------------------------|---------------|------------------------------|
-| 2023-02-01 | 00:00:00.0000000 | 1675209600 | 2023-02-01 00:00:00.0000000 |
-| 2023-02-01 | 04:32:02.4000000 | 1675225922.4 | 2023-02-01 00:00:00.0000000 |
-| 2023-02-01 | 09:04:04.8000000 | 1675242244.8 | 2023-02-01 00:00:00.0000000 |
-| 2023-02-01 | 13:36:07.2000000 | 1675258567.2 | 2023-02-01 00:00:00.0000000 |
-| ... | ... | ... | ... |
+| MyMonthHour | MyMonthHourinUnixTime | DateOnly | TimeOnly |
+|--|--|--|--|
+| 2023-02-01 | 00:00:00.0000000 | 1675209600 | 2023-02-01 00:00:00.0000000 |
+| 2023-02-01 | 04:32:02.4000000 | 1675225922.4 | 2023-02-01 00:00:00.0000000 |
+| 2023-02-01 | 09:04:04.8000000 | 1675242244.8 | 2023-02-01 00:00:00.0000000 |
+| 2023-02-01 | 13:36:07.2000000 | 1675258567.2 | 2023-02-01 00:00:00.0000000 |
+| ... | ... | ... | ... |
### Incremented steps
@@ -134,16 +138,19 @@ whose type is `long` and results in values from one to eight incremented by thre
```kusto
range Steps from 1 to 8 step 3
+```
+
+**Output**
| Steps |
-|-------|
-| 1 |
-| 4 |
-| 7 |
+|--|
+| 1 |
+| 4 |
+| 7 |
### Traces over a time range
-The following example shows how the `range` operator can be used to create a dimension table that is used to introduce zeros where the source data has no values. It takes timestamps from the last four hours and counts traces for each one minute interval. When there are no traces for a specific interval, the count is zero.
+The following example shows how the `range` operator can be used to create a dimension table that is used to introduce zeros where the source data has no values. It takes timestamps from the last four hours and counts traces for each one-minute interval. When there are no traces for a specific interval, the count is zero.
```kusto
range TIMESTAMP from ago(4h) to now() step 1m
diff --git a/data-explorer/kusto/query/reduce-operator.md b/data-explorer/kusto/query/reduce-operator.md
index b507ef2e09..bc72989891 100644
--- a/data-explorer/kusto/query/reduce-operator.md
+++ b/data-explorer/kusto/query/reduce-operator.md
@@ -3,7 +3,7 @@ title: reduce operator
description: Learn how to use the reduce operator to group a set of strings together based on value similarity.
ms.reviewer: alexans
ms.topic: reference
-ms.date: 08/11/2024
+ms.date: 02/04/2025
---
# reduce operator
@@ -44,8 +44,14 @@ For example, the result of `reduce by city` might include:
## Examples
+The examples in this section show how to use the syntax to help you get started.
+
+[!INCLUDE [help-cluster](../includes/help-cluster-note.md)]
+
### Small threshold value
+This query generates a range of numbers, creates a new column with concatenated strings and random integers, and then groups the rows by the new column, using `reduce` with a small threshold value.
+
:::moniker range="azure-data-explorer"
> [!div class="nextstepaction"]
> Run the query
@@ -65,6 +71,8 @@ range x from 1 to 1000 step 1
### Large threshold value
+This query generates a range of numbers, creates a new column with concatenated strings and random integers, and then groups the rows by the new column, using `reduce` with a large threshold value.
+
:::moniker range="azure-data-explorer"
> [!div class="nextstepaction"]
> Run the query
@@ -78,19 +86,21 @@ range x from 1 to 1000 step 1
**Output**
-|Pattern |Count|Representative |
-|----------------|-----|-----------------|
-|MachineLearning*|177|MachineLearningX9|
-|MachineLearning*|102|MachineLearningX0|
-|MachineLearning*|106|MachineLearningX1|
-|MachineLearning*|96|MachineLearningX6|
-|MachineLearning*|110|MachineLearningX4|
-|MachineLearning*|100|MachineLearningX3|
-|MachineLearning*|99|MachineLearningX8|
-|MachineLearning*|104|MachineLearningX7|
-|MachineLearning*|106|MachineLearningX2|
-
-### Behavior of Characters parameter
+In the output, the rows are grouped by their `MyText` values, and the shared prefix appears as the pattern `MachineLearning*`.
+
+| Pattern | Count | Representative |
+|--|--|--|
+| MachineLearning* | 177 | MachineLearningX9 |
+| MachineLearning* | 102 | MachineLearningX0 |
+| MachineLearning* | 106 | MachineLearningX1 |
+| MachineLearning* | 96 | MachineLearningX6 |
+| MachineLearning* | 110 | MachineLearningX4 |
+| MachineLearning* | 100 | MachineLearningX3 |
+| MachineLearning* | 99 | MachineLearningX8 |
+| MachineLearning* | 104 | MachineLearningX7 |
+| MachineLearning* | 106 | MachineLearningX2 |
+
+### Behavior of `Characters` parameter
If the *Characters* parameter is unspecified, then every non-ascii numeric character becomes a term separator.
@@ -105,11 +115,11 @@ range x from 1 to 10 step 1 | project str = strcat("foo", "Z", tostring(x)) | re
**Output**
-|Pattern|Count|Representative|
+| Pattern | Count | Representative |
|--|--|--|
-|others|10||
+| others | 10 | |
-However, if you specify that "Z" is a separator, then it's as if each value in `str` is 2 terms: `foo` and `tostring(x)`:
+However, if you specify that "Z" is a separator, then it's as if each value in `str` is two terms: `foo` and `tostring(x)`:
:::moniker range="azure-data-explorer"
> [!div class="nextstepaction"]
@@ -129,23 +139,25 @@ range x from 1 to 10 step 1 | project str = strcat("foo", "Z", tostring(x)) | re
### Apply `reduce` to sanitized input
The following example shows how one might apply the `reduce` operator to a "sanitized"
-input, in which GUIDs in the column being reduced are replaced prior to reducing
+input, in which GUIDs in the column being reduced are replaced before reducing:
+
+Start with a few records from the `Trace` table.
+Then reduce the `Text` column, which includes random GUIDs.
+Because random GUIDs interfere with the reduce operation, replace them all
+with the string "GUID".
+Then perform the reduce operation. If there are other "quasi-random" identifiers with embedded '-'
+or '_' characters in them, treat these characters as non-term-breakers.
```kusto
-// Start with a few records from the Trace table.
-Trace | take 10000
-// We will reduce the Text column which includes random GUIDs.
-// As random GUIDs interfere with the reduce operation, replace them all
-// by the string "GUID".
-| extend Text=replace_regex(Text, @"[[:xdigit:]]{8}-[[:xdigit:]]{4}-[[:xdigit:]]{4}-[[:xdigit:]]{4}-[[:xdigit:]]{12}", @"GUID")
-// Now perform the reduce. In case there are other "quasi-random" identifiers with embedded '-'
-// or '_' characters in them, treat these as non-term-breakers.
+Trace
+| take 10000
+| extend Text = replace_regex(Text, @"[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}", "GUID")
| reduce by Text with characters="-_"
```
## Related content
-[autocluster](autocluster-plugin.md)
+* [autocluster](autocluster-plugin.md)
> [!NOTE]
> The implementation of `reduce` operator is largely based on the paper [A Data Clustering Algorithm for Mining Patterns From Event Logs](https://ristov.github.io/publications/slct-ipom03-web.pdf), by Risto Vaarandi.
diff --git a/data-explorer/kusto/query/sample-distinct-operator.md b/data-explorer/kusto/query/sample-distinct-operator.md
index d70b42e966..c72fbf3ee6 100644
--- a/data-explorer/kusto/query/sample-distinct-operator.md
+++ b/data-explorer/kusto/query/sample-distinct-operator.md
@@ -3,7 +3,7 @@ title: sample-distinct operator
description: Learn how to use the sample-distinct operator to return a column that contains up to the specified number of distinct values of the requested columns.
ms.reviewer: alexans
ms.topic: reference
-ms.date: 08/11/2024
+ms.date: 01/21/2025
---
# sample-distinct operator
@@ -34,7 +34,11 @@ The operator tries to return an answer as quickly as possible rather than trying
## Examples
-Get 10 distinct values from a population
+The examples in this section show how to use the syntax to help you get started.
+
+[!INCLUDE [help-cluster](../includes/help-cluster-note.md)]
+
+### Get 10 distinct values from a population
:::moniker range="azure-data-explorer"
> [!div class="nextstepaction"]
@@ -45,7 +49,21 @@ Get 10 distinct values from a population
StormEvents | sample-distinct 10 of EpisodeId
```
-Sample a population and do further computation without exceeding the query limits in the summarize
+**Output**
+
+| EpisodeId |
+|--|
+| 11074 |
+| 11078 |
+| 11749 |
+| 12554 |
+| 12561 |
+| 13183 |
+| 11780 |
+| 11781 |
+| 12826 |
+
+### Further compute the sample values
:::moniker range="azure-data-explorer"
> [!div class="nextstepaction"]
@@ -58,3 +76,18 @@ StormEvents
| where EpisodeId in (sampleEpisodes)
| summarize totalInjuries=sum(InjuriesDirect) by EpisodeId
```
+
+**Output**
+
+| EpisodeId | totalInjuries |
+|--|--|
+| 11091 | 0 |
+| 11074 | 0 |
+| 11078 | 0 |
+| 11749 | 0 |
+| 12554 | 3 |
+| 12561 | 0 |
+| 13183 | 0 |
+| 11780 | 0 |
+| 11781 | 0 |
+| 12826 | 0 |
diff --git a/data-explorer/kusto/query/sample-operator.md b/data-explorer/kusto/query/sample-operator.md
index acf67124ec..5a1c271a58 100644
--- a/data-explorer/kusto/query/sample-operator.md
+++ b/data-explorer/kusto/query/sample-operator.md
@@ -3,7 +3,7 @@ title: sample operator
description: Learn how to use the sample operator to return up to the specified number of rows from the input table.
ms.reviewer: alexans
ms.topic: reference
-ms.date: 08/11/2024
+ms.date: 01/22/2025
---
# sample operator
@@ -14,7 +14,7 @@ Returns up to the specified number of random rows from the input table.
> [!NOTE]
>
> * `sample` is geared for speed rather than even distribution of values. Specifically, it means that it will not produce 'fair' results if used after operators that union 2 datasets of different sizes (such as a `union` or `join` operators). It's recommended to use `sample` right after the table reference and filters.
-> * `sample` is a non-deterministic operator, and will return different result set each time it is evaluated during the query. For example, the following query yields two different rows (even if one would expect to return the same row twice).
+> * `sample` is a non-deterministic operator, and returns a different result set each time it's evaluated during the query. For example, the following query yields two different rows (even though you might expect the same row to be returned twice).
## Syntax
@@ -31,6 +31,14 @@ Returns up to the specified number of random rows from the input table.
## Examples
+The examples in this section show how to use the syntax to help you get started.
+
+[!INCLUDE [help-cluster](../includes/help-cluster-note.md)]
+
+### Generate a sample
+
+This query creates a range of numbers, samples one value, and then unions that sample with itself. Because `sample` is nondeterministic, the two returned rows can differ.
+
:::moniker range="azure-data-explorer"
> [!div class="nextstepaction"]
> Run the query
@@ -46,8 +54,8 @@ union (_sample), (_sample)
| x |
| --- |
-| 83 |
-| 3 |
+| 74 |
+| 63 |
To ensure that in example above `_sample` is calculated once, one can use [materialize()](materialize-function.md) function:
@@ -66,8 +74,10 @@ union (_sample), (_sample)
| x |
| --- |
-| 34 |
-| 34 |
+| 24 |
+| 24 |
+
+### Generate a sample of a certain percentage of data
To sample a certain percentage of your data (rather than a specified number of rows), you can use
@@ -80,7 +90,23 @@ To sample a certain percentage of your data (rather than a specified number of r
StormEvents | where rand() < 0.1
```
-To sample keys rather than rows (for example - sample 10 Ids and get all rows for these Ids) you can use [`sample-distinct`](sample-distinct-operator.md) in combination with the `in` operator.
+**Output**
+
+The table contains the first few rows of the output. Run the query to view the full result.
+
+| StartTime | EndTime | EpisodeId | EventId | State | EventType |
+|--|--|--|--|--|--|
+| 2007-01-01T00:00:00Z | 2007-01-20T10:24:00Z | 2403 | 11914 | INDIANA | Flood |
+| 2007-01-01T00:00:00Z | 2007-01-24T18:47:00Z | 2408 | 11930 | INDIANA | Flood |
+| 2007-01-01T00:00:00Z | 2007-01-01T12:00:00Z | 1979 | 12631 | DELAWARE | Heavy Rain |
+| 2007-01-01T00:00:00Z | 2007-01-01T00:00:00Z | 2592 | 13208 | NORTH CAROLINA | Thunderstorm Wind |
+| 2007-01-01T00:00:00Z | 2007-01-31T23:59:00Z | 1492 | 7069 | MINNESOTA | Drought |
+| 2007-01-01T00:00:00Z | 2007-01-31T23:59:00Z | 2240 | 10858 | TEXAS | Drought |
+|...|...|...|...|...|...|
+
+### Generate a sample of keys
+
+To sample keys rather than rows (for example, sample 10 IDs and then get all rows for these IDs), you can use [`sample-distinct`](sample-distinct-operator.md) in combination with the `in` operator.
:::moniker range="azure-data-explorer"
> [!div class="nextstepaction"]
@@ -92,3 +118,17 @@ let sampleEpisodes = StormEvents | sample-distinct 10 of EpisodeId;
StormEvents
| where EpisodeId in (sampleEpisodes)
```
+
+**Output**
+
+The table contains the first few rows of the output. Run the query to view the full result.
+
+| StartTime | EndTime | EpisodeId | EventId | State | EventType |
+|--|--|--|--|--|--|
+| 2007-09-18T20:00:00Z | 2007-09-19T18:00:00Z | 11074 | 60904 | FLORIDA | Heavy Rain |
+| 2007-09-20T21:57:00Z | 2007-09-20T22:05:00Z | 11078 | 60913 | FLORIDA | Tornado |
+| 2007-09-29T08:11:00Z | 2007-09-29T08:11:00Z | 11091 | 61032 | ATLANTIC SOUTH | Waterspout |
+| 2007-12-07T14:00:00Z | 2007-12-08T04:00:00Z | 13183 | 73241 | AMERICAN SAMOA | Flash Flood |
+| 2007-12-11T21:45:00Z | 2007-12-12T16:45:00Z | 12826 | 70787 | KANSAS | Flood |
+| 2007-12-13T09:02:00Z | 2007-12-13T10:30:00Z | 11780 | 64725 | KENTUCKY | Flood |
+|...|...|...|...|...|...|
diff --git a/data-explorer/kusto/query/scan-operator.md b/data-explorer/kusto/query/scan-operator.md
index 77fcff2949..20f570c182 100644
--- a/data-explorer/kusto/query/scan-operator.md
+++ b/data-explorer/kusto/query/scan-operator.md
@@ -3,7 +3,7 @@ title: scan operator
description: Learn how to use the scan operator to scan data, match, and build sequences based on the predicates.
ms.reviewer: alexans
ms.topic: reference
-ms.date: 08/11/2024
+ms.date: 01/22/2025
---
# scan operator
@@ -81,6 +81,10 @@ For a detailed example of this logic, see the [scan logic walkthrough](#scan-log
## Examples
+The examples in this section show how to use the syntax to help you get started.
+
+[!INCLUDE [help-cluster](../includes/help-cluster-note.md)]
+
### Cumulative sum
Calculate the cumulative sum for an input column. The result of this example is equivalent to using [row_cumsum()](row-cumsum-function.md).
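+
+A minimal sketch of this pattern (the column names are illustrative):
+
+```kusto
+// Accumulate x into cumulative_x across rows with a single scan step.
+range x from 1 to 5 step 1
+| scan declare (cumulative_x: long=0) with
+(
+    step s1: true => cumulative_x = x + s1.cumulative_x;
+)
+```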
@@ -334,6 +338,18 @@ Events
)
```
+**Output**
+
+| Ts | Event | m_id |
+|--|--|--|
+| 00:01:00 | Start | 0 |
+| 00:02:00 | B | 0 |
+| 00:03:00 | D | 0 |
+| 00:04:00 | Stop | 0 |
+| 00:08:00 | Start | 1 |
+| 00:11:00 | E | 1 |
+| 00:12:00 | Stop | 1 |
+
### The state
Think of the state of the `scan` operator as a table with a row for each step, in which each step has its own state. This state contains the latest values of the columns and declared variables from all of the previous steps and the current step. To learn more, see [State](#state).
diff --git a/data-explorer/kusto/query/search-operator.md b/data-explorer/kusto/query/search-operator.md
index 838bacfcdb..577edeb832 100644
--- a/data-explorer/kusto/query/search-operator.md
+++ b/data-explorer/kusto/query/search-operator.md
@@ -3,7 +3,7 @@ title: search operator
description: Learn how to use the search operator to search for a text pattern in multiple tables and columns.
ms.reviewer: alexans
ms.topic: reference
-ms.date: 08/11/2024
+ms.date: 01/21/2025
---
# search operator
@@ -24,14 +24,17 @@ Searches a text pattern in multiple tables and columns.
| Name | Type | Required | Description |
|--|--|--|--|
-| *T* | `string` | | The tabular data source to be searched over, such as a table name, a [union operator](union-operator.md), or the results of a tabular query. Cannot appear together with *TableSources*.|
+| *T* | `string` | | The tabular data source to be searched over, such as a table name, a [union operator](union-operator.md), or the results of a tabular query. Can't be specified together with *TableSources*.|
| *CaseSensitivity* | `string` | | A flag that controls the behavior of all `string` scalar operators, such as `has`, with respect to case sensitivity. Valid values are `default`, `case_insensitive`, `case_sensitive`. The options `default` and `case_insensitive` are synonymous, since the default behavior is case insensitive.|
-| *TableSources* | `string` | | A comma-separated list of "wildcarded" table names to take part in the search. The list has the same syntax as the list of the [union operator](union-operator.md). Cannot appear together with *TabularSource*.|
+| *TableSources* | `string` | | A comma-separated list of "wildcarded" table names to take part in the search. The list has the same syntax as the list of the [union operator](union-operator.md). Can't be specified together with the tabular data source (*T*).|
| *SearchPredicate* | `string` | :heavy_check_mark: | A boolean expression to be evaluated for every record in the input. If it returns `true`, the record is outputted. See [Search predicate syntax](#search-predicate-syntax).|
+> [!NOTE]
+> If both the tabular data source (*T*) and *TableSources* are omitted, the search is performed over all unrestricted tables and views of the database in scope.
+
### Search predicate syntax
-The *SearchPredicate* allows you to search for specific terms in all columns of a table. The operator that will be applied to a search term depends on the presence and placement of a wildcard asterisk (`*`) in the term, as shown in the following table.
+The *SearchPredicate* allows you to search for specific terms in all columns of a table. The operator that is applied to a search term depends on the presence and placement of a wildcard asterisk (`*`) in the term, as shown in the following table.
|Literal |Operator |
|----------|-----------|
@@ -51,8 +54,7 @@ You can also restrict the search to a specific column, look for an exact match i
Use boolean expressions to combine conditions and create more complex searches. For example, `"error" and x==123` would result in a search for records that have the term `error` in any columns and the value `123` in the `x` column.
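+
+For instance, a hedged sketch of embedding such a predicate in a query (the table name is an assumption):
+
+```kusto
+// Records that contain the term "error" in any column and have the value 123 in column x.
+search in (MyLogs) "error" and x == 123
+```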
-> [!NOTE]
-> If both *TabularSource* and *TableSources* are omitted, the search is carried over all unrestricted tables and views of the database in scope.
+
### Search predicate syntax examples
@@ -74,18 +76,24 @@ Use boolean expressions to combine conditions and create more complex searches.
## Remarks
-Unlike the [find operator](find-operator.md), the `search` operator does not support the following:
+Unlike the [find operator](find-operator.md), the `search` operator doesn't support the following syntax:
-1. `withsource=`: The output will always include a column called `$table` of type `string` whose value
+1. `withsource=`: The output always includes a column called `$table` of type `string` whose value
is the table name from which each record was retrieved (or some system-generated name if the source
isn't a table but a composite expression).
2. `project=`, `project-smart`: The output schema is equivalent to `project-smart` output schema.
## Examples
+The examples in this section show how to use the syntax to help you get started.
+
+[!INCLUDE [help-cluster](../includes/help-cluster-note.md)]
+
### Global term search
-Search for a term over all unrestricted tables and views of the database in scope.
+Search for the term *Green* in all the tables of the *ContosoSales* database.
+
+The output includes records with the term *Green* as a last name or a color, found in the `Customers`, `Products`, and `SalesTable` tables.
:::moniker range="azure-data-explorer"
> [!div class="nextstepaction"]
@@ -93,14 +101,23 @@ Search for a term over all unrestricted tables and views of the database in scop
::: moniker-end
```kusto
-search "Green"
+ search "Green"
```
-The output contains records from the `Customers`, `Products`, and `SalesTable` tables. The `Customers` records shows all customers with the last name "Green", and the `Products` and `SalesTable` records shows products with some mention of "Green".
+**Output**
+
+| $table | CityName | ContinentName | CustomerKey | Education | FirstName | Gender | LastName |
+|--|--|--|--|--|--|--|--|
+| Customers | Ballard | North America | 16549 | Partial College | Mason | M | Green |
+| Customers | Bellingham | North America | 2070 | High School | Adam | M | Green |
+| Customers | Bellingham | North America | 10658 | Bachelors | Sara | F | Green |
+| Customers | Beverly Hills | North America | 806 | Graduate Degree | Richard | M | Green |
+| Customers | Beverly Hills | North America | 7674 | Graduate Degree | James | M | Green |
+| Customers | Burbank | North America | 5241 | Graduate Degree | Madeline | F | Green |
### Conditional global term search
-Search for records that match both terms over all unrestricted tables and views of the database in scope.
+Search for records in the *ContosoSales* database that contain the term *Green* together with either the term *Deluxe* or *Proseware*.
:::moniker range="azure-data-explorer"
> [!div class="nextstepaction"]
@@ -111,9 +128,21 @@ Search for records that match both terms over all unrestricted tables and views
search "Green" and ("Deluxe" or "Proseware")
```
+**Output**
+
+| $table | ProductName | Manufacturer | ColorName | ClassName | ProductCategoryName |
+|--|--|--|--|--|--|
+| Products | Contoso 8GB Clock & Radio MP3 Player X850 Green | Contoso, Ltd | Green | Deluxe | Audio |
+| Products | Proseware Scan Jet Digital Flat Bed Scanner M300 Green | Proseware, Inc. | Green | Regular | Computers |
+| Products | Proseware All-In-One Photo Printer M200 Green | Proseware, Inc. | Green | Regular | Computers |
+| Products | Proseware Ink Jet Wireless All-In-One Printer M400 Green | Proseware, Inc. | Green | Regular | Computers |
+| Products | Proseware Ink Jet Instant PDF Sheet-Fed Scanner M300 Green | Proseware, Inc. | Green | Regular | Computers |
+| Products | Proseware Desk Jet All-in-One Printer, Scanner, Copier M350 Green | Proseware, Inc. | Green | Regular | Computers |
+| Products | Proseware Duplex Scanner M200 Green | Proseware, Inc. | Green | Regular | Computers |
+
### Search a specific table
-Search only in the `Customers` table.
+Search for the term *Green* only in the `Products` table.
:::moniker range="azure-data-explorer"
> [!div class="nextstepaction"]
@@ -124,9 +153,20 @@ Search only in the `Customers` table.
search in (Products) "Green"
```
+**Output**
+
+| $table | ProductName | Manufacturer | ColorName |
+|--|--|--|--|
+| Products | Contoso 4G MP3 Player E400 Green | Contoso, Ltd | Green |
+| Products | Contoso 8GB Super-Slim MP3/Video Player M800 Green | Contoso, Ltd | Green |
+| Products | Contoso 16GB Mp5 Player M1600 Green | Contoso, Ltd | Green |
+| Products | Contoso 8GB Clock & Radio MP3 Player X850 Green | Contoso, Ltd | Green |
+| Products | NT Wireless Bluetooth Stereo Headphones M402 Green | Northwind Traders | Green |
+| Products | NT Wireless Transmitter and Bluetooth Headphones M150 Green | Northwind Traders | Green |
+
### Case-sensitive search
-Search for records that match both case-sensitive terms over all unrestricted tables and views of the database in scope.
+Search for records that match the case-sensitive term *blue* in the *ContosoSales* database.
:::moniker range="azure-data-explorer"
> [!div class="nextstepaction"]
@@ -137,9 +177,21 @@ Search for records that match both case-sensitive terms over all unrestricted ta
search kind=case_sensitive "blue"
```
+**Output**
+
+| $table | ProductName | Manufacturer | ColorName | ClassName |
+|--|--|--|--|--|
+| Products | Contoso 16GB New Generation MP5 Player M1650 blue | Contoso, Ltd | blue | Regular |
+| Products | Contoso Bright Light battery E20 blue | Contoso, Ltd | blue | Economy |
+| Products | Litware 120mm Blue LED Case Fan E901 blue | Litware, Inc. | blue | Economy |
+| NewSales | Litware 120mm Blue LED Case Fan E901 blue | Litware, Inc. | blue | Economy |
+| NewSales | Litware 120mm Blue LED Case Fan E901 blue | Litware, Inc. | blue | Economy |
+| NewSales | Litware 120mm Blue LED Case Fan E901 blue | Litware, Inc. | blue | Economy |
+| NewSales | Litware 120mm Blue LED Case Fan E901 blue | Litware, Inc. | blue | Economy |
+
### Search specific columns
-Search for a term in the "FirstName" and "LastName" columns over all unrestricted tables and views of the database in scope.
+Search for records that have the term *Aaron* in the "FirstName" column or the term *Hughes* in the "LastName" column, in the *ContosoSales* database.
:::moniker range="azure-data-explorer"
> [!div class="nextstepaction"]
@@ -150,9 +202,20 @@ Search for a term in the "FirstName" and "LastName" columns over all unrestricte
search FirstName:"Aaron" or LastName:"Hughes"
```
+**Output**
+
+| $table | CustomerKey | Education | FirstName | Gender | LastName |
+|--|--|--|--|--|--|
+| Customers | 18285 | High School | Riley | F | Hughes |
+| Customers | 802 | Graduate Degree | Aaron | M | Sharma |
+| Customers | 986 | Bachelors | Melanie | F | Hughes |
+| Customers | 12669 | High School | Jessica | F | Hughes |
+| Customers | 13436 | Graduate Degree | Mariah | F | Hughes |
+| Customers | 10152 | Graduate Degree | Aaron | M | Campbell |
+
### Limit search by timestamp
-Search for a term over all unrestricted tables and views of the database in scope if the term appears in a record with a date greater than the given date.
+Search for the term *Hughes* in the *ContosoSales* database, limited to records where the `DateKey` is later than the given datetime.
:::moniker range="azure-data-explorer"
> [!div class="nextstepaction"]
@@ -163,6 +226,16 @@ Search for a term over all unrestricted tables and views of the database in scop
search "Hughes" and DateKey > datetime('2009-01-01')
```
+**Output**
+
+| $table | DateKey | SalesAmount_real |
+|--|--|--|
+| SalesTable | 2021-12-13T00:00:00Z | 446.4715 |
+| SalesTable | 2021-12-13T00:00:00Z | 120.555 |
+| SalesTable | 2021-12-13T00:00:00Z | 48.4405 |
+| SalesTable | 2021-12-13T00:00:00Z | 39.6435 |
+| SalesTable | 2021-12-13T00:00:00Z | 56.9905 |
+
## Performance Tips
|#|Tip|Prefer|Over|
diff --git a/data-explorer/kusto/query/serialize-operator.md b/data-explorer/kusto/query/serialize-operator.md
index 1969735169..48d33cb620 100644
--- a/data-explorer/kusto/query/serialize-operator.md
+++ b/data-explorer/kusto/query/serialize-operator.md
@@ -3,7 +3,7 @@ title: serialize operator
description: Learn how to use the serialize operator to mark the input row set as serialized and ready for window functions.
ms.reviewer: alexans
ms.topic: reference
-ms.date: 08/11/2024
+ms.date: 01/21/2025
---
# serialize operator
@@ -28,8 +28,14 @@ The operator has a declarative meaning. It marks the input row set as serialized
## Examples
+The examples in this section show how to use the syntax to help you get started.
+
+[!INCLUDE [help-cluster](../includes/help-cluster-note.md)]
+
### Serialize subset of rows by condition
+This query retrieves all log entries from the *TraceLogs* table that have a specific *ClientRequestId* and preserves the order of these entries during processing.
+
:::moniker range="azure-data-explorer"
> [!div class="nextstepaction"]
> Run the query
@@ -41,6 +47,19 @@ TraceLogs
| serialize
```
+**Output**
+
+This table shows only the first five query results.
+
+| Timestamp | Node | Component | ClientRequestId | Message |
+|--|--|--|--|--|
+| 2014-03-08T12:24:55.5464757Z | Engine000000000757 | INGESTOR_GATEWAY | 5a848f70-9996-eb17-15ed-21b8eb94bf0e | $$IngestionCommand table=fogEvents format=json |
+| 2014-03-08T12:24:56.0929514Z | Engine000000000757 | DOWNLOADER | 5a848f70-9996-eb17-15ed-21b8eb94bf0e | Downloading file path: ""https://benchmarklogs3.blob.core.windows.net/benchmark/2014/IMAGINEFIRST0_1399_0.json.gz"" |
+| 2014-03-08T12:25:40.3574831Z | Engine000000000341 | INGESTOR_EXECUTER | 5a848f70-9996-eb17-15ed-21b8eb94bf0e | IngestionCompletionEvent: finished ingestion file path: ""https://benchmarklogs3.blob.core.windows.net/benchmark/2014/IMAGINEFIRST0_1399_0.json.gz"" |
+| 2014-03-08T12:25:40.9039588Z | Engine000000000341 | DOWNLOADER | 5a848f70-9996-eb17-15ed-21b8eb94bf0e | Downloading file path: ""https://benchmarklogs3.blob.core.windows.net/benchmark/2014/IMAGINEFIRST0_1399_1.json.gz"" |
+| 2014-03-08T12:26:25.1684905Z | Engine000000000057 | INGESTOR_EXECUTER | 5a848f70-9996-eb17-15ed-21b8eb94bf0e | IngestionCompletionEvent: finished ingestion file path: ""https://benchmarklogs3.blob.core.windows.net/benchmark/2014/IMAGINEFIRST0_1399_1.json.gz"" |
+|...|...|...|...|...|
+
### Add row number to the serialized table
To add a row number to the serialized table, use the [row_number()](row-number-function.md) function.
@@ -56,6 +75,19 @@ TraceLogs
| serialize rn = row_number()
```
+**Output**
+
+This table shows only the first five query results.
+
+| Timestamp | rn | Node | Component | ClientRequestId | Message |
+|--|--|--|--|--|--|
+| 2014-03-08T13:00:01.6638235Z | 1 | Engine000000000899 | INGESTOR_EXECUTER | 5a848f70-9996-eb17-15ed-21b8eb94bf0e | IngestionCompletionEvent: finished ingestion file path: ""https://benchmarklogs3.blob.core.windows.net/benchmark/2014/IMAGINEFIRST0_1399_46.json.gz"" |
+| 2014-03-08T13:00:02.2102992Z | 2 | Engine000000000899 | DOWNLOADER | 5a848f70-9996-eb17-15ed-21b8eb94bf0e | Downloading file path: ""https://benchmarklogs3.blob.core.windows.net/benchmark/2014/IMAGINEFIRST0_1399_47.json.gz"" |
+| 2014-03-08T13:00:46.4748309Z | 3 | Engine000000000584 | INGESTOR_EXECUTER | 5a848f70-9996-eb17-15ed-21b8eb94bf0e | IngestionCompletionEvent: finished ingestion file path: ""https://benchmarklogs3.blob.core.windows.net/benchmark/2014/IMAGINEFIRST0_1399_47.json.gz"" |
+| 2014-03-08T13:00:47.0213066Z | 4 | Engine000000000584 | DOWNLOADER | 5a848f70-9996-eb17-15ed-21b8eb94bf0e | Downloading file path: ""https://benchmarklogs3.blob.core.windows.net/benchmark/2014/IMAGINEFIRST0_1399_48.json.gz"" |
+| 2014-03-08T13:01:31.2858383Z | 5 | Engine000000000380 | INGESTOR_EXECUTER | 5a848f70-9996-eb17-15ed-21b8eb94bf0e | IngestionCompletionEvent: finished ingestion file path: ""https://benchmarklogs3.blob.core.windows.net/benchmark/2014/IMAGINEFIRST0_1399_48.json.gz"" |
+|...|...|...|...|...|
+
## Serialization behavior of operators
The output row set of the following operators is marked as serialized.
diff --git a/data-explorer/kusto/query/set-statement.md b/data-explorer/kusto/query/set-statement.md
index 49d2095c28..0d3c42d949 100644
--- a/data-explorer/kusto/query/set-statement.md
+++ b/data-explorer/kusto/query/set-statement.md
@@ -35,6 +35,8 @@ Request properties aren't formally a part of the Kusto Query Language and may be
## Example
+This query enables query tracing and then retrieves 100 records from the `Events` table.
+
[!INCLUDE [help-cluster](../includes/help-cluster-note.md)]
```kusto
diff --git a/data-explorer/kusto/query/shuffle-query.md b/data-explorer/kusto/query/shuffle-query.md
index d12ee17050..d11cd7dfcf 100644
--- a/data-explorer/kusto/query/shuffle-query.md
+++ b/data-explorer/kusto/query/shuffle-query.md
@@ -69,7 +69,11 @@ In some cases, the `hint.strategy = shuffle` is ignored, and the query won't run
## Examples
-## Use summarize with shuffle
+The examples in this section show how to use the syntax to help you get started.
+
+[!INCLUDE [help-cluster](../includes/help-cluster-note.md)]
+
+### Use summarize with shuffle
The `shuffle` strategy query with `summarize` operator shares the load on all cluster nodes, where each node processes one partition of the data.
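+
+A minimal sketch of the pattern (assuming the help cluster's `StormEvents` table; this isn't the page's exact example):
+
+```kusto
+// Shuffle the summarize across nodes by the group-by key.
+StormEvents
+| summarize hint.strategy=shuffle count() by EpisodeId
+```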
@@ -90,7 +94,7 @@ StormEvents
|---|
|67|
-## Use join with shuffle
+### Use join with shuffle
:::moniker range="azure-data-explorer"
> [!div class="nextstepaction"]
@@ -117,7 +121,7 @@ StormEvents
|---|
|103|
-## Use make-series with shuffle
+### Use make-series with shuffle
:::moniker range="azure-data-explorer"
> [!div class="nextstepaction"]
@@ -306,7 +310,7 @@ lineitem
| consume
```
-## Use join with shuffle to improve performance
+### Use join with shuffle to improve performance
The following example shows how using `shuffle` strategy with the `join` operator improves performance.