diff --git a/data-explorer/kusto/management/data-export/export-data-to-storage.md b/data-explorer/kusto/management/data-export/export-data-to-storage.md
index 775c0238f9..d5976ae2c5 100644
--- a/data-explorer/kusto/management/data-export/export-data-to-storage.md
+++ b/data-explorer/kusto/management/data-export/export-data-to-storage.md
@@ -23,32 +23,32 @@ You must have at least [Database Viewer](../../access-control/role-based-access-
 
 ## Parameters
 
-| Name | Type | Required | Description |
-|--|--|--|--|
-| `async` | `string` | | If specified, the command runs in asynchronous mode. See [asynchronous mode](#asynchronous-mode). |
-| `compressed` | `string` | | If specified, the output storage artifacts are compressed as `.gz` files. See the `compressionType` [supported property](#supported-properties) for compressing Parquet files as snappy. |
-| *OutputDataFormat* | `string` | :heavy_check_mark: | Indicates the data format of the storage artifacts written by the command. Supported values are: `csv`, `tsv`, `json`, and `parquet`. |
-| *StorageConnectionString* | `string` | | One or more [storage connection strings](../../api/connection-strings/storage-connection-strings.md) that indicate which storage to write the data to. More than one storage connection string might be specified for scalable writes. Each such connection string must indicate the credentials to use when writing to storage. For example, when writing to Azure Blob Storage, the credentials can be the storage account key, or a shared access key (SAS) with the permissions to read, write, and list blobs. |
-| *PropertyName*, *PropertyValue* | `string` | | A comma-separated list of key-value property pairs. See [supported properties](#supported-properties).|
+| Name | Type | Required | Description |
+|-- |-- |-- |-- |
+| *async* | `string` | | If specified, the command runs in asynchronous mode. See [asynchronous mode](#asynchronous-mode). |
+| *compressed* | `bool` | | If specified, the output storage artifacts are compressed in the format specified by the `compressionType` [supported property](#supported-properties). |
+| *OutputDataFormat* | `string` | :heavy_check_mark: | The data format of the storage artifacts written by the command. Supported values are: `csv`, `tsv`, `json`, and `parquet`. |
+| *StorageConnectionString* | `string` | | One or more [storage connection strings](../../api/connection-strings/storage-connection-strings.md) that specify which storage to write the data to. More than one storage connection string might be specified for scalable writes. Each such connection string must specify the credentials to use when writing to storage. For example, when writing to Azure Blob Storage, the credentials can be the storage account key, or a shared access key (SAS) with the permissions to read, write, and list blobs. |
+| *PropertyName*, *PropertyValue* | `string` | | A comma-separated list of key-value property pairs. See [supported properties](#supported-properties).|
 
 > [!NOTE]
 > We highly recommended exporting data to storage that is colocated in the same region as the database itself. This includes data that is exported so it can be transferred to another cloud service in other regions. Writes should be done locally, while reads can happen remotely.
 
 ## Supported properties
 
-| Property | Type | Description |
-|--|--|--|
-| `includeHeaders` | `string` | For `csv`/`tsv` output, controls the generation of column headers. Can be one of `none` (default; no header lines emitted), `all` (emit a header line into every storage artifact), or `firstFile` (emit a header line into the first storage artifact only). |
-| `fileExtension` | `string` | Indicates the "extension" part of the storage artifact (for example, `.csv` or `.tsv`). If compression is used, `.gz` is appended as well. |
-| `namePrefix` | `string` | Indicates a prefix to add to each generated storage artifact name. A random prefix is used if left unspecified. |
-| `encoding` | `string` | Indicates how to encode the text: `UTF8NoBOM` (default) or `UTF8BOM`. |
-| `compressionType` | `string` | Indicates the type of compression to use. Possible values are `gzip` or `snappy`. Default is `gzip`. `snappy` can (optionally) be used for `parquet` format. |
-| `distribution` | `string` | Distribution hint (`single`, `per_node`, `per_shard`). If value equals `single`, a single thread writes to storage. Otherwise, export writes from all nodes executing the query in parallel. See [evaluate plugin operator](../../query/evaluate-operator.md). Defaults to `per_shard`. |
-| `persistDetails` | `bool` | Indicates that the command should persist its results (see `async` flag). Defaults to `true` in async runs, but can be turned off if the caller doesn't require the results). Defaults to `false` in synchronous executions, but can be turned on in those as well. |
-| `sizeLimit` | `long` | The size limit in bytes of a single storage artifact written before compression. Valid range: 100 MB (default) to 4 GB. |
-| `parquetRowGroupSize` | `int` | Relevant only when data format is Parquet. Controls the row group size in the exported files. Default row group size is 100,000 records. |
-| `distributed` | `bool` | Disable or enable distributed export. Setting to false is equivalent to `single` distribution hint. Default is true. |
-| `parquetDatetimePrecision` | `string` | Specifies the precision to use when exporting `datetime` values to Parquet. Possible values are millisecond and microsecond. Default is millisecond. |
+| Property | Type | Description |
+|-- |-- |-- |
+| `includeHeaders` | `string` | For `csv`/`tsv` output, controls the generation of column headers. Can be one of `none` (default; no header lines emitted), `all` (emit a header line into every storage artifact), or `firstFile` (emit a header line into the first storage artifact only). |
+| `fileExtension` | `string` | The "extension" part of the storage artifact (for example, `.csv` or `.tsv`). If compression is used, `.gz` is appended as well. |
+| `namePrefix` | `string` | The prefix to add to each generated storage artifact name. A random prefix is used if left unspecified. |
+| `encoding` | `string` | The encoding for text. Possible values include: `UTF8NoBOM` (default) or `UTF8BOM`. |
+| `compressionType` | `string` | The type of compression to use. For non-Parquet files, only `gzip` is allowed. For Parquet files, possible values include `gzip`, `snappy`, `lz4_raw`, `brotli`, and `zstd`. Default is `gzip`. |
+| `distribution` | `string` | Distribution hint (`single`, `per_node`, `per_shard`). If value equals `single`, a single thread writes to storage. Otherwise, export writes from all nodes executing the query in parallel. See [evaluate plugin operator](../../query/evaluate-operator.md). Defaults to `per_shard`. |
+| `persistDetails` | `bool` | If `true`, the command persists its results (see `async` flag). Defaults to `true` in async runs, but can be turned off if the caller doesn't require the results. Defaults to `false` in synchronous executions, but can be turned on in those as well. |
+| `sizeLimit` | `long` | The size limit in bytes of a single storage artifact written before compression. Valid range: 100 MB (default) to 4 GB. |
+| `parquetRowGroupSize` | `int` | Relevant only when data format is Parquet. Controls the row group size in the exported files. Default row group size is 100,000 records. |
+| `distributed` | `bool` | Disable or enable distributed export. Setting to false is equivalent to `single` distribution hint. Default is true. |
+| `parquetDatetimePrecision` | `string` | The precision to use when exporting `datetime` values to Parquet. Possible values are millisecond and microsecond. Default is millisecond. |
 
 ## Authentication and authorization
diff --git a/data-explorer/kusto/management/external-tables-azure-storage.md b/data-explorer/kusto/management/external-tables-azure-storage.md
index daaaed866c..0662196fa1 100644
--- a/data-explorer/kusto/management/external-tables-azure-storage.md
+++ b/data-explorer/kusto/management/external-tables-azure-storage.md
@@ -118,19 +118,20 @@ external_table("ExternalTable")
 
 ## Optional properties
 
-| Property | Type | Description |
-|------------------|----------|-------------------------------------------------------------------------------------|
-| `folder` | `string` | Table's folder |
-| `docString` | `string` | String documenting the table |
-| `compressed` | `bool` | If set, indicates whether the files are compressed as `.gz` files (used in [export scenario](data-export/export-data-to-an-external-table.md) only) |
-| `includeHeaders` | `string` | For delimited text formats (CSV, TSV, ...), indicates whether files contain a header. Possible values are: `All` (all files contain a header), `FirstFile` (first file in a folder contains a header), `None` (no files contain a header). |
-| `namePrefix` | `string` | If set, indicates the prefix of the files. On write operations, all files will be written with this prefix. On read operations, only files with this prefix are read. |
-| `fileExtension` | `string` | If set, indicates file extensions of the files. On write, files names will end with this suffix. On read, only files with this file extension will be read. |
-| `encoding` | `string` | Indicates how the text is encoded: `UTF8NoBOM` (default) or `UTF8BOM`. |
-| `sampleUris` | `bool` | If set, the command result provides several examples of simulated external data files URI as they're expected by the external table definition. This option helps validate whether the *Partitions* and *PathFormat* parameters are defined properly. |
-| `filesPreview` | `bool` | If set, one of the command result tables contains a preview of [.show external table artifacts](show-external-table-artifacts.md) command. Like `sampleUri`, the option helps validate the *Partitions* and *PathFormat* parameters of external table definition. |
+| Property | Type | Description |
+|------------------ |----------|-------------------------------------------------------------------------------------|
+| `folder` | `string` | Table's folder |
+| `docString` | `string` | String documenting the table |
+| `compressed` | `bool` | Only relevant for the [export scenario](data-export/export-data-to-an-external-table.md). If set to true, the data is exported in the format specified by the `compressionType` property. For the read path, compression is automatically detected. |
+| `compressionType` | `string` | Only relevant for the [export scenario](data-export/export-data-to-an-external-table.md). The compression type of exported files. For non-Parquet files, only `gzip` is allowed. For Parquet files, possible values include `gzip`, `snappy`, `lz4_raw`, `brotli`, and `zstd`. Default is `gzip`. For the read path, compression type is automatically detected. |
+| `includeHeaders` | `string` | For delimited text formats (CSV, TSV, ...), specifies whether files contain a header. Possible values are: `All` (all files contain a header), `FirstFile` (first file in a folder contains a header), `None` (no files contain a header). |
+| `namePrefix` | `string` | If set, specifies the prefix of the files. On write operations, all files will be written with this prefix. On read operations, only files with this prefix are read. |
+| `fileExtension` | `string` | If set, specifies the extension of the files. On write, file names will end with this suffix. On read, only files with this file extension will be read. |
+| `encoding` | `string` | Specifies how the text is encoded: `UTF8NoBOM` (default) or `UTF8BOM`. |
+| `sampleUris` | `bool` | If set, the command result provides several examples of simulated external data file URIs as they're expected by the external table definition. This option helps validate whether the *Partitions* and *PathFormat* parameters are defined properly. |
+| `filesPreview` | `bool` | If set, one of the command result tables contains a preview of the [.show external table artifacts](show-external-table-artifacts.md) command. Like `sampleUris`, the option helps validate the *Partitions* and *PathFormat* parameters of the external table definition. |
 | `validateNotEmpty` | `bool` | If set, the connection strings are validated for having content in them. The command will fail if the specified URI location doesn't exist, or if there are insufficient permissions to access it. |
-| `dryRun` | `bool` | If set, the external table definition isn't persisted. This option is useful for validating the external table definition, especially in conjunction with the `filesPreview` or `sampleUris` parameter. |
+| `dryRun` | `bool` | If set, the external table definition isn't persisted. This option is useful for validating the external table definition, especially in conjunction with the `filesPreview` or `sampleUris` parameter. |
 
 > [!NOTE]
 > The external table isn't accessed during creation, only during query and export. Use the `validateNotEmpty` optional property during creation to make sure the table definition is valid and the storage is accessible.
diff --git a/data-explorer/kusto/management/external-tables-delta-lake.md b/data-explorer/kusto/management/external-tables-delta-lake.md
index 85f2090ef6..7946e08e5f 100644
--- a/data-explorer/kusto/management/external-tables-delta-lake.md
+++ b/data-explorer/kusto/management/external-tables-delta-lake.md
@@ -54,14 +54,16 @@ The supported authentication methods are the same as those supported by [Azure S
 
 ## Optional properties
 
-| Property | Type | Description |
-|------------------|----------|------------------------------------------------------------------------------------|
-| `folder` | `string` | Table's folder |
-| `docString` | `string` | String documenting the table |
-| `namePrefix` | `string` | If set, indicates the prefix of the files. On write operations, all files will be written with this prefix. On read operations, only files with this prefix are read. |
-| `fileExtension` | `string` | If set, indicates file extensions of the files. On write, files names will end with this suffix. On read, only files with this file extension will be read. |
-| `encoding` | `string` | Indicates how the text is encoded: `UTF8NoBOM` (default) or `UTF8BOM`. |
-| `dryRun` | `bool` | If set, the external table definition isn't persisted. This option is useful for validating the external table definition, especially in conjunction with the `filesPreview` or `sampleUris` parameter. |
+| Property | Type | Description |
+|------------------ |---------- |------------------------------------------------------------------------------------|
+| `folder` | `string` | Table's folder |
+| `docString` | `string` | String documenting the table |
+| `compressed` | `bool` | Only relevant for the [export scenario](data-export/export-data-to-an-external-table.md). If set to true, the data is exported in the format specified by the `compressionType` property. For the read path, compression is automatically detected. |
+| `compressionType` | `string` | Only relevant for the [export scenario](data-export/export-data-to-an-external-table.md). The compression type of exported files. For non-Parquet files, only `gzip` is allowed. For Parquet files, possible values include `gzip`, `snappy`, `lz4_raw`, `brotli`, and `zstd`. Default is `gzip`. For the read path, compression type is automatically detected. |
+| `namePrefix` | `string` | If set, specifies the prefix of the files. On write operations, all files will be written with this prefix. On read operations, only files with this prefix are read. |
+| `fileExtension` | `string` | If set, specifies the extension of the files. On write, file names will end with this suffix. On read, only files with this file extension will be read. |
+| `encoding` | `string` | Specifies how the text is encoded: `UTF8NoBOM` (default) or `UTF8BOM`. |
+| `dryRun` | `bool` | If set, the external table definition isn't persisted. This option is useful for validating the external table definition, especially in conjunction with the `filesPreview` or `sampleUris` parameter. |
 
 > [!NOTE]
 > The external delta table is accessed during creation, to infer the partitioning information and, optionally, the schema. Make sure that the table definition is valid and that the storage is accessible.
diff --git a/data-explorer/kusto/management/show-external-tables.md b/data-explorer/kusto/management/show-external-tables.md
index 40bc28b1e5..60e8287519 100644
--- a/data-explorer/kusto/management/show-external-tables.md
+++ b/data-explorer/kusto/management/show-external-tables.md
@@ -39,7 +39,7 @@ You must have at least Database User, Database Viewer, Database Monitor to run t
 | TableType | `string` | Type of external table |
 | Folder | `string` | Table's folder |
 | DocString | `string` | String documenting the table |
-| Properties | `string` | Table's JSON serialized properties (specific to the type of table) |
+| Properties | `string` | Table's JSON serialized properties (specific to the type of table). For more information, see [Create and alter Azure Storage external tables](external-tables-azure-storage.md) or [Create and alter delta external tables on Azure Storage](external-tables-delta-lake.md). |
 
 ## Example
 
diff --git a/data-explorer/kusto/query/let-statement.md b/data-explorer/kusto/query/let-statement.md
index 96cc37753a..f46f98e30d 100644
--- a/data-explorer/kusto/query/let-statement.md
+++ b/data-explorer/kusto/query/let-statement.md
@@ -65,6 +65,8 @@ To optimize multiple uses of the `let` statement within a single query, see [Opt
 
 ## Examples
 
+The examples in this section show how to use the syntax to help you get started.
+
 [!INCLUDE [help-cluster](../includes/help-cluster-note.md)]
 
 The query examples show the syntax and example usage of the operator, statement, or function.
diff --git a/data-explorer/kusto/query/project-operator.md b/data-explorer/kusto/query/project-operator.md
index 80588280ca..cd0312c546 100644
--- a/data-explorer/kusto/query/project-operator.md
+++ b/data-explorer/kusto/query/project-operator.md
@@ -47,6 +47,10 @@ A table with columns that were named as arguments. Contains same number of rows
 
 ## Examples
 
+The examples in this section show how to use the syntax to help you get started.
+
+[!INCLUDE [help-cluster](../includes/help-cluster-note.md)]
+
 ### Only show specific columns
 
 Only show the `EventId`, `State`, `EventType` of the `StormEvents` table.
@@ -61,7 +65,9 @@ StormEvents
 | project EventId, State, EventType
 ```
 
-The following results table shows only the top 10 results.
+**Output**
+
+The table shows the first 10 results.
 |EventId|State|EventType|
 |--|--|--|
@@ -92,7 +98,9 @@ StormEvents
 | where TotalInjuries > 5
 ```
 
-The following table shows only the first 10 results.
+**Output**
+
+The table shows the first 10 results.
 
 |StartLocation| TotalInjuries|
 |--|--|
 |LYDIA| 15|
 |LYDIA| 15|
 |SPRINGFIELD| 22|
 |GRAND FORKS| 6|
 |FALKNER| 32|
 |MAPLE GROVE| 6|
 |GREENSBURG| 60|
 |COLLIERVILLE| 6|
 |...|...|
 
-::: moniker range="microsoft-fabric || azure-data-explorer || azure-monitor || microsoft-sentinel"
 ## Related content
 
 * [`extend`](extend-operator.md)
 * [series_stats](series-stats-function.md)
-::: moniker-end
\ No newline at end of file
diff --git a/data-explorer/kusto/query/project-rename-operator.md b/data-explorer/kusto/query/project-rename-operator.md
index 87f59c0353..6fde356267 100644
--- a/data-explorer/kusto/query/project-rename-operator.md
+++ b/data-explorer/kusto/query/project-rename-operator.md
@@ -3,7 +3,7 @@ title: project-rename operator
 description: Learn how to use the project-rename operator to rename columns in the output table.
 ms.reviewer: alexans
 ms.topic: reference
-ms.date: 08/11/2024
+ms.date: 01/20/2025
 ---
 # project-rename operator
 
@@ -29,7 +29,9 @@ Renames columns in the output table.
 
 A table that has the columns in the same order as in an existing table, with columns renamed.
 
-## Examples
+## Example
+
+If you have a table with columns `a`, `b`, and `c`, and you want to rename `a` to `new_a` and `b` to `new_b` while keeping the same order, the query would look like this:
 
 :::moniker range="azure-data-explorer"
 > [!div class="nextstepaction"]
 > Run the query
@@ -43,6 +45,6 @@ print a='a', b='b', c='c'
 
 **Output**
 
-|new_a|new_b|c|
-|---|---|---|
-|a|b|c|
+| new_a | new_b | c |
+|--|--|--|
+| a | b | c |
diff --git a/data-explorer/kusto/query/project-reorder-operator.md b/data-explorer/kusto/query/project-reorder-operator.md
index a20a781de8..8f36b3edd0 100644
--- a/data-explorer/kusto/query/project-reorder-operator.md
+++ b/data-explorer/kusto/query/project-reorder-operator.md
@@ -40,6 +40,12 @@ A table that contains columns in the order specified by the operator arguments.
 
 ## Examples
 
+The examples in this section show how to use the syntax to help you get started.
+
+[!INCLUDE [help-cluster](../includes/help-cluster-note.md)]
+
+### Reorder with b first
+
 Reorder a table with three columns (a, b, c) so the second column (b) will appear first.
 
 :::moniker range="azure-data-explorer"
@@ -54,9 +60,11 @@ print a='a', b='b', c='c'
 
 **Output**
 
-|b|a|c|
-|---|---|---|
-|b|a|c|
+| b | a | c |
+|--|--|--|
+| b | a | c |
+
+### Reorder with a first
 
 Reorder columns of a table so that columns starting with `a` will appear before other columns.
 
@@ -72,6 +80,6 @@ print b = 'b', a2='a2', a3='a3', a1='a1'
 
 **Output**
 
-|a1|a2|a3|b|
-|---|---|---|---|
-|a1|a2|a3|b|
+| a1 | a2 | a3 | b |
+|--|--|--|--|
+| a1 | a2 | a3 | b |
diff --git a/data-explorer/kusto/query/restrict-statement.md b/data-explorer/kusto/query/restrict-statement.md
index 45f2f484a5..a4b424d07d 100644
--- a/data-explorer/kusto/query/restrict-statement.md
+++ b/data-explorer/kusto/query/restrict-statement.md
@@ -3,7 +3,7 @@ title: Restrict statement
 description: Learn how to use the restrict statement to limit tabular views that are visible to subsequent query statements.
 ms.reviewer: alexans
 ms.topic: reference
-ms.date: 01/13/2025
+ms.date: 01/20/2025
 monikerRange: "microsoft-fabric || azure-data-explorer"
 ---
 # Restrict statement
 
@@ -14,10 +14,10 @@ The restrict statement limits the set of table/view entities which are visible t
 
 The restrict statement's main scenario is for middle-tier applications that accept queries from users and want to apply a row-level security mechanism over those queries.
 
-The middle-tier application can prefix the user's query with a **logical model**, a set of let statements defining views that restrict the user's access to data, for example ( `T | where UserId == "..."`). As the last statement being added, it restricts the user's access to the logical model only.
+The middle-tier application can prefix the user's query with a **logical model**, a set of let statements to define views that restrict the user's access to data, for example (`T | where UserId == "..."`). As the last statement being added, it restricts the user's access to the logical model only.
 
 > [!NOTE]
-> The restrict statement can be used to restrict access to entities in another database or cluster (wildcards are not supported in cluster names).
+> The restrict statement can be used to restrict access to entities in another database or cluster (wildcards aren't supported in cluster names).
 
 ## Syntax
 
@@ -34,10 +34,12 @@ The middle-tier application can prefix the user's query with a **logical model**
 
 > [!NOTE]
 >
 > * All tables, tabular views, or patterns that aren't specified by the restrict statement become "invisible" to the rest of the query.
-> * Let, set, and tabular statements are strung together/separated by a semicolon, otherwise they won't be considered part of the same query.
+> * Let, set, and tabular statements must be separated by a semicolon; otherwise, they aren't considered part of the same query.
 
 ## Examples
 
+The examples in this section show how to use the syntax to help you get started.
+
 [!INCLUDE [help-cluster](../includes/help-cluster-note.md)]
 
 ### Let statement
diff --git a/data-explorer/kusto/query/set-statement.md b/data-explorer/kusto/query/set-statement.md
index 0d3c42d949..e8ef1e270a 100644
--- a/data-explorer/kusto/query/set-statement.md
+++ b/data-explorer/kusto/query/set-statement.md
@@ -17,6 +17,7 @@ Request properties control how a query executes and returns results. They can be
 Request properties aren't formally a part of the Kusto Query Language and may be modified without being considered as a breaking language change.
 
 > [!NOTE]
+>
 > * To set request properties using [T-SQL](t-sql.md), see [Set request properties](t-sql.md#set-request-properties).
 > * To set request properties using the [Kusto client libraries](../api/client-libraries.md), see [Kusto Data ClientRequestProperties class](../api/netfx/about-kusto-data.md).
 
@@ -35,11 +36,30 @@ Request properties aren't formally a part of the Kusto Query Language and may be
 
 ## Example
 
-This query enables query tracing and then fetches the first 100 records from the Events table.
+This query enables query tracing and then fetches the first 100 records from the StormEvents table.
 
 [!INCLUDE [help-cluster](../includes/help-cluster-note.md)]
 
+:::moniker range="azure-data-explorer"
+> [!div class="nextstepaction"]
+> Run the query
+::: moniker-end
+
 ```kusto
 set querytrace;
-Events | take 100
+StormEvents | take 100
 ```
+
+**Output**
+
+The table shows the first few results.
+
+| StartTime | EndTime | EpisodeId | EventId | State | EventType |
+|--|--|--|--|--|--|
+| 2007-01-15T12:30:00Z | 2007-01-15T16:00:00Z | 1636 | 7821 | OHIO | Flood |
+| 2007-08-03T01:50:00Z | 2007-08-03T01:50:00Z | 10085 | 56083 | NEW YORK | Thunderstorm Wind |
+| 2007-08-03T15:33:00Z | 2007-08-03T15:33:00Z | 10086 | 56084 | NEW YORK | Hail |
+| 2007-08-03T15:40:00Z | 2007-08-03T15:40:00Z | 10086 | 56085 | NEW YORK | Hail |
+| 2007-08-03T23:15:00Z | 2007-08-05T04:30:00Z | 6569 | 38232 | NEBRASKA | Flood |
+| 2007-08-06T18:19:00Z | 2007-08-06T18:19:00Z | 6719 | 39781 | IOWA | Thunderstorm Wind |
+|...|...|...|...|...|...|
diff --git a/data-explorer/kusto/query/shuffle-query.md b/data-explorer/kusto/query/shuffle-query.md
index d11cd7dfcf..cf41ce0a35 100644
--- a/data-explorer/kusto/query/shuffle-query.md
+++ b/data-explorer/kusto/query/shuffle-query.md
@@ -70,7 +70,7 @@ In some cases, the `hint.strategy = shuffle` is ignored, and the query won't run
 ## Examples
 
 The example in this section shows how to use the syntax to help you get started.
- 
+
 [!INCLUDE [help-cluster](../includes/help-cluster-note.md)]
 
 ### Use summarize with shuffle
diff --git a/data-explorer/kusto/query/sort-operator.md b/data-explorer/kusto/query/sort-operator.md
index b7b746f236..4c67df5197 100644
--- a/data-explorer/kusto/query/sort-operator.md
+++ b/data-explorer/kusto/query/sort-operator.md
@@ -40,7 +40,7 @@ When the input table contains the special values `null`, `NaN`, `-inf` and `+inf
 |--|--|--|
 |**Nulls first**|`null`,`NaN`,`-inf`,`-5`,`0`,`5`,`+inf`|`null`,`NaN`,`+inf`,`5`,`0`,`-5`|
 |**Nulls last**|`-inf`,`-5`,`0`,`+inf`,`NaN`,`null`|`+inf`,`5`,`0`,`-5`,`NaN`,`null`|
- 
+
 > [!NOTE]
 >
 > * Null and NaN values are always grouped together.
diff --git a/data-explorer/kusto/query/tabular-expression-statements.md b/data-explorer/kusto/query/tabular-expression-statements.md
index 4afdee8191..1acbe27dd9 100644
--- a/data-explorer/kusto/query/tabular-expression-statements.md
+++ b/data-explorer/kusto/query/tabular-expression-statements.md
@@ -41,6 +41,8 @@ A tabular data source produces sets of records, to be further processed by tabul
 
 ## Examples
 
+The examples in this section show how to use the syntax to help you get started.
+
 [!INCLUDE [help-cluster](../includes/help-cluster-note.md)]
 
 ### Filter rows by condition
 
@@ -83,7 +85,7 @@ StormEvents
 **Output**
 
 | State | Population | TotalInjuries |
-|---|---|---|
+|--|--|--|
 | ALABAMA | 4918690 | 60 |
 | CALIFORNIA | 39562900 | 61 |
 | KANSAS | 2915270 | 63 |
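The compression documentation updated above never shows the new properties in use. The following is a minimal sketch of an export that combines them, using only the syntax documented in `export-data-to-storage.md`; the storage connection string and the table name `MyTable` are placeholders, not real endpoints:

```kusto
// Hypothetical export of MyTable as snappy-compressed Parquet artifacts.
// The storage URI and account key are placeholders; substitute a real container
// and credentials (for example, a SAS key with read, write, and list permissions).
.export async compressed to parquet (
    h@"https://exportaccount.blob.core.windows.net/exportcontainer;secretKey"
) with (
    compressionType="snappy",  // for Parquet, gzip, lz4_raw, brotli, and zstd are also allowed
    sizeLimit=1073741824,      // split artifacts at ~1 GB, measured before compression
    namePrefix="export",
    distribution="per_shard"   // the default; "single" writes from one thread
)
<| MyTable
```

The same `compressed`/`compressionType` pair can instead be set once on an external table definition, so that later exports to that table inherit the settings. A sketch under the same placeholder assumptions (the table name `ExternalParquet` and its two-column schema are illustrative):

```kusto
// Hypothetical external table over the same placeholder container.
.create external table ExternalParquet (Timestamp: datetime, EventName: string)
kind = storage
dataformat = parquet (
    h@"https://exportaccount.blob.core.windows.net/exportcontainer;secretKey"
)
with (compressed = true, compressionType = "snappy", docString = "Parquet export target")
```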