diff --git a/data-explorer/kusto/management/data-export/continuous-data-export.md b/data-explorer/kusto/management/data-export/continuous-data-export.md
index e52817257b..c98f0481da 100644
--- a/data-explorer/kusto/management/data-export/continuous-data-export.md
+++ b/data-explorer/kusto/management/data-export/continuous-data-export.md
@@ -3,7 +3,7 @@ title: Continuous data export
description: This article describes Continuous data export.
ms.reviewer: yifats
ms.topic: reference
-ms.date: 12/08/2024
+ms.date: 07/30/2025
---
# Continuous data export overview
@@ -101,12 +101,14 @@ Followed by:
<| T | where cursor_before_or_at("636751928823156645")
```
+::: moniker range="azure-data-explorer"
## Continuous export from a table with Row Level Security
To create a continuous export job with a query that references a table with [Row Level Security policy](../../management/row-level-security-policy.md), you must:
* Provide a managed identity as part of the continuous export configuration. For more information, see [Use a managed identity to run a continuous export job](continuous-export-with-managed-identity.md).
* Use [impersonation](../../api/connection-strings/storage-connection-strings.md#impersonation) authentication for the external table to which the data is exported.
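+
+For example, a minimal sketch combining both requirements — the table, external table, and use of the `system` identity are hypothetical, and `SecureExternalBlob` is assumed to be defined with impersonation authentication:
+
+```kusto
+// Continuous export over an RLS-protected table, run on behalf of the system managed identity
+.create-or-alter continuous-export RlsExport
+over (RlsProtectedTable)
+to table SecureExternalBlob
+with (intervalBetweenRuns=1h, managedIdentity="system")
+<| RlsProtectedTable
+```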
+::: moniker-end
## Continuous export to delta table - Preview
diff --git a/data-explorer/kusto/management/data-export/create-alter-continuous.md b/data-explorer/kusto/management/data-export/create-alter-continuous.md
index 68e32eac7a..b391b92e16 100644
--- a/data-explorer/kusto/management/data-export/create-alter-continuous.md
+++ b/data-explorer/kusto/management/data-export/create-alter-continuous.md
@@ -3,7 +3,7 @@ title: .create or alter continuous-export
description: This article describes how to create or alter continuous data export.
ms.reviewer: yifats
ms.topic: reference
-ms.date: 12/08/2024
+ms.date: 07/30/2025
---
# .create or alter continuous-export
@@ -31,11 +31,15 @@ You must have at least [Database Admin](../../access-control/role-based-access-c
| *T1*, *T2* | `string` | | A comma-separated list of fact tables in the query. If not specified, all tables referenced in the query are assumed to be fact tables. If specified, tables *not* in this list are treated as dimension tables and aren't scoped, so all records participate in all exports. See [continuous data export overview](continuous-data-export.md) for details. |
| *propertyName*, *propertyValue* | `string` | | A comma-separated list of optional [properties](#supported-properties).|
+::: moniker range="azure-data-explorer"
> [!NOTE]
> If the target external table uses [impersonation](../../api/connection-strings/storage-connection-strings.md#impersonation) authentication, you must specify a managed identity to run the continuous export. For more information, see [Use a managed identity to run a continuous export job](continuous-export-with-managed-identity.md).
+::: moniker-end
## Supported properties
+::: moniker range="azure-data-explorer"
+
| Property | Type | Description |
|--|--|--|
| `intervalBetweenRuns` | `Timespan` | The time span between continuous export executions. Must be greater than 1 minute. |
@@ -46,6 +50,20 @@ You must have at least [Database Admin](../../access-control/role-based-access-c
| `managedIdentity` | `string` | The managed identity for which the continuous export job runs. The managed identity can be an object ID, or the `system` reserved word. For more information, see [Use a managed identity to run a continuous export job](continuous-export-with-managed-identity.md#use-a-managed-identity-to-run-a-continuous-export-job). |
| `isDisabled` | `bool` | Disable or enable the continuous export. Default is false. |
+::: moniker-end
+::: moniker range="microsoft-fabric"
+
+| Property | Type | Description |
+|--|--|--|
+| `intervalBetweenRuns` | `Timespan` | The time span between continuous export executions. Must be greater than 1 minute. |
+| `forcedLatency` | `Timespan` | An optional period of time to limit the query to records ingested before a specified period relative to the current time. This property is useful if, for example, the query performs some aggregations or joins, and you want to make sure all relevant records have been ingested before running the export. |
+| `sizeLimit` | `long` | The size limit in bytes of a single storage artifact written before compression. Valid range: 100 MB (default) to 1 GB. |
+| `distributed` | `bool` | Disable or enable distributed export. Setting to false is equivalent to `single` distribution hint. Default is true. |
+| `parquetRowGroupSize` | `int` | Relevant only when data format is Parquet. Controls the row group size in the exported files. Default row group size is 100,000 records. |
+| `isDisabled` | `bool` | Disable or enable the continuous export. Default is false. |
+
+::: moniker-end
+
## Example
The following example creates or alters a continuous export `MyExport` that exports data from the `T` table to `ExternalBlob`. The data exports occur every hour, and have a defined forced latency and size limit per storage artifact.
diff --git a/data-explorer/kusto/management/external-tables-azure-storage.md b/data-explorer/kusto/management/external-tables-azure-storage.md
index 0662196fa1..93f5fa3bab 100644
--- a/data-explorer/kusto/management/external-tables-azure-storage.md
+++ b/data-explorer/kusto/management/external-tables-azure-storage.md
@@ -3,7 +3,7 @@ title: Create and alter Azure Storage external tables
description: This article describes how to create and alter external tables based on Azure Blob Storage or Azure Data Lake
ms.reviewer: orspodek
ms.topic: reference
-ms.date: 08/11/2024
+ms.date: 07/30/2025
---
# Create and alter Azure Storage external tables
@@ -13,13 +13,15 @@ ms.date: 08/11/2024
The commands in this article can be used to create or alter an Azure Storage [external table](../query/schema-entities/external-tables.md) in the database from which the command is executed. An Azure Storage external table references data located in Azure Blob Storage, Azure Data Lake Store Gen1, or Azure Data Lake Store Gen2.
> [!NOTE]
-> If the table exists, the `.create` command will fail with an error. Use `.create-or-alter` or `.alter` to modify existing tables.
+> If the table exists, the `.create` command fails with an error. Use `.create-or-alter` or `.alter` to modify existing tables.
## Permissions
Running `.create` requires at least [Database User](../access-control/role-based-access-control.md) permissions, and running `.alter` requires at least [Table Admin](../access-control/role-based-access-control.md) permissions.
+:::moniker range="azure-data-explorer"
+To `.create-or-alter` an external table using managed identity authentication, you need [AllDatabasesAdmin](../access-control/role-based-access-control.md) permissions.
+:::moniker-end
## Syntax
@@ -38,15 +40,15 @@ To `.create-or-alter` an external table using managed identity authentication re
|*Schema*| `string` | :heavy_check_mark:|The external data schema is a comma-separated list of one or more column names and [data types](../query/scalar-data-types/index.md), where each item follows the format: *ColumnName* `:` *ColumnType*. If the schema is unknown, use [infer\_storage\_schema](../query/infer-storage-schema-plugin.md) to infer the schema based on external file contents.|
|*Partitions*| `string` || A comma-separated list of columns by which the external table is partitioned. Partition column can exist in the data file itself, or as part of the file path. See [partitions formatting](#partitions-formatting) to learn how this value should look.|
|*PathFormat*| `string` ||An external data folder URI path format to use with partitions. See [path format](#path-format).|
-|*DataFormat*| `string` | :heavy_check_mark:|The data format, which can be any of the [ingestion formats](../ingestion-supported-formats.md). We recommend using the `Parquet` format for external tables to improve query and export performance, unless you use `JSON` paths mapping. When using an external table for [export scenario](data-export/export-data-to-an-external-table.md), you're limited to the following formats: `CSV`, `TSV`, `JSON` and `Parquet`.|
-|*StorageConnectionString*| `string` | :heavy_check_mark:|One or more comma-separated paths to Azure Blob Storage blob containers, Azure Data Lake Gen 2 file systems or Azure Data Lake Gen 1 containers, including credentials. The external table storage type is determined by the provided connection strings. See [storage connection strings](../api/connection-strings/storage-connection-strings.md).|
+|*DataFormat*| `string` | :heavy_check_mark:|The data format, which can be any of the [ingestion formats](../ingestion-supported-formats.md). We recommend using the `Parquet` format for external tables to improve query and export performance, unless you use `JSON` paths mapping. When using an external table for [export scenario](data-export/export-data-to-an-external-table.md), you're limited to the following formats: `CSV`, `TSV`, `JSON`, and `Parquet`.|
+|*StorageConnectionString*| `string` | :heavy_check_mark:|One or more comma-separated paths to Azure Blob Storage blob containers, Azure Data Lake Gen 2 file systems or Azure Data Lake Gen 1 containers, including credentials. The provided connection string determines the external table storage type. See [storage connection strings](../api/connection-strings/storage-connection-strings.md).|
|*Property*| `string` ||A key-value property pair in the format *PropertyName* `=` *PropertyValue*. See [optional properties](#optional-properties).|
> [!NOTE]
-> CSV files with non-identical schema might result in data appearing shifted or missing. We recommend separating CSV files with distinct schemas to separate storage containers and defining an external table for each storage container with the proper schema.
+> CSV files with nonidentical schema might result in data appearing shifted or missing. We recommend separating CSV files with distinct schemas to separate storage containers and defining an external table for each storage container with the proper schema.
> [!TIP]
-> Provide more than a single storage account to avoid storage throttling while [exporting](data-export/export-data-to-an-external-table.md) large amounts of data to the external table. Export will distribute the writes between all accounts provided.
+> Provide more than a single storage account to avoid storage throttling while [exporting](data-export/export-data-to-an-external-table.md) large amounts of data to the external table. Export distributes the writes between all accounts provided.
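+
+For instance, a minimal sketch (storage account and table names are hypothetical) that spreads export writes across two accounts:
+
+```kusto
+// External table backed by two storage accounts; exports distribute writes between them
+.create external table ArchiveEvents (Timestamp:datetime, EventId:long)
+kind=storage
+dataformat=parquet
+(
+    h@'https://exportaccount1.blob.core.windows.net/archive;impersonate',
+    h@'https://exportaccount2.blob.core.windows.net/archive;impersonate'
+)
+```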
## Authentication and authorization
@@ -54,6 +56,8 @@ The authentication method to access an external table is based on the connection
The following table lists the supported authentication methods for Azure Storage external tables and the permissions needed to read or write to the table.
+::: moniker range="azure-data-explorer"
+
| Authentication method | Azure Blob Storage / Data Lake Storage Gen2 | Data Lake Storage Gen1 |
|--|--|--|
|[Impersonation](../api/connection-strings/storage-connection-strings.md#impersonation)|**Read permissions:** Storage Blob Data Reader<br>**Write permissions:** Storage Blob Data Contributor|**Read permissions:** Reader<br>**Write permissions:** Contributor|
@@ -62,6 +66,18 @@ The following table lists the supported authentication methods for Azure Storage
|[Microsoft Entra access token](../api/connection-strings/storage-connection-strings.md#microsoft-entra-access-token)|No additional permissions required.|No additional permissions required.|
|[Storage account access key](../api/connection-strings/storage-connection-strings.md#storage-account-access-key)|No additional permissions required.|This authentication method isn't supported in Gen1.|
+::: moniker-end
+::: moniker range="microsoft-fabric"
+
+| Authentication method | Azure Blob Storage / Data Lake Storage Gen2 | Data Lake Storage Gen1 |
+|--|--|--|
+|[Impersonation](../api/connection-strings/storage-connection-strings.md#impersonation)|**Read permissions:** Storage Blob Data Reader<br>**Write permissions:** Storage Blob Data Contributor|**Read permissions:** Reader<br>**Write permissions:** Contributor|
+|[Shared Access (SAS) token](../api/connection-strings/storage-connection-strings.md#shared-access-sas-token)|**Read permissions:** List + Read<br>**Write permissions:** Write|This authentication method isn't supported in Gen1.|
+|[Microsoft Entra access token](../api/connection-strings/storage-connection-strings.md#microsoft-entra-access-token)|No additional permissions required.|No additional permissions required.|
+|[Storage account access key](../api/connection-strings/storage-connection-strings.md#storage-account-access-key)|No additional permissions required.|This authentication method isn't supported in Gen1.|
+
+::: moniker-end
+
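+As a sketch only (the container URI and SAS token are placeholders), a SAS-based definition embeds the token in the connection string, and the `h` prefix keeps it out of traces:
+
+```kusto
+.create external table SalesOrders (OrderId:long, Total:real)
+kind=storage
+dataformat=csv
+(
+    // hypothetical container URI with a placeholder SAS token, obfuscated via the 'h' prefix
+    h@'https://fabrikamsales.blob.core.windows.net/orders?sv=2022-11-02&sig=<signature>'
+)
+```
+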
[!INCLUDE [partitions-formatting](../includes/partitions-formatting.md)]
### Path format
@@ -125,12 +141,12 @@ external_table("ExternalTable")
| `compressed` | `bool` | Only relevant for the [export scenario](data-export/export-data-to-an-external-table.md).<br>If set to true, the data is exported in the format specified by the `compressionType` property. For the read path, compression is automatically detected. |
| `compressionType` | `string` | Only relevant for the [export scenario](data-export/export-data-to-an-external-table.md).<br>The compression type of exported files. For non-Parquet files, only `gzip` is allowed. For Parquet files, possible values include `gzip`, `snappy`, `lz4_raw`, `brotli`, and `zstd`. Default is `gzip`. For the read path, compression type is automatically detected. |
| `includeHeaders` | `string` | For delimited text formats (CSV, TSV, ...), specifies whether files contain a header. Possible values are: `All` (all files contain a header), `FirstFile` (first file in a folder contains a header), `None` (no files contain a header). |
-| `namePrefix` | `string` | If set, specifies the prefix of the files. On write operations, all files will be written with this prefix. On read operations, only files with this prefix are read. |
-| `fileExtension` | `string` | If set, specifies the extension of the files. On write, files names will end with this suffix. On read, only files with this file extension will be read. |
+| `namePrefix` | `string` | If set, specifies the prefix of the files. On write operations, all files are written with this prefix. On read operations, only files with this prefix are read. |
+| `fileExtension` | `string` | If set, specifies the extension of the files. On write, file names end with this suffix. On read, only files with this file extension are read. |
| `encoding` | `string` | Specifies how the text is encoded: `UTF8NoBOM` (default) or `UTF8BOM`. |
| `sampleUris` | `bool` | If set, the command result provides several examples of simulated external data file URIs as they're expected by the external table definition. This option helps validate whether the *Partitions* and *PathFormat* parameters are defined properly. |
| `filesPreview` | `bool` | If set, one of the command result tables contains a preview of the [.show external table artifacts](show-external-table-artifacts.md) command. Like `sampleUris`, the option helps validate the *Partitions* and *PathFormat* parameters of the external table definition. |
-| `validateNotEmpty` | `bool` | If set, the connection strings are validated for having content in them. The command will fail if the specified URI location doesn't exist, or if there are insufficient permissions to access it. |
+| `validateNotEmpty` | `bool` | If set, the connection strings are validated for having content in them. The command fails if the specified URI location doesn't exist, or if there are insufficient permissions to access it. |
| `dryRun` | `bool` | If set, the external table definition isn't persisted. This option is useful for validating the external table definition, especially in conjunction with the `filesPreview` or `sampleUris` parameter. |
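+
+A minimal validation sketch (all names are hypothetical) that combines `dryRun` with `sampleUris` to check a partitioned definition without persisting it:
+
+```kusto
+// Validates partitions and path format; nothing is persisted because of dryRun
+.create external table TelemetryCheck (Timestamp:datetime, Message:string)
+kind=storage
+partition by (Date:datetime = bin(Timestamp, 1d))
+pathformat=(datetime_pattern("yyyy/MM/dd", Date))
+dataformat=csv
+(
+    h@'https://telemetrystore.blob.core.windows.net/logs;impersonate'
+)
+with (dryRun=true, sampleUris=true)
+```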
> [!NOTE]
@@ -141,7 +157,7 @@ external_table("ExternalTable")
### File filtering logic
-When querying an external table, performance is improved by filtering out irrelevant external storage files. The process of iterating files and deciding whether a file should be processed is as follows:
+When you query an external table, performance is improved by filtering out irrelevant external storage files. The process of iterating files and deciding whether a file should be processed is as follows:
1. Build a URI pattern that represents a place where files are found. Initially, the URI pattern equals a connection string provided as part of the external table definition. If there are any partitions defined, they're rendered using *PathFormat*, then appended to the URI pattern.
@@ -158,9 +174,9 @@ Once all the conditions are met, the file is fetched and processed.
## Examples
-### Non-partitioned external table
+### Nonpartitioned external table
-In the following non-partitioned external table, the files are expected to be placed directly under the container(s) defined:
+In the following nonpartitioned external table, the files are expected to be placed directly under the container(s) defined:
```kusto
.create external table ExternalTable (x:long, s:string)
@@ -250,8 +266,15 @@ external_table("ExternalTable")
## Related content
::: moniker range="azure-data-explorer"
+
* [Query external tables](/azure/data-explorer/data-lake-query-data).
-::: moniker-end
* [Export data to an external table](data-export/export-data-to-an-external-table.md).
+* [Continuous data export to an external table](data-export/continuous-data-export.md).
+::: moniker-end
+::: moniker range="microsoft-fabric"
+
+* [Export data to an external table](data-export/export-data-to-an-external-table.md).
* [Continuous data export to an external table](data-export/continuous-data-export.md).
+
+::: moniker-end
diff --git a/data-explorer/kusto/management/update-policy.md b/data-explorer/kusto/management/update-policy.md
index fcb112f158..e72d5afda7 100644
--- a/data-explorer/kusto/management/update-policy.md
+++ b/data-explorer/kusto/management/update-policy.md
@@ -3,7 +3,7 @@ title: Update policy overview
description: Learn how to trigger an update policy to add data to a source table.
ms.reviewer: orspodek
ms.topic: reference
-ms.date: 12/18/2024
+ms.date: 07/30/2025
---
# Update policy overview
@@ -24,6 +24,8 @@ An update policy is subject to the same restrictions and best practices as regul
An update policy is subject to the same restrictions and best practices as regular ingestion. The policy scales out according to the Eventhouse size, and is more efficient when handling bulk ingestion.
::: moniker-end
+::: moniker range="azure-data-explorer"
+
> [!NOTE]
>
> * The source and target table must be in the same database.
@@ -31,6 +33,15 @@ An update policy is subject to the same restrictions and best practices as regul
> * The update policy function can reference tables in other databases. To do this, the update policy must be defined with a `ManagedIdentity` property, and the managed identity must have `viewer` [role](security-roles.md) on the referenced databases.
Ingesting formatted data improves performance, and CSV is preferred because it's a well-defined format. Sometimes, however, you have no control over the format of the data, or you want to enrich ingested data, for example, by joining records with a static dimension table in your database.
+::: moniker-end
+::: moniker range="microsoft-fabric"
+> [!NOTE]
+>
+> * The source and target table must be in the same database.
+> * The update policy function schema and the target table schema must match in their column types and order.
+
+::: moniker-end
+
## Update policy query
If the update policy is defined on the target table, multiple queries can run on data ingested into a source table. If there are multiple update policies, the order of execution isn't necessarily known.
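+
+For example, an update policy query is often wrapped in a stored function. A sketch with hypothetical table and column names:
+
+```kusto
+// Stored function used as an update policy query: parses raw JSON events into typed columns
+.create function with (folder = "UpdatePolicies") ExpandRawEvents() {
+    RawEvents
+    | extend Parsed = parse_json(EventText)
+    | project Timestamp, EventId = tolong(Parsed.eventId), EventType = tostring(Parsed.eventType)
+}
+```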
@@ -76,7 +87,9 @@ When referencing the `Source` table in the `Query` part of the policy, or in fun
A table can have zero or more update policy objects associated with it.
Each such object is represented as a JSON property bag, with the following properties defined.
-|Property |Type |Description |
+::: moniker range="azure-data-explorer"
+
+|Property |Type | Description |
|---------|---------|----------------|
|IsEnabled |`bool` |States whether the update policy is enabled (*true*) or disabled (*false*)|
|Source |`string` |Name of the table that triggers invocation of the update policy |
@@ -85,6 +98,19 @@ Each such object is represented as a JSON property bag, with the following prope
|PropagateIngestionProperties |`bool`|States if properties specified during ingestion to the source table, such as [extent tags](extent-tags.md) and creation time, apply to the target table. |
|ManagedIdentity | `string` | The managed identity on behalf of which the update policy runs. The managed identity can be an object ID, or the `system` reserved word. The update policy must be configured with a managed identity when the query references tables in other databases or tables with an enabled [row level security policy](row-level-security-policy.md). For more information, see [Use a managed identity to run an update policy](update-policy-with-managed-identity.md). |
+::: moniker-end
+::: moniker range="microsoft-fabric"
+
+|Property |Type |Description |
+|---------|---------|----------------|
+|IsEnabled |`bool` |States whether the update policy is enabled (*true*) or disabled (*false*)|
+|Source |`string` |Name of the table that triggers invocation of the update policy |
+|Query |`string` |A query used to produce data for the update |
+|IsTransactional |`bool` |States if the update policy is transactional or not, default is *false*. If the policy is transactional and the update policy fails, the source table isn't updated. |
+|PropagateIngestionProperties |`bool`|States if properties specified during ingestion to the source table, such as [extent tags](extent-tags.md) and creation time, apply to the target table. |
+
+::: moniker-end
+
> [!NOTE]
> In production systems, set `IsTransactional`:*true* to ensure that the target table doesn't lose data in transient failures.
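+
+A sketch of applying such a policy, reusing the hypothetical names from the function example above:
+
+```kusto
+// Transactional update policy: if the policy query fails, the source ingestion fails too
+.alter table EnrichedEvents policy update
+@'[{"IsEnabled": true, "Source": "RawEvents", "Query": "ExpandRawEvents()", "IsTransactional": true, "PropagateIngestionProperties": false}]'
+```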
diff --git a/data-explorer/kusto/query/cosmosdb-plugin.md b/data-explorer/kusto/query/cosmosdb-plugin.md
index 7fd8644369..370d380584 100644
--- a/data-explorer/kusto/query/cosmosdb-plugin.md
+++ b/data-explorer/kusto/query/cosmosdb-plugin.md
@@ -3,7 +3,7 @@ title: cosmosdb_sql_request plugin
description: Learn how to use the cosmosdb_sql_request plugin to send a SQL query to an Azure Cosmos DB SQL network endpoint to query small datasets.
ms.reviewer: miwalia
ms.topic: reference
-ms.date: 08/11/2024
+ms.date: 07/30/2025
monikerRange: "microsoft-fabric || azure-data-explorer"
---
# cosmosdb_sql_request plugin
@@ -42,6 +42,8 @@ The following table describes the supported fields of the *Options* parameter.
To authorize access to an Azure Cosmos DB SQL network endpoint, you need to specify the authorization information. The following table lists the supported authentication methods and describes how to use each one.
+::: moniker range="azure-data-explorer"
+
|Authentication method|Description|
|--|--|
|Managed identity (Recommended)|Append `Authentication="Active Directory Managed Identity";User Id={object_id};` to the connection string. The request is made on behalf of a managed identity which must have the appropriate permissions to the database.<br>To enable managed identity authentication, you must add the managed identity to your cluster and alter the managed identity policy. For more information, see [Managed Identity policy](/azure/data-explorer/kusto/management/managed-identity-policy). |
@@ -49,6 +51,17 @@ To authorize to an Azure Cosmos DB SQL network endpoint, you need to specify the
|Account key|You can add the account key directly to the *ConnectionString* argument. However, this approach is less secure as it involves including the secret in the query text, and is less resilient to future changes in the account key. To enhance security, hide the secret as an [obfuscated string literal](scalar-data-types/string.md#obfuscated-string-literals).|
|Token|You can add a token value in the plugin [options](#supported-options). The token must belong to a principal with relevant permissions. To enhance security, hide the token as an [obfuscated string literal](scalar-data-types/string.md#obfuscated-string-literals).|
+::: moniker-end
+::: moniker range="microsoft-fabric"
+
+|Authentication method|Description|
+|--|--|
+|Azure Resource Manager resource ID |This authentication method requires specifying the `armResourceId` and optionally the `token` in the [options](#supported-options). The `armResourceId` identifies the Cosmos DB database account, and the `token` must be a valid Microsoft Entra bearer token for a principal with access permissions to the Cosmos DB database. If no `token` is provided, the Microsoft Entra token of the requesting principal is used for authentication. |
+|Account key|You can add the account key directly to the *ConnectionString* argument. However, this approach is less secure as it involves including the secret in the query text, and is less resilient to future changes in the account key. To enhance security, hide the secret as an [obfuscated string literal](scalar-data-types/string.md#obfuscated-string-literals).|
+|Token|You can add a token value in the plugin [options](#supported-options). The token must belong to a principal with relevant permissions. To enhance security, hide the token as an [obfuscated string literal](scalar-data-types/string.md#obfuscated-string-literals).|
+
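+A sketch of the Azure Resource Manager resource ID method, with placeholder subscription, resource group, and account names:
+
+```kusto
+// Authenticate via armResourceId; the requesting principal's Entra token is used since no token is passed
+evaluate cosmosdb_sql_request(
+    'AccountEndpoint=https://mycosmosdb.documents.azure.com/;Database=MyDatabase;Collection=MyCollection',
+    'SELECT c.id FROM c',
+    dynamic(null),
+    dynamic({'armResourceId': '/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.DocumentDB/databaseAccounts/mycosmosdb'}))
+```
+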
+::: moniker-end
+
## Set callout policy
The plugin makes callouts to the Azure Cosmos DB instance. Make sure that the cluster's [callout policy](../management/callout-policy.md) enables calls of type `cosmosdb` to the target *CosmosDbUri*.
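+
+For example, a sketch of enabling such callouts (the URI regex is a placeholder for your own account):
+
+```kusto
+// Allow cosmosdb callouts to the hypothetical account endpoint
+.alter-merge cluster policy callout
+@'[{"CalloutType": "cosmosdb", "CalloutUriRegex": "mycosmosdb\\.documents\\.azure\\.com", "CanCall": true}]'
+```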
@@ -113,7 +126,7 @@ evaluate cosmosdb_sql_request(
| where lastName == 'Smith'
```
-### Query Azure Cosmos DB with a database table
+### Query Azure Cosmos DB and join data with a database table
The following example joins partner data from an Azure Cosmos DB database with partner data in a database using the `Partner` field. It results in a list of partners with their phone numbers, website, and contact email address sorted by partner name.
diff --git a/data-explorer/kusto/query/sql-request-plugin.md b/data-explorer/kusto/query/sql-request-plugin.md
index 9442086b90..add70fabaf 100644
--- a/data-explorer/kusto/query/sql-request-plugin.md
+++ b/data-explorer/kusto/query/sql-request-plugin.md
@@ -3,7 +3,7 @@ title: sql_request plugin
description: Learn how to use the sql_request plugin to send an SQL query to an SQL server network endpoint.
ms.reviewer: alexans
ms.topic: reference
-ms.date: 08/11/2024
+ms.date: 07/30/2025
monikerRange: "microsoft-fabric || azure-data-explorer"
---
# sql_request plugin
@@ -40,6 +40,8 @@ The plugin is invoked with the [`evaluate`](evaluate-operator.md) operator.
The sql_request plugin supports the following three methods of authentication to the
SQL Server endpoint.
+::: moniker range="azure-data-explorer"
+
|Authentication method|Syntax|How|Description|
|--|--|--|--|
|Microsoft Entra integrated|`Authentication="Active Directory Integrated"`|Add to the *ConnectionString* parameter.| The user or application authenticates via Microsoft Entra ID to your cluster, and the same token is used to access the SQL Server network endpoint.<br>The principal must have the appropriate permissions on the SQL resource to perform the requested action. For example, to read from the database the principal needs table SELECT permissions, and to write to an existing table the principal needs UPDATE and INSERT permissions. To write to a new table, CREATE permissions are also required.|
@@ -47,6 +49,17 @@ SQL Server endpoint.
|Username and password|`User ID=...; Password=...;`|Add to the *ConnectionString* parameter.|When possible, avoid this method as it may be less secure.|
|Microsoft Entra access token|`dynamic({'token': h"eyJ0..."})`|Add in the *Options* parameter.|The access token is passed as the `token` property in the *Options* argument of the plugin.|
+::: moniker-end
+::: moniker range="microsoft-fabric"
+
+|Authentication method|Syntax|How|Description|
+|--|--|--|--|
+|Microsoft Entra integrated|`Authentication="Active Directory Integrated"`|Add to the *ConnectionString* parameter.| The user or application authenticates via Microsoft Entra ID to your cluster, and the same token is used to access the SQL Server network endpoint.<br>The principal must have the appropriate permissions on the SQL resource to perform the requested action. For example, to read from the database the principal needs table SELECT permissions, and to write to an existing table the principal needs UPDATE and INSERT permissions. To write to a new table, CREATE permissions are also required.|
+|Username and password|`User ID=...; Password=...;`|Add to the *ConnectionString* parameter.|When possible, avoid this method as it may be less secure.|
+|Microsoft Entra access token|`dynamic({'token': h"eyJ0..."})`|Add in the *Options* parameter.|The access token is passed as the `token` property in the *Options* argument of the plugin.|
+
+::: moniker-end
+
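+For instance, a sketch of the Microsoft Entra integrated method, with hypothetical server and database names:
+
+```kusto
+// Adjacent string literals are concatenated into one connection string
+evaluate sql_request(
+    'Server=tcp:contoso.database.windows.net,1433;'
+        'Authentication="Active Directory Integrated";'
+        'Initial Catalog=Fabrikam;',
+    'select CustomerId, CustomerName from [dbo].[Customers]')
+```
+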
> [!NOTE]
> Connection strings and queries that include confidential information or information that should be guarded should be obfuscated to be omitted from any Kusto tracing. For more information, see [obfuscated string literals](scalar-data-types/string.md#obfuscated-string-literals).