Commit 69bb56b

Learn Build Service GitHub App authored and committed
Merging changes synced from https://github.com/MicrosoftDocs/dataexplorer-docs-pr (branch live)
2 parents d7d3844 + 763cfbd commit 69bb56b

File tree

2 files changed: +21 -9 lines changed


data-explorer/ingest-data-overview.md

Lines changed: 18 additions & 8 deletions
````diff
@@ -3,7 +3,7 @@ title: Azure Data Explorer data ingestion overview
 description: Learn about the different ways you can ingest (load) data in Azure Data Explorer
 ms.reviewer: akshay.dixit
 ms.topic: conceptual
-ms.date: 02/16/2024
+ms.date: 04/07/2025
 ---
 
 # Azure Data Explorer data ingestion overview
````
````diff
@@ -79,8 +79,7 @@ Azure Data Explorer offers the following ingestion management commands, which in
 * **Ingest from storage**: The [.ingest into command](/kusto/management/data-ingestion/ingest-from-storage?view=azure-data-explorer&preserve-view=true) gets the data to ingest from external storage, such as Azure Blob Storage, accessible by your cluster and pointed-to by the command.
 
 > [!NOTE]
-> In the event of a failure, ingestion is performed again and is retried for up to 48 hours using the exponential backoff method for wait time between tries.
-
+> In the event of a failure, ingestion is performed again, and is retried for up to 48 hours using the exponential backoff method for wait time between tries.
 
 ## Compare ingestion methods
 
````
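
The retry behavior this note describes follows a standard pattern. Here's a minimal sketch of exponential backoff with a 48-hour cap, for illustration only; the service's actual base delay and jitter parameters aren't documented in this diff:

```python
import random
import time

MAX_RETRY_WINDOW_SECS = 48 * 60 * 60  # the note caps retries at 48 hours

def ingest_with_backoff(ingest_once, base_delay_secs=10.0, max_delay_secs=3600.0):
    """Retry ingest_once() with exponential backoff until it succeeds
    or the 48-hour retry window would be exceeded."""
    start = time.monotonic()
    attempt = 0
    while True:
        try:
            return ingest_once()
        except Exception:
            # Double the wait on each attempt, with jitter, up to a ceiling.
            delay = min(base_delay_secs * 2 ** attempt, max_delay_secs)
            delay *= random.uniform(0.5, 1.0)
            if time.monotonic() - start + delay > MAX_RETRY_WINDOW_SECS:
                raise  # give up once the 48-hour window is exhausted
            time.sleep(delay)
            attempt += 1
```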

````diff
@@ -89,7 +88,7 @@ The following table compares the main ingestion methods:
 | Ingestion name | Data type | Maximum file size | Streaming, queued, direct | Most common scenarios | Considerations |
 |--|--|--|--|--|--|
 | [Apache Spark connector](spark-connector.md) | Every format supported by the Spark environment | Unlimited | Queued | Existing pipeline, preprocessing on Spark before ingestion, fast way to create a safe (Spark) streaming pipeline from the various sources the Spark environment supports. | Consider cost of Spark cluster. For batch write, compare with Azure Data Explorer data connection for Event Grid. For Spark streaming, compare with the data connection for event hub. |
-| [Azure Data Factory (ADF)](data-factory-integration.md) | [Supported data formats](/azure/data-factory/copy-activity-overview#supported-data-stores-and-formats) | Unlimited. Inherits ADF restrictions. | Queued or per ADF trigger | Supports formats that are unsupported, such as Excel and XML, and can copy large files from over 90 sources, from on perm to cloud | This method takes relatively more time until data is ingested. ADF uploads all data to memory and then begins ingestion. |
+| [Azure Data Factory (ADF)](data-factory-integration.md) | [Supported data formats](/azure/data-factory/copy-activity-overview#supported-data-stores-and-formats) | Unlimited. Inherits ADF restrictions. | Queued or per ADF trigger | Supports formats that are unsupported, such as Excel and XML, and can copy large files from over 90 sources, from on-premises to cloud | This method takes relatively more time until data is ingested. ADF uploads all data to memory and then begins ingestion. |
 | [Event Grid](ingest-data-event-grid-overview.md) | [Supported data formats](ingest-data-event-grid-overview.md#data-format) | 1 GB uncompressed | Queued | Continuous ingestion from Azure storage, external data in Azure storage | Ingestion can be triggered by blob renaming or blob creation actions |
 | [Event Hub](ingest-data-event-hub-overview.md) | [Supported data formats](ingest-data-event-hub-overview.md#data-format) | N/A | Queued, streaming | Messages, events | |
 | [Get data experience](get-data-file.md) | *SV, JSON | 1 GB uncompressed | Queued or direct ingestion | One-off, create table schema, definition of continuous ingestion with Event Grid, bulk ingestion with container (up to 5,000 blobs; no limit when using historical ingestion) | |
````
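
Most of the connectors in this table use the queued path. For reference, here's a minimal sketch of queued ingestion with the Kusto Python SDK (azure-kusto-ingest); the cluster URI, database, table, and file names are placeholders:

```python
from azure.kusto.data import KustoConnectionStringBuilder
from azure.kusto.data.data_format import DataFormat
from azure.kusto.ingest import IngestionProperties, QueuedIngestClient

# Placeholder ingest endpoint -- queued ingestion goes through the
# cluster's "ingest-" URI, not the query endpoint.
kcsb = KustoConnectionStringBuilder.with_az_cli_authentication(
    "https://ingest-mycluster.westus.kusto.windows.net"
)
client = QueuedIngestClient(kcsb)

props = IngestionProperties(
    database="MyDatabase",  # placeholder
    table="MyTable",        # placeholder
    data_format=DataFormat.CSV,
)

# The file is uploaded to a staging blob and queued; the batching
# service picks it up and ingests it asynchronously.
client.ingest_from_file("data.csv", ingestion_properties=props)
```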
````diff
@@ -105,11 +104,22 @@ For information on other connectors, see [Connectors overview](integrate-data-ov
 
 ## Permissions
 
-The following list describes the permissions required for various ingestion scenarios:
+The following list describes the [permissions](/kusto/access-control/role-based-access-control?view=azure-data-explorer&preserve-view=true) required for various ingestion scenarios:
+
+* To create a new table, you must have at least Database User permissions.
+* To ingest data into an existing table, without changing its schema, you must have at least Table Ingestor permissions.
+* To change the schema of an existing table, you must have at least Table Admin or Database Admin permissions.
 
-* To create a new table requires at least Database User permissions.
-* To ingest data into an existing table, without changing its schema, requires at least Database Ingestor permissions.
-* To change the schema of an existing table requires at least Table Admin or Database Admin permissions.
+The following table describes the permissions required for each ingestion method:
+
+| Ingestion method | Permissions |
+|--|--|
+| [One-time ingestion](#one-time-data-ingestion) | At least Table Ingestor |
+| [Continuous streaming ingestion](#continuous-data-ingestion) | At least Table Ingestor |
+| [Continuous queued ingestion](#continuous-data-ingestion) | At least Table Ingestor |
+| [Direct inline ingestion](#direct-ingestion-with-management-commands) | At least Table Ingestor and also Database Viewer |
+| [Direct ingestion from query](#direct-ingestion-with-management-commands) | At least Table Ingestor and also Database Viewer |
+| [Direct ingestion from storage](#direct-ingestion-with-management-commands) | At least Table Ingestor |
 
 For more information, see [Kusto role-based access control](/kusto/access-control/role-based-access-control?view=azure-data-explorer&preserve-view=true).
 
````
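
For context on how these roles get assigned, the sketch below grants Table Ingestor with a management command through the Kusto Python SDK; the cluster, database, table, and principal are placeholders:

```python
from azure.kusto.data import KustoClient, KustoConnectionStringBuilder

# Placeholder cluster/database/table/principal -- substitute your own.
kcsb = KustoConnectionStringBuilder.with_az_cli_authentication(
    "https://mycluster.westus.kusto.windows.net"
)
client = KustoClient(kcsb)

# Table Ingestor is the minimum role for most methods in the table above.
client.execute_mgmt(
    "MyDatabase",
    ".add table MyTable ingestors ('aaduser=user@contoso.com')",
)
```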

data-explorer/kusto/management/query-consistency-policy.md

Lines changed: 3 additions & 1 deletion
````diff
@@ -34,6 +34,8 @@ The following limits are configurable:
 
 ## Example
 
+This policy configuration sets query consistency to weak and the maximum age for cached results to 5 minutes.
+
 ```json
 "QueryConsistencyPolicy": {
   "QueryConsistency": {
@@ -42,7 +44,7 @@ The following limits are configurable:
   },
   "CachedResultsMaxAge": {
     "IsRelaxable": true,
-    "Value": "05:00:00"
+    "Value": "00:05:00"
   }
 }
 ```
````
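
Two details here are easy to miss: the value is an hh:mm:ss timespan, so the original "05:00:00" meant 5 hours rather than the intended 5 minutes, and weak consistency is something clients opt into per request. A minimal sketch with the Kusto Python SDK follows; the cluster and database names are placeholders:

```python
from datetime import timedelta

from azure.kusto.data import (ClientRequestProperties, KustoClient,
                              KustoConnectionStringBuilder)

# The policy value is an hh:mm:ss timespan -- the fix turns 5 hours
# into the intended 5 minutes.
def parse_timespan(value: str) -> timedelta:
    hours, minutes, seconds = (int(p) for p in value.split(":"))
    return timedelta(hours=hours, minutes=minutes, seconds=seconds)

assert parse_timespan("05:00:00") == timedelta(hours=5)    # the bug
assert parse_timespan("00:05:00") == timedelta(minutes=5)  # the fix

# Placeholder cluster/database -- substitute your own.
kcsb = KustoConnectionStringBuilder.with_az_cli_authentication(
    "https://mycluster.westus.kusto.windows.net"
)
client = KustoClient(kcsb)

# Opt into weak consistency for this request, matching the policy above.
props = ClientRequestProperties()
props.set_option("queryconsistency", "weakconsistency")
results = client.execute("MyDatabase", "MyTable | take 10", props)
```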
