
Commit 6f2b27a

Update date and clarify indexer processing details
Updated the date for the document and added clarification on indexer processing for large data sets.
1 parent 53d0dee commit 6f2b27a

File tree

1 file changed: +3 -2 lines changed


articles/search/search-how-to-large-index.md

Lines changed: 3 additions & 2 deletions
@@ -9,7 +9,7 @@ ms.service: azure-ai-search
 ms.custom:
   - ignite-2023
 ms.topic: conceptual
-ms.date: 08/01/2025
+ms.date: 10/06/2025
 ms.update-cycle: 180-days
 ---

@@ -82,7 +82,8 @@ Default batch sizes are data-source specific. Azure SQL Database and Azure Cosmo
 
 Indexer scheduling is an important mechanism for processing large data sets and for accommodating slow-running processes like image analysis in an enrichment pipeline.
 
-Typically, indexer processing runs within a two-hour window. If the indexing workload takes days rather than hours to complete, you can put the indexer on a consecutive, recurring schedule that starts every two hours. Assuming the data source has [change tracking enabled](search-howto-create-indexers.md#change-detection-and-internal-state), the indexer resumes processing where it last left off. At this cadence, an indexer can work its way through a document backlog over a series of days until all unprocessed documents are processed.
+Typically, indexer processing runs within a two-hour window. If the indexing workload takes days rather than hours to complete, you can put the indexer on a consecutive, recurring schedule that starts every two hours. Assuming the data source has [change tracking enabled](search-howto-create-indexers.md#change-detection-and-internal-state), the indexer resumes processing where it last left off. At this cadence, an indexer can work its way through a document backlog over a series of days until all unprocessed documents are processed. This pattern is especially important during the initial run or when indexing large blob containers, where the blob listing phase alone can take multiple hours or days. During this time, the indexer may appear inactive, but unless an error is reported, it is likely still iterating through the blob list. Document processing and enrichment begin only after this phase completes, and this behavior is expected.
+
 
 ```json
 {
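
For context, the changed paragraph describes putting an indexer on a recurring schedule that starts every two hours. A minimal sketch of such a schedule, assuming the standard `schedule` property on an Azure AI Search indexer definition; the indexer, data source, and index names here are placeholders for illustration, not values from the article:

```json
{
  "name": "my-indexer",
  "dataSourceName": "my-datasource",
  "targetIndexName": "my-index",
  "schedule": {
    "interval": "PT2H",
    "startTime": "2025-01-01T00:00:00Z"
  }
}
```

The `interval` value is an ISO 8601 duration, so `PT2H` corresponds to the two-hour cadence described in the updated paragraph.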
