Commit c47c4b4
Revised H2 sections
1 parent 35c8f44 commit c47c4b4

File tree

1 file changed: +36 −53 lines changed

articles/search/search-howto-large-index.md

Lines changed: 36 additions & 53 deletions
@@ -13,64 +13,27 @@ ms.date: 01/17/2023

 # Index large data sets in Azure Cognitive Search

-If you have big data or complex data in a search indexing pipeline, this article describes the strategies for accommodating long running processes on Azure Cognitive Search.
+If your search solution includes indexing big data or complex data, this article describes strategies for accommodating long-running processes on Azure Cognitive Search.
-This article assumes familiarity with the [two basic approaches for importing data](search-what-is-data-import.md): pushing data into an index, or pulling in data using a [search indexer](search-indexer-overview.md) on a supported data source.
+This article assumes familiarity with the [two basic approaches for importing data](search-what-is-data-import.md): pushing data into an index, or pulling in data using a [search indexer](search-indexer-overview.md) on a supported data source. The strategy you choose is determined by the indexing approach you're already using. If your scenario involves computationally intensive [AI enrichment](cognitive-search-concept-intro.md), your strategy must include indexers, given the skillset dependency on indexers.
-Be sure to also review [Tips for better performance](search-performance-tips.md) for best practices on index and query design.
+This article complements [Tips for better performance](search-performance-tips.md), which offers best practices on index and query design. A well-designed index that includes only the fields and attributes you need is an important prerequisite for large-scale indexing.
 > [!NOTE]
 > The strategies described in this article assume a single large data source. If your solution requires indexing from multiple data sources, see [Index multiple data sources in Azure Cognitive Search](https://github.com/Azure-Samples/azure-cognitive-search-multiple-containers-indexer/blob/main/README.md) for a recommended approach.
-## Strategies for pull mode indexing with indexers
+## Index large data using the push APIs

-[Indexers](search-indexer-overview.md) have several capabilities that are useful for long-running processes:
-
-+ Batching documents
-+ Parallel indexing over partitioned data
-+ Scheduling and integration with change detection logic to index just new and change documents over time
-
-If your scenario involves computationally intensive [AI enrichment](cognitive-search-concept-intro.md), then your strategy must include indexers, due to the skillset dependency on indexers.
-
-The ["How to use indexers for long running jobs"](#how-to-use-indexers-for-long-running-jobs) section in this article describes each approach.
-
-## Strategies for push mode indexing
-
-If you aren't using indexers, then you're importing data through the push APIs, such as [Add Documents REST API](/rest/api/searchservice/addupdate-or-delete-documents) or the [IndexDocuments method (Azure SDK for .NET)](/dotnet/api/azure.search.documents.searchclient.indexdocuments).
-
-For push mode indexing, a strategy for long-running indexing will have one or both of the following components:
-
-+ Batching documents
-+ Managing threads
-
-The ["How to index large data using the push API"](#how-to-index-large-datasets-with-the-push-api) section provides details.
+"Push" APIs, such as the [Add Documents REST API](/rest/api/searchservice/addupdate-or-delete-documents) or the [IndexDocuments method (Azure SDK for .NET)](/dotnet/api/azure.search.documents.searchclient.indexdocuments), are the most prevalent form of indexing in Cognitive Search. For solutions that use a push API, the strategy for long-running indexing has one or both of the following components:
-## Strategies for big data on Spark
-
-If you have a big data architecture and your data is on a Spark cluster, we recommend [SynapseML for loading and indexing data](search-synapseml-cognitive-services.md). The tutorial includes steps for calling Cognitive Services for AI enrichment, but you can also use the AzureSearchWriter API for text indexing.
-
-<!-- Azure Cognitive Search supports [two basic approaches](search-what-is-data-import.md) for importing data into a search index. You can *push* your data into the index programmatically, or point an [Azure Cognitive Search indexer](search-indexer-overview.md) at a supported data source to *pull* in the data.
-
-As data volumes grow or processing needs change, you might find that simple indexing strategies are no longer practical. For Azure Cognitive Search, there are several approaches for accommodating larger data sets, ranging from how you structure a data upload request, to using a source-specific indexer for scheduled and distributed workloads.
-
-The same techniques used for long-running processes. In particular, the steps outlined in [parallel indexing](#run-indexers-in-parallel) are helpful for computationally intensive indexing, such as image analysis or natural language processing in an [AI enrichment pipeline](cognitive-search-concept-intro.md).
-
-The following sections explain techniques for indexing large amounts of data for both push and pull approaches. You should also review [Tips for improving performance](search-performance-tips.md) for more best practices.
-
-For C# tutorials, code samples, and alternative strategies, see:
-
-+ [Tutorial: Optimize indexing workloads](tutorial-optimize-indexing-push-api.md)
-+ [Tutorial: Index at scale using SynapseML and Apache Spark](search-synapseml-cognitive-services.md) -->
-
-## How to index large datasets with the "push" API
-
-When pushing large data volumes into an index using the [Add Documents REST API](/rest/api/searchservice/addupdate-or-delete-documents) or the [IndexDocuments method (Azure SDK for .NET)](/dotnet/api/azure.search.documents.searchclient.indexdocuments), batching documents and managing threads are two techniques that improve indexing speed.
++ Batch documents
++ Manage threads

 ### Batch multiple documents per request

-One of the simplest mechanisms for indexing a larger data set is to submit multiple documents or records in a single request. As long as the entire payload is under 16 MB, a request can handle up to 1000 documents in a bulk upload operation. These limits apply whether you're using the [Add Documents REST API](/rest/api/searchservice/addupdate-or-delete-documents) or the [IndexDocuments method](/dotnet/api/azure.search.documents.searchclient.indexdocuments) in the .NET SDK. For either API, you would package 1000 documents in the body of each request.
+A simple mechanism for indexing a large quantity of data is to submit multiple documents or records in a single request. As long as the entire payload is under 16 MB, a request can handle up to 1000 documents in a bulk upload operation. These limits apply whether you're using the [Add Documents REST API](/rest/api/searchservice/addupdate-or-delete-documents) or the [IndexDocuments method](/dotnet/api/azure.search.documents.searchclient.indexdocuments) in the .NET SDK. For either API, you would package 1000 documents in the body of each request.
-Using batches to index documents will significantly improve indexing performance. Determining the optimal batch size for your data is a key component of optimizing indexing speeds. The two primary factors influencing the optimal batch size are:
+Batching documents significantly shortens the amount of time it takes to work through a large data volume. Determining the optimal batch size for your data is a key component of optimizing indexing speeds. The two primary factors influencing the optimal batch size are:

 + The schema of your index
 + The size of your data
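
The batching technique added in this revision can be sketched in Python. This is a minimal sketch, assuming the `azure-search-documents` Python SDK and placeholder service details (`<service>`, `<index>`, `<admin-key>` are not from the article); the batch-splitting helper itself is generic.

```python
from typing import Iterator

BATCH_SIZE = 1000  # service limit: at most 1000 documents per indexing request

def batched(docs: list[dict], size: int = BATCH_SIZE) -> Iterator[list[dict]]:
    """Yield successive batches of at most `size` documents."""
    for start in range(0, len(docs), size):
        yield docs[start:start + size]

def upload_in_batches(docs: list[dict]) -> None:
    # Placeholder endpoint, index, and key; requires a real search service.
    from azure.core.credentials import AzureKeyCredential
    from azure.search.documents import SearchClient

    client = SearchClient(
        endpoint="https://<service>.search.windows.net",
        index_name="<index>",
        credential=AzureKeyCredential("<admin-key>"),
    )
    for batch in batched(docs):
        # Each request carries up to 1000 documents and must stay under 16 MB.
        client.upload_documents(documents=batch)
```

The same chunking applies to the REST API: package each yielded batch as the `value` array of one Add Documents request.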
@@ -93,15 +56,17 @@ Indexers have built-in thread management, but when you're using the push APIs, y

 The Azure .NET SDK automatically retries 503s and other failed requests, but you'll need to implement your own logic to retry 207s. Open-source tools such as [Polly](https://github.com/App-vNext/Polly) can also be used to implement a retry strategy.
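
Polly is a .NET library; as a rough Python analogue of the 207-retry logic described above, the sketch below resubmits only the documents whose per-document result reports failure, with exponential backoff between attempts. The `key` and `succeeded` attribute names follow the Python SDK's `IndexingResult` but are assumptions here; verify them against your SDK version.

```python
import time

def backoff_seconds(attempt: int, base: float = 2.0, cap: float = 60.0) -> float:
    """Exponential backoff: 2, 4, 8, ... seconds, capped at `cap`."""
    return min(base ** attempt, cap)

def upload_with_retry(client, docs: list[dict], key_field: str = "id",
                      max_attempts: int = 5) -> list[dict]:
    """Resubmit only failed documents; returns those still failing at the end."""
    pending = docs
    for attempt in range(1, max_attempts + 1):
        results = client.upload_documents(documents=pending)
        # A 207 response reports success or failure per document.
        failed_keys = {r.key for r in results if not r.succeeded}
        pending = [d for d in pending if d[key_field] in failed_keys]
        if not pending:
            return []
        time.sleep(backoff_seconds(attempt))
    return pending
```
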

-## How to use indexers for long-running jobs
+## Index with indexers and the "pull" APIs

-This section explains how to use the built-in capabilities of [Indexers](search-indexer-overview.md) for accommodating larger data sets:
+[Indexers](search-indexer-overview.md) have several capabilities that are useful for long-running processes:

-+ Indexer schedules allow you to parcel out indexing at regular intervals so that you can spread it out over time.
++ Batching documents
++ Parallel indexing over partitioned data
++ Scheduling and integration with change detection logic to index just new and changed documents over time

-+ Scheduled indexing can resume at the last known stopping point. If a data source isn't fully scanned within the processing window, the indexer picks up wherever it left off at the last job.
+Indexer schedules allow you to parcel out indexing at regular intervals. Scheduled indexing can resume at the last known stopping point. If a data source isn't fully scanned within the processing window, the indexer picks up wherever it left off at the last job.

-+ Partitioning data into smaller individual data sources enables parallel processing. You can break up source data into smaller components, such as into multiple containers in Azure Blob Storage, create a [data source](/rest/api/searchservice/create-data-source) for each partition, and then run multiple indexers in parallel.
+Partitioning data into smaller individual data sources enables parallel processing. You can break up source data into smaller components, such as multiple containers in Azure Blob Storage, create a [data source](/rest/api/searchservice/create-data-source) for each partition, and then [run the indexers in parallel](search-howto-run-reset-indexers.md), subject to the number of search units on your search service.
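
The partitioning pattern added above can be sketched against the REST API. This is a hedged sketch: the resource names (`partition-ds-*`, `partition-indexer-*`) are hypothetical, the api-version is an assumed stable one, and the payload shapes follow the Create Data Source and Create Indexer REST APIs.

```python
import json
import urllib.request

API_VERSION = "2020-06-30"  # assumed stable REST api-version

def datasource_payload(name: str, connection_string: str, container: str) -> dict:
    """Create Data Source body for one blob-container partition."""
    return {
        "name": name,
        "type": "azureblob",
        "credentials": {"connectionString": connection_string},
        "container": {"name": container},
    }

def indexer_payload(name: str, datasource: str, index: str) -> dict:
    """Create Indexer body; every indexer targets the same index."""
    return {"name": name, "dataSourceName": datasource, "targetIndexName": index}

def post(endpoint: str, path: str, payload: dict, admin_key: str) -> None:
    req = urllib.request.Request(
        f"{endpoint}/{path}?api-version={API_VERSION}",
        data=json.dumps(payload).encode("utf-8"),
        headers={"api-key": admin_key, "Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(req)  # requires a real service and admin key

def create_partitioned_indexers(endpoint: str, admin_key: str,
                                connection_string: str,
                                containers: list[str], index: str) -> None:
    for i, container in enumerate(containers):
        ds = datasource_payload(f"partition-ds-{i}", connection_string, container)
        post(endpoint, "datasources", ds, admin_key)
        # Indexers created without a schedule run immediately; several of
        # them run in parallel, up to the limits of your search units.
        post(endpoint, "indexers",
             indexer_payload(f"partition-indexer-{i}", ds["name"], index),
             admin_key)
```
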

 ### Check indexer batch size

@@ -150,7 +115,7 @@ If your data source is an [Azure Blob Storage container](../storage/blobs/storag
 1. Specify the same target search index in each indexer.

-1. Schedule the indexers.
+1. Schedule the indexers.

 1. Review indexer status and execution history for confirmation.
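
The "Schedule the indexers" step might look like the following sketch: an indexer definition carrying a `schedule` property, per the Create Indexer REST API. The names are hypothetical; the interval uses ISO 8601 duration format.

```python
def scheduled_indexer(name: str, datasource: str, index: str,
                      interval: str = "PT2H") -> dict:
    """Indexer definition with a schedule (ISO 8601 interval, e.g. PT2H)."""
    return {
        "name": name,
        "dataSourceName": datasource,
        "targetIndexName": index,
        "schedule": {"interval": interval},
    }

# The definition would be PUT to
# https://<service>.search.windows.net/indexers/<name>?api-version=2020-06-30
definition = scheduled_indexer("partition-indexer-0", "partition-ds-0", "my-index")
```
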

@@ -160,9 +125,27 @@ Second, Azure Cognitive Search doesn't lock the index for updates. Concurrent wr
 Although multiple indexer-data-source sets can target the same index, be careful of indexer runs that can overwrite existing values in the index. If a second indexer-data-source targets the same documents and fields, any values from the first run will be overwritten. Field values are replaced in full; an indexer can't merge values from multiple runs into the same field.

+## Index big data on Spark
+
+If you have a big data architecture and your data is on a Spark cluster, we recommend [SynapseML for loading and indexing data](search-synapseml-cognitive-services.md). The tutorial includes steps for calling Cognitive Services for AI enrichment, but you can also use the AzureSearchWriter API for text indexing.
 ## See also

 + [Tips for improving performance](search-performance-tips.md)
 + [Performance analysis](search-performance-analysis.md)
 + [Indexer overview](search-indexer-overview.md)
-+ [Monitor indexer status](search-howto-monitor-indexers.md)
++ [Monitor indexer status](search-howto-monitor-indexers.md)
+
+<!-- Azure Cognitive Search supports [two basic approaches](search-what-is-data-import.md) for importing data into a search index. You can *push* your data into the index programmatically, or point an [Azure Cognitive Search indexer](search-indexer-overview.md) at a supported data source to *pull* in the data.
+
+As data volumes grow or processing needs change, you might find that simple indexing strategies are no longer practical. For Azure Cognitive Search, there are several approaches for accommodating larger data sets, ranging from how you structure a data upload request, to using a source-specific indexer for scheduled and distributed workloads.
+
+The same techniques used for long-running processes. In particular, the steps outlined in [parallel indexing](#run-indexers-in-parallel) are helpful for computationally intensive indexing, such as image analysis or natural language processing in an [AI enrichment pipeline](cognitive-search-concept-intro.md).
+
+The following sections explain techniques for indexing large amounts of data for both push and pull approaches. You should also review [Tips for improving performance](search-performance-tips.md) for more best practices.
+
+For C# tutorials, code samples, and alternative strategies, see:
+
++ [Tutorial: Optimize indexing workloads](tutorial-optimize-indexing-push-api.md)
++ [Tutorial: Index at scale using SynapseML and Apache Spark](search-synapseml-cognitive-services.md) -->
