articles/search/search-howto-large-index.md (36 additions, 53 deletions)
@@ -13,64 +13,27 @@ ms.date: 01/17/2023
# Index large data sets in Azure Cognitive Search

If your search solution includes indexing big data or complex data, this article describes the strategies for accommodating long-running processes on Azure Cognitive Search.

This article assumes familiarity with the [two basic approaches for importing data](search-what-is-data-import.md): pushing data into an index, or pulling in data using a [search indexer](search-indexer-overview.md) on a supported data source. The strategy you choose is determined by the indexing approach you're already using. If your scenario involves computationally intensive [AI enrichment](cognitive-search-concept-intro.md), your strategy must include indexers, given the skillset dependency on indexers.

This article complements [Tips for better performance](search-performance-tips.md), which offers best practices on index and query design. A well-designed index that includes only the fields and attributes you need is an important prerequisite for large-scale indexing.
> [!NOTE]
> The strategies described in this article assume a single large data source. If your solution requires indexing from multiple data sources, see [Index multiple data sources in Azure Cognitive Search](https://github.com/Azure-Samples/azure-cognitive-search-multiple-containers-indexer/blob/main/README.md) for a recommended approach.

## Index large data using the push APIs

"Push" APIs, such as the [Add Documents REST API](/rest/api/searchservice/addupdate-or-delete-documents) or the [IndexDocuments method (Azure SDK for .NET)](/dotnet/api/azure.search.documents.searchclient.indexdocuments), are the most prevalent form of indexing in Cognitive Search. For solutions that use a push API, the strategy for long-running indexing has one or both of the following components:

+ Batch documents
+ Manage threads

### Batch multiple documents per request

A simple mechanism for indexing a large quantity of data is to submit multiple documents or records in a single request. As long as the entire payload is under 16 MB, a request can handle up to 1,000 documents in a bulk upload operation. These limits apply whether you're using the [Add Documents REST API](/rest/api/searchservice/addupdate-or-delete-documents) or the [IndexDocuments method](/dotnet/api/azure.search.documents.searchclient.indexdocuments) in the .NET SDK. For either API, you would package 1,000 documents in the body of each request.
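
Here's a minimal sketch of batched uploads with the Azure SDK for .NET. The service endpoint, admin key, index name, and the generated sample documents are hypothetical placeholders, not part of the original article:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using Azure;
using Azure.Search.Documents;
using Azure.Search.Documents.Models;

var client = new SearchClient(
    new Uri("https://<your-service>.search.windows.net"),
    "<your-index>",
    new AzureKeyCredential("<admin-api-key>"));

// Hypothetical source data; in practice this streams from your data store.
IEnumerable<SearchDocument> documents = Enumerable.Range(1, 2500)
    .Select(i => new SearchDocument { ["id"] = i.ToString(), ["title"] = $"Doc {i}" });

// Submit up to 1,000 documents per request, keeping each payload under 16 MB.
foreach (var chunk in documents.Chunk(1000))
{
    IndexDocumentsBatch<SearchDocument> batch = IndexDocumentsBatch.Create(
        chunk.Select(doc => IndexDocumentsAction.Upload(doc)).ToArray());

    IndexDocumentsResult result = client.IndexDocuments(batch);

    // Each document in the batch reports success or failure individually.
    foreach (var item in result.Results.Where(r => !r.Succeeded))
    {
        Console.WriteLine($"Failed: {item.Key} (status {item.Status})");
    }
}
```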

Batching documents significantly shortens the amount of time it takes to work through a large data volume. Determining the optimal batch size for your data is a key component of optimizing indexing speeds. The two primary factors influencing the optimal batch size are:

+ The schema of your index
+ The size of your data
@@ -93,15 +56,17 @@ Indexers have built-in thread management, but when you're using the push APIs, y
The Azure .NET SDK automatically retries 503s and other failed requests, but you'll need to implement your own logic to retry 207s. Open-source tools such as [Polly](https://github.com/App-vNext/Polly) can also be used to implement a retry strategy.
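
As a hedged illustration, the following sketch wraps a batch submission in a Polly policy that retries while any document in the response reports a failure (the HTTP 207 case). The `client` and `batch` variables are assumed from the batching sketch above:

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;
using Azure.Search.Documents.Models;
using Polly;

// Retry up to three times with exponential backoff while any document
// in the multi-status response reports a failure.
var retry207 = Policy
    .HandleResult<IndexDocumentsResult>(r => r.Results.Any(x => !x.Succeeded))
    .WaitAndRetryAsync(
        retryCount: 3,
        sleepDurationProvider: attempt => TimeSpan.FromSeconds(Math.Pow(2, attempt)));

IndexDocumentsResult result = await retry207.ExecuteAsync(
    async () => (await client.IndexDocumentsAsync(batch)).Value);
```

Note that this sketch resubmits the whole batch; a more economical version would filter the batch down to just the keys that failed before retrying.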

## Index with indexers and the "pull" APIs

[Indexers](search-indexer-overview.md) have several capabilities that are useful for long-running processes:

+ Batching documents
+ Parallel indexing over partitioned data
+ Scheduling and integration with change detection logic to index just new and changed documents over time

Indexer schedules allow you to parcel out indexing at regular intervals. Scheduled indexing can resume at the last known stopping point: if a data source isn't fully scanned within the processing window, the indexer picks up wherever it left off on the next run.
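
As a small sketch of how a schedule is attached, assuming a hypothetical indexer name and placeholder service details (the service enforces a minimum interval of five minutes):

```csharp
using System;
using Azure;
using Azure.Search.Documents.Indexes;
using Azure.Search.Documents.Indexes.Models;

var indexerClient = new SearchIndexerClient(
    new Uri("https://<your-service>.search.windows.net"),
    new AzureKeyCredential("<admin-api-key>"));

// Run the (hypothetical) indexer every two hours.
SearchIndexer indexer = indexerClient.GetIndexer("<your-indexer>");
indexer.Schedule = new IndexingSchedule(TimeSpan.FromHours(2));
indexerClient.CreateOrUpdateIndexer(indexer);
```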

Partitioning data into smaller individual data sources enables parallel processing. You can break up source data into smaller components, such as multiple containers in Azure Blob Storage, create a [data source](/rest/api/searchservice/create-data-source) for each partition, and then [run the indexers in parallel](search-howto-run-reset-indexers.md), subject to the number of search units of your search service.
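
To make that concrete, here's a hedged sketch that creates one data source and one indexer per blob container, all targeting the same index. The container names, index name, and connection string are hypothetical, and `indexerClient` is the client from the previous sketch:

```csharp
using Azure.Search.Documents.Indexes;
using Azure.Search.Documents.Indexes.Models;

foreach (var container in new[] { "docs-part-1", "docs-part-2", "docs-part-3" })
{
    var dataSource = new SearchIndexerDataSourceConnection(
        name: $"{container}-ds",
        type: SearchIndexerDataSourceType.AzureBlob,
        connectionString: "<storage-connection-string>",
        container: new SearchIndexerDataContainer(container));
    indexerClient.CreateOrUpdateDataSourceConnection(dataSource);

    // A newly created indexer runs immediately, so the partitions index in parallel.
    var indexer = new SearchIndexer(
        name: $"{container}-indexer",
        dataSourceName: dataSource.Name,
        targetIndexName: "<your-index>");
    indexerClient.CreateOrUpdateIndexer(indexer);
}
```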

### Check indexer batch size
@@ -150,7 +115,7 @@ If your data source is an [Azure Blob Storage container](../storage/blobs/storag
1. Specify the same target search index in each indexer.

1. Schedule the indexers.

1. Review indexer status and execution history for confirmation.
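
The status check in the last step can also be done programmatically. A brief sketch, again using the hypothetical names and `indexerClient` from the sketches above:

```csharp
using System;
using Azure.Search.Documents.Indexes.Models;

foreach (var name in new[] { "docs-part-1-indexer", "docs-part-2-indexer", "docs-part-3-indexer" })
{
    // LastResult summarizes the most recent run, including document counts.
    SearchIndexerStatus status = indexerClient.GetIndexerStatus(name);
    IndexerExecutionResult last = status.LastResult;
    Console.WriteLine($"{name}: {last?.Status}, {last?.ItemCount} items, {last?.FailedItemCount} failures");
}
```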
@@ -160,9 +125,27 @@ Second, Azure Cognitive Search doesn't lock the index for updates. Concurrent wr
Although multiple indexer-data-source sets can target the same index, be careful of indexer runs that can overwrite existing values in the index. If a second indexer-data-source targets the same documents and fields, any values from the first run will be overwritten. Field values are replaced in full; an indexer can't merge values from multiple runs into the same field.

## Index big data on Spark

If you have a big data architecture and your data is on a Spark cluster, we recommend [SynapseML for loading and indexing data](search-synapseml-cognitive-services.md). The tutorial includes steps for calling Cognitive Services for AI enrichment, but you can also use the AzureSearchWriter API for text indexing.

## See also

+ [Tips for improving performance](search-performance-tips.md)