
Commit 235345a

Merge pull request #4261 from HeidiSteen/heidist-april2
[azure search] Error 429 running out of storage
2 parents ab8a1e7 + 867fc24

3 files changed: +14 −8 lines changed

articles/search/search-capacity-planning.md

Lines changed: 3 additions & 2 deletions

@@ -11,7 +11,7 @@ ms.custom:
 - ignite-2023
 - ignite-2024
 ms.topic: conceptual
-ms.date: 04/10/2025
+ms.date: 04/22/2025
 ---

 # Estimate and manage capacity of a search service
@@ -53,7 +53,8 @@ A single service must have sufficient resources to handle all workloads (indexin
 Guidelines for determining whether to add capacity include:

 + Meeting the high availability criteria for service-level agreement.
-+ The frequency of HTTP 503 errors is increasing.
++ The frequency of HTTP 503 (Service unavailable) errors is increasing.
++ The frequency of HTTP 429 (Too many requests) errors is increasing, an indication of low storage.
 + Large query volumes are expected.
 + A [one-time upgrade](#how-to-upgrade-capacity) to newer infrastructure and larger partitions isn’t sufficient.
 + The current number of partitions isn’t adequate for indexing workloads.
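The low-storage signal behind those 429 errors can be watched for directly. The sketch below is not part of the commit; it's a minimal example that calls the Get Service Statistics REST endpoint (`GET /servicestats`) and compares storage usage to the quota, assuming a `requests`-based client, an admin key in the `SEARCH_ADMIN_KEY` environment variable, a recent stable `api-version`, and the documented `counters.storageSize` response shape.

```python
import os
import requests

# Placeholder service name and admin key, supplied via environment variables.
endpoint = f"https://{os.environ['SEARCH_SERVICE_NAME']}.search.windows.net"
api_key = os.environ["SEARCH_ADMIN_KEY"]

# Get Service Statistics reports storage consumed against the quota for the current tier.
response = requests.get(
    f"{endpoint}/servicestats",
    params={"api-version": "2024-07-01"},
    headers={"api-key": api_key},
)
response.raise_for_status()
storage = response.json()["counters"]["storageSize"]  # assumed response shape: {"usage": ..., "quota": ...}

usage_pct = 100 * storage["usage"] / storage["quota"]
print(f"Storage used: {storage['usage']} of {storage['quota']} bytes ({usage_pct:.1f}%)")

# A rising share of HTTP 429 responses during indexing, combined with usage near the
# quota, points to adding partitions or upgrading rather than simply retrying.
if usage_pct > 90:
    print("Consider adding partitions or upgrading before indexing more documents.")
```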

articles/search/search-how-to-load-search-index.md

Lines changed: 2 additions & 2 deletions

@@ -9,12 +9,12 @@ ms.author: heidist

 ms.service: azure-ai-search
 ms.topic: how-to
-ms.date: 04/14/2025
+ms.date: 04/22/2025
 ---

 # Load data into a search index in Azure AI Search

-This article explains how to import documents into a predefined search index. In Azure AI Search, a [search index is created first](search-how-to-create-search-index.md) with [data import](search-what-is-data-import.md) following as a second step. The exception is [Import wizards](search-import-data-portal.md) in the Azure portal and indexer pipelines, which create and load an index in one workflow.
+This article explains how to import documents into a predefined search index. In Azure AI Search, a [search index is created first](search-how-to-create-search-index.md) with [data import](search-what-is-data-import.md) following as a second step. The exception is [Import wizards](search-import-data-portal.md) in the Azure portal and [indexer pipelines](search-indexer-overview.md), which create and load an index in one workflow.

 ## How data import works

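For context on the two-step flow that paragraph describes (create the index, then import), here's a minimal push-model sketch using the `azure-search-documents` Python SDK. It isn't part of the changed article; the endpoint, key, index name, and field names are placeholders, and the index schema is assumed to exist already.

```python
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient

# Placeholder endpoint, key, and index name; the index must be created beforehand.
client = SearchClient(
    endpoint="https://<your-service>.search.windows.net",
    index_name="hotels-sample-index",
    credential=AzureKeyCredential("<your-admin-key>"),
)

# Push-model import: documents are sent directly to the existing index.
documents = [
    {"HotelId": "1", "HotelName": "Stay-Kay City Hotel", "Rating": 4.2},
    {"HotelId": "2", "HotelName": "Old Century Hotel", "Rating": 3.6},
]

results = client.upload_documents(documents=documents)
for result in results:
    # Each IndexingResult reports the document key, success flag, and status code.
    print(result.key, result.succeeded, result.status_code)
```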
articles/search/search-howto-reindex.md

Lines changed: 9 additions & 4 deletions

@@ -11,7 +11,7 @@ ms.service: azure-ai-search
 ms.custom:
 - ignite-2024
 ms.topic: how-to
-ms.date: 03/21/2025
+ms.date: 04/22/2025
 ---

 # Update or rebuild an index in Azure AI Search
@@ -24,17 +24,21 @@ For schema changes on applications already in production, we recommend creating

 ## Update content

-Incremental indexing and synchronizing an index against changes in source data is fundamental to most search applications. This section explains the workflow for updating field contents in a search index through the REST API, but the Azure SDKs provide equivalent functionality.
+Incremental indexing and synchronizing an index against changes in source data is fundamental to most search applications. This section explains the workflow for adding, removing, or overwriting the content of a search index through the REST API, but the Azure SDKs provide equivalent functionality.

-The body of the request contains one or more documents to be indexed. Documents are identified by a unique case-sensitive key. Each document is associated with an action: "upload", "delete", "merge", or "mergeOrUpload". Upload requests must include the document data as a set of key/value pairs.
+The body of the request contains one or more documents to be indexed. Within the request, each document in the index is:
+
++ Identified by a unique case-sensitive key.
++ Associated with an action: "upload", "delete", "merge", or "mergeOrUpload".
++ Populated with a set of name/value pairs for each field that you're adding or updating.

 ```json
 {
 "value": [
 {
 "@search.action": "upload (default) | merge | mergeOrUpload | delete",
 "key_field_name": "unique_key_of_document", (key/value pair for key field from index schema)
-"field_name": field_value (key/value pairs matching index schema)
+"field_name": field_value (name/value pairs matching index schema)
 ...
 },
 ...
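To make the template in that hunk concrete, here's a hedged sketch (not part of the commit) that posts a batch with per-document actions to the Documents - Index REST endpoint. The service name, key, index name, and the `HotelId` key field are placeholders.

```python
import requests

endpoint = "https://<your-service>.search.windows.net"  # placeholder
index_name = "hotels-sample-index"                      # placeholder
api_key = "<your-admin-key>"                            # placeholder

# Each document in the batch carries its own action and the key field,
# plus name/value pairs for any fields being added or updated.
payload = {
    "value": [
        {"@search.action": "mergeOrUpload", "HotelId": "1", "Rating": 4.5},
        {"@search.action": "delete", "HotelId": "2"},
    ]
}

response = requests.post(
    f"{endpoint}/indexes/{index_name}/docs/index",
    params={"api-version": "2024-07-01"},
    headers={"Content-Type": "application/json", "api-key": api_key},
    json=payload,
)
# A 200 or 207 response includes a per-document status for each action.
for item in response.json()["value"]:
    print(item["key"], item["status"], item["statusCode"])
```

The remaining hunk in this file adds a 429 row to the per-document status-code table that those statuses map onto.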
@@ -130,6 +134,7 @@ The following table explains the various per-document status codes that can be r
 | 404 | The document couldn't be merged because the given key doesn't exist in the index. | No | This error doesn't occur for uploads since they create new documents, and it doesn't occur for deletes because they're idempotent. |
 | 409 | A version conflict was detected when attempting to index a document.| Yes | This can happen when you're trying to index the same document more than once concurrently. |
 | 422 | The index is temporarily unavailable because it was updated with the 'allowIndexDowntime' flag set to 'true'. | Yes | |
+|429 | Too Many Requests | Yes | If you get this error code during indexing, it usually means that you're running low on storage. As you near [storage limits](search-limits-quotas-capacity.md), the service can enter a state where you can't add or update until you delete some documents. For more information, see [Plan and manage capacity](search-capacity-planning.md#how-to-upgrade-capacity) if you want more storage, or free up space by deleting documents. |
 | 503 | Your search service is temporarily unavailable, possibly due to heavy load. | Yes | Your code should wait before retrying in this case or you risk prolonging the service unavailability.|

 If your client code frequently encounters a 207 response, one possible reason is that the system is under load. You can confirm this by checking the statusCode property for 503. If the statusCode is 503, we recommend throttling indexing requests. Otherwise, if indexing traffic doesn't subside, the system could start rejecting all requests with 503 errors.
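To make the "Retriable" column and the 207/503 throttling advice concrete, here's a hedged sketch (not from the article) that resubmits only the documents whose per-document status code is marked retriable in the table above, backing off between attempts. The client setup and `HotelId` key field are placeholders; note that for 429 caused by low storage, retrying alone won't help until space is freed or capacity is added.

```python
import time

from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient

RETRIABLE = {409, 422, 429, 503}  # retriable per-document status codes from the table above


def upload_with_retry(client: SearchClient, documents: list[dict],
                      key_field: str = "HotelId", max_attempts: int = 5) -> None:
    """Resubmit only the documents whose per-document status code is retriable."""
    pending = documents
    for attempt in range(max_attempts):
        results = client.upload_documents(documents=pending)
        failed_keys = {r.key for r in results
                       if not r.succeeded and r.status_code in RETRIABLE}
        if not failed_keys:
            return
        # Back off before retrying; 503 in particular means the service is under load,
        # and hammering it risks prolonging the unavailability.
        time.sleep(2 ** attempt)
        pending = [doc for doc in pending if doc[key_field] in failed_keys]
    raise RuntimeError(f"{len(pending)} documents still failing after {max_attempts} attempts")


client = SearchClient(
    endpoint="https://<your-service>.search.windows.net",  # placeholder
    index_name="hotels-sample-index",                      # placeholder
    credential=AzureKeyCredential("<your-admin-key>"),     # placeholder
)
upload_with_retry(client, [{"HotelId": "1", "HotelName": "Stay-Kay City Hotel"}])
```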
