
Add configurable index.max_doc_id_length setting (#19075)#20919

Open
sakrah wants to merge 3 commits into opensearch-project:main from sakrah:sakrah/configurable-doc-id-length

Conversation

@sakrah sakrah commented Mar 19, 2026

Description

Introduces a new per-index setting index.max_doc_id_length that allows operators to raise the maximum allowed _id field length beyond the current hard-coded 512-byte default, up to Lucene's MAX_TERM_LENGTH (32766 bytes).

Motivation: The existing 512-byte _id limit was inherited from Elasticsearch for HTTP GET URL ergonomics — not for any technical or performance reason. Workloads with naturally long identifiers (metric paths, URLs, composite keys) are forced to hash these identifiers and store the originals in a separate field, adding storage overhead, query complexity, and write-path logic. Making the limit configurable removes this workaround entirely.
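As background for the byte-based limits above, both the 512-byte default and the 32766-byte ceiling count UTF-8 bytes, not characters, so multi-byte identifiers reach the limit sooner than ASCII ones. A standalone sketch (not code from this PR) illustrating the difference:

```java
import java.nio.charset.StandardCharsets;

public class DocIdLengthDemo {
    // The _id limits are measured in encoded UTF-8 bytes, not in characters.
    static int utf8Length(String id) {
        return id.getBytes(StandardCharsets.UTF_8).length;
    }

    public static void main(String[] args) {
        String ascii = "a".repeat(512); // 512 chars, 1 byte each in UTF-8
        String cjk = "日".repeat(512);  // 512 chars, 3 bytes each in UTF-8
        System.out.println(utf8Length(ascii));
        System.out.println(utf8Length(cjk));
    }
}
```

So a 512-character ID of CJK text already exceeds the 512-byte default by a factor of three, which is exactly the kind of workload the configurable limit helps.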

Implementation — two-tier validation:

  • Request level (IndexRequest / UpdateRequest): limit 32766, the Lucene hard limit. Early rejection before routing prevents IDs that would always fail.
  • Shard level (TransportShardBulkAction): limit index.max_doc_id_length (default 512). Per-index configurable enforcement, applied where index settings are available.
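The two tiers can be illustrated with a minimal standalone sketch. The method name mirrors validateDocIdLength from the PR, but the body here is an assumption for illustration, not the actual implementation:

```java
import java.nio.charset.StandardCharsets;

public class TwoTierValidationDemo {
    static final int LUCENE_MAX_TERM_LENGTH = 32766; // Lucene hard limit
    static final int DEFAULT_MAX_DOC_ID_LENGTH = 512; // per-index default

    // Hypothetical mirror of DocWriteRequest.validateDocIdLength: returns an
    // error message when the id's UTF-8 length exceeds maxLength, else null.
    static String validateDocIdLength(String id, int maxLength) {
        int bytes = id.getBytes(StandardCharsets.UTF_8).length;
        if (bytes > maxLength) {
            return "id is too long, must be no longer than " + maxLength
                + " bytes but was: " + bytes;
        }
        return null;
    }

    public static void main(String[] args) {
        String id = "x".repeat(1024); // 1024-byte ASCII id

        // Tier 1 (request level): only the Lucene hard limit applies,
        // so a 1024-byte id passes and proceeds to routing.
        System.out.println("request tier passes: "
            + (validateDocIdLength(id, LUCENE_MAX_TERM_LENGTH) == null));

        // Tier 2 (shard level): the per-index setting (default 512) rejects it.
        System.out.println("shard tier rejects: "
            + (validateDocIdLength(id, DEFAULT_MAX_DOC_ID_LENGTH) != null));
    }
}
```

The point of the split is that the request-level check can run before the index settings are known, while the configurable check must wait until a shard (and therefore its IndexSettings) is in hand.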

Key design decisions:

  • DELETE operations are exempt from the length check — you must always be able to delete a document regardless of the current setting
  • The setting is Dynamic and IndexScope — can be changed on existing indices without a restart
  • Minimum is 512 (the current default) to prevent accidentally breaking backward compatibility
  • Maximum is 32766 (Lucene's MAX_TERM_LENGTH) — the absolute physical limit
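The min/max bounds above can be sketched in isolation. This is a hypothetical mirror of the range check the settings framework applies; OpenSearch's actual error text will differ:

```java
public class SettingBoundsDemo {
    static final int MIN = 512;   // floor: the pre-existing default, kept for backward compatibility
    static final int MAX = 32766; // ceiling: Lucene's MAX_TERM_LENGTH

    // Hypothetical mirror of the [512, 32766] bounds check applied when
    // index.max_doc_id_length is set; out-of-range values are rejected.
    static int validate(int value) {
        if (value < MIN || value > MAX) {
            throw new IllegalArgumentException("index.max_doc_id_length must be between "
                + MIN + " and " + MAX + ", got " + value);
        }
        return value;
    }

    public static void main(String[] args) {
        System.out.println(validate(2048)); // within range, accepted
        try {
            validate(256); // below the 512 floor
        } catch (IllegalArgumentException e) {
            System.out.println("rejected 256");
        }
        try {
            validate(65536); // above Lucene's physical limit
        } catch (IllegalArgumentException e) {
            System.out.println("rejected 65536");
        }
    }
}
```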

Files changed:

  • IndexSettings.java: new MAX_DOC_ID_LENGTH_SETTING with default 512, min 512, max 32766
  • IndexScopedSettings.java: register the new setting
  • DocWriteRequest.java: parameterize validateDocIdLength(id, maxLength, ...) and add constants
  • IndexRequest.java / UpdateRequest.java: use the Lucene hard limit (32766) for early validation
  • TransportShardBulkAction.java: shard-level enforcement using the per-index setting, skipping DELETEs
  • BulkIntegrationIT.java: integration tests for the default limit, custom limit, dynamic update, and delete-with-long-id
  • IndexRequestTests.java / BulkRequestTests.java: updated unit tests for two-tier validation
  • CHANGELOG.md: entry under [Unreleased 3.x] > Added

Related Issues

Resolves #19075

Check List

  • Functionality includes testing.
  • API changes companion pull request created, if applicable.
  • Public documentation issue/PR created, if applicable.

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
For more information on following Developer Certificate of Origin and signing off your commits, please check here.

…19075)

Introduce a per-index setting index.max_doc_id_length that allows
operators to raise the maximum allowed _id length beyond the current
hard-coded 512-byte default, up to Lucene MAX_TERM_LENGTH (32766 bytes).

This enables workloads with naturally long identifiers—metric paths,
URLs, composite keys—to use them as _id directly, avoiding the need
to hash the identifier and store the original in a separate field.

The implementation uses two-tier validation:
- Request level: rejects IDs exceeding Lucene hard limit (32766 bytes)
  for early failure before routing.
- Shard level: enforces the per-index index.max_doc_id_length setting
  (default 512) from TransportShardBulkAction where index settings are
  available.

The setting is dynamic and index-scoped, so it can be changed on
existing indices without a restart.

Signed-off-by: Sam Akrah <sakrah@uber.com>
Made-with: Cursor
github-actions bot commented Mar 19, 2026

PR Reviewer Guide 🔍

(Review updated until commit 0b69a6c)

Here are some key observations to aid the review process:

🧪 PR contains tests
🔒 No security concerns identified
✅ No TODO sections
🔀 Multiple PR themes

Sub-PR theme: Add index.max_doc_id_length setting definition and registration

Relevant files:

  • server/src/main/java/org/opensearch/index/IndexSettings.java
  • server/src/main/java/org/opensearch/common/settings/IndexScopedSettings.java
  • CHANGELOG.md

Sub-PR theme: Enforce two-tier doc ID length validation at request and shard level

Relevant files:

  • server/src/main/java/org/opensearch/action/DocWriteRequest.java
  • server/src/main/java/org/opensearch/action/index/IndexRequest.java
  • server/src/main/java/org/opensearch/action/update/UpdateRequest.java
  • server/src/main/java/org/opensearch/action/bulk/TransportShardBulkAction.java

Sub-PR theme: Add tests for configurable doc ID length validation

Relevant files:

  • server/src/test/java/org/opensearch/action/bulk/BulkRequestTests.java
  • server/src/test/java/org/opensearch/action/index/IndexRequestTests.java
  • server/src/internalClusterTest/java/org/opensearch/action/bulk/BulkIntegrationIT.java

⚡ Recommended focus areas for review

Validation Bypass

The shard-level validation only checks INDEX and UPDATE operations (skips DELETE). However, the request-level validation in IndexRequest and UpdateRequest uses the Lucene hard limit (32766), not the index-level setting. This means IDs between 513 and 32766 bytes will pass request-level validation and only be caught at the shard level. If the shard-level check is somehow bypassed (e.g., direct shard routing, replicas), the enforcement may be inconsistent.

final DocWriteRequest<?> currentRequest = context.getCurrent();
if (currentRequest.opType() != DocWriteRequest.OpType.DELETE) {
    final int maxDocIdLength = context.getPrimary().indexSettings().getMaxDocIdLength();
    final String docId = currentRequest.id();
    if (docId != null) {
        final int docIdLength = org.apache.lucene.util.UnicodeUtil.calcUTF16toUTF8Length(docId, 0, docId.length());
        if (docIdLength > maxDocIdLength) {
            final Engine.Result result = new Engine.IndexResult(
                new IllegalArgumentException(
                    "id [" + docId + "] is too long, must be no longer than " + maxDocIdLength + " bytes but was: " + docIdLength
                ),
                currentRequest.version()
            );
            context.setRequestToExecute(currentRequest);
            context.markOperationAsExecuted(result);
            context.markAsCompleted(context.getExecutionResult());
            return true;
        }
    }
}
Replica Enforcement

The doc ID length validation is added only in executeBulkItemRequest, which runs on the primary shard. It is unclear whether replica shards also enforce this check. If the setting is lowered after documents with long IDs are indexed, replicas may behave differently from the primary during recovery or replication.

if (currentRequest.opType() != DocWriteRequest.OpType.DELETE) {
    final int maxDocIdLength = context.getPrimary().indexSettings().getMaxDocIdLength();
    final String docId = currentRequest.id();
    if (docId != null) {
        final int docIdLength = org.apache.lucene.util.UnicodeUtil.calcUTF16toUTF8Length(docId, 0, docId.length());
        if (docIdLength > maxDocIdLength) {
            final Engine.Result result = new Engine.IndexResult(
                new IllegalArgumentException(
                    "id [" + docId + "] is too long, must be no longer than " + maxDocIdLength + " bytes but was: " + docIdLength
                ),
                currentRequest.version()
            );
            context.setRequestToExecute(currentRequest);
            context.markOperationAsExecuted(result);
            context.markAsCompleted(context.getExecutionResult());
            return true;
        }
    }
}
Minimum Value Constraint

The MAX_DOC_ID_LENGTH_SETTING enforces a minimum of 512, which means operators cannot lower the limit below 512. This is intentional for backward compatibility, but it should be clearly documented and validated. If an operator tries to set it below 512, the error message from the settings framework may not be user-friendly.

public static final Setting<Integer> MAX_DOC_ID_LENGTH_SETTING = Setting.intSetting(
    "index.max_doc_id_length",
    512,
    512,
    32766,
    Property.Dynamic,
    Property.IndexScope
);
Missing Cleanup

Integration tests create indices like testing, testing_custom_id_length, testing_dynamic_id_length, and testing_delete_long_id without cleanup. If tests run in a shared cluster environment, leftover indices from one test could interfere with others, especially testDocIdTooLongDefaultSetting which uses the generic name testing.

public void testDocIdTooLongDefaultSetting() {
    String index = "testing";
    createIndex(index);
    String validId = String.join("", Collections.nCopies(512, "a"));
    String invalidId = String.join("", Collections.nCopies(513, "a"));

    // Index Request - valid id should succeed
    IndexRequest indexRequest = new IndexRequest(index).source(Collections.singletonMap("foo", "baz"));
    assertFalse(client().prepareBulk().add(indexRequest.id(validId)).get().hasFailures());

    // Index Request - id exceeding default limit (512) is rejected at shard level
    BulkResponse bulkResponse = client().prepareBulk().add(indexRequest.id(invalidId)).get();
    assertTrue(bulkResponse.hasFailures());
    assertThat(bulkResponse.getItems()[0].getFailureMessage(), containsString("is too long, must be no longer than 512 bytes"));

    // Update Request - valid id should succeed
    UpdateRequest updateRequest = new UpdateRequest(index, validId).doc("reason", "no source");
    assertFalse(client().prepareBulk().add(updateRequest).get().hasFailures());

    // Update Request - id exceeding default limit is rejected at shard level
    bulkResponse = client().prepareBulk().add(updateRequest.id(invalidId)).get();
    assertTrue(bulkResponse.hasFailures());
    assertThat(bulkResponse.getItems()[0].getFailureMessage(), containsString("is too long, must be no longer than 512 bytes"));
}

public void testDocIdWithCustomMaxLength() {
    String index = "testing_custom_id_length";
    createIndex(index, Settings.builder().put("index.max_doc_id_length", 2048).build());
    String longId = String.join("", Collections.nCopies(1024, "a"));
    String tooLongId = String.join("", Collections.nCopies(2049, "a"));

    // 1024-byte ID should succeed with custom limit of 2048
    IndexRequest indexRequest = new IndexRequest(index).source(Collections.singletonMap("foo", "baz"));
    assertFalse(client().prepareBulk().add(indexRequest.id(longId)).get().hasFailures());

    // 2049-byte ID should be rejected
    BulkResponse bulkResponse = client().prepareBulk().add(indexRequest.id(tooLongId)).get();
    assertTrue(bulkResponse.hasFailures());
    assertThat(bulkResponse.getItems()[0].getFailureMessage(), containsString("is too long, must be no longer than 2048 bytes"));
}

public void testDocIdMaxLengthDynamicUpdate() {
    String index = "testing_dynamic_id_length";
    createIndex(index);
    String longId = String.join("", Collections.nCopies(1024, "a"));

    // 1024-byte ID should be rejected with the default limit of 512
    IndexRequest indexRequest = new IndexRequest(index).source(Collections.singletonMap("foo", "baz"));
    BulkResponse bulkResponse = client().prepareBulk().add(indexRequest.id(longId)).get();
    assertTrue(bulkResponse.hasFailures());
    assertThat(bulkResponse.getItems()[0].getFailureMessage(), containsString("is too long, must be no longer than 512 bytes"));

    // Dynamically raise the limit
    client().admin().indices().prepareUpdateSettings(index).setSettings(Settings.builder().put("index.max_doc_id_length", 2048)).get();

    // Same 1024-byte ID should now succeed
    assertFalse(client().prepareBulk().add(indexRequest.id(longId)).get().hasFailures());
}

public void testDeleteWithLongDocIdAllowed() {
    String index = "testing_delete_long_id";
    createIndex(index, Settings.builder().put("index.max_doc_id_length", 2048).build());
    String longId = String.join("", Collections.nCopies(1024, "a"));

    // Index a document with a long ID
    IndexRequest indexRequest = new IndexRequest(index).id(longId).source(Collections.singletonMap("foo", "baz"));
    assertFalse(client().prepareBulk().add(indexRequest).get().hasFailures());

    // Lower the limit below the existing doc's ID length
    client().admin().indices().prepareUpdateSettings(index).setSettings(Settings.builder().put("index.max_doc_id_length", 512)).get();

    // DELETE should still succeed even though the ID exceeds the current limit
    DeleteRequest deleteRequest = new DeleteRequest(index, longId);
    BulkResponse bulkResponse = client().prepareBulk().add(deleteRequest).get();
    assertFalse(bulkResponse.hasFailures());
}

github-actions bot commented Mar 19, 2026

PR Code Suggestions ✨

Latest suggestions up to 0b69a6c

Explore these optional code suggestions:

Category: General
Reference shared constants for setting bounds

Setting the minimum value to 512 prevents users from lowering the limit below the
default, which is fine for new indices. However, it also means that if a user has
existing documents with IDs between 1 and 511 bytes and tries to set the limit to,
say, 256, it will be rejected at the settings level with a confusing error. More
importantly, the minimum of 512 means the setting cannot be used to restrict IDs to
less than 512 bytes, which may be an intentional design choice but should be clearly
documented. If the intent is to only allow raising the limit, the minimum should be
validated against DEFAULT_MAX_DOC_ID_LENGTH explicitly and the Javadoc should state
this constraint clearly.

server/src/main/java/org/opensearch/index/IndexSettings.java [370-377]

 public static final Setting<Integer> MAX_DOC_ID_LENGTH_SETTING = Setting.intSetting(
     "index.max_doc_id_length",
-    512,
-    512,
-    32766,
+    DocWriteRequest.DEFAULT_MAX_DOC_ID_LENGTH,
+    DocWriteRequest.DEFAULT_MAX_DOC_ID_LENGTH,
+    DocWriteRequest.MAX_DOC_ID_LENGTH_HARD_LIMIT,
     Property.Dynamic,
     Property.IndexScope
 );
Suggestion importance[1-10]: 5


Why: Using DocWriteRequest.DEFAULT_MAX_DOC_ID_LENGTH and MAX_DOC_ID_LENGTH_HARD_LIMIT constants instead of magic numbers improves maintainability and ensures consistency across the codebase. This is a valid improvement but has moderate impact.

Impact: Low
Verify delete operation actually removes document

The test lowers the limit to 512 but the longId is 1024 bytes. The
TransportShardBulkAction code explicitly skips the doc-id length check for DELETE
operations, so this test is valid. However, the test does not verify that the
document was actually deleted (e.g., by checking the response item's result is
DELETED or by fetching the document afterward). Without this, the test only confirms
no failure, not that the delete was actually executed.

server/src/internalClusterTest/java/org/opensearch/action/bulk/BulkIntegrationIT.java [291-297]

-// Lower the limit below the existing doc's ID length
-client().admin().indices().prepareUpdateSettings(index).setSettings(Settings.builder().put("index.max_doc_id_length", 512)).get();
-
 // DELETE should still succeed even though the ID exceeds the current limit
 DeleteRequest deleteRequest = new DeleteRequest(index, longId);
 BulkResponse bulkResponse = client().prepareBulk().add(deleteRequest).get();
 assertFalse(bulkResponse.hasFailures());
+assertThat(bulkResponse.getItems()[0].getResponse().getResult(), equalTo(DocWriteResponse.Result.DELETED));
Suggestion importance[1-10]: 4


Why: Adding an assertion that the document was actually deleted strengthens the test's correctness verification. However, it requires importing DocWriteResponse and is a minor test quality improvement.

Impact: Low

Category: Possible issue
Use correct result type per operation type

When the opType is CREATE or UPDATE, using Engine.IndexResult to represent the
failure may be incorrect. An UPDATE operation that fails doc-id validation should
produce an Engine.IndexResult only if it resolves to an index; otherwise it may need
a different result type. More critically, the error is silently swallowed as a
shard-level failure without propagating the correct failure type for UPDATE
operations, which could confuse callers. Consider checking the opType and using the
appropriate Engine.Result subtype (e.g., Engine.IndexResult for INDEX/CREATE, and
handling UPDATE separately).

server/src/main/java/org/opensearch/action/bulk/TransportShardBulkAction.java [610-615]

-final Engine.Result result = new Engine.IndexResult(
-    new IllegalArgumentException(
-        "id [" + docId + "] is too long, must be no longer than " + maxDocIdLength + " bytes but was: " + docIdLength
-    ),
-    currentRequest.version()
-);
+final Engine.Result result;
+if (opType == DocWriteRequest.OpType.INDEX || opType == DocWriteRequest.OpType.CREATE) {
+    result = new Engine.IndexResult(
+        new IllegalArgumentException(
+            "id [" + docId + "] is too long, must be no longer than " + maxDocIdLength + " bytes but was: " + docIdLength
+        ),
+        currentRequest.version()
+    );
+} else {
+    // UPDATE: treat as an index result since it resolves to an index operation
+    result = new Engine.IndexResult(
+        new IllegalArgumentException(
+            "id [" + docId + "] is too long, must be no longer than " + maxDocIdLength + " bytes but was: " + docIdLength
+        ),
+        currentRequest.version()
+    );
+}
Suggestion importance[1-10]: 2


Why: The suggestion proposes splitting the result creation by opType, but the improved code still uses Engine.IndexResult for both branches, making it functionally identical to the original. The suggestion doesn't actually fix anything meaningful.

Impact: Low

Previous suggestions

Suggestions up to commit 84f7372
Category: Possible issue
Use correct result type for each operation type

The shard-level validation only creates an Engine.IndexResult for all operation
types, but the current request could also be a DELETE or UPDATE operation. Using
Engine.IndexResult for non-index operations may cause incorrect behavior or
ClassCastExceptions downstream. The result type should match the actual operation
type (e.g., Engine.DeleteResult for deletes).

server/src/main/java/org/opensearch/action/bulk/TransportShardBulkAction.java [609-618]

-final Engine.Result result = new Engine.IndexResult(
-    new IllegalArgumentException(
-        "id [" + docId + "] is too long, must be no longer than " + maxDocIdLength + " bytes but was: " + docIdLength
-    ),
-    0
-);
+final Engine.Result result;
+if (currentRequest.opType() == DocWriteRequest.OpType.DELETE) {
+    result = new Engine.DeleteResult(
+        new IllegalArgumentException(
+            "id [" + docId + "] is too long, must be no longer than " + maxDocIdLength + " bytes but was: " + docIdLength
+        ),
+        0
+    );
+} else {
+    result = new Engine.IndexResult(
+        new IllegalArgumentException(
+            "id [" + docId + "] is too long, must be no longer than " + maxDocIdLength + " bytes but was: " + docIdLength
+        ),
+        0
+    );
+}
 context.setRequestToExecute(currentRequest);
 context.markOperationAsExecuted(result);
 context.markAsCompleted(context.getExecutionResult());
 return true;
Suggestion importance[1-10]: 6


Why: The code always creates an Engine.IndexResult even for DELETE or UPDATE operations, which could cause incorrect behavior downstream. However, looking at the PR context, the bulk operation types handled here are primarily INDEX and UPDATE (not DELETE at this level), so the risk may be limited, but it's still a valid concern worth addressing.

Impact: Low

Category: General
Prevent setting max doc ID length below default

The minimum value of 1 allows setting the max doc ID length to 1 byte, which would
make it impossible to index any document with a non-trivial ID. A more reasonable
minimum (e.g., 512 to preserve backward compatibility and prevent accidental
misconfiguration) should be enforced, or at least the minimum should be documented
as a potential footgun.

server/src/main/java/org/opensearch/index/IndexSettings.java [369-376]

 public static final Setting<Integer> MAX_DOC_ID_LENGTH_SETTING = Setting.intSetting(
     "index.max_doc_id_length",
     512,
-    1,
+    512,
     32766,
     Property.Dynamic,
     Property.IndexScope
 );
Suggestion importance[1-10]: 4


Why: Setting the minimum to 1 could allow misconfiguration that makes indexing nearly impossible. However, there may be valid use cases for smaller limits, and changing the minimum to 512 would be a breaking change for anyone who intentionally sets a lower value. This is a design decision rather than a clear bug.

Impact: Low
Suggestions up to commit 20e35a3
Category: Possible issue
Handle all operation types in shard-level ID check

The shard-level doc ID check only creates an Engine.IndexResult, but bulk operations
can also be DELETE or UPDATE op types. Using Engine.IndexResult for non-index
operations may cause incorrect result handling or ClassCastExceptions downstream.
Consider checking the op type and creating the appropriate Engine.Result subtype
(e.g., Engine.DeleteResult for deletes), or using a more generic failure path.

server/src/main/java/org/opensearch/action/bulk/TransportShardBulkAction.java [609-619]

-final Engine.Result result = new Engine.IndexResult(
-    new IllegalArgumentException(
-        "id [" + docId + "] is too long, must be no longer than " + maxDocIdLength + " bytes but was: " + docIdLength
-    ),
-    0
-);
+final Engine.Result result;
+if (currentRequest.opType() == DocWriteRequest.OpType.DELETE) {
+    result = new Engine.DeleteResult(
+        new IllegalArgumentException(
+            "id [" + docId + "] is too long, must be no longer than " + maxDocIdLength + " bytes but was: " + docIdLength
+        ),
+        0
+    );
+} else {
+    result = new Engine.IndexResult(
+        new IllegalArgumentException(
+            "id [" + docId + "] is too long, must be no longer than " + maxDocIdLength + " bytes but was: " + docIdLength
+        ),
+        0
+    );
+}
 context.setRequestToExecute(currentRequest);
 context.markOperationAsExecuted(result);
 context.markAsCompleted(context.getExecutionResult());
 return true;
Suggestion importance[1-10]: 6


Why: The PR uses Engine.IndexResult for all op types including DELETE and UPDATE, which could cause issues downstream when the result is cast to the expected type. However, DELETE operations with a doc ID exceeding the limit are an edge case, and the suggestion's impact depends on how results are consumed downstream.

Impact: Low

Category: General
Raise minimum allowed doc ID length setting

The minimum value of 1 allows setting index.max_doc_id_length to extremely small
values (e.g., 1 byte), which would make the index nearly unusable and could break
existing documents. Consider raising the minimum to a more practical value (e.g.,
512, the original default) to prevent accidental misconfiguration that could reject
all document IDs.

server/src/main/java/org/opensearch/index/IndexSettings.java [369-376]

 public static final Setting<Integer> MAX_DOC_ID_LENGTH_SETTING = Setting.intSetting(
     "index.max_doc_id_length",
     512,
-    1,
+    512,
     32766,
     Property.Dynamic,
     Property.IndexScope
 );
Suggestion importance[1-10]: 4


Why: Setting index.max_doc_id_length to a very small value like 1 could make an index unusable, but this is a configuration concern rather than a bug. Raising the minimum to 512 would prevent accidental misconfiguration but also removes flexibility for users who may have legitimate reasons for smaller values.

Impact: Low
Avoid duplicating default doc ID length literal

DEFAULT_MAX_DOC_ID_LENGTH is declared but appears unused in the PR —
IndexSettings.MAX_DOC_ID_LENGTH_SETTING uses the literal 512 as its default, and
validateDocIdLength callers pass MAX_DOC_ID_LENGTH_HARD_LIMIT. To avoid drift, the
IndexSettings setting default should reference this constant rather than duplicating
the literal.

server/src/main/java/org/opensearch/action/DocWriteRequest.java [249]

-static final int DEFAULT_MAX_DOC_ID_LENGTH = 512;
+// In IndexSettings.java, reference the constant:
+public static final Setting<Integer> MAX_DOC_ID_LENGTH_SETTING = Setting.intSetting(
+    "index.max_doc_id_length",
+    DocWriteRequest.DEFAULT_MAX_DOC_ID_LENGTH,
+    1,
+    32766,
+    Property.Dynamic,
+    Property.IndexScope
+);
Suggestion importance[1-10]: 3


Why: The DEFAULT_MAX_DOC_ID_LENGTH constant is indeed unused since IndexSettings uses the literal 512 directly. Referencing the constant would reduce duplication, but this is a minor maintainability improvement. The improved_code modifies a different file than the existing_code, which is slightly inconsistent.

Impact: Low

@github-actions

❌ Gradle check result for 20e35a3: FAILURE

Please examine the workflow log, locate, and copy-paste the failure(s) below, then iterate to green. Is the failure a flaky test unrelated to your change?

@github-actions

Persistent review updated to latest commit 84f7372

@github-actions

Persistent review updated to latest commit 0b69a6c

github-actions bot added the "enhancement" and "Indexing" labels on Mar 19, 2026
@github-actions

❌ Gradle check result for 0b69a6c: FAILURE

Please examine the workflow log, locate, and copy-paste the failure(s) below, then iterate to green. Is the failure a flaky test unrelated to your change?


Labels

  • enhancement: Enhancement or improvement to existing feature or request
  • Indexing: Indexing, Bulk Indexing and anything related to indexing
  • lucene


Development

Successfully merging this pull request may close these issues.

[Feature Request] Raise or remove the length limit on _id

2 participants