
Conversation

dimitris-athanasiou
Contributor

Previously, the `inference_id` of a `semantic_text` field was not updatable. This commit allows users to update the `inference_id` of a `semantic_text` field. This is particularly useful when a user wants to switch to the same model served by a different service.

There are two circumstances in which the update is allowed:

  • No values have been written for the `semantic_text` field. The inference endpoint can then be changed freely, as there is no need for compatibility between the current and the new endpoint.

  • The new inference endpoint is compatible with the previous one. The `model_settings` of the new inference endpoint are compatible with those of the current endpoint, so the update is allowed.
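
For illustration, the update is an ordinary mapping update that re-declares the field with the new endpoint. In the YAML REST test dialect used later in this PR, a sketch looks like this (the index, field, and endpoint names are illustrative):

```yaml
  - do:
      indices.put_mapping:
        index: test-index
        body:
          properties:
            sparse_field:
              type: semantic_text
              inference_id: sparse-inference-id-2   # the new endpoint; must be compatible unless no values have been indexed
```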

@dimitris-athanasiou added the >enhancement, :SearchOrg/Relevance, :Search Foundations/Search, and v9.3.0 labels on Oct 7, 2025
@elasticsearchmachine added the Team:Search Foundations and Team:Search - Relevance labels on Oct 7, 2025
@elasticsearchmachine
Collaborator

Pinging @elastic/search-relevance (Team:Search - Relevance)

@elasticsearchmachine
Collaborator

Pinging @elastic/es-search-foundations (Team:Search Foundations)

@elasticsearchmachine
Collaborator

Hi @dimitris-athanasiou, I've created a changelog YAML for you.

@dimitris-athanasiou
Contributor Author

PR is ready for review. However, I intend to add documentation changes soon.

@@ -0,0 +1,5 @@
pr: 136120
summary: Allow updating `inference_id` of `semantic_text` fields
area: "Search"
Contributor Author

Search or Mapping?

Member

I would say mapping

Contributor

github-actions bot commented Oct 8, 2025

🔍 Preview links for changed docs

Contributor

github-actions bot commented Oct 8, 2025

ℹ️ Important: Docs version tagging

👋 Thanks for updating the docs! Just a friendly reminder that our docs are now cumulative. This means all 9.x versions are documented on the same page and published off of the main branch, instead of creating separate pages for each minor version.

We use applies_to tags to mark version-specific features and changes.

Quick overview

When to use applies_to tags:

✅ At the page level to indicate which products/deployments the content applies to (mandatory)
✅ When features change state (e.g. preview, ga) in a specific version
✅ When availability differs across deployments and environments

What NOT to do:

❌ Don't remove or replace information that applies to an older version
❌ Don't add new information that applies to a specific version without an applies_to tag
❌ Don't forget that applies_to tags can be used at the page, section, and inline level

🤔 Need help?

Contributor

@leemthompo left a comment

Couple of very minor wording suggestions from me :)

to create the endpoint. If `search_inference_id` is specified, the {{infer}}
endpoint will only be used at index time.

::::{applies-switch}
Contributor

@leemthompo Oct 8, 2025

Forgot to say that I think you can indent this whole applies-switch section a little bit, otherwise LGTM :)

Contributor Author

I can't seem to do this.

Contributor

no worries :)

Member

@kderusso left a comment

Nice work! I've left a few comments but I think it's pretty close.


* @param modelSettings the new model settings. If null the mapper will be returned unchanged.
* @return A mapper with the copied settings applied
*/
private SemanticTextFieldMapper copyWithNewModelSettingsIfNotSet(
Member

So we're doing a null check here before we copy the settings, but we silently ignore if that case happens. Should we throw if it's not null and this method is called?

Contributor Author

It could be that it's not null because the update itself contains model_settings. In that case the explicitly set model_settings take precedence.
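
To make the precedence concrete, here is a rough sketch of the described behaviour; the parameters, the helper name, and the copy step are hypothetical and not the actual implementation in this PR:

```java
private SemanticTextFieldMapper copyWithNewModelSettingsIfNotSet(SemanticTextFieldMapper mapper, MinimalServiceSettings newModelSettings) {
    if (newModelSettings == null) {
        // Nothing was resolved for the new endpoint; return the mapper unchanged.
        return mapper;
    }
    if (mapper.fieldType().getModelSettings() != null) {
        // The mapping update explicitly declared model_settings; those take precedence,
        // so the resolved settings are silently ignored rather than throwing.
        return mapper;
    }
    return copyWithModelSettings(mapper, newModelSettings); // hypothetical helper that rebuilds the mapper with the new settings
}
```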

Member

Didn't review this file, assume it's the same as the other yaml test

Contributor

@Mikep86 left a comment

Partial review, didn't get to the tests. I'll pick it up next week.

Co-authored-by: Mike Pellegrini <[email protected]>
Contributor

@Mikep86 left a comment

Production code looks good! I left some comments about test adjustments, other than that we're good to go 🚀

mapping(b -> b.startObject(fieldName).field("type", "semantic_text").field("inference_id", "test_model").endObject()),
useLegacyFormat
);
assertSemanticTextField(mapperService, fieldName, false, null, null);
Contributor

Can we add a call to assertInferenceEndpoints here to round out this test case?

Comment on lines +575 to +608
public void testUpdateInferenceId_GivenCurrentHasNoModelSettingsAndNewIsIncompatibleTaskType_ShouldSucceed() throws IOException {
String fieldName = randomAlphaOfLengthBetween(5, 15);
String oldInferenceId = "old_inference_id";
MinimalServiceSettings previousModelSettings = MinimalServiceSettings.sparseEmbedding("previous_service");
givenModelSettings(oldInferenceId, previousModelSettings);
var mapperService = createMapperService(
mapping(
b -> b.startObject(fieldName)
.field("type", SemanticTextFieldMapper.CONTENT_TYPE)
.field(INFERENCE_ID_FIELD, oldInferenceId)
.endObject()
),
useLegacyFormat
);

assertInferenceEndpoints(mapperService, fieldName, oldInferenceId, oldInferenceId);
assertSemanticTextField(mapperService, fieldName, false, null, null);

String newInferenceId = "new_inference_id";
MinimalServiceSettings newModelSettings = MinimalServiceSettings.textEmbedding(
"new_service",
48,
SimilarityMeasure.L2_NORM,
DenseVectorFieldMapper.ElementType.BIT
);
givenModelSettings(newInferenceId, newModelSettings);
merge(
mapperService,
mapping(b -> b.startObject(fieldName).field("type", "semantic_text").field("inference_id", newInferenceId).endObject())
);

assertInferenceEndpoints(mapperService, fieldName, newInferenceId, newInferenceId);
assertSemanticTextField(mapperService, fieldName, false, null, null);
}
Contributor

A couple of points on this test:

  • Can we check that the embeddings field mapper registered with the mapper service is as expected, both after initially creating the mapper service and after updating it? We can check with a call like mapperService.mappingLookup().getMapper(SemanticTextField.getEmbeddingsFieldName(fieldName))
  • I think this is a great place for us to utilize randomization. We should be able to update from any model to any other model. So, we could randomly generate both the initial and updated model using TestModel (we would only take the service settings in this case) and the update should always succeed. Iterate over that a few times in the test and we get decent coverage.

Comment on lines +610 to +890
public void testUpdateInferenceId_GivenCurrentHasSparseModelSettingsAndNewIsCompatible() throws IOException {
String fieldName = randomAlphaOfLengthBetween(5, 15);
String oldInferenceId = "old_inference_id";
MinimalServiceSettings previousModelSettings = MinimalServiceSettings.sparseEmbedding("previous_service");
givenModelSettings(oldInferenceId, previousModelSettings);
MapperService mapperService = mapperServiceForFieldWithModelSettings(fieldName, oldInferenceId, previousModelSettings);

assertInferenceEndpoints(mapperService, fieldName, oldInferenceId, oldInferenceId);
assertSemanticTextField(mapperService, fieldName, true, null, null);

String newInferenceId = "new_inference_id";
MinimalServiceSettings newModelSettings = MinimalServiceSettings.sparseEmbedding("new_service");
givenModelSettings(newInferenceId, newModelSettings);
merge(
mapperService,
mapping(b -> b.startObject(fieldName).field("type", "semantic_text").field("inference_id", newInferenceId).endObject())
);

assertInferenceEndpoints(mapperService, fieldName, newInferenceId, newInferenceId);
assertSemanticTextField(mapperService, fieldName, true, null, null);
SemanticTextFieldMapper semanticFieldMapper = getSemanticFieldMapper(mapperService, fieldName);
assertThat(semanticFieldMapper.fieldType().getModelSettings(), equalTo(newModelSettings));
}

public void testUpdateInferenceId_GivenCurrentHasSparseModelSettingsAndNewSetsDefault() throws IOException {
String fieldName = randomAlphaOfLengthBetween(5, 15);
String oldInferenceId = "old_inference_id";
MinimalServiceSettings previousModelSettings = MinimalServiceSettings.sparseEmbedding("previous_service");
givenModelSettings(oldInferenceId, previousModelSettings);
MapperService mapperService = mapperServiceForFieldWithModelSettings(fieldName, oldInferenceId, previousModelSettings);

assertInferenceEndpoints(mapperService, fieldName, oldInferenceId, oldInferenceId);
assertSemanticTextField(mapperService, fieldName, true, null, null);

MinimalServiceSettings newModelSettings = MinimalServiceSettings.sparseEmbedding("new_service");
givenModelSettings(DEFAULT_ELSER_2_INFERENCE_ID, newModelSettings);
merge(mapperService, mapping(b -> b.startObject(fieldName).field("type", "semantic_text").endObject()));

assertInferenceEndpoints(mapperService, fieldName, DEFAULT_ELSER_2_INFERENCE_ID, DEFAULT_ELSER_2_INFERENCE_ID);
assertSemanticTextField(mapperService, fieldName, true, null, null);
SemanticTextFieldMapper semanticFieldMapper = getSemanticFieldMapper(mapperService, fieldName);
assertThat(semanticFieldMapper.fieldType().getModelSettings(), equalTo(newModelSettings));
}

public void testUpdateInferenceId_GivenCurrentHasSparseModelSettingsAndNewIsIncompatibleTaskType() throws IOException {
String fieldName = randomAlphaOfLengthBetween(5, 15);
String oldInferenceId = "old_inference_id";
MinimalServiceSettings previousModelSettings = MinimalServiceSettings.sparseEmbedding("previous_service");
givenModelSettings(oldInferenceId, previousModelSettings);
MapperService mapperService = mapperServiceForFieldWithModelSettings(fieldName, oldInferenceId, previousModelSettings);

assertInferenceEndpoints(mapperService, fieldName, oldInferenceId, oldInferenceId);
assertSemanticTextField(mapperService, fieldName, true, null, null);

String newInferenceId = "new_inference_id";
MinimalServiceSettings newModelSettings = MinimalServiceSettings.textEmbedding(
"new_service",
48,
SimilarityMeasure.L2_NORM,
DenseVectorFieldMapper.ElementType.BIT
);
givenModelSettings(newInferenceId, newModelSettings);

Exception exc = expectThrows(
IllegalArgumentException.class,
() -> merge(
mapperService,
mapping(b -> b.startObject(fieldName).field("type", "semantic_text").field("inference_id", newInferenceId).endObject())
)
);

assertThat(
exc.getMessage(),
containsString(
"Cannot update [semantic_text] field ["
+ fieldName
+ "] because inference endpoint ["
+ oldInferenceId
+ "] with model settings ["
+ previousModelSettings
+ "] is not compatible with new inference endpoint ["
+ newInferenceId
+ "] with model settings ["
+ newModelSettings
+ "]"
)
);
}

public void testUpdateInferenceId_GivenCurrentHasDenseModelSettingsAndNewIsCompatible() throws IOException {
String fieldName = randomAlphaOfLengthBetween(5, 15);
String oldInferenceId = "old_inference_id";
MinimalServiceSettings previousModelSettings = MinimalServiceSettings.textEmbedding(
"previous_service",
48,
SimilarityMeasure.L2_NORM,
DenseVectorFieldMapper.ElementType.BIT
);
givenModelSettings(oldInferenceId, previousModelSettings);
MapperService mapperService = mapperServiceForFieldWithModelSettings(fieldName, oldInferenceId, previousModelSettings);

assertInferenceEndpoints(mapperService, fieldName, oldInferenceId, oldInferenceId);
assertSemanticTextField(mapperService, fieldName, true, null, null);

String newInferenceId = "new_inference_id";
MinimalServiceSettings newModelSettings = MinimalServiceSettings.textEmbedding(
"new_service",
48,
SimilarityMeasure.L2_NORM,
DenseVectorFieldMapper.ElementType.BIT
);
givenModelSettings(newInferenceId, newModelSettings);
merge(
mapperService,
mapping(b -> b.startObject(fieldName).field("type", "semantic_text").field("inference_id", newInferenceId).endObject())
);

assertInferenceEndpoints(mapperService, fieldName, newInferenceId, newInferenceId);
assertSemanticTextField(mapperService, fieldName, true, null, null);
SemanticTextFieldMapper semanticFieldMapper = getSemanticFieldMapper(mapperService, fieldName);
assertThat(semanticFieldMapper.fieldType().getModelSettings(), equalTo(newModelSettings));
}

public void testUpdateInferenceId_GivenCurrentHasDenseModelSettingsAndNewIsIncompatibleTaskType() throws IOException {
String fieldName = randomAlphaOfLengthBetween(5, 15);
String oldInferenceId = "old_inference_id";
MinimalServiceSettings previousModelSettings = MinimalServiceSettings.textEmbedding(
"previous_service",
48,
SimilarityMeasure.L2_NORM,
DenseVectorFieldMapper.ElementType.BIT
);
givenModelSettings(oldInferenceId, previousModelSettings);
MapperService mapperService = mapperServiceForFieldWithModelSettings(fieldName, oldInferenceId, previousModelSettings);

assertInferenceEndpoints(mapperService, fieldName, oldInferenceId, oldInferenceId);
assertSemanticTextField(mapperService, fieldName, true, null, null);

String newInferenceId = "new_inference_id";
MinimalServiceSettings newModelSettings = MinimalServiceSettings.sparseEmbedding("new_service");
givenModelSettings(newInferenceId, newModelSettings);

Exception exc = expectThrows(
IllegalArgumentException.class,
() -> merge(
mapperService,
mapping(b -> b.startObject(fieldName).field("type", "semantic_text").field("inference_id", newInferenceId).endObject())
)
);

assertThat(
exc.getMessage(),
containsString(
"Cannot update [semantic_text] field ["
+ fieldName
+ "] because inference endpoint ["
+ oldInferenceId
+ "] with model settings ["
+ previousModelSettings
+ "] is not compatible with new inference endpoint ["
+ newInferenceId
+ "] with model settings ["
+ newModelSettings
+ "]"
)
);
}

public void testUpdateInferenceId_GivenCurrentHasDenseModelSettingsAndNewHasIncompatibleDimensions() throws IOException {
testUpdateInferenceId_GivenDenseModelsWithDifferentSettings(
MinimalServiceSettings.textEmbedding("previous_service", 48, SimilarityMeasure.L2_NORM, DenseVectorFieldMapper.ElementType.BIT),
MinimalServiceSettings.textEmbedding("new_service", 40, SimilarityMeasure.L2_NORM, DenseVectorFieldMapper.ElementType.BIT)
);
}

public void testUpdateInferenceId_GivenCurrentHasDenseModelSettingsAndNewHasIncompatibleSimilarityMeasure() throws IOException {
testUpdateInferenceId_GivenDenseModelsWithDifferentSettings(
MinimalServiceSettings.textEmbedding(
"previous_service",
48,
SimilarityMeasure.L2_NORM,
DenseVectorFieldMapper.ElementType.BYTE
),
MinimalServiceSettings.textEmbedding("new_service", 48, SimilarityMeasure.COSINE, DenseVectorFieldMapper.ElementType.BYTE)
);
}

public void testUpdateInferenceId_GivenCurrentHasDenseModelSettingsAndNewHasIncompatibleElementType() throws IOException {
testUpdateInferenceId_GivenDenseModelsWithDifferentSettings(
MinimalServiceSettings.textEmbedding(
"previous_service",
48,
SimilarityMeasure.L2_NORM,
DenseVectorFieldMapper.ElementType.BYTE
),
MinimalServiceSettings.textEmbedding("new_service", 48, SimilarityMeasure.L2_NORM, DenseVectorFieldMapper.ElementType.BIT)
);
}

public void testUpdateInferenceId_CurrentHasDenseModelSettingsAndNewSetsDefault_ShouldFailAsDefaultIsSparse() throws IOException {
String fieldName = randomAlphaOfLengthBetween(5, 15);
String oldInferenceId = "old_inference_id";
MinimalServiceSettings previousModelSettings = MinimalServiceSettings.textEmbedding(
"previous_service",
48,
SimilarityMeasure.L2_NORM,
DenseVectorFieldMapper.ElementType.BIT
);
givenModelSettings(oldInferenceId, previousModelSettings);
MapperService mapperService = mapperServiceForFieldWithModelSettings(fieldName, oldInferenceId, previousModelSettings);

assertInferenceEndpoints(mapperService, fieldName, oldInferenceId, oldInferenceId);
assertSemanticTextField(mapperService, fieldName, true, null, null);

MinimalServiceSettings newModelSettings = MinimalServiceSettings.sparseEmbedding("new_service");
givenModelSettings(DEFAULT_ELSER_2_INFERENCE_ID, newModelSettings);

Exception exc = expectThrows(
IllegalArgumentException.class,
() -> merge(mapperService, mapping(b -> b.startObject(fieldName).field("type", "semantic_text").endObject()))
);

assertThat(
exc.getMessage(),
containsString(
"Cannot update [semantic_text] field ["
+ fieldName
+ "] because inference endpoint ["
+ oldInferenceId
+ "] with model settings ["
+ previousModelSettings
+ "] is not compatible with new inference endpoint ["
+ DEFAULT_ELSER_2_INFERENCE_ID
+ "] with model settings ["
+ newModelSettings
+ "]"
)
);
}

private void testUpdateInferenceId_GivenDenseModelsWithDifferentSettings(
MinimalServiceSettings previousModelSettings,
MinimalServiceSettings newModelSettings
) throws IOException {
assertThat(previousModelSettings.taskType(), equalTo(TaskType.TEXT_EMBEDDING));
assertThat(newModelSettings.taskType(), equalTo(TaskType.TEXT_EMBEDDING));
assertThat(newModelSettings, not(equalTo(previousModelSettings)));

String fieldName = randomAlphaOfLengthBetween(5, 15);
String oldInferenceId = "old_inference_id";
givenModelSettings(oldInferenceId, previousModelSettings);
MapperService mapperService = mapperServiceForFieldWithModelSettings(fieldName, oldInferenceId, previousModelSettings);

assertInferenceEndpoints(mapperService, fieldName, oldInferenceId, oldInferenceId);
assertSemanticTextField(mapperService, fieldName, true, null, null);

String newInferenceId = "new_inference_id";
givenModelSettings(newInferenceId, newModelSettings);

Exception exc = expectThrows(
IllegalArgumentException.class,
() -> merge(
mapperService,
mapping(b -> b.startObject(fieldName).field("type", "semantic_text").field("inference_id", newInferenceId).endObject())
)
);

assertThat(
exc.getMessage(),
containsString(
"Cannot update [semantic_text] field ["
+ fieldName
+ "] because inference endpoint ["
+ oldInferenceId
+ "] with model settings ["
+ previousModelSettings
+ "] is not compatible with new inference endpoint ["
+ newInferenceId
+ "] with model settings ["
+ newModelSettings
+ "]"
Contributor

I think we can get equal or better test coverage with less code with randomization here. If I understand correctly, all of these tests are testing one of two high level cases:

  • The model settings exist and the new inference ID is compatible
  • The model settings exist and the new inference ID is not compatible

We can use randomization to test the former by generating a random model (using TestModel again), then mutating it in a way that maintains compatibility. In most cases, this is just changing the service name, but we can also randomly update to a default inference ID when the original model is a sparse embedding model.

We can randomize the latter by doing much of the same, except randomly mutating in a way that is not compatible (different task type, dimensions, element type, etc.).

Perform multiple iterations over both and I think we have better coverage with less code.
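
For reference, a sketch of the compatible-update case along those lines, reusing the helpers and the MinimalServiceSettings constructions already in this test class rather than TestModel (the incompatible case would follow the same shape with a mismatching task type, dimensions, or element type):

```java
public void testUpdateInferenceId_GivenRandomCompatibleModels() throws IOException {
    for (int i = 0; i < 10; i++) {
        String fieldName = randomAlphaOfLengthBetween(5, 15);
        boolean sparse = randomBoolean();

        // Compatibility depends only on the model settings, not the service name,
        // so changing just the service should always allow the update.
        MinimalServiceSettings previousModelSettings = sparse
            ? MinimalServiceSettings.sparseEmbedding("previous_service")
            : MinimalServiceSettings.textEmbedding("previous_service", 48, SimilarityMeasure.L2_NORM, DenseVectorFieldMapper.ElementType.BIT);
        MinimalServiceSettings newModelSettings = sparse
            ? MinimalServiceSettings.sparseEmbedding("new_service")
            : MinimalServiceSettings.textEmbedding("new_service", 48, SimilarityMeasure.L2_NORM, DenseVectorFieldMapper.ElementType.BIT);

        String oldInferenceId = "old_inference_id_" + i;
        String newInferenceId = "new_inference_id_" + i;
        givenModelSettings(oldInferenceId, previousModelSettings);
        givenModelSettings(newInferenceId, newModelSettings);

        MapperService mapperService = mapperServiceForFieldWithModelSettings(fieldName, oldInferenceId, previousModelSettings);
        merge(
            mapperService,
            mapping(b -> b.startObject(fieldName).field("type", "semantic_text").field("inference_id", newInferenceId).endObject())
        );

        assertInferenceEndpoints(mapperService, fieldName, newInferenceId, newInferenceId);
        assertThat(getSemanticFieldMapper(mapperService, fieldName).fieldType().getModelSettings(), equalTo(newModelSettings));
    }
}
```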


private static SemanticTextFieldMapper getSemanticFieldMapper(MapperService mapperService, String fieldName) {
Mapper mapper = mapperService.mappingLookup().getMapper(fieldName);
assertNotNull(mapper);
Contributor

Nit: technically the null check isn't necessary, since null will never be an instance of SemanticTextFieldMapper
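
A minimal sketch of the simplified helper, assuming it goes on to assert the mapper's type with an instanceOf matcher (the remainder of the helper is not shown above):

```java
private static SemanticTextFieldMapper getSemanticFieldMapper(MapperService mapperService, String fieldName) {
    Mapper mapper = mapperService.mappingLookup().getMapper(fieldName);
    // null is never an instance of SemanticTextFieldMapper, so this assertion alone
    // also covers the missing-mapper case; a separate assertNotNull is redundant.
    assertThat(mapper, instanceOf(SemanticTextFieldMapper.class));
    return (SemanticTextFieldMapper) mapper;
}
```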

Comment on lines +1563 to +1572
- do:
catch: /non-existing-inference-id does not exist in this cluster./
indices.put_mapping:
index: test-index
body:
properties:
sparse_field:
type: semantic_text
inference_id: non-existing-inference-id

Contributor

Can we also check that this is a 400 error?
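
One way this could look in the YAML test, assuming the runner allows asserting on the error body after `catch: bad_request` (the exact error path may differ depending on how the exception is wrapped):

```yaml
  - do:
      catch: bad_request
      indices.put_mapping:
        index: test-index
        body:
          properties:
            sparse_field:
              type: semantic_text
              inference_id: non-existing-inference-id

  # exact error path may differ depending on exception wrapping
  - match: { error.reason: "/non-existing-inference-id does not exist in this cluster./" }
```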

Comment on lines +1458 to +1464
- match: { hits.total.value: 2 }
- match: { hits.hits.0._source.sparse_field.inference.inference_id: sparse-inference-id }
- match: { hits.hits.0._source.sparse_field.inference.model_settings.service: test_service }
- exists: hits.hits.0._source.sparse_field.inference.chunks.0.embeddings
- match: { hits.hits.1._source.sparse_field.inference.inference_id: sparse-inference-id-2 }
- match: { hits.hits.1._source.sparse_field.inference.model_settings.service: alternate_sparse_embedding_test_service }
- exists: hits.hits.1._source.sparse_field.inference.chunks.0.embeddings
Contributor

I like how this test shows the effect of having the model settings in source in the legacy format 🙌

- match: { "test-index.mappings.properties.sparse_field.inference_id": sparse-inference-id-2 }
- match: { "test-index.mappings.properties.sparse_field.model_settings.service": alternate_sparse_embedding_test_service }

# We index another doc to later check the inference fields produced with the newly updated endpoint
Contributor

Can we get _inference_fields prior to indexing a new doc and check that _inference_fields.sparse_field.inference changed as expected? We shouldn't need to index a new doc to see this update.

- match: { "test-index.mappings.properties.sparse_field.type": semantic_text }
- match: { "test-index.mappings.properties.sparse_field.inference_id": dense-inference-id }
- not_exists: test-index.mappings.properties.sparse_field.model_settings

Contributor

Can we test indexing a doc after this update, and then getting _inference_fields to check that it's as expected?

You can update the inference endpoint if no values have been indexed or if the new endpoint is compatible with the current one.

::::{warning}
When updating an `inference_id` it is important to ensure the new {{infer}} endpoint produces the correct embeddings for your use case. This typically means using the same underlying model.
Contributor

Suggested change
When updating an `inference_id` it is important to ensure the new {{infer}} endpoint produces the correct embeddings for your use case. This typically means using the same underlying model.
When updating an `inference_id` it is important to ensure the new {{infer}} endpoint produces embeddings compatible with those already indexed. This typically means using the same underlying model.

to create the endpoint. If `search_inference_id` is specified, the {{infer}}
endpoint will only be used at index time.

::::{applies-switch}
Contributor

Nice usage of tabs here! I will be copying this :)
