
Commit 4614cb1

[DOCS] Documents that deployment_id can be used as inference_id in certain cases. (#121055) (#121059)
1 parent 7e03356 commit 4614cb1

File tree

1 file changed (+4, −1 lines)


docs/reference/query-dsl/sparse-vector-query.asciidoc

Lines changed: 4 additions & 1 deletion
@@ -62,11 +62,14 @@ GET _search
 (Required, string) The name of the field that contains the token-weight pairs to be searched against.
 
 `inference_id`::
-(Optional, string) The <<inference-apis,inference ID>> to use to convert the query text into token-weight pairs.
+(Optional, string)
+The <<inference-apis,inference ID>> to use to convert the query text into token-weight pairs.
 It must be the same inference ID that was used to create the tokens from the input text.
 Only one of `inference_id` and `query_vector` is allowed.
 If `inference_id` is specified, `query` must also be specified.
 If all queried fields are of type <<semantic-text, semantic_text>>, the inference ID associated with the `semantic_text` field will be inferred.
+You can reference a `deployment_id` of a {ml} trained model deployment as an `inference_id`.
+For example, if you download and deploy the ELSER model in the {ml-cap} trained models UI in {kib}, you can use the `deployment_id` of that deployment as the `inference_id`.
 
 `query`::
 (Optional, string) The query text you want to use for search.
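In practice, the documented behavior corresponds to a request like the following, a minimal sketch assuming a deployment created through the trained models UI; the field name `content_embedding` and the deployment ID `my-elser-deployment` are hypothetical placeholders:

```
GET _search
{
  "query": {
    "sparse_vector": {
      "field": "content_embedding",
      "inference_id": "my-elser-deployment",
      "query": "sample query text"
    }
  }
}
```

Here `inference_id` is set to the `deployment_id` of the trained model deployment rather than to an inference endpoint ID created via the inference APIs.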
