docs/reference/mapping/types/semantic-text.asciidoc (1 addition, 0 deletions)

@@ -13,6 +13,7 @@ Long passages are <<auto-text-chunking, automatically chunked>> to smaller secti
 The `semantic_text` field type specifies an inference endpoint identifier that will be used to generate embeddings.
 You can create the inference endpoint by using the <<put-inference-api>>.
 This field type and the <<query-dsl-semantic-query,`semantic` query>> type make it simpler to perform semantic search on your data.
+The `semantic_text` field type may also be queried with <<query-dsl-match-query, match>>, <<query-dsl-sparse-vector-query, sparse_vector>> or <<query-dsl-knn-query, knn>> queries.
 
 If you don’t specify an inference endpoint, the `inference_id` field defaults to `.elser-2-elasticsearch`, a preconfigured endpoint for the elasticsearch service.
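To illustrate the added line, here is a minimal sketch (the index name `my-index` and field name `content` are hypothetical) of a `semantic_text` mapping that falls back to the preconfigured `.elser-2-elasticsearch` endpoint, queried with a plain `match` query:

```console
PUT my-index
{
  "mappings": {
    "properties": {
      "content": {
        "type": "semantic_text"
      }
    }
  }
}

GET my-index/_search
{
  "query": {
    "match": {
      "content": "How do I configure semantic search?"
    }
  }
}
```

Because the target field is `semantic_text`, the `match` query uses the field's associated inference endpoint to perform semantic rather than lexical retrieval.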
docs/reference/query-dsl/sparse-vector-query.asciidoc (3 additions, 3 deletions)

@@ -11,7 +11,8 @@ This can be achieved with one of two strategies:
 - Using an {nlp} model to convert query text into a list of token-weight pairs
 - Sending in precalculated token-weight pairs as query vectors
 
-These token-weight pairs are then used in a query against a <<sparse-vector,sparse vector>>.
+These token-weight pairs are then used in a query against a <<sparse-vector,sparse vector>>
+or a <<semantic-text, semantic_text>> field with a compatible sparse inference model.
 At query time, query vectors are calculated using the same inference model that was used to create the tokens.
 When querying, these query vectors are ORed together with their respective weights, which means scoring is effectively a <<vector-functions-dot-product,dot product>> calculation between stored dimensions and query dimensions.
@@ -65,6 +66,7 @@ GET _search
 It must be the same inference ID that was used to create the tokens from the input text.
 Only one of `inference_id` and `query_vector` is allowed.
 If `inference_id` is specified, `query` must also be specified.
+If all queried fields are of type <<semantic-text, semantic_text>>, the inference ID associated with the `semantic_text` field will be inferred.
 
 `query`::
 (Optional, string) The query text you want to use for search.
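As a hedged sketch of the inference behavior described above (index and field names are hypothetical): when every queried field is a `semantic_text` field, the `sparse_vector` query can omit `inference_id` and pass only query text:

```console
GET my-index/_search
{
  "query": {
    "sparse_vector": {
      "field": "content",
      "query": "What is semantic search?"
    }
  }
}
```

Here `content` is assumed to be a `semantic_text` field backed by a sparse inference endpoint (for example, ELSER); the inference ID associated with that field is used to embed the query text into token-weight pairs.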
@@ -291,5 +293,3 @@ GET my-index/_search
 //TEST[skip: Requires inference]
-
-
 
 NOTE: When performing <<modules-cross-cluster-search, cross-cluster search>>, inference is performed on the local cluster.