
Commit b6b2c09

Adds adaptive allocations, default endpoints, and chunking strategy to inference landing page (#661)
## Overview

Related to elastic/developer-docs-team#260. This PR adds sections to the inference landing page about adaptive allocations, default inference endpoints, and chunking strategy.
1 parent eab7596 commit b6b2c09


solutions/search/inference-api.md

Lines changed: 64 additions & 1 deletion
@@ -39,4 +39,67 @@ To add a new inference endpoint using the UI:
1. Select the **Add endpoint** button.
1. Select a service from the drop-down menu.
1. Provide the required configuration details.
1. Select **Save** to create the endpoint.

## Adaptive allocations [adaptive-allocations]

Adaptive allocations allow inference services to dynamically adjust the number of model allocations based on the current load.

When adaptive allocations are enabled:
* The number of allocations scales up automatically when the load increases.
* Allocations scale down to a minimum of 0 when the load decreases, saving resources.

For more information about adaptive allocations and resources, refer to the trained model autoscaling documentation.

% TO DO: Add a link to trained model autoscaling when the page is available.%
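For illustration, the following is a minimal sketch of creating an endpoint with adaptive allocations enabled through the `elasticsearch` service; the endpoint name `my_adaptive_endpoint` and the allocation bounds are placeholder values, not recommendations:

```console
// "my_adaptive_endpoint" is a placeholder name; tune the bounds to your workload
PUT _inference/sparse_embedding/my_adaptive_endpoint
{
  "service": "elasticsearch",
  "service_settings": {
    "adaptive_allocations": {
      "enabled": true,
      "min_number_of_allocations": 0,
      "max_number_of_allocations": 10
    },
    "num_threads": 1
  }
}
```

With `min_number_of_allocations` set to `0`, the deployment scales down to zero allocations when the endpoint is idle and scales back up as load increases.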
## Default {{infer}} endpoints [default-endpoints]

Your {{es}} deployment contains preconfigured {{infer}} endpoints, which makes them easier to use when defining `semantic_text` fields or {{infer}} processors. The following list contains the default {{infer}} endpoints listed by `inference_id`:
* `.elser-2-elasticsearch`: uses the [ELSER](../../explore-analyze/machine-learning/nlp/ml-nlp-elser.md) built-in trained model for `sparse_embedding` tasks (recommended for English language texts). The `model_id` is `.elser_model_2_linux-x86_64`.
* `.multilingual-e5-small-elasticsearch`: uses the [E5](../../explore-analyze/machine-learning/nlp/ml-nlp-e5.md) built-in trained model for `text_embedding` tasks (recommended for non-English language texts). The `model_id` is `.multilingual-e5-small_linux-x86_64`.

Use the `inference_id` of the endpoint in a [`semantic_text`](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/mapping-reference/semantic-text.md) field definition or when creating an [{{infer}} processor](asciidocalypse://docs/elasticsearch/docs/reference/ingestion-tools/enrich-processor/inference-processor.md). The API call will automatically download and deploy the model, which might take a couple of minutes. Default {{infer}} endpoints have adaptive allocations enabled. For these models, the minimum number of allocations is `0`. If there is no {{infer}} activity that uses the endpoint, the number of allocations will scale down to `0` automatically after 15 minutes.
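For example, the following sketch maps a `semantic_text` field backed by the default ELSER endpoint; the index name `my-index` and field name `content` are placeholders:

```console
PUT my-index
{
  "mappings": {
    "properties": {
      "content": {
        "type": "semantic_text",
        "inference_id": ".elser-2-elasticsearch"
      }
    }
  }
}
```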
## Configuring chunking [infer-chunking-config]

{{infer-cap}} endpoints have a limit on the amount of text they can process at once, determined by the model's input capacity. Chunking is the process of splitting the input text into pieces that remain within these limits. It occurs when documents are ingested into [`semantic_text` fields](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/mapping-reference/semantic-text.md). Chunking also helps produce sections that are digestible for humans. Returning a long document in search results is less useful than providing the most relevant chunk of text.
Each chunk includes the text subpassage and the corresponding embedding generated from it.

By default, documents are split into sentences and grouped in sections up to 250 words, with one sentence of overlap so that each chunk shares a sentence with the previous chunk. Overlapping ensures continuity and prevents vital contextual information in the input text from being lost by a hard break.

{{es}} uses the [ICU4J](https://unicode-org.github.io/icu-docs/) library to detect word and sentence boundaries for chunking. [Word boundaries](https://unicode-org.github.io/icu/userguide/boundaryanalysis/#word-boundary) are identified by following a series of rules, not just the presence of a whitespace character. For written languages that do not use whitespace, such as Chinese or Japanese, dictionary lookups are used to detect word boundaries.
### Chunking strategies

Two strategies are available for chunking: `sentence` and `word`.
The `sentence` strategy splits the input text at sentence boundaries. Each chunk contains one or more complete sentences, ensuring that sentence-level context is preserved, except when a sentence causes a chunk to exceed the word count of `max_chunk_size`, in which case it is split across chunks. The `sentence_overlap` option defines the number of sentences from the previous chunk to include in the current chunk; it can be either `0` or `1`.

The `word` strategy splits the input text on individual words up to the `max_chunk_size` limit. The `overlap` option is the number of words from the previous chunk to include in the current chunk.

The default chunking strategy is `sentence`.
#### Example of configuring the chunking behavior

The following example creates an {{infer}} endpoint with the `elasticsearch` service that deploys the ELSER model by default and configures the chunking behavior.
```console
PUT _inference/sparse_embedding/small_chunk_size
{
  "service": "elasticsearch",
  "service_settings": {
    "num_allocations": 1,
    "num_threads": 1
  },
  "chunking_settings": {
    "strategy": "sentence",
    "max_chunk_size": 100,
    "sentence_overlap": 0
  }
}
```
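
As a complementary sketch, the same endpoint configured with the `word` strategy instead could look like the following; the endpoint name and the specific values are illustrative:

```console
// "word_chunk_example" is a placeholder name; values shown are illustrative
PUT _inference/sparse_embedding/word_chunk_example
{
  "service": "elasticsearch",
  "service_settings": {
    "num_allocations": 1,
    "num_threads": 1
  },
  "chunking_settings": {
    "strategy": "word",
    "max_chunk_size": 120,
    "overlap": 40
  }
}
```

Here each chunk is capped at 120 words and repeats the final 40 words of the previous chunk, which preserves context across chunk boundaries.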
