deploy-manage/autoscaling/trained-model-autoscaling.md
There are two ways to enable autoscaling:

::::{note}
To fully leverage model autoscaling in {{ech}}, {{ece}}, and {{eck}}, it is highly recommended to enable [{{es}} deployment autoscaling](../../deploy-manage/autoscaling.md).
::::
Trained model autoscaling is available for both {{serverless-short}} and Cloud deployments. In serverless deployments, processing power is managed differently across Search, Observability, and Security projects, which impacts their costs and resource limits.
Security and Observability projects are only charged for data ingestion and retention. They are not charged for processing power (VCU usage), which is used for more complex operations, like running advanced search models. For example, in Search projects, models such as ELSER require significant processing power to provide more accurate search results.
You can enable adaptive allocations by using:
If the new allocations fit on the current {{ml}} nodes, they are started immediately. If more capacity is needed to create new model allocations, and {{ml}} autoscaling is enabled, your {{ml}} node is scaled up to provide the required resources. The number of model allocations can scale down to 0, and cannot scale above 32 unless you explicitly set a higher maximum. Adaptive allocations must be set up independently for each deployment and [{{infer}} endpoint](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-inference-put).
:::{note}
When you create {{infer}} endpoints on {{serverless-short}} using {{kib}}, adaptive allocations are automatically turned on, and there is no option to disable them.
:::
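Adaptive allocations are configured per {{infer}} endpoint. As a hedged illustration, a request body for the create {{infer}} endpoint API that turns them on might look like the sketch below; the model ID, thread count, and allocation bounds are assumptions for illustration, so verify the exact fields against the linked API reference.

```python
# Hedged sketch of a create-inference-endpoint request body with adaptive
# allocations enabled. Model ID, bounds, and thread count are illustrative
# assumptions -- check the inference API reference for the exact schema.
inference_endpoint_body = {
    "service": "elasticsearch",
    "service_settings": {
        "model_id": ".elser_model_2",  # example built-in model
        "num_threads": 1,              # 1 thread per allocation (ingest-oriented)
        "adaptive_allocations": {
            "enabled": True,
            "min_number_of_allocations": 0,  # allows scaling down to 0
            "max_number_of_allocations": 8,  # upper bound; the default cap is 32
        },
    },
}

# The configured bounds must be consistent.
bounds = inference_endpoint_body["service_settings"]["adaptive_allocations"]
assert bounds["min_number_of_allocations"] <= bounds["max_number_of_allocations"]
```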
### Optimizing for typical use cases [optimizing-for-typical-use-cases]
Refer to the tables in the [Model deployment resource matrix](#model-deployment-resource-matrix) section.
Search projects are given access to more processing resources than Security and Observability projects, which have lower limits. This difference is reflected in the UI: Search projects have higher resource limits to accommodate their more complex operations.
On {{serverless-short}}, adaptive allocations are automatically enabled for all project types. However, the "Adaptive resources" control is not displayed in {{kib}} for Observability and Security projects.
## Model deployment resource matrix [model-deployment-resource-matrix]
The resources used by trained model deployments depend on three factors:
* your cluster environment ({{serverless-short}}, Cloud, or on-premises)
* the use case you optimize the model deployment for (ingest or search)
* whether model autoscaling is enabled with adaptive allocations/resources (dynamic resources) or disabled (static resources)
If you use {{es}} on-premises, vCPU level ranges are derived from the `total_ml_processors` and `max_single_ml_node_processors` values. Use the [get {{ml}} info API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ml-info) to check these values. The following tables show the number of allocations, threads, and vCPUs available in Cloud when adaptive resources are enabled or disabled.
::::{note}
On {{serverless-short}}, adaptive allocations are automatically enabled for all project types. However, the "Adaptive resources" control is not displayed in {{kib}} for Observability and Security projects.
::::
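The relationship stated in the table footnotes (the Cloud console sets a vCPU limit, and the number of allocations is that limit divided by the threads per allocation) can be sketched as a small helper. The function name is illustrative, not an {{es}} API:

```python
def allocations_for_vcpu_limit(vcpu_limit: int, threads_per_allocation: int) -> int:
    """Illustrative helper: the Cloud console doesn't set an allocations
    limit directly; it sets a vCPU limit, which indirectly determines the
    number of allocations as vCPU limit // threads per allocation."""
    if threads_per_allocation < 1:
        raise ValueError("threads_per_allocation must be at least 1")
    return vcpu_limit // threads_per_allocation

print(allocations_for_vcpu_limit(32, 1))   # ingest-optimized (1 thread): 32 allocations
print(allocations_for_vcpu_limit(32, 16))  # search-optimized (16 threads): 2 allocations
```

The same vCPU budget therefore yields many single-threaded allocations for ingest, but only a few multi-threaded allocations for search.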
### Ingest optimized
For ingest-optimized deployments, we maximize the number of model allocations.
::::{tab-set}

:::{tab-item} Cloud

**Adaptive resources disabled**

| Level | Allocations | Threads | vCPUs |
| --- | --- | --- | --- |
| Low | 2 if available, otherwise 1, statically | 1 | 2 if available |
| Medium | the smaller of 32 or the limit set in the Cloud console, statically | 1 | 32 if available |
| High | Maximum available set in the Cloud console \*, statically | 1 | Maximum available set in the Cloud console, statically |

\* The Cloud console doesn’t directly set an allocations limit; it only sets a vCPU limit. This vCPU limit indirectly determines the number of allocations, calculated as the vCPU limit divided by the number of threads.

:::

:::{tab-item} {{serverless-short}}

**Adaptive resources enabled**

| Level | Allocations | Threads | VCUs |
| --- | --- | --- | --- |
| Low | 0 to 2 dynamically | 1 | 0 to 16 dynamically |
| Medium | 1 to 32 dynamically | 1 | 8 to 256 dynamically |
| High | 1 to 512 for Search<br> 1 to 128 for Security and Observability | 1 | 8 to 4096 for Search<br> 8 to 1024 for Security and Observability |

**Adaptive resources disabled**

| Level | Allocations | Threads | VCUs |
| --- | --- | --- | --- |
| Low | Exactly 2 | 1 | 16 |
| Medium | Exactly 32 | 1 | 256 |
| High | 512 for Search<br> No static allocations for Security and Observability | 1 | 4096 for Search<br> No static allocations for Security and Observability |

:::

::::
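The two optimization targets can be contrasted with a short sketch, assuming a fixed vCPU budget; the function and the 16-thread hardware maximum are illustrative assumptions, not an {{es}} API:

```python
def plan_deployment(vcpu_budget: int, optimize_for: str,
                    hardware_max_threads: int = 16) -> dict:
    """Illustrative trade-off: ingest-optimized deployments maximize the
    number of allocations (1 thread each), while search-optimized
    deployments maximize threads per allocation, up to what the hardware
    allows (for example, 16)."""
    if optimize_for == "ingest":
        threads = 1
    elif optimize_for == "search":
        threads = hardware_max_threads
    else:
        raise ValueError("optimize_for must be 'ingest' or 'search'")
    allocations = max(1, vcpu_budget // threads)
    return {"allocations": allocations, "threads_per_allocation": threads}

print(plan_deployment(32, "ingest"))  # {'allocations': 32, 'threads_per_allocation': 1}
print(plan_deployment(32, "search"))  # {'allocations': 2, 'threads_per_allocation': 16}
```

With the same 32-vCPU budget, the ingest plan favors throughput across many parallel allocations, while the search plan favors low latency per request.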
### Search optimized

For search-optimized deployments, we maximize the number of threads. The maximum number of threads that can be claimed depends on your hardware architecture.

::::{tab-set}

:::{tab-item} Cloud

**Adaptive resources enabled**

| Level | Allocations | Threads | vCPUs |
| --- | --- | --- | --- |
| Low | 1 | 2 | 2 |
| Medium | 1 to 2 (if threads=16) dynamically | maximum that the hardware allows (for example, 16) | 1 to 32 dynamically |
| High | 1 to limit set in the Cloud console \*, dynamically | maximum that the hardware allows (for example, 16) | 1 to limit set in the Cloud console, dynamically |

**Adaptive resources disabled**

| Level | Allocations | Threads | vCPUs |
| --- | --- | --- | --- |
| Low | 1 if available, statically | 2 | 2 if available |
| Medium | 2 (if threads=16) statically | maximum that the hardware allows (for example, 16) | 32 if available |
| High | Maximum available set in the Cloud console \*, statically | maximum that the hardware allows (for example, 16) | Maximum available set in the Cloud console, statically |

\* The Cloud console doesn’t directly set an allocations limit; it only sets a vCPU limit. This vCPU limit indirectly determines the number of allocations, calculated as the vCPU limit divided by the number of threads.

:::

:::{tab-item} {{serverless-short}}

**Adaptive resources enabled**

| Level | Allocations | Threads | VCUs |
| --- | --- | --- | --- |
| Low | 0 to 1 dynamically | Always 2 | 0 to 16 dynamically |
| Medium | 1 to 2 (if threads=16), dynamically | Maximum (for example, 16) | 8 to 256 dynamically |
| High | 1 to 32 (if threads=16), dynamically for Search<br> 1 to 128 for Security and Observability | Maximum (for example, 16) | 8 to 4096 for Search<br> 8 to 1024 for Security and Observability |

**Adaptive resources disabled**

| Level | Allocations | Threads | VCUs |
| --- | --- | --- | --- |
| Low | 1 statically | Always 2 | 16 |
| Medium | 2 statically (if threads=16) | Maximum (for example, 16) | 256 |
| High | 32 statically (if threads=16) for Search<br> No static allocations for Security and Observability | Maximum (for example, 16) | 4096 for Search<br> No static allocations for Security and Observability |

:::

::::