deploy-manage/autoscaling/trained-model-autoscaling.md (6 additions, 12 deletions)
@@ -22,11 +22,13 @@ There are two ways to enable autoscaling:
* through APIs by enabling adaptive allocations
* in {{kib}} by enabling adaptive resources

+For {{serverless-short}} projects, trained model autoscaling is automatically enabled and cannot be disabled.
+
::::{important}
To fully leverage model autoscaling in {{ech}}, {{ece}}, and {{eck}}, it is highly recommended to enable [{{es}} deployment autoscaling](../../deploy-manage/autoscaling.md).
::::

-Trained model autoscaling is available for {{serverless-short}}, {{ech}}, {{ece}}, and {{eck}} deployments. In serverless deployments, processing power is managed differently across Search, Observability, and Security projects, which impacts their costs and resource limits.
+Trained model autoscaling is available for {{serverless-short}}, {{ech}}, {{ece}}, and {{eck}} deployments. In {{serverless-short}} projects, processing power is managed differently across Search, Observability, and Security projects, which impacts their costs and resource limits.

:::{admonition} Trained model auto-scaling for self-managed deployments
The available resources of self-managed deployments are static, so trained model autoscaling is not applicable. However, available resources are still segmented based on the settings described in this section.
@@ -54,10 +56,6 @@ You can enable adaptive allocations by using:
If the new allocations fit on the current {{ml}} nodes, they are immediately started. If more resource capacity is needed for creating new model allocations, then your {{ml}} node will be scaled up if {{ml}} autoscaling is enabled to provide enough resources for the new allocation. The number of model allocations can be scaled down to 0. They cannot be scaled up to more than 32 allocations, unless you explicitly set the maximum number of allocations to more. Adaptive allocations must be set up independently for each deployment and [{{infer}} endpoint](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-inference).

-:::{note}
-When you create inference endpoints on {{serverless-short}} using {{kib}}, adaptive allocations are automatically turned on, and there is no option to disable them.
-:::
-
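As an illustrative sketch (not part of this diff; the endpoint name and allocation limits are placeholders), adaptive allocations can be enabled in the `service_settings` when creating an {{infer}} endpoint:

```console
PUT _inference/sparse_embedding/my-elser-endpoint
{
  "service": "elasticsearch",
  "service_settings": {
    "model_id": ".elser_model_2",
    "num_threads": 1,
    "adaptive_allocations": {
      "enabled": true,
      "min_number_of_allocations": 0,
      "max_number_of_allocations": 32
    }
  }
}
```

With `min_number_of_allocations: 0`, the deployment can scale down to zero allocations when idle; raising `max_number_of_allocations` above 32 lifts the default upper bound mentioned above.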
### Optimizing for typical use cases [optimizing-for-typical-use-cases]

You can optimize your model deployment for typical use cases, such as search and ingest. When you optimize for ingest, the throughput will be higher, which increases the number of {{infer}} requests that can be performed in parallel. When you optimize for search, the latency will be lower during search processes.
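As a hedged sketch of the trade-off (the model ID and deployment IDs are illustrative), the start trained model deployment API exchanges allocations against threads per allocation: many single-threaded allocations favor ingest throughput, while fewer multi-threaded allocations favor search latency.

```console
# Ingest-optimized: more allocations, one thread each (higher parallel throughput)
POST _ml/trained_models/.elser_model_2/deployment/_start?deployment_id=elser-ingest&number_of_allocations=2&threads_per_allocation=1

# Search-optimized: fewer allocations, more threads each (lower per-request latency)
POST _ml/trained_models/.elser_model_2/deployment/_start?deployment_id=elser-search&number_of_allocations=1&threads_per_allocation=2
```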
@@ -73,16 +71,16 @@ You can choose from three levels of resource usage for your trained model deploy
Refer to the tables in the [Model deployment resource matrix](#model-deployment-resource-matrix) section to find out the settings for the level you selected.

The image below shows the process of starting a trained model on an {{ech}} deployment. In {{serverless-short}} projects, the **Adaptive resources** toggle is not available when starting trained model deployments, as adaptive allocations are always enabled and cannot be disabled.
:alt: ELSER deployment with adaptive resources enabled.
:screenshot:
:width: 500px
:::

In {{serverless-full}}, Search projects are given access to more processing resources, while Security and Observability projects have lower limits. This difference is reflected in the UI configuration: Search projects have higher resource limits compared to Security and Observability projects to accommodate their more complex operations.

-On {{serverless-short}}, adaptive allocations are automatically enabled for all project types.
-
## Model deployment resource matrix [model-deployment-resource-matrix]
The used resources for trained model deployments depend on three factors:
@@ -100,10 +98,6 @@ If you use a self-managed cluster or ECK, vCPUs level ranges are derived from th
The following tables show you the number of allocations, threads, and vCPUs available in ECE and ECH when adaptive resources are enabled or disabled.

-::::{note}
-On {{serverless-short}}, adaptive allocations are automatically enabled for all project types. However, the "Adaptive resources" control is not displayed in {{kib}} for Observability and Security projects.
-::::
-
### Ingest optimized

In case of ingest-optimized deployments, we maximize the number of model allocations.
deploy-manage/security/secure-settings.md (1 addition, 1 deletion)
@@ -22,7 +22,7 @@ products:
Some settings are sensitive, and relying on filesystem permissions to protect their values is not sufficient. For this use case, {{es}} and {{kib}} provide secure keystores to store sensitive configuration values such as passwords, API keys, and tokens.

-Secure settings are often referred to as **keystore settings**, since they must be added to the product-specific keystore rather than the standard [`elasticsearch.yml` or `kibana.yml files](/deploy-manage/stack-settings.md). Unlike regular settings, they are encrypted and protected at rest, and they cannot be read or modified through the usual configuration files or environment variables.
+Secure settings are often referred to as **keystore settings**, since they must be added to the product-specific keystore rather than the standard [`elasticsearch.yml` or `kibana.yml` files](/deploy-manage/stack-settings.md). Unlike regular settings, they are encrypted and protected at rest, and they cannot be read or modified through the usual configuration files or environment variables.

Keystore settings must be handled using a specific tool or method depending on the deployment type. The following table summarizes how {{es}} and {{kib}} keystores are supported and managed across different deployment models:
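For self-managed deployments, an illustrative sketch of the workflow (the {{es}} setting name is a hypothetical example) uses the product-specific keystore tools rather than the YAML files:

```sh
# Elasticsearch: create the keystore if it does not exist, then add a secure setting
bin/elasticsearch-keystore create
bin/elasticsearch-keystore add xpack.notification.slack.account.monitoring.secure_url
bin/elasticsearch-keystore list

# Kibana: the same pattern with the Kibana keystore tool
bin/kibana-keystore create
bin/kibana-keystore add elasticsearch.password
```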
explore-analyze/machine-learning/nlp/ml-nlp-deploy-model.md (1 addition, 1 deletion)
@@ -16,7 +16,7 @@ You can deploy a model multiple times by assigning a unique deployment ID when s
You can optimize your deployment for typical use cases, such as search and ingest. When you optimize for ingest, the throughput will be higher, which increases the number of {{infer}} requests that can be performed in parallel. When you optimize for search, the latency will be lower during search processes. When you have dedicated deployments for different purposes, you ensure that the search speed remains unaffected by ingest workloads, and vice versa. Having separate deployments for search and ingest mitigates performance issues resulting from interactions between the two, which can be hard to diagnose.
explore-analyze/scripting/script-fields-api.md (23 additions, 1 deletion)
@@ -25,6 +25,12 @@ Use the `field` API to access document fields:
field('my_field').get(<default_value>)
```

+Alternatively, use the `$` shortcut to get a field:
+
+```painless
+$('my_field', <default_value>)
+```
+
This API fundamentally changes how you access documents in Painless. Previously, you had to access the `doc` map with the field name that you wanted to access:

```painless
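// Illustrative completion (the diff hunk truncates here): the classic doc-map
// access pattern that the `field` API replaces.
doc['my_field'].value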
@@ -77,7 +83,6 @@ ZonedDateTime end = field('end').get(null);
## Supported mapped field types [_supported_mapped_field_types]

The following table indicates the mapped field types that the `field` API supports. For each supported type, values are listed that are returned by the `field` API (from the `get` and `as<Type>` methods) and the `doc` map (from the `getValue` and `get` methods).
@@ -112,3 +117,20 @@ The `fields` API currently doesn’t support some fields, but you can still acce
|`wildcard`|`String`| - |`String`|`String`|
|`flattened`|`String`| - |`String`|`String`|

+## Manipulation of the fields data
+
+The `field` API provides a `set(<value>)` operation that takes the field name and creates the necessary structure. Calling this inside an ingest pipeline script processor context:
+
+```painless
+field("foo.bar").set("abc")
+```
+
+leads to the generation of this JSON representation.
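As an illustrative sketch (the snippet itself is not shown in this diff), the nested structure that `field("foo.bar").set("abc")` creates would look like:

```json
{
  "foo": {
    "bar": "abc"
  }
}
```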