cloud-account/dark-mode.md (7 additions, 2 deletions)
@@ -6,7 +6,7 @@ applies_to:

 # Use dark mode in Kibana

-The dark mode changes Kibana's default light appearance to a darker and higher-contrast color theme. From the application header, you can turn on dark mode or synchronize the color mode with your operating system settings.
+The dark mode changes Kibana's default light appearance to a darker color theme. From the application header, you can turn on dark mode or synchronize the color mode with your operating system settings.

 :::{tip}
 If you're using {{ecloud}}, this setting only applies to the Kibana UI of your serverless projects and hosted deployments. If you'd like to change the {{ecloud}} Console color theme too, you must do so separately from its respective interface.
@@ -16,10 +16,15 @@ If you're using {{ecloud}}, this setting only applies to the Kibana UI of your s

 1. Open the user menu from the header.
 2. Select **Appearance**.
+
+   :::{note}
+   On self-managed deployments of {{kib}}, this option is located on your profile page. To access it, select **Edit profile** from the header's user menu.
+   :::
+
 3. Choose a color mode:

    - **Light**: The default color mode of Kibana
-   - **Dark**: The dark and high-contrast color mode of Kibana
+   - **Dark**: The dark color mode of Kibana
    - **System**: Synchronizes Kibana's color mode with your system settings
    - **Space default**: Sets the color mode to the value defined in the [Space settings](kibana://reference/advanced-settings.md#kibana-general-settings)
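The **Space default** option above reads the space-level `theme:darkMode` advanced setting. As a rough sketch of how that default could be changed programmatically (the Kibana URL, space name, credentials, and the `"system"` value are assumptions to verify against your {{kib}} version's advanced settings API):

```sh
# Sketch: change the space-level default color mode through Kibana's
# advanced settings API. KIBANA_URL, my-space, and the credential
# variables are placeholders; the accepted values for theme:darkMode
# vary by Kibana version.
curl -X POST "${KIBANA_URL}/s/my-space/api/kibana/settings" \
  -H "kbn-xsrf: true" \
  -H "Content-Type: application/json" \
  -u "${KIBANA_USER}:${KIBANA_PASSWORD}" \
  -d '{"changes": {"theme:darkMode": "system"}}'
```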

+You can change the interface contrast mode of Kibana to improve visibility and readability in low-light conditions.
+
+1. Open the user menu from the header.
+2. Select **Appearance**.
+
+   :::{note}
+   On self-managed deployments of {{kib}}, this option is located on your profile page. To access it, select **Edit profile** from the header's user menu.
+   :::
+
+3. Choose an interface contrast mode:
+
+   - **System**: Synchronizes Kibana's contrast mode with your system settings.
+   - **Normal**: Normal contrast mode.
+   - **High**: The high-contrast color mode of Kibana.
+
+4. Select **Save changes**.
+5. Refresh the page to apply the selected contrast mode.

deploy-manage/_snippets/ecloud-security.md (3 additions, 1 deletion)
@@ -1,7 +1,9 @@
 {{ecloud}} has built-in security. For example, HTTPS communications between {{ecloud}} and the internet, as well as inter-node communications, are secured automatically, and cluster data is encrypted at rest.

+In both {{ech}} and {{serverless-full}}, you can also configure [IP filters](/deploy-manage/security/ip-filtering-cloud.md) to prevent unauthorized access to your deployments and projects.
+
 In {{ech}}, you can augment these security features in the following ways:
-* Configure [traffic filtering](/deploy-manage/security/traffic-filtering.md) to prevent unauthorized access to your deployments.
+* [Configure private connectivity and apply VPC filtering](/deploy-manage/security/private-connectivity.md) to establish a secure connection for your {{ecloud}} deployments to communicate with other cloud services, and restrict traffic to deployments based on those private connections.
 * Encrypt your deployment with a [customer-managed encryption key](/deploy-manage/security/encrypt-deployment-with-customer-managed-encryption-key.md).
 * [Secure your settings](/deploy-manage/security/secure-settings.md) using {{es}} and {{kib}} keystores.
 * Use the list of [{{ecloud}} static IPs](/deploy-manage/security/elastic-cloud-static-ips.md) to allow or restrict communications in your infrastructure.
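The IP filters mentioned above can also be managed through the {{ecloud}} API rather than the console. The snippet below is only a sketch: the endpoint shape is the traffic-filter rulesets API as commonly documented, and the API key, region, and CIDR range are placeholder values.

```sh
# Sketch: create an IP-based traffic filter ruleset with the Elastic Cloud API.
# EC_API_KEY, the region, and the CIDR block are placeholders.
curl -X POST "https://api.elastic-cloud.com/api/v1/deployments/traffic-filter/rulesets" \
  -H "Authorization: ApiKey ${EC_API_KEY}" \
  -H "Content-Type: application/json" \
  -d '{
        "name": "office-networks",
        "region": "us-east-1",
        "type": "ip",
        "include_by_default": false,
        "rules": [ { "source": "203.0.113.0/24" } ]
      }'
```

A ruleset created this way still needs to be associated with a deployment or project before it takes effect.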

deploy-manage/api-keys/elastic-cloud-api-keys.md (1 addition, 1 deletion)
@@ -30,7 +30,7 @@ These keys provides access to the API that enables you to manage your deployment
 3. On the **API keys** tab of the **Organization** page, click **Create API key**.
 4. On the **Create API key** flyout, you can configure your new key by adding a name, set expiration, or assign [roles](../users-roles/cloud-organization/user-roles.md).

-   By default, API keys expire after three months. You can set the expiration to a different preset value or to a specific date, up to one year. If you need the key to work indefinitely, you can also set its expiration to Never. In this case, the key won’t expire.
+   By default, API keys expire after three months. You can set the expiration to a different preset value or to a specific date, up to one year. If you need the key to work indefinitely, you can also set its expiration to Never. In this case, the key won’t expire. Each user is allowed to create up to 64 API keys.

 ::::{note}
 When an API key is nearing expiration, Elastic sends an email to the creator of the API key and each of the operational contacts. When you use an API key to authenticate, the API response header `X-Elastic-Api-Key-Expiration` indicates the key’s expiration date. You can log this value to detect API keys that are nearing expiration.
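One way to act on that header is to log it from any authenticated call. A minimal sketch, assuming an organization-level key and the deployments endpoint (both placeholders):

```sh
# Sketch: print the expiration date of the API key used for this request.
# EC_API_KEY is a placeholder for an Elastic Cloud API key.
curl -s -D - -o /dev/null \
  -H "Authorization: ApiKey ${EC_API_KEY}" \
  "https://api.elastic-cloud.com/api/v1/deployments" \
  | grep -i '^x-elastic-api-key-expiration'
```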

deploy-manage/autoscaling/autoscaling-in-eck.md (1 addition, 1 deletion)
@@ -177,7 +177,7 @@ spec:
         max: 512Gi
 ```

-You can find [a complete example in the ECK GitHub repository](https://github.com/elastic/cloud-on-k8s/blob/{{eck_release_branch}}/config/recipes/autoscaling/elasticsearch.yaml) which will also show you how to fine-tune the [autoscaling deciders](/deploy-manage/autoscaling/autoscaling-deciders.md).
+You can find [a complete example in the ECK GitHub repository](https://github.com/elastic/cloud-on-k8s/blob/{{version.eck | M.M}}/config/recipes/autoscaling/elasticsearch.yaml) which will also show you how to fine-tune the [autoscaling deciders](/deploy-manage/autoscaling/autoscaling-deciders.md).


 #### Change the polling interval [k8s-autoscaling-polling-interval]
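For orientation, the recipe linked in this hunk is built around an `ElasticsearchAutoscaler` resource. The manifest below is a trimmed sketch of that shape, not the recipe itself: the cluster name, policy, and limits are illustrative, so treat the linked file and the deciders page as the source of truth.

```sh
# Sketch: apply a minimal ElasticsearchAutoscaler for an ML node set.
# The cluster name, policy name, and resource bounds are illustrative.
kubectl apply -f - <<'EOF'
apiVersion: autoscaling.k8s.elastic.co/v1alpha1
kind: ElasticsearchAutoscaler
metadata:
  name: autoscaling-sample
spec:
  elasticsearchRef:
    name: elasticsearch-sample
  policies:
    - name: ml
      roles: ["ml"]
      resources:
        nodeCount:
          min: 0
          max: 3
        memory:
          min: 2Gi
          max: 8Gi
EOF
```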

deploy-manage/autoscaling/trained-model-autoscaling.md (9 additions, 35 deletions)
@@ -22,11 +22,13 @@ There are two ways to enable autoscaling:
 * through APIs by enabling adaptive allocations
 * in {{kib}} by enabling adaptive resources

+For {{serverless-short}} projects, trained model autoscaling is automatically enabled and cannot be disabled.
+
 ::::{important}
 To fully leverage model autoscaling in {{ech}}, {{ece}}, and {{eck}}, it is highly recommended to enable [{{es}} deployment autoscaling](../../deploy-manage/autoscaling.md).
 ::::

-Trained model autoscaling is available for {{serverless-short}}, {{ech}}, {{ece}}, and {{eck}} deployments. In serverless deployments, processing power is managed differently across Search, Observability, and Security projects, which impacts their costs and resource limits.
+Trained model autoscaling is available for {{serverless-short}}, {{ech}}, {{ece}}, and {{eck}} deployments. In {{serverless-short}} projects, processing power is managed differently across Search, Observability, and Security projects, which impacts their costs and resource limits.

 :::{admonition} Trained model auto-scaling for self-managed deployments
 The available resources of self-managed deployments are static, so trained model autoscaling is not applicable. However, available resources are still segmented based on the settings described in this section.
@@ -54,10 +56,6 @@ You can enable adaptive allocations by using:

 If the new allocations fit on the current {{ml}} nodes, they are immediately started. If more resource capacity is needed for creating new model allocations, then your {{ml}} node will be scaled up if {{ml}} autoscaling is enabled to provide enough resources for the new allocation. The number of model allocations can be scaled down to 0. They cannot be scaled up to more than 32 allocations, unless you explicitly set the maximum number of allocations to more. Adaptive allocations must be set up independently for each deployment and [{{infer}} endpoint](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-inference).

-:::{note}
-When you create inference endpoints on {{serverless-short}} using {{kib}}, adaptive allocations are automatically turned on, and there is no option to disable them.
-:::
-
 ### Optimizing for typical use cases [optimizing-for-typical-use-cases]

 You can optimize your model deployment for typical use cases, such as search and ingest. When you optimize for ingest, the throughput will be higher, which increases the number of {{infer}} requests that can be performed in parallel. When you optimize for search, the latency will be lower during search processes.
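As a concrete illustration of the API route mentioned above, adaptive allocations can be enabled when updating an existing trained model deployment. This is a hedged sketch: the model ID, allocation bounds, and credentials are placeholders, and the exact request body should be checked against the current trained models API reference.

```sh
# Sketch: turn on adaptive allocations for a deployed trained model.
# ES_URL, ES_API_KEY, and my-model are placeholders.
curl -X POST "${ES_URL}/_ml/trained_models/my-model/deployment/_update" \
  -H "Authorization: ApiKey ${ES_API_KEY}" \
  -H "Content-Type: application/json" \
  -d '{
        "adaptive_allocations": {
          "enabled": true,
          "min_number_of_allocations": 0,
          "max_number_of_allocations": 4
        }
      }'
```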
@@ -73,16 +71,16 @@ You can choose from three levels of resource usage for your trained model deploy

 Refer to the tables in the [Model deployment resource matrix](#model-deployment-resource-matrix) section to find out the settings for the level you selected.

 The image below shows the process of starting a trained model on an {{ech}} deployment. In {{serverless-short}} projects, the **Adaptive resources** toggle is not available when starting trained model deployments, as adaptive allocations are always enabled and cannot be disabled.

 :alt: ELSER deployment with adaptive resources enabled.
 :screenshot:
 :width: 500px
 :::

 In {{serverless-full}}, Search projects are given access to more processing resources, while Security and Observability projects have lower limits. This difference is reflected in the UI configuration: Search projects have higher resource limits compared to Security and Observability projects to accommodate their more complex operations.

-On {{serverless-short}}, adaptive allocations are automatically enabled for all project types. However, the "Adaptive resources" control is not displayed in {{kib}} for Observability and Security projects.
-
 ## Model deployment resource matrix [model-deployment-resource-matrix]

 The used resources for trained model deployments depend on three factors:
@@ -100,10 +98,6 @@ If you use a self-managed cluster or ECK, vCPUs level ranges are derived from th

 The following tables show you the number of allocations, threads, and vCPUs available in ECE and ECH when adaptive resources are enabled or disabled.

-::::{note}
-On {{serverless-short}}, adaptive allocations are automatically enabled for all project types. However, the "Adaptive resources" control is not displayed in {{kib}} for Observability and Security projects.
-::::
-
 ### Ingest optimized

 In case of ingest-optimized deployments, we maximize the number of model allocations.
@@ -152,16 +146,6 @@ In case of ingest-optimized deployments, we maximize the number of model allocat

 :::

-:::{tab-item} {{serverless-short}}
-
-| Level | Allocations | Threads | VCUs |
-| --- | --- | --- | --- |
-| Low | Exactly 2 | 1 | 16 |
-| Medium | Exactly 32 | 1 | 256 |
-| High | 512 for Search<br> No static allocations for Security and Observability<br> | 1 | 4096 for Search<br> No static allocations for Security and Observability<br> |
-
-:::
-
 ::::

 ### Search optimized
@@ -188,9 +172,9 @@ In case of search-optimized deployments, we maximize the number of threads. The

 | Level | Allocations | Threads | VCUs |
 | --- | --- | --- | --- |
-| Low | 0 to 1 dynamically | Always 2 | 0 to 16 dynamically |
-| Medium | 1 to 2 (if threads=16), dynamically | Maximum (for example, 16) | 8 to 256 dynamically |
-| High | 1 to 32 (if threads=16), dynamically<br> 1 to 128 for Security and Observability<br> | Maximum (for example, 16) | 8 to 4096 for Search<br> 8 to 1024 for Security and Observability<br> |
+| Low | 0 to 1 dynamically | 2 | 0 to 16 dynamically |
+| Medium | 0 to 2 dynamically for Search and Observability<br> 1 to 2 dynamically for Security | 4 | 0 to 256 dynamically for Search and Observability<br> 8 to 256 dynamically for Security |
+| High | 0 to 32 dynamically for Search and Observability<br> 1 to 128 dynamically for Security<br> | 8 | 0 to 4096 dynamically for Search<br> 0 to 1024 dynamically for Observability<br> 8 to 1024 dynamically for Security |

 :::
@@ -212,14 +196,4 @@ In case of search-optimized deployments, we maximize the number of threads. The

 :::

-:::{tab-item} {{serverless-short}}
-
-| Level | Allocations | Threads | VCUs |
-| --- | --- | --- | --- |
-| Low | 1 statically | Always 2 | 16 |
-| Medium | 2 statically (if threads=16) | Maximum (for example, 16) | 256 |
-| High | 32 statically (if threads=16) for Search<br> No static allocations for Security and Observability<br> | Maximum (for example, 16) | 4096 for Search<br> No static allocations for Security and Observability<br> |

-When installing the {{stack}}, you must use the same version across the entire stack. For example, if you are using {{es}} {{stack-version}}, you install Beats {{stack-version}}, APM Server {{stack-version}}, {{es}} Hadoop {{stack-version}}, {{kib}} {{stack-version}}, and Logstash {{stack-version}}.
+When installing the {{stack}}, you must use the same version across the entire stack. For example, if you are using {{es}} {{version.stack}}, you install Beats {{version.stack}}, APM Server {{version.stack}}, {{es}} Hadoop {{version.stack}}, {{kib}} {{version.stack}}, and Logstash {{version.stack}}.

-If you’re upgrading an existing installation, see [](/deploy-manage/upgrade.md) for information about how to ensure compatibility with {{stack-version}}.
+If you’re upgrading an existing installation, see [](/deploy-manage/upgrade.md) for information about how to ensure compatibility with {{version.stack}}.
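If you want to confirm the versions already running before aligning or upgrading components, each product reports its own version. A small sketch (the URLs and credentials are placeholders):

```sh
# Sketch: check that Elasticsearch and Kibana report the same version number.
# ES_URL, KIBANA_URL, and the credentials are placeholders.
curl -s -u "${USER}:${PASSWORD}" "${ES_URL}/" | grep '"number"'
curl -s -u "${USER}:${PASSWORD}" "${KIBANA_URL}/api/status" | grep -o '"number":"[^"]*"' | head -n 1
```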

deploy-manage/deploy/cloud-enterprise/deploy-large-installation.md (27 additions, 7 deletions)
@@ -14,11 +14,11 @@ products:
 This type of installation is recommended for deployments with significant overall search and indexing throughput. You need:

 * 3 hosts with at least 64 GB RAM each for directors and coordinators (ECE management services)
-* 3 hosts for allocators, each with one of the following RAM configurations:
+* A minimum of 3 hosts for allocators, using one of the following configurations per availability zone:

-  * 1 x 256 GB RAM
-  * 2 x 128 GB RAM
-  * 4 x 64 GB RAM
+  * 1 host with 256 GB RAM → 3 hosts total
+  * 2 hosts with 128 GB RAM each → 6 hosts total
+  * 4 hosts with 64 GB RAM each → 12 hosts total

 * 3 hosts with 16 GB RAM each for proxies
 * 3 availability zones
@@ -27,6 +27,10 @@ This type of installation is recommended for deployments with significant overal
 :alt: A large installation with nine to twelve hosts across three availability zones
 :::

+::::{note}
+In the diagram, the Director Coordinator host in Availability zone 1, which represents the first host to be installed, has the allocator and proxy roles greyed out. This host temporarily holds all roles until the other nodes are added and configured. Eventually, the allocator and proxy roles will be removed from this host.
+::::
+
 ## Important considerations [ece_before_you_start_3]

 Note that the large-sized {{ece}} installation separates the allocator and proxy roles from the director and coordinator roles (ECE management services).
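Because this layout separates roles, hosts added after the first one are typically installed with an explicit role list. The command below is a sketch only: the coordinator host, roles token, and zone name are placeholders, and the full set of installer flags is covered in the ECE installation docs.

```sh
# Sketch: install ECE on an additional host as a dedicated allocator.
# COORDINATOR_HOST, ROLES_TOKEN, and the zone name are placeholders.
bash elastic-cloud-enterprise.sh install \
  --coordinator-host "${COORDINATOR_HOST}" \
  --roles-token "${ROLES_TOKEN}" \
  --roles "allocator" \
  --availability-zone "ece-zone-1"
```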
@@ -100,8 +104,24 @@ Make sure you have completed all prerequisites and environment preparations desc
-6. [Change the deployment configuration](working-with-deployments.md) for the `admin-console-elasticsearch`, `logging-and-metrics`, and `security` clusters to use three availability zones and resize the nodes to use at least 4 GB of RAM. This change makes sure that the clusters used by the administration console are highly available and provisioned sufficiently.
+6. [Log into the Cloud UI](log-into-cloud-ui.md).
+
+7. [Change the deployment configuration](/deploy-manage/deploy/cloud-enterprise/customize-deployment.md) for the `admin-console-elasticsearch`, `logging-and-metrics`, and `security` [system deployments](/deploy-manage/deploy/cloud-enterprise/system-deployments-configuration.md) to use three availability zones and resize the nodes to use at least 4 GB of RAM. This ensures the system clusters are both highly available and sufficiently provisioned.
+
+8. [Vacate all instances from the initial host](/deploy-manage/maintenance/ece/move-nodes-instances-from-allocators.md#move-nodes-from-allocators). This host runs some {{es}} and {{kib}} instances from system deployments, which must be moved to other allocators before proceeding.
+
+   Wait until all instances have been moved off the initial host before continuing.
+
+9. [Remove the `allocator` and `proxy` roles](/deploy-manage/deploy/cloud-enterprise/assign-roles-to-hosts.md) from the initial host. You cannot remove the `allocator` role until all instances have been vacated.
+
+   ::::{note}
+   After removing the proxy role from the first host, the {{es}} and {{kib}} URLs shown in the Cloud UI will stop working. This happens because the **Deployment domain name** in **Platform** > **Settings** is set to the IP address of the first host, in the format `FIRST_HOST_IP.ip.es.io`. For more details, refer to [Change endpoint URLs](./change-endpoint-urls.md).
+
+   To resolve this, follow the steps in [Post-installation steps](./post-installation-steps.md) to complete the integration between your load balancer, ECE proxies, TLS certificates, and wildcard DNS record.
+   ::::

-7. [Log into the Cloud UI](log-into-cloud-ui.md) to provision your deployment.
+   ::::{tip}
+   If you don't yet have a load balancer, TLS certificates, or a wildcard DNS record ready, you can [change the endpoint URL](./change-endpoint-urls.md) to the IP address of one of the ECE proxies, using the format `PROXY_IP.ip.es.io`. This will allow you to continue using the deployment endpoint URLs provided by the Cloud UI.
+   ::::

-Once the installation is complete, you can continue with [](./post-installation-steps.md).
+Once the installation is complete, you can continue with [](./post-installation-steps.md).