10 changes: 5 additions & 5 deletions deploy-manage/_snippets/deployment-options-overview.md
@@ -1,12 +1,12 @@
**Quick start options**

-* [**Elastic Cloud**](/deploy-manage/deploy/elastic-cloud.md): Get a hosted solution up and running in minutes.
-* [**Elastic Cloud Hosted**](/deploy-manage/deploy/elastic-cloud/cloud-hosted.md): Our hosted Elastic Stack offering, deployed in the cloud with your provider of choice. Sign up for a [14-day free trial](https://cloud.elastic.co/registration).
-* [**Elastic Cloud Serverless**](/deploy-manage/deploy/elastic-cloud/serverless.md): Create serverless projects for autoscaled and fully managed Elastic deployments. Sign up for a [14-day free trial](https://cloud.elastic.co/serverless-registration).
+* [**{{ecloud}}**](/deploy-manage/deploy/elastic-cloud.md): Get a hosted solution up and running in minutes.
+* [**{{ech}}**](/deploy-manage/deploy/elastic-cloud/cloud-hosted.md): Our hosted {{stack}} offering, deployed in the cloud with your provider of choice. Sign up for a [14-day free trial](https://cloud.elastic.co/registration).
+* [**{{serverless-full}}**](/deploy-manage/deploy/elastic-cloud/serverless.md): Create serverless projects for autoscaled and fully managed Elastic deployments. Sign up for a [14-day free trial](https://cloud.elastic.co/serverless-registration).
* [**Local development**](/solutions/search/run-elasticsearch-locally.md): Get started quickly with {{es}} and {{kib}} in Docker for local development and testing.

**Advanced options**

* [**Self-managed**](/deploy-manage/deploy/self-managed.md): Install, configure, and run Elastic on your own premises.
-* [**Elastic Cloud Enterprise**](https://www.elastic.co/guide/en/cloud-enterprise/current/Elastic-Cloud-Enterprise-overview.html): Deploy Elastic Cloud on public or private clouds, virtual machines, or your own premises.
-* [**Elastic Cloud on Kubernetes**](/deploy-manage/deploy/cloud-on-k8s.md): Deploy Elastic Cloud on Kubernetes.
+* [**{{ece}}**](https://www.elastic.co/guide/en/cloud-enterprise/current/Elastic-Cloud-Enterprise-overview.html): Deploy {{ecloud}} on public or private clouds, virtual machines, or your own premises.
+* [**{{eck}}**](/deploy-manage/deploy/cloud-on-k8s.md): Deploy {{ecloud}} on Kubernetes.
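
To make the **Local development** option above concrete, the sketch below starts a throwaway single-node {{es}} container. This is a hedged example, not the snippet's own instructions: the image tag is a placeholder for a current release, and security is disabled only because the instance is disposable.

```sh
# Start a single-node Elasticsearch container for local testing only.
# The version tag is an example; substitute a current release.
docker run --name es-local -p 9200:9200 -m 2GB \
  -e "discovery.type=single-node" \
  -e "xpack.security.enabled=false" \
  docker.elastic.co/elasticsearch/elasticsearch:8.17.0
```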
4 changes: 2 additions & 2 deletions deploy-manage/api-keys/elasticsearch-api-keys.md
@@ -9,9 +9,9 @@ mapped_pages:

Several types of {{es}} API keys exist:

-* **Personal/User** API key: allows external services to access the Elastic Stack on behalf of a user.
+* **Personal/User** API key: allows external services to access the {{stack}} on behalf of a user.
* **Cross-cluster** API key: allows other clusters to connect to this cluster.
-* **Managed** API key: created and managed by Kibana to run background tasks.
+* **Managed** API key: created and managed by {{kib}} to run background tasks.

To manage API keys in {{kib}}, go to the **API Keys** management page using the navigation menu or the [global search field](../../explore-analyze/find-and-organize/find-apps-and-objects.md).
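
As a quick sketch of the first key type outside {{kib}}, a user API key can also be created with the {{es}} security API. The key name, expiration, credentials, and endpoint below are illustrative only:

```sh
# Create a personal API key on behalf of the authenticated user.
# Requires the manage_own_api_key (or manage_api_key) privilege.
curl -u "elastic:$ELASTIC_PASSWORD" -X POST "localhost:9200/_security/api_key" \
  -H "Content-Type: application/json" -d '
{
  "name": "external-service-key",
  "expiration": "30d"
}'
```

The response includes the key `id` and a base64 `encoded` value, which the external service sends in an `Authorization: ApiKey <encoded>` header.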

2 changes: 1 addition & 1 deletion deploy-manage/autoscaling.md
@@ -23,7 +23,7 @@ By default, {{serverless-full}} automatically scales your {{es}} resources based
This feature is designed for indirect use by {{ech}}, {{ece}}, and {{eck}}. Direct use is not supported.
::::

-Cluster autoscaling allows an operator to create tiers of nodes that monitor themselves and determine if scaling is needed based on an operator-defined policy. An Elasticsearch cluster can use the autoscaling API to report when additional resources are required. For example, an operator can define a policy that scales a warm tier based on available disk space. Elasticsearch monitors disk space in the warm tier. If it predicts low disk space for current and future shard copies, the autoscaling API reports that the cluster needs to scale. It remains the responsibility of the operator to add the additional resources that the cluster signals it requires.
+Cluster autoscaling allows an operator to create tiers of nodes that monitor themselves and determine if scaling is needed based on an operator-defined policy. An {{es}} cluster can use the autoscaling API to report when additional resources are required. For example, an operator can define a policy that scales a warm tier based on available disk space. {{es}} monitors disk space in the warm tier. If it predicts low disk space for current and future shard copies, the autoscaling API reports that the cluster needs to scale. It remains the responsibility of the operator to add the additional resources that the cluster signals it requires.

A policy is composed of a list of roles and a list of deciders. The policy governs the nodes matching the roles. The deciders provide independent estimates of the capacity required. See [Autoscaling deciders](../deploy-manage/autoscaling/autoscaling-deciders.md) for details on available deciders.
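
As a minimal sketch of that flow (the policy name and credentials are illustrative, and the tier matches the warm-tier example above), an operator could register a policy and then poll the capacity endpoint:

```sh
# Register an autoscaling policy for the warm tier. The reactive_storage
# decider estimates the storage required for current shard copies.
curl -u "elastic:$ELASTIC_PASSWORD" -X PUT "localhost:9200/_autoscaling/policy/warm_tier" \
  -H "Content-Type: application/json" -d '
{
  "roles": ["data_warm"],
  "deciders": { "reactive_storage": {} }
}'

# Report the capacity the policy currently requires. The operator (ECH, ECE,
# or ECK) compares this with what is provisioned and adds resources if needed.
curl -u "elastic:$ELASTIC_PASSWORD" "localhost:9200/_autoscaling/capacity?pretty"
```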

4 changes: 2 additions & 2 deletions deploy-manage/autoscaling/autoscaling-deciders.md
@@ -16,7 +16,7 @@ applies_to:

# Autoscaling deciders [autoscaling-deciders]

-[Autoscaling](/deploy-manage/autoscaling.md) in Elasticsearch enables dynamic resource allocation based on predefined policies. A key component of this mechanism is autoscaling deciders, which independently assess resource requirements and determine when scaling actions are necessary. Deciders analyze various factors, such as storage usage, indexing rates, and machine learning workloads, to ensure clusters maintain optimal performance without manual intervention.
+[Autoscaling](/deploy-manage/autoscaling.md) in {{es}} enables dynamic resource allocation based on predefined policies. A key component of this mechanism is autoscaling deciders, which independently assess resource requirements and determine when scaling actions are necessary. Deciders analyze various factors, such as storage usage, indexing rates, and machine learning workloads, to ensure clusters maintain optimal performance without manual intervention.

::::{admonition} Indirect use only
This feature is designed for indirect use by {{ech}}, {{ece}}, and {{eck}}. Direct use is not supported.
@@ -96,7 +96,7 @@ The [autoscaling](../../deploy-manage/autoscaling.md) frozen shards decider (`fr
### Configuration settings [autoscaling-frozen-shards-decider-settings]

`memory_per_shard`
-: (Optional, [byte value](elasticsearch://reference/elasticsearch/rest-apis/api-conventions.md#byte-units)) The memory needed per shard, in bytes. Defaults to 2000 shards per 64 GB node (roughly 32 MB per shard). Notice that this is total memory, not heap, assuming that the Elasticsearch default heap sizing mechanism is used and that nodes are not bigger than 64 GB.
+: (Optional, [byte value](elasticsearch://reference/elasticsearch/rest-apis/api-conventions.md#byte-units)) The memory needed per shard, in bytes. Defaults to 2000 shards per 64 GB node (roughly 32 MB per shard). Notice that this is total memory, not heap, assuming that the {{es}} default heap sizing mechanism is used and that nodes are not bigger than 64 GB.
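
A hedged sketch of overriding this setting in a policy for the frozen tier (the policy name, credentials, and value are examples only):

```sh
# Raise the per-shard memory estimate for a frozen-tier autoscaling policy.
curl -u "elastic:$ELASTIC_PASSWORD" -X PUT "localhost:9200/_autoscaling/policy/frozen_tier" \
  -H "Content-Type: application/json" -d '
{
  "roles": ["data_frozen"],
  "deciders": {
    "frozen_shards": { "memory_per_shard": "64mb" }
  }
}'
```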

## Frozen storage decider [autoscaling-frozen-storage-decider]

18 changes: 9 additions & 9 deletions deploy-manage/autoscaling/autoscaling-in-ece-and-ech.md
@@ -39,7 +39,7 @@ Currently, autoscaling behavior is as follows:

* **Data tiers**

-* Each Elasticsearch [data tier](../../manage-data/lifecycle/data-tiers.md) scales upward based on the amount of available storage. When we detect more storage is needed, autoscaling will scale up each data tier independently to ensure you can continue and ingest more data to your hot and content tier, or move data to the warm, cold, or frozen data tiers.
+* Each {{es}} [data tier](../../manage-data/lifecycle/data-tiers.md) scales upward based on the amount of available storage. When we detect more storage is needed, autoscaling will scale up each data tier independently to ensure you can continue to ingest more data into your hot and content tier, or move data to the warm, cold, or frozen data tiers.
* In addition to scaling up existing data tiers, a new data tier will be automatically added when necessary, based on your [index lifecycle management policies](https://www.elastic.co/guide/en/cloud-enterprise/current/ece-configure-index-management.html).
* To control the maximum size of each data tier and ensure it will not scale above a certain size, you can use the maximum size per zone field.
* Autoscaling based on memory or CPU, as well as autoscaling downward, is not currently supported. In case you want to adjust the size of your data tier to add more memory or CPU, or in case you deleted data and want to scale it down, you can set the current size per zone of each data tier manually.
@@ -53,7 +53,7 @@ Currently, autoscaling behavior is as follows:
* The determination of when to scale is based on the expected memory and CPU requirements for the currently configured machine learning jobs and trained models.

::::{note}
-For any Elasticsearch component the number of availability zones is not affected by autoscaling. You can always set the number of availability zones manually and the autoscaling mechanism will add or remove capacity per availability zone.
+For any {{es}} component, the number of availability zones is not affected by autoscaling. You can always set the number of availability zones manually, and the autoscaling mechanism will add or remove capacity per availability zone.
::::

## When does autoscaling occur?[ec-autoscaling-factors]
@@ -81,7 +81,7 @@ In {{ece}} deployments, a warning is also issued in the ECE `service-constructor

The following are known limitations and restrictions with autoscaling:

-* Autoscaling will not run if the cluster is unhealthy or if the last Elasticsearch plan failed.
+* Autoscaling will not run if the cluster is unhealthy or if the last {{es}} plan failed.
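
Because an unhealthy cluster blocks autoscaling, a quick health check is a reasonable first diagnostic. This is the generic {{es}} health API, not an ECE- or ECH-specific endpoint, and the credentials are placeholders:

```sh
# A persistent red status can explain why autoscaling has not acted.
curl -u "elastic:$ELASTIC_PASSWORD" "localhost:9200/_cluster/health?pretty"
```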

In {{ech}} the following additional limitations apply:

@@ -129,12 +129,12 @@ Each autoscaling setting is configured with a default value. You can adjust thes

% ECE NOTE
::::{note} - {{ece}}
-On Elastic Cloud Enterprise, system-owned deployment templates include the default values for all deployment autoscaling settings.
+On {{ece}}, system-owned deployment templates include the default values for all deployment autoscaling settings.
::::

## Autoscaling example [ec-autoscaling-example]

-To help you better understand the available autoscaling settings, this example describes a typical autoscaling workflow on sample Elastic Cloud Enterprise or {{ech}} deployment.
+To help you better understand the available autoscaling settings, this example describes a typical autoscaling workflow on a sample {{ece}} or {{ech}} deployment.

1. Enable autoscaling:

@@ -145,7 +145,7 @@ To help you better understand the available autoscaling settings, this example d

2. View and adjust autoscaling settings on data tiers:

-1. Open the **Edit** page for your deployment to get the current and maximum size per zone of each Elasticsearch data tier. In this example, the hot data and content tier has the following settings:
+1. Open the **Edit** page for your deployment to get the current and maximum size per zone of each {{es}} data tier. In this example, the hot data and content tier has the following settings:

| | | |
| --- | --- | --- |
@@ -197,8 +197,8 @@ To learn more about the {{ece}} API, see the [RESTful API](cloud://reference/clo

Note the following requirements when you run this API request:

-* All Elasticsearch components must be included in the request, even if they are not enabled (that is, if they have a zero size). All components are included in this example.
-* The request requires a format that supports data tiers. Specifically, all Elasticsearch components must contain the following properties:
+* All {{es}} components must be included in the request, even if they are not enabled (that is, if they have a zero size). All components are included in this example.
+* The request requires a format that supports data tiers. Specifically, all {{es}} components must contain the following properties:

* `id`
* `node_attributes`
@@ -218,7 +218,7 @@ $$$ece-autoscaling-api-example-requirements-table$$$
| data tier | ✓ | ✕ | ✓ |
| machine learning node | ✕ | ✓ | ✓ |
| coordinating and master nodes | ✓ | ✕ | ✕ |
-| Kibana | ✓ | ✕ | ✕ |
+| {{kib}} | ✓ | ✕ | ✕ |
| APM | ✓ | ✕ | ✕ |

* ✓ = Include the property.
26 changes: 13 additions & 13 deletions deploy-manage/autoscaling/autoscaling-in-eck.md
@@ -9,33 +9,33 @@ navigation_title: "In ECK"
---
# Autoscaling in {{eck}}

-Configure autoscaling for Elasticsearch deployments in {{eck}}. Learn how to enable autoscaling, define policies, manage resource limits, and monitor scaling. Includes details on autoscaling stateless applications like Kibana, APM Server, and Elastic Maps Server.
+Configure autoscaling for {{es}} deployments in {{eck}}. Learn how to enable autoscaling, define policies, manage resource limits, and monitor scaling. Includes details on autoscaling stateless applications like {{kib}}, APM Server, and Elastic Maps Server.

## Deployments autoscaling on ECK [k8s-autoscaling]

::::{note}
-Elasticsearch autoscaling requires a valid Enterprise license or Enterprise trial license. Check [the license documentation](../license/manage-your-license-in-eck.md) for more details about managing licenses.
+{{es}} autoscaling requires a valid Enterprise license or Enterprise trial license. Check [the license documentation](../license/manage-your-license-in-eck.md) for more details about managing licenses.
::::


-ECK can leverage the [autoscaling API](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-autoscaling) introduced in Elasticsearch 7.11 to adjust automatically the number of Pods and the allocated resources in a tier. Currently, autoscaling is supported for Elasticsearch [data tiers](/manage-data/lifecycle/data-tiers.md) and machine learning nodes.
+ECK can leverage the [autoscaling API](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-autoscaling) introduced in {{es}} 7.11 to automatically adjust the number of Pods and the allocated resources in a tier. Currently, autoscaling is supported for {{es}} [data tiers](/manage-data/lifecycle/data-tiers.md) and machine learning nodes.


### Enable autoscaling [k8s-enable]

-To enable autoscaling on an Elasticsearch cluster, you need to define one or more autoscaling policies. Each autoscaling policy applies to one or more NodeSets which share the same set of roles specified in the `node.roles` setting in the Elasticsearch configuration.
+To enable autoscaling on an {{es}} cluster, you need to define one or more autoscaling policies. Each autoscaling policy applies to one or more NodeSets that share the same set of roles specified in the `node.roles` setting in the {{es}} configuration.
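
For orientation, the sketch below (cluster name, version, and count are illustrative) shows a NodeSet whose `node.roles` an autoscaling policy could match; the official policy example follows in the next section:

```sh
# An Elasticsearch resource with one data NodeSet. An ElasticsearchAutoscaler
# policy declaring roles ["data", "ingest"] would govern this NodeSet, and
# the operator would then manage its count within the policy's limits.
kubectl apply -f - <<'EOF'
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: elasticsearch-sample
spec:
  version: 8.17.0
  nodeSets:
  - name: data
    count: 2
    config:
      node.roles: ["data", "ingest"]
EOF
```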


#### Define autoscaling policies [k8s-autoscaling-policies]

Autoscaling policies can be defined in an `ElasticsearchAutoscaler` resource. Each autoscaling policy must have the following fields:

* `name` is a unique name used to identify the autoscaling policy.
-* `roles` contains a set of node roles, unique across all the autoscaling policies, used to identify the NodeSets to which this policy applies. At least one NodeSet with the exact same set of roles must exist in the Elasticsearch resource specification.
+* `roles` contains a set of node roles, unique across all the autoscaling policies, used to identify the NodeSets to which this policy applies. At least one NodeSet with the exact same set of roles must exist in the {{es}} resource specification.
* `resources` helps define the minimum and maximum compute resources usage:

* `nodeCount` defines the minimum and maximum nodes allowed in the tier.
-* `cpu` and `memory` enforce minimum and maximum compute resources usage for the Elasticsearch container.
+* `cpu` and `memory` enforce minimum and maximum compute resources usage for the {{es}} container.
* `storage` enforces minimum and maximum storage request per PersistentVolumeClaim.


@@ -47,7 +47,7 @@ kind: ElasticsearchAutoscaler
metadata:
name: autoscaling-sample
spec:
-## The name of the Elasticsearch cluster to be scaled automatically.
+## The name of the {{es}} cluster to be scaled automatically.
elasticsearchRef:
name: elasticsearch-sample
## The autoscaling policies.
@@ -93,7 +93,7 @@ A node role should not be referenced in more than one autoscaling policy.
In the case of storage the following restrictions apply:

* Scaling the storage size automatically requires the `ExpandInUsePersistentVolumes` feature to be enabled. It also requires a storage class that supports [volume expansion](https://kubernetes.io/blog/2018/07/12/resizing-persistent-volumes-using-kubernetes/).
-* Only one persistent volume claim per Elasticsearch node is supported when autoscaling is enabled.
+* Only one persistent volume claim per {{es}} node is supported when autoscaling is enabled.
* Volume size cannot be scaled down.
* Scaling up (vertically) is only supported if the available capacity in a PersistentVolume matches the capacity claimed in the PersistentVolumeClaim. Refer to the next section for more information.
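
Since automatic storage scaling depends on a storage class that supports expansion, a minimal class might look like the following sketch. The class name is arbitrary, and the provisioner shown is the GKE CSI driver as an example; it is platform-specific:

```sh
# A StorageClass that permits in-place volume expansion, one prerequisite
# for ECK to scale storage up automatically.
kubectl apply -f - <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: expandable-ssd
provisioner: pd.csi.storage.gke.io   # example; use your platform's CSI driver
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
EOF
```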

@@ -172,7 +172,7 @@ You can find [a complete example in the ECK GitHub repository](https://github.co

#### Change the polling interval [k8s-autoscaling-polling-interval]

-The Elasticsearch autoscaling capacity endpoint is polled every minute by the operator. This interval duration can be controlled using the `pollingPeriod` field in the autoscaling specification:
+The {{es}} autoscaling capacity endpoint is polled every minute by the operator. This interval duration can be controlled using the `pollingPeriod` field in the autoscaling specification:

```yaml
apiVersion: autoscaling.k8s.elastic.co/v1alpha1
@@ -207,7 +207,7 @@ spec:

#### Autoscaling status [k8s-autoscaling-status]

-In addition to the logs generated by the operator, an autoscaling status is maintained in the `ElasticsearchAutoscaler` resource. This status holds several `Conditions` to summarize the health and the status of the autoscaling mechanism. For example, dedicated `Conditions` may report if the controller cannot connect to the Elasticsearch cluster, or if a resource limit has been reached:
+In addition to the logs generated by the operator, an autoscaling status is maintained in the `ElasticsearchAutoscaler` resource. This status holds several `Conditions` to summarize the health and the status of the autoscaling mechanism. For example, dedicated `Conditions` may report if the controller cannot connect to the {{es}} cluster, or if a resource limit has been reached:

```sh
kubectl get elasticsearchautoscaler autoscaling-sample \
@@ -234,7 +234,7 @@ kubectl get elasticsearchautoscaler autoscaling-sample \
},
{
"lastTransitionTime": "2022-09-09T07:56:22Z",
"message": "Elasticsearch is available",
"message": "{{es}} is available",
"status": "True",
"type": "Online"
}
@@ -302,7 +302,7 @@ You should adjust those settings manually to match the size of your deployment w
## Autoscaling stateless applications on ECK [k8s-stateless-autoscaling]

::::{note}
-This section only applies to stateless applications. Check [Elasticsearch autoscaling](#k8s-autoscaling) for more details about scaling automatically Elasticsearch.
+This section only applies to stateless applications. Check [{{es}} autoscaling](#k8s-autoscaling) for more details about automatically scaling {{es}}.
::::


@@ -312,7 +312,7 @@ The [Horizontal Pod Autoscaler](https://kubernetes.io/docs/tasks/run-application
* APM Server
* Elastic Maps Server

-These resources expose the `scale` subresource which can be used by the Horizontal Pod Autoscaler controller to automatically adjust the number of replicas according to the CPU load or any other custom or external metric. This example shows how to create an `HorizontalPodAutoscaler` resource to adjust the replicas of a Kibana deployment according to the CPU load:
+These resources expose the `scale` subresource, which can be used by the Horizontal Pod Autoscaler controller to automatically adjust the number of replicas according to the CPU load or any other custom or external metric. This example shows how to create a `HorizontalPodAutoscaler` resource to adjust the replicas of a {{kib}} deployment according to the CPU load:

```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1