Commit c96185c

replace deployment type words with variables (also es and kib) (#1027)

Replace as many deployment and product names with their variables as possible in the deploy and manage section.

1 parent 6e72537 · commit c96185c

File tree: 345 files changed, +1968 −1952 lines changed

Lines changed: 5 additions & 5 deletions
@@ -1,12 +1,12 @@
 **Quick start options**
 
-* [**Elastic Cloud**](/deploy-manage/deploy/elastic-cloud.md): Get a hosted solution up and running in minutes.
-* [**Elastic Cloud Hosted**](/deploy-manage/deploy/elastic-cloud/cloud-hosted.md): Our hosted Elastic Stack offering, deployed in the cloud with your provider of choice. Sign up for a [14-day free trial](https://cloud.elastic.co/registration).
-* [**Elastic Cloud Serverless**](/deploy-manage/deploy/elastic-cloud/serverless.md): Create serverless projects for autoscaled and fully managed Elastic deployments. Sign up for a [14-day free trial](https://cloud.elastic.co/serverless-registration).
+* [**{{ecloud}}**](/deploy-manage/deploy/elastic-cloud.md): Get a hosted solution up and running in minutes.
+* [**{{ech}}**](/deploy-manage/deploy/elastic-cloud/cloud-hosted.md): Our hosted {{stack}} offering, deployed in the cloud with your provider of choice. Sign up for a [14-day free trial](https://cloud.elastic.co/registration).
+* [**{{serverless-full}}**](/deploy-manage/deploy/elastic-cloud/serverless.md): Create serverless projects for autoscaled and fully managed Elastic deployments. Sign up for a [14-day free trial](https://cloud.elastic.co/serverless-registration).
 * [**Local development**](/solutions/search/run-elasticsearch-locally.md): Get started quickly with {{es}} and {{kib}} in Docker for local development and testing.
 
 **Advanced options**
 
 * [**Self-managed**](/deploy-manage/deploy/self-managed.md): Install, configure, and run Elastic on your own premises.
-* [**Elastic Cloud Enterprise**](https://www.elastic.co/guide/en/cloud-enterprise/current/Elastic-Cloud-Enterprise-overview.html): Deploy Elastic Cloud on public or private clouds, virtual machines, or your own premises.
-* [**Elastic Cloud on Kubernetes**](/deploy-manage/deploy/cloud-on-k8s.md): Deploy Elastic Cloud on Kubernetes.
+* [**{{ece}}**](https://www.elastic.co/guide/en/cloud-enterprise/current/Elastic-Cloud-Enterprise-overview.html): Deploy {{ecloud}} on public or private clouds, virtual machines, or your own premises.
+* [**{{eck}}**](/deploy-manage/deploy/cloud-on-k8s.md): Deploy {{eck}}.

deploy-manage/api-keys/elasticsearch-api-keys.md

Lines changed: 2 additions & 2 deletions
@@ -9,9 +9,9 @@ mapped_pages:
 
 Several types of {{es}} API keys exist:
 
-* **Personal/User** API key: allows external services to access the Elastic Stack on behalf of a user.
+* **Personal/User** API key: allows external services to access the {{stack}} on behalf of a user.
 * **Cross-cluster** API key: allows other clusters to connect to this cluster.
-* **Managed** API key: created and managed by Kibana to run background tasks.
+* **Managed** API key: created and managed by {{kib}} to run background tasks.
 
 To manage API keys in {{kib}}, go to the **API Keys** management page using the navigation menu or the [global search field](../../explore-analyze/find-and-organize/find-apps-and-objects.md).
 
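For context on the user key type above, a minimal sketch of creating one with the {{es}} security API (the `POST /_security/api_key` endpoint is real; the credentials, host, key name, and expiration are placeholders):

```sh
# Create a user API key on behalf of the authenticated user (values are examples)
curl -u elastic:$ELASTIC_PASSWORD -X POST "https://localhost:9200/_security/api_key" \
  -H "Content-Type: application/json" -d '
{
  "name": "external-service-key",
  "expiration": "30d"
}'
```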

deploy-manage/autoscaling.md

Lines changed: 1 addition & 1 deletion
@@ -23,7 +23,7 @@ By default, {{serverless-full}} automatically scales your {{es}} resources based
 This feature is designed for indirect use by {{ech}}, {{ece}}, and {{eck}}. Direct use is not supported.
 ::::
 
-Cluster autoscaling allows an operator to create tiers of nodes that monitor themselves and determine if scaling is needed based on an operator-defined policy. An Elasticsearch cluster can use the autoscaling API to report when additional resources are required. For example, an operator can define a policy that scales a warm tier based on available disk space. Elasticsearch monitors disk space in the warm tier. If it predicts low disk space for current and future shard copies, the autoscaling API reports that the cluster needs to scale. It remains the responsibility of the operator to add the additional resources that the cluster signals it requires.
+Cluster autoscaling allows an operator to create tiers of nodes that monitor themselves and determine if scaling is needed based on an operator-defined policy. An {{es}} cluster can use the autoscaling API to report when additional resources are required. For example, an operator can define a policy that scales a warm tier based on available disk space. {{es}} monitors disk space in the warm tier. If it predicts low disk space for current and future shard copies, the autoscaling API reports that the cluster needs to scale. It remains the responsibility of the operator to add the additional resources that the cluster signals it requires.
 
 A policy is composed of a list of roles and a list of deciders. The policy governs the nodes matching the roles. The deciders provide independent estimates of the capacity required. See [Autoscaling deciders](../deploy-manage/autoscaling/autoscaling-deciders.md) for details on available deciders.
 
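To make the roles-plus-deciders structure concrete, a minimal sketch against the {{es}} autoscaling APIs (the policy name, decider settings, and local endpoint are illustrative; as the note above says, this feature is meant to be driven by {{ech}}, {{ece}}, and {{eck}} rather than called directly):

```sh
# Define a policy for warm nodes; the proactive_storage decider forecasts disk needs (values are examples)
curl -u elastic:$ELASTIC_PASSWORD -X PUT "https://localhost:9200/_autoscaling/policy/warm-tier" \
  -H "Content-Type: application/json" -d '
{
  "roles": ["data_warm"],
  "deciders": { "proactive_storage": { "forecast_window": "30m" } }
}'

# Ask the cluster what capacity the configured policies currently require
curl -u elastic:$ELASTIC_PASSWORD "https://localhost:9200/_autoscaling/capacity"
```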

deploy-manage/autoscaling/autoscaling-deciders.md

Lines changed: 2 additions & 2 deletions
@@ -16,7 +16,7 @@ applies_to:
 
 # Autoscaling deciders [autoscaling-deciders]
 
-[Autoscaling](/deploy-manage/autoscaling.md) in Elasticsearch enables dynamic resource allocation based on predefined policies. A key component of this mechanism is autoscaling deciders, which independently assess resource requirements and determine when scaling actions are necessary. Deciders analyze various factors, such as storage usage, indexing rates, and machine learning workloads, to ensure clusters maintain optimal performance without manual intervention.
+[Autoscaling](/deploy-manage/autoscaling.md) in {{es}} enables dynamic resource allocation based on predefined policies. A key component of this mechanism is autoscaling deciders, which independently assess resource requirements and determine when scaling actions are necessary. Deciders analyze various factors, such as storage usage, indexing rates, and machine learning workloads, to ensure clusters maintain optimal performance without manual intervention.
 
 ::::{admonition} Indirect use only
 This feature is designed for indirect use by {{ech}}, {{ece}}, and {{eck}}. Direct use is not supported.
@@ -96,7 +96,7 @@ The [autoscaling](../../deploy-manage/autoscaling.md) frozen shards decider (`fr
 ### Configuration settings [autoscaling-frozen-shards-decider-settings]
 
 `memory_per_shard`
-: (Optional, [byte value](elasticsearch://reference/elasticsearch/rest-apis/api-conventions.md#byte-units)) The memory needed per shard, in bytes. Defaults to 2000 shards per 64 GB node (roughly 32 MB per shard). Notice that this is total memory, not heap, assuming that the Elasticsearch default heap sizing mechanism is used and that nodes are not bigger than 64 GB.
+: (Optional, [byte value](elasticsearch://reference/elasticsearch/rest-apis/api-conventions.md#byte-units)) The memory needed per shard, in bytes. Defaults to 2000 shards per 64 GB node (roughly 32 MB per shard). Notice that this is total memory, not heap, assuming that the {{es}} default heap sizing mechanism is used and that nodes are not bigger than 64 GB.
 
 ## Frozen storage decider [autoscaling-frozen-storage-decider]
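To show where the `memory_per_shard` setting above sits in practice, a sketch of a frozen-tier policy that overrides it (values are illustrative; as the admonition above notes, these policies are normally managed by the orchestrators rather than set directly):

```sh
# Override the frozen_shards decider's per-shard memory estimate (example value)
curl -u elastic:$ELASTIC_PASSWORD -X PUT "https://localhost:9200/_autoscaling/policy/frozen-tier" \
  -H "Content-Type: application/json" -d '
{
  "roles": ["data_frozen"],
  "deciders": { "frozen_shards": { "memory_per_shard": "32mb" } }
}'
```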

deploy-manage/autoscaling/autoscaling-in-ece-and-ech.md

Lines changed: 9 additions & 9 deletions
@@ -39,7 +39,7 @@ Currently, autoscaling behavior is as follows:
 
 * **Data tiers**
 
-* Each Elasticsearch [data tier](../../manage-data/lifecycle/data-tiers.md) scales upward based on the amount of available storage. When we detect more storage is needed, autoscaling will scale up each data tier independently to ensure you can continue and ingest more data to your hot and content tier, or move data to the warm, cold, or frozen data tiers.
+* Each {{es}} [data tier](../../manage-data/lifecycle/data-tiers.md) scales upward based on the amount of available storage. When we detect more storage is needed, autoscaling will scale up each data tier independently to ensure you can continue and ingest more data to your hot and content tier, or move data to the warm, cold, or frozen data tiers.
 * In addition to scaling up existing data tiers, a new data tier will be automatically added when necessary, based on your [index lifecycle management policies](https://www.elastic.co/guide/en/cloud-enterprise/current/ece-configure-index-management.html).
 * To control the maximum size of each data tier and ensure it will not scale above a certain size, you can use the maximum size per zone field.
 * Autoscaling based on memory or CPU, as well as autoscaling downward, is not currently supported. In case you want to adjust the size of your data tier to add more memory or CPU, or in case you deleted data and want to scale it down, you can set the current size per zone of each data tier manually.
@@ -53,7 +53,7 @@ Currently, autoscaling behavior is as follows:
 * The determination of when to scale is based on the expected memory and CPU requirements for the currently configured machine learning jobs and trained models.
 
 ::::{note}
-For any Elasticsearch component the number of availability zones is not affected by autoscaling. You can always set the number of availability zones manually and the autoscaling mechanism will add or remove capacity per availability zone.
+For any {{es}} component the number of availability zones is not affected by autoscaling. You can always set the number of availability zones manually and the autoscaling mechanism will add or remove capacity per availability zone.
 ::::
 
 ## When does autoscaling occur?[ec-autoscaling-factors]
@@ -81,7 +81,7 @@ In {{ece}} deployments, a warning is also issued in the ECE `service-constructor
 
 The following are known limitations and restrictions with autoscaling:
 
-* Autoscaling will not run if the cluster is unhealthy or if the last Elasticsearch plan failed.
+* Autoscaling will not run if the cluster is unhealthy or if the last {{es}} plan failed.
 
 In {{ech}} the following additional limitations apply:
 
@@ -129,12 +129,12 @@ Each autoscaling setting is configured with a default value. You can adjust thes
 
 % ECE NOTE
 ::::{note} - {{ece}}
-On Elastic Cloud Enterprise, system-owned deployment templates include the default values for all deployment autoscaling settings.
+On {{ece}}, system-owned deployment templates include the default values for all deployment autoscaling settings.
 ::::
 
 ## Autoscaling example [ec-autoscaling-example]
 
-To help you better understand the available autoscaling settings, this example describes a typical autoscaling workflow on sample Elastic Cloud Enterprise or {{ech}} deployment.
+To help you better understand the available autoscaling settings, this example describes a typical autoscaling workflow on sample {{ece}} or {{ech}} deployment.
 
 1. Enable autoscaling:
 
@@ -145,7 +145,7 @@ To help you better understand the available autoscaling settings, this example d
 
 2. View and adjust autoscaling settings on data tiers:
 
-1. Open the **Edit** page for your deployment to get the current and maximum size per zone of each Elasticsearch data tier. In this example, the hot data and content tier has the following settings:
+1. Open the **Edit** page for your deployment to get the current and maximum size per zone of each {{es}} data tier. In this example, the hot data and content tier has the following settings:
 
 | | | |
 | --- | --- | --- |
@@ -197,8 +197,8 @@ To learn more about the {{ece}} API, see the [RESTful API](cloud://reference/clo
 
 Note the following requirements when you run this API request:
 
-* All Elasticsearch components must be included in the request, even if they are not enabled (that is, if they have a zero size). All components are included in this example.
-* The request requires a format that supports data tiers. Specifically, all Elasticsearch components must contain the following properties:
+* All {{es}} components must be included in the request, even if they are not enabled (that is, if they have a zero size). All components are included in this example.
+* The request requires a format that supports data tiers. Specifically, all {{es}} components must contain the following properties:
 
 * `id`
 * `node_attributes`
@@ -218,7 +218,7 @@ $$$ece-autoscaling-api-example-requirements-table$$$
 | data tier ||||
 | machine learning node ||||
 | coordinating and master nodes ||||
-| Kibana ||||
+| {{kib}} ||||
 | APM ||||
 
 * ✓ = Include the property.
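To give a sense of the shape these requirements imply, a hedged sketch of a single {{es}} topology element as it might appear in the request body (the `autoscaling_min`/`autoscaling_max` field names follow the deployments API, but the element ID, attributes, and sizes are placeholders and the surrounding request is omitted):

```json
{
  "id": "hot_content",
  "node_attributes": { "data": "hot" },
  "autoscaling_min": { "value": 1024, "resource": "memory" },
  "autoscaling_max": { "value": 8192, "resource": "memory" }
}
```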

deploy-manage/autoscaling/autoscaling-in-eck.md

Lines changed: 13 additions & 13 deletions
@@ -9,33 +9,33 @@ navigation_title: "In ECK"
 ---
 # Autoscaling in {{eck}}
 
-Configure autoscaling for Elasticsearch deployments in {{eck}}. Learn how to enable autoscaling, define policies, manage resource limits, and monitor scaling. Includes details on autoscaling stateless applications like Kibana, APM Server, and Elastic Maps Server.
+Configure autoscaling for {{es}} deployments in {{eck}}. Learn how to enable autoscaling, define policies, manage resource limits, and monitor scaling. Includes details on autoscaling stateless applications like {{kib}}, APM Server, and Elastic Maps Server.
 
 ## Deployments autoscaling on ECK [k8s-autoscaling]
 
 ::::{note}
-Elasticsearch autoscaling requires a valid Enterprise license or Enterprise trial license. Check [the license documentation](../license/manage-your-license-in-eck.md) for more details about managing licenses.
+{{es}} autoscaling requires a valid Enterprise license or Enterprise trial license. Check [the license documentation](../license/manage-your-license-in-eck.md) for more details about managing licenses.
 ::::
 
 
-ECK can leverage the [autoscaling API](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-autoscaling) introduced in Elasticsearch 7.11 to adjust automatically the number of Pods and the allocated resources in a tier. Currently, autoscaling is supported for Elasticsearch [data tiers](/manage-data/lifecycle/data-tiers.md) and machine learning nodes.
+ECK can leverage the [autoscaling API](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-autoscaling) introduced in {{es}} 7.11 to adjust automatically the number of Pods and the allocated resources in a tier. Currently, autoscaling is supported for {{es}} [data tiers](/manage-data/lifecycle/data-tiers.md) and machine learning nodes.
 
 
 ### Enable autoscaling [k8s-enable]
 
-To enable autoscaling on an Elasticsearch cluster, you need to define one or more autoscaling policies. Each autoscaling policy applies to one or more NodeSets which share the same set of roles specified in the `node.roles` setting in the Elasticsearch configuration.
+To enable autoscaling on an {{es}} cluster, you need to define one or more autoscaling policies. Each autoscaling policy applies to one or more NodeSets which share the same set of roles specified in the `node.roles` setting in the {{es}} configuration.
 
 
 #### Define autoscaling policies [k8s-autoscaling-policies]
 
 Autoscaling policies can be defined in an `ElasticsearchAutoscaler` resource. Each autoscaling policy must have the following fields:
 
 * `name` is a unique name used to identify the autoscaling policy.
-* `roles` contains a set of node roles, unique across all the autoscaling policies, used to identify the NodeSets to which this policy applies. At least one NodeSet with the exact same set of roles must exist in the Elasticsearch resource specification.
+* `roles` contains a set of node roles, unique across all the autoscaling policies, used to identify the NodeSets to which this policy applies. At least one NodeSet with the exact same set of roles must exist in the {{es}} resource specification.
 * `resources` helps define the minimum and maximum compute resources usage:
 
 * `nodeCount` defines the minimum and maximum nodes allowed in the tier.
-* `cpu` and `memory` enforce minimum and maximum compute resources usage for the Elasticsearch container.
+* `cpu` and `memory` enforce minimum and maximum compute resources usage for the {{es}} container.
 * `storage` enforces minimum and maximum storage request per PersistentVolumeClaim.
 
 
@@ -47,7 +47,7 @@ kind: ElasticsearchAutoscaler
 metadata:
   name: autoscaling-sample
 spec:
-  ## The name of the Elasticsearch cluster to be scaled automatically.
+  ## The name of the {{es}} cluster to be scaled automatically.
   elasticsearchRef:
     name: elasticsearch-sample
   ## The autoscaling policies.
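For readers following the truncated yaml above, a fuller sketch of how the `policies` list can look (the policy name, roles, and resource bounds are illustrative, not taken from the source page):

```yaml
apiVersion: autoscaling.k8s.elastic.co/v1alpha1
kind: ElasticsearchAutoscaler
metadata:
  name: autoscaling-sample
spec:
  elasticsearchRef:
    name: elasticsearch-sample
  policies:
    - name: data-ingest            # unique policy name
      roles: ["data", "ingest"]    # must match a NodeSet's node.roles exactly
      resources:
        nodeCount:
          min: 3
          max: 8
        cpu:
          min: 2
          max: 8
        memory:
          min: 2Gi
          max: 16Gi
        storage:
          min: 64Gi
          max: 512Gi
```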
@@ -93,7 +93,7 @@ A node role should not be referenced in more than one autoscaling policy.
 In the case of storage the following restrictions apply:
 
 * Scaling the storage size automatically requires the `ExpandInUsePersistentVolumes` feature to be enabled. It also requires a storage class that supports [volume expansion](https://kubernetes.io/blog/2018/07/12/resizing-persistent-volumes-using-kubernetes/).
-* Only one persistent volume claim per Elasticsearch node is supported when autoscaling is enabled.
+* Only one persistent volume claim per {{es}} node is supported when autoscaling is enabled.
 * Volume size cannot be scaled down.
 * Scaling up (vertically) is only supported if the available capacity in a PersistentVolume matches the capacity claimed in the PersistentVolumeClaim. Refer to the next section for more information.
 
@@ -172,7 +172,7 @@ You can find [a complete example in the ECK GitHub repository](https://github.co
 
 #### Change the polling interval [k8s-autoscaling-polling-interval]
 
-The Elasticsearch autoscaling capacity endpoint is polled every minute by the operator. This interval duration can be controlled using the `pollingPeriod` field in the autoscaling specification:
+The {{es}} autoscaling capacity endpoint is polled every minute by the operator. This interval duration can be controlled using the `pollingPeriod` field in the autoscaling specification:
 
 ```yaml
 apiVersion: autoscaling.k8s.elastic.co/v1alpha1
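The yaml in this hunk is cut off after the `apiVersion` line; a sketch showing where `pollingPeriod` sits in the spec (the 42s value and the policy bounds are illustrative):

```yaml
apiVersion: autoscaling.k8s.elastic.co/v1alpha1
kind: ElasticsearchAutoscaler
metadata:
  name: autoscaling-sample
spec:
  pollingPeriod: "42s"    # poll the capacity endpoint every 42s instead of the 1-minute default
  elasticsearchRef:
    name: elasticsearch-sample
  policies:
    - name: data
      roles: ["data"]
      resources:
        nodeCount:
          min: 2
          max: 5
        memory:
          min: 2Gi
          max: 8Gi
        storage:
          min: 32Gi
          max: 128Gi
```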
@@ -207,7 +207,7 @@ spec:
 
 #### Autoscaling status [k8s-autoscaling-status]
 
-In addition to the logs generated by the operator, an autoscaling status is maintained in the `ElasticsearchAutoscaler` resource. This status holds several `Conditions` to summarize the health and the status of the autoscaling mechanism. For example, dedicated `Conditions` may report if the controller cannot connect to the Elasticsearch cluster, or if a resource limit has been reached:
+In addition to the logs generated by the operator, an autoscaling status is maintained in the `ElasticsearchAutoscaler` resource. This status holds several `Conditions` to summarize the health and the status of the autoscaling mechanism. For example, dedicated `Conditions` may report if the controller cannot connect to the {{es}} cluster, or if a resource limit has been reached:
 
 ```sh
 kubectl get elasticsearchautoscaler autoscaling-sample \
@@ -234,7 +234,7 @@ kubectl get elasticsearchautoscaler autoscaling-sample \
     },
     {
       "lastTransitionTime": "2022-09-09T07:56:22Z",
-      "message": "Elasticsearch is available",
+      "message": "{{es}} is available",
       "status": "True",
      "type": "Online"
     }
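The kubectl command in the hunk above is truncated at the line continuation; one common way to pull just those conditions is with jsonpath (a usage sketch, not necessarily the exact flags from the source page):

```sh
# Show only the autoscaler's status conditions
kubectl get elasticsearchautoscaler autoscaling-sample -o jsonpath='{.status.conditions}'
```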
@@ -302,7 +302,7 @@ You should adjust those settings manually to match the size of your deployment w
 ## Autoscaling stateless applications on ECK [k8s-stateless-autoscaling]
 
 ::::{note}
-This section only applies to stateless applications. Check [Elasticsearch autoscaling](#k8s-autoscaling) for more details about scaling automatically Elasticsearch.
+This section only applies to stateless applications. Check [{{es}} autoscaling](#k8s-autoscaling) for more details about automatically scaling {{es}}.
 ::::
 

@@ -312,7 +312,7 @@ The [Horizontal Pod Autoscaler](https://kubernetes.io/docs/tasks/run-application
 * APM Server
 * Elastic Maps Server
 
-These resources expose the `scale` subresource which can be used by the Horizontal Pod Autoscaler controller to automatically adjust the number of replicas according to the CPU load or any other custom or external metric. This example shows how to create an `HorizontalPodAutoscaler` resource to adjust the replicas of a Kibana deployment according to the CPU load:
+These resources expose the `scale` subresource which can be used by the Horizontal Pod Autoscaler controller to automatically adjust the number of replicas according to the CPU load or any other custom or external metric. This example shows how to create an `HorizontalPodAutoscaler` resource to adjust the replicas of a {{kib}} deployment according to the CPU load:
 
 ```yaml
 apiVersion: elasticsearch.k8s.elastic.co/v1
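The yaml here is also cut off; a sketch of the `HorizontalPodAutoscaler` part that targets a {{kib}} resource (the resource name, replica bounds, and CPU target are illustrative):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: kibana-hpa
spec:
  scaleTargetRef:
    apiVersion: kibana.k8s.elastic.co/v1   # the Kibana resource exposes the scale subresource
    kind: Kibana
    name: kibana-sample
  minReplicas: 1
  maxReplicas: 3
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50
```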
