diff --git a/deploy-manage/_snippets/deployment-options-overview.md b/deploy-manage/_snippets/deployment-options-overview.md index 4d370bc7b9..027587e5dc 100644 --- a/deploy-manage/_snippets/deployment-options-overview.md +++ b/deploy-manage/_snippets/deployment-options-overview.md @@ -1,12 +1,12 @@ **Quick start options** -* [**Elastic Cloud**](/deploy-manage/deploy/elastic-cloud.md): Get a hosted solution up and running in minutes. - * [**Elastic Cloud Hosted**](/deploy-manage/deploy/elastic-cloud/cloud-hosted.md): Our hosted Elastic Stack offering, deployed in the cloud with your provider of choice. Sign up for a [14-day free trial](https://cloud.elastic.co/registration). - * [**Elastic Cloud Serverless**](/deploy-manage/deploy/elastic-cloud/serverless.md): Create serverless projects for autoscaled and fully managed Elastic deployments. Sign up for a [14-day free trial](https://cloud.elastic.co/serverless-registration). +* [**{{ecloud}}**](/deploy-manage/deploy/elastic-cloud.md): Get a hosted solution up and running in minutes. + * [**{{ech}}**](/deploy-manage/deploy/elastic-cloud/cloud-hosted.md): Our hosted {{stack}} offering, deployed in the cloud with your provider of choice. Sign up for a [14-day free trial](https://cloud.elastic.co/registration). + * [**{{serverless-full}}**](/deploy-manage/deploy/elastic-cloud/serverless.md): Create serverless projects for autoscaled and fully managed Elastic deployments. Sign up for a [14-day free trial](https://cloud.elastic.co/serverless-registration). * [**Local development**](/solutions/search/run-elasticsearch-locally.md): Get started quickly with {{es}} and {{kib}} in Docker for local development and testing. **Advanced options** * [**Self-managed**](/deploy-manage/deploy/self-managed.md): Install, configure, and run Elastic on your own premises. 
-* [**Elastic Cloud Enterprise**](https://www.elastic.co/guide/en/cloud-enterprise/current/Elastic-Cloud-Enterprise-overview.html): Deploy Elastic Cloud on public or private clouds, virtual machines, or your own premises. -* [**Elastic Cloud on Kubernetes**](/deploy-manage/deploy/cloud-on-k8s.md): Deploy Elastic Cloud on Kubernetes. \ No newline at end of file +* [**{{ece}}**](https://www.elastic.co/guide/en/cloud-enterprise/current/Elastic-Cloud-Enterprise-overview.html): Deploy {{ecloud}} on public or private clouds, virtual machines, or your own premises. +* [**{{eck}}**](/deploy-manage/deploy/cloud-on-k8s.md): Deploy {{ecloud}} on Kubernetes. \ No newline at end of file diff --git a/deploy-manage/api-keys/elasticsearch-api-keys.md b/deploy-manage/api-keys/elasticsearch-api-keys.md index d3c7ebfa10..e02ca6fc1b 100644 --- a/deploy-manage/api-keys/elasticsearch-api-keys.md +++ b/deploy-manage/api-keys/elasticsearch-api-keys.md @@ -9,9 +9,9 @@ mapped_pages: Several types of {{es}} API keys exist: -* **Personal/User** API key: allows external services to access the Elastic Stack on behalf of a user. +* **Personal/User** API key: allows external services to access the {{stack}} on behalf of a user. * **Cross-cluster** API key: allows other clusters to connect to this cluster. -* **Managed** API key: created and managed by Kibana to run background tasks. +* **Managed** API key: created and managed by {{kib}} to run background tasks. To manage API keys in {{kib}}, go to the **API Keys** management page using the navigation menu or the [global search field](../../explore-analyze/find-and-organize/find-apps-and-objects.md). diff --git a/deploy-manage/autoscaling.md b/deploy-manage/autoscaling.md index b193f1bd4a..f8afe60c2a 100644 --- a/deploy-manage/autoscaling.md +++ b/deploy-manage/autoscaling.md @@ -23,7 +23,7 @@ By default, {{serverless-full}} automatically scales your {{es}} resources based This feature is designed for indirect use by {{ech}}, {{ece}}, and {{eck}}.
Direct use is not supported. :::: -Cluster autoscaling allows an operator to create tiers of nodes that monitor themselves and determine if scaling is needed based on an operator-defined policy. An Elasticsearch cluster can use the autoscaling API to report when additional resources are required. For example, an operator can define a policy that scales a warm tier based on available disk space. Elasticsearch monitors disk space in the warm tier. If it predicts low disk space for current and future shard copies, the autoscaling API reports that the cluster needs to scale. It remains the responsibility of the operator to add the additional resources that the cluster signals it requires. +Cluster autoscaling allows an operator to create tiers of nodes that monitor themselves and determine if scaling is needed based on an operator-defined policy. An {{es}} cluster can use the autoscaling API to report when additional resources are required. For example, an operator can define a policy that scales a warm tier based on available disk space. {{es}} monitors disk space in the warm tier. If it predicts low disk space for current and future shard copies, the autoscaling API reports that the cluster needs to scale. It remains the responsibility of the operator to add the additional resources that the cluster signals it requires. A policy is composed of a list of roles and a list of deciders. The policy governs the nodes matching the roles. The deciders provide independent estimates of the capacity required. See [Autoscaling deciders](../deploy-manage/autoscaling/autoscaling-deciders.md) for details on available deciders. 
diff --git a/deploy-manage/autoscaling/autoscaling-deciders.md b/deploy-manage/autoscaling/autoscaling-deciders.md index 6d36970b0e..7da7eff9f8 100644 --- a/deploy-manage/autoscaling/autoscaling-deciders.md +++ b/deploy-manage/autoscaling/autoscaling-deciders.md @@ -16,7 +16,7 @@ applies_to: # Autoscaling deciders [autoscaling-deciders] -[Autoscaling](/deploy-manage/autoscaling.md) in Elasticsearch enables dynamic resource allocation based on predefined policies. A key component of this mechanism is autoscaling deciders, which independently assess resource requirements and determine when scaling actions are necessary. Deciders analyze various factors, such as storage usage, indexing rates, and machine learning workloads, to ensure clusters maintain optimal performance without manual intervention. +[Autoscaling](/deploy-manage/autoscaling.md) in {{es}} enables dynamic resource allocation based on predefined policies. A key component of this mechanism is autoscaling deciders, which independently assess resource requirements and determine when scaling actions are necessary. Deciders analyze various factors, such as storage usage, indexing rates, and machine learning workloads, to ensure clusters maintain optimal performance without manual intervention. ::::{admonition} Indirect use only This feature is designed for indirect use by {{ech}}, {{ece}}, and {{eck}}. Direct use is not supported. @@ -96,7 +96,7 @@ The [autoscaling](../../deploy-manage/autoscaling.md) frozen shards decider (`fr ### Configuration settings [autoscaling-frozen-shards-decider-settings] `memory_per_shard` -: (Optional, [byte value](elasticsearch://reference/elasticsearch/rest-apis/api-conventions.md#byte-units)) The memory needed per shard, in bytes. Defaults to 2000 shards per 64 GB node (roughly 32 MB per shard). Notice that this is total memory, not heap, assuming that the Elasticsearch default heap sizing mechanism is used and that nodes are not bigger than 64 GB. 
+: (Optional, [byte value](elasticsearch://reference/elasticsearch/rest-apis/api-conventions.md#byte-units)) The memory needed per shard, in bytes. Defaults to 2000 shards per 64 GB node (roughly 32 MB per shard). Notice that this is total memory, not heap, assuming that the {{es}} default heap sizing mechanism is used and that nodes are not bigger than 64 GB. ## Frozen storage decider [autoscaling-frozen-storage-decider] diff --git a/deploy-manage/autoscaling/autoscaling-in-ece-and-ech.md b/deploy-manage/autoscaling/autoscaling-in-ece-and-ech.md index 44df11fb16..dc69981ffc 100644 --- a/deploy-manage/autoscaling/autoscaling-in-ece-and-ech.md +++ b/deploy-manage/autoscaling/autoscaling-in-ece-and-ech.md @@ -39,7 +39,7 @@ Currently, autoscaling behavior is as follows: * **Data tiers** - * Each Elasticsearch [data tier](../../manage-data/lifecycle/data-tiers.md) scales upward based on the amount of available storage. When we detect more storage is needed, autoscaling will scale up each data tier independently to ensure you can continue and ingest more data to your hot and content tier, or move data to the warm, cold, or frozen data tiers. + * Each {{es}} [data tier](../../manage-data/lifecycle/data-tiers.md) scales upward based on the amount of available storage. When we detect more storage is needed, autoscaling will scale up each data tier independently to ensure you can continue to ingest more data into your hot and content tier, or move data to the warm, cold, or frozen data tiers. * In addition to scaling up existing data tiers, a new data tier will be automatically added when necessary, based on your [index lifecycle management policies](https://www.elastic.co/guide/en/cloud-enterprise/current/ece-configure-index-management.html). * To control the maximum size of each data tier and ensure it will not scale above a certain size, you can use the maximum size per zone field.
* Autoscaling based on memory or CPU, as well as autoscaling downward, is not currently supported. In case you want to adjust the size of your data tier to add more memory or CPU, or in case you deleted data and want to scale it down, you can set the current size per zone of each data tier manually. @@ -53,7 +53,7 @@ Currently, autoscaling behavior is as follows: * The determination of when to scale is based on the expected memory and CPU requirements for the currently configured machine learning jobs and trained models. ::::{note} -For any Elasticsearch component the number of availability zones is not affected by autoscaling. You can always set the number of availability zones manually and the autoscaling mechanism will add or remove capacity per availability zone. +For any {{es}} component the number of availability zones is not affected by autoscaling. You can always set the number of availability zones manually and the autoscaling mechanism will add or remove capacity per availability zone. :::: ## When does autoscaling occur?[ec-autoscaling-factors] @@ -81,7 +81,7 @@ In {{ece}} deployments, a warning is also issued in the ECE `service-constructor The following are known limitations and restrictions with autoscaling: -* Autoscaling will not run if the cluster is unhealthy or if the last Elasticsearch plan failed. +* Autoscaling will not run if the cluster is unhealthy or if the last {{es}} plan failed. In {{ech}} the following additional limitations apply: @@ -129,12 +129,12 @@ Each autoscaling setting is configured with a default value. You can adjust thes % ECE NOTE ::::{note} - {{ece}} -On Elastic Cloud Enterprise, system-owned deployment templates include the default values for all deployment autoscaling settings. +On {{ece}}, system-owned deployment templates include the default values for all deployment autoscaling settings. 
:::: ## Autoscaling example [ec-autoscaling-example] -To help you better understand the available autoscaling settings, this example describes a typical autoscaling workflow on sample Elastic Cloud Enterprise or {{ech}} deployment. +To help you better understand the available autoscaling settings, this example describes a typical autoscaling workflow on a sample {{ece}} or {{ech}} deployment. 1. Enable autoscaling: @@ -145,7 +145,7 @@ To help you better understand the available autoscaling settings, this example d 2. View and adjust autoscaling settings on data tiers: - 1. Open the **Edit** page for your deployment to get the current and maximum size per zone of each Elasticsearch data tier. In this example, the hot data and content tier has the following settings: + 1. Open the **Edit** page for your deployment to get the current and maximum size per zone of each {{es}} data tier. In this example, the hot data and content tier has the following settings: | | | | | --- | --- | --- | @@ -197,8 +197,8 @@ To learn more about the {{ece}} API, see the [RESTful API](cloud://reference/clo Note the following requirements when you run this API request: -* All Elasticsearch components must be included in the request, even if they are not enabled (that is, if they have a zero size). All components are included in this example. -* The request requires a format that supports data tiers. Specifically, all Elasticsearch components must contain the following properties: +* All {{es}} components must be included in the request, even if they are not enabled (that is, if they have a zero size). All components are included in this example. +* The request requires a format that supports data tiers.
Specifically, all {{es}} components must contain the following properties: * `id` * `node_attributes` @@ -218,7 +218,7 @@ $$$ece-autoscaling-api-example-requirements-table$$$ | data tier | ✓ | ✕ | ✓ | | machine learning node | ✕ | ✓ | ✓ | | coordinating and master nodes | ✓ | ✕ | ✕ | -| Kibana | ✓ | ✕ | ✕ | +| {{kib}} | ✓ | ✕ | ✕ | | APM | ✓ | ✕ | ✕ | * ✓ = Include the property. diff --git a/deploy-manage/autoscaling/autoscaling-in-eck.md b/deploy-manage/autoscaling/autoscaling-in-eck.md index cd27a422a1..4cb12bbcf5 100644 --- a/deploy-manage/autoscaling/autoscaling-in-eck.md +++ b/deploy-manage/autoscaling/autoscaling-in-eck.md @@ -9,21 +9,21 @@ navigation_title: "In ECK" --- # Autoscaling in {{eck}} -Configure autoscaling for Elasticsearch deployments in {{eck}}. Learn how to enable autoscaling, define policies, manage resource limits, and monitor scaling. Includes details on autoscaling stateless applications like Kibana, APM Server, and Elastic Maps Server. +Configure autoscaling for {{es}} deployments in {{eck}}. Learn how to enable autoscaling, define policies, manage resource limits, and monitor scaling. Includes details on autoscaling stateless applications like {{kib}}, APM Server, and Elastic Maps Server. ## Deployments autoscaling on ECK [k8s-autoscaling] ::::{note} -Elasticsearch autoscaling requires a valid Enterprise license or Enterprise trial license. Check [the license documentation](../license/manage-your-license-in-eck.md) for more details about managing licenses. +{{es}} autoscaling requires a valid Enterprise license or Enterprise trial license. Check [the license documentation](../license/manage-your-license-in-eck.md) for more details about managing licenses. :::: -ECK can leverage the [autoscaling API](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-autoscaling) introduced in Elasticsearch 7.11 to adjust automatically the number of Pods and the allocated resources in a tier. 
Currently, autoscaling is supported for Elasticsearch [data tiers](/manage-data/lifecycle/data-tiers.md) and machine learning nodes. +ECK can leverage the [autoscaling API](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-autoscaling) introduced in {{es}} 7.11 to automatically adjust the number of Pods and the allocated resources in a tier. Currently, autoscaling is supported for {{es}} [data tiers](/manage-data/lifecycle/data-tiers.md) and machine learning nodes. ### Enable autoscaling [k8s-enable] -To enable autoscaling on an Elasticsearch cluster, you need to define one or more autoscaling policies. Each autoscaling policy applies to one or more NodeSets which share the same set of roles specified in the `node.roles` setting in the Elasticsearch configuration. +To enable autoscaling on an {{es}} cluster, you need to define one or more autoscaling policies. Each autoscaling policy applies to one or more NodeSets which share the same set of roles specified in the `node.roles` setting in the {{es}} configuration. #### Define autoscaling policies [k8s-autoscaling-policies] @@ -31,11 +31,11 @@ To enable autoscaling on an Elasticsearch cluster, you need to define one or mor Autoscaling policies can be defined in an `ElasticsearchAutoscaler` resource. Each autoscaling policy must have the following fields: * `name` is a unique name used to identify the autoscaling policy. -* `roles` contains a set of node roles, unique across all the autoscaling policies, used to identify the NodeSets to which this policy applies. At least one NodeSet with the exact same set of roles must exist in the Elasticsearch resource specification. +* `roles` contains a set of node roles, unique across all the autoscaling policies, used to identify the NodeSets to which this policy applies. At least one NodeSet with the exact same set of roles must exist in the {{es}} resource specification.
* `resources` helps define the minimum and maximum compute resources usage: * `nodeCount` defines the minimum and maximum nodes allowed in the tier. - * `cpu` and `memory` enforce minimum and maximum compute resources usage for the Elasticsearch container. + * `cpu` and `memory` enforce minimum and maximum compute resources usage for the {{es}} container. * `storage` enforces minimum and maximum storage request per PersistentVolumeClaim. @@ -47,7 +47,7 @@ kind: ElasticsearchAutoscaler metadata: name: autoscaling-sample spec: - ## The name of the Elasticsearch cluster to be scaled automatically. + ## The name of the Elasticsearch cluster to be scaled automatically. elasticsearchRef: name: elasticsearch-sample ## The autoscaling policies. @@ -93,7 +93,7 @@ A node role should not be referenced in more than one autoscaling policy. In the case of storage the following restrictions apply: * Scaling the storage size automatically requires the `ExpandInUsePersistentVolumes` feature to be enabled. It also requires a storage class that supports [volume expansion](https://kubernetes.io/blog/2018/07/12/resizing-persistent-volumes-using-kubernetes/). -* Only one persistent volume claim per Elasticsearch node is supported when autoscaling is enabled. +* Only one persistent volume claim per {{es}} node is supported when autoscaling is enabled. * Volume size cannot be scaled down. * Scaling up (vertically) is only supported if the available capacity in a PersistentVolume matches the capacity claimed in the PersistentVolumeClaim. Refer to the next section for more information. @@ -172,7 +172,7 @@ You can find [a complete example in the ECK GitHub repository](https://github.co #### Change the polling interval [k8s-autoscaling-polling-interval] -The Elasticsearch autoscaling capacity endpoint is polled every minute by the operator.
This interval duration can be controlled using the `pollingPeriod` field in the autoscaling specification: +The {{es}} autoscaling capacity endpoint is polled every minute by the operator. This interval duration can be controlled using the `pollingPeriod` field in the autoscaling specification: ```yaml apiVersion: autoscaling.k8s.elastic.co/v1alpha1 @@ -207,7 +207,7 @@ spec: #### Autoscaling status [k8s-autoscaling-status] -In addition to the logs generated by the operator, an autoscaling status is maintained in the `ElasticsearchAutoscaler` resource. This status holds several `Conditions` to summarize the health and the status of the autoscaling mechanism. For example, dedicated `Conditions` may report if the controller cannot connect to the Elasticsearch cluster, or if a resource limit has been reached: +In addition to the logs generated by the operator, an autoscaling status is maintained in the `ElasticsearchAutoscaler` resource. This status holds several `Conditions` to summarize the health and the status of the autoscaling mechanism. For example, dedicated `Conditions` may report if the controller cannot connect to the {{es}} cluster, or if a resource limit has been reached: ```sh kubectl get elasticsearchautoscaler autoscaling-sample \ @@ -234,7 +234,7 @@ kubectl get elasticsearchautoscaler autoscaling-sample \ }, { "lastTransitionTime": "2022-09-09T07:56:22Z", - "message": "Elasticsearch is available", + "message": "Elasticsearch is available", "status": "True", "type": "Online" } @@ -302,7 +302,7 @@ You should adjust those settings manually to match the size of your deployment w ## Autoscaling stateless applications on ECK [k8s-stateless-autoscaling] ::::{note} -This section only applies to stateless applications. Check [Elasticsearch autoscaling](#k8s-autoscaling) for more details about scaling automatically Elasticsearch. +This section only applies to stateless applications.
Check [{{es}} autoscaling](#k8s-autoscaling) for more details about automatically scaling {{es}}. :::: @@ -312,7 +312,7 @@ The [Horizontal Pod Autoscaler](https://kubernetes.io/docs/tasks/run-application * APM Server * Elastic Maps Server -These resources expose the `scale` subresource which can be used by the Horizontal Pod Autoscaler controller to automatically adjust the number of replicas according to the CPU load or any other custom or external metric. This example shows how to create an `HorizontalPodAutoscaler` resource to adjust the replicas of a Kibana deployment according to the CPU load: +These resources expose the `scale` subresource which can be used by the Horizontal Pod Autoscaler controller to automatically adjust the number of replicas according to the CPU load or any other custom or external metric. This example shows how to create an `HorizontalPodAutoscaler` resource to adjust the replicas of a {{kib}} deployment according to the CPU load: ```yaml apiVersion: elasticsearch.k8s.elastic.co/v1 diff --git a/deploy-manage/autoscaling/trained-model-autoscaling.md b/deploy-manage/autoscaling/trained-model-autoscaling.md index ba5b29ca92..1c6ad088d4 100644 --- a/deploy-manage/autoscaling/trained-model-autoscaling.md +++ b/deploy-manage/autoscaling/trained-model-autoscaling.md @@ -52,7 +52,7 @@ You can enable adaptive allocations by using: If the new allocations fit on the current {{ml}} nodes, they are immediately started. If more resource capacity is needed for creating new model allocations, then your {{ml}} node will be scaled up if {{ml}} autoscaling is enabled to provide enough resources for the new allocation. The number of model allocations can be scaled down to 0. They cannot be scaled up to more than 32 allocations, unless you explicitly set the maximum number of allocations to more. 
Adaptive allocations must be set up independently for each deployment and [{{infer}} endpoint](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-inference). :::{note} -When you create inference endpoints on {{serverless-short}} using Kibana, adaptive allocations are automatically turned on, and there is no option to disable them. +When you create inference endpoints on {{serverless-short}} using {{kib}}, adaptive allocations are automatically turned on, and there is no option to disable them. ::: ### Optimizing for typical use cases [optimizing-for-typical-use-cases] @@ -78,7 +78,7 @@ Refer to the tables in the [Model deployment resource matrix](#model-deployment- In {{serverless-full}}, Search projects are given access to more processing resources, while Security and Observability projects have lower limits. This difference is reflected in the UI configuration: Search projects have higher resource limits compared to Security and Observability projects to accommodate their more complex operations. -On {{serverless-short}}, adaptive allocations are automatically enabled for all project types. However, the "Adaptive resources" control is not displayed in Kibana for Observability and Security projects. +On {{serverless-short}}, adaptive allocations are automatically enabled for all project types. However, the "Adaptive resources" control is not displayed in {{kib}} for Observability and Security projects. ## Model deployment resource matrix [model-deployment-resource-matrix] diff --git a/deploy-manage/cloud-organization/billing.md b/deploy-manage/cloud-organization/billing.md index 47b0f8c089..64d5276939 100644 --- a/deploy-manage/cloud-organization/billing.md +++ b/deploy-manage/cloud-organization/billing.md @@ -19,7 +19,7 @@ Elastic charges a recurring fee for using our offerings on {{ecloud}}. 
In this s * [{{ech}} deployments](/deploy-manage/cloud-organization/billing/cloud-hosted-deployment-billing-dimensions.md) * [{{serverless-full}}](/deploy-manage/cloud-organization/billing/serverless-project-billing-dimensions.md) - * [Elasticsearch projects](/deploy-manage/cloud-organization/billing/elasticsearch-billing-dimensions.md) + * [{{es}} projects](/deploy-manage/cloud-organization/billing/elasticsearch-billing-dimensions.md) * [Elastic Observability projects](/deploy-manage/cloud-organization/billing/elastic-observability-billing-dimensions.md) * [Elastic Security projects](/deploy-manage/cloud-organization/billing/security-billing-dimensions.md) diff --git a/deploy-manage/cloud-organization/billing/billing-faq.md b/deploy-manage/cloud-organization/billing/billing-faq.md index 7802ed5b29..e433111e59 100644 --- a/deploy-manage/cloud-organization/billing/billing-faq.md +++ b/deploy-manage/cloud-organization/billing/billing-faq.md @@ -95,21 +95,21 @@ $$$faq-refund$$$Can I get a refund? $$$faq-included$$$What is included in my paid {{ech}} deployment? : All subscription tiers for {{ech}} include the following free allowance: - * Free 1GB RAM Kibana instance + * Free 1GB RAM {{kib}} instance * Free 1GB RAM Machine Learning node * Free 1GB RAM APM server * A free allowance for [data transfer and snapshot storage costs](#faq-dts) - Note that if you go above the free tier of Kibana/ML/APM (for example, a 2GB Kibana instance), you will be charged in full for the size of that instance. + Note that if you go above the free tier of {{kib}}/ML/APM (for example, a 2GB {{kib}} instance), you will be charged in full for the size of that instance. $$$faq-dts$$$What are the data transfer and storage charges and how can I control them? : Read about our [usage-based billing dimensions](/deploy-manage/cloud-organization/billing.md#pricing-model). $$$faq-taxes$$$What taxes will be applied on my invoice?
-: Customers within the United States, and US territories, will be billed from Elasticsearch Inc., based out of the United States. The US Sales Tax rate will be based on the SaaS tax rates in the local jurisdiction (state/county/city) of the billing address of your subscription. +: Customers within the United States, and US territories, will be billed from Elasticsearch Inc., based out of the United States. The US Sales Tax rate will be based on the SaaS tax rates in the local jurisdiction (state/county/city) of the billing address of your subscription. - Customers outside the United States, will be billed from Elasticsearch BV, based out of the Netherlands. Customers with a billing address in countries with applicable EU VAT will have VAT applied based on their country and status as a business or private customer. Elastic collects VAT Numbers associated with EU VAT to determine your status as a business (B2B) or private / non-business customer (B2C), as this is a key factor to determine Elastic’s liability to charge VAT on your subscription. To update your VAT Number follow the instructions provided in [Add your billing details](/deploy-manage/cloud-organization/billing/add-billing-details.md). Customers located in countries without EU VAT will not be applied VAT on their invoices. + Customers outside the United States will be billed from Elasticsearch BV, based out of the Netherlands. Customers with a billing address in countries with applicable EU VAT will have VAT applied based on their country and status as a business or private customer. Elastic collects VAT Numbers associated with EU VAT to determine your status as a business (B2B) or private / non-business customer (B2C), as this is a key factor to determine Elastic’s liability to charge VAT on your subscription. To update your VAT Number follow the instructions provided in [Add your billing details](/deploy-manage/cloud-organization/billing/add-billing-details.md).
Customers located in countries without EU VAT will not have VAT applied to their invoices. diff --git a/deploy-manage/cloud-organization/billing/cloud-hosted-deployment-billing-dimensions.md b/deploy-manage/cloud-organization/billing/cloud-hosted-deployment-billing-dimensions.md index bb13d910b7..4af03c8e28 100644 --- a/deploy-manage/cloud-organization/billing/cloud-hosted-deployment-billing-dimensions.md +++ b/deploy-manage/cloud-organization/billing/cloud-hosted-deployment-billing-dimensions.md @@ -21,7 +21,7 @@ Read on for detail about each of these billing dimensions. ## Deployment capacity [ram-hours] -Deployment capacity refers to the cost of the nodes in your Elasticsearch deployment, plus additional node types such as Kibana, APM, and ML. Each node type is priced in terms of GB of RAM per hour (CPU and disk are scaled with RAM and included in this price). To calculate deployment capacity costs, we total up the cost of the nodes in your deployment(s) and multiply by GBs of RAM and how long they’ve been running. +Deployment capacity refers to the cost of the nodes in your {{es}} deployment, plus additional node types such as {{kib}}, APM, and ML. Each node type is priced in terms of GB of RAM per hour (CPU and disk are scaled with RAM and included in this price). To calculate deployment capacity costs, we total up the cost of the nodes in your deployment(s) and multiply by GBs of RAM and how long they’ve been running. Deployment capacity typically constitutes the majority of your bill, and is the easiest to understand and control. @@ -44,7 +44,7 @@ We meter and bill data transfer using three dimensions: : *Data out* accounts for all of the traffic coming out of the deployment. This includes search results, as well as monitoring data sent from the deployment. The same rate applies regardless of the destination of the data, whether to the internet, to another region, or to a cloud provider account in the same region.
Data coming out of the deployment through AWS PrivateLink, GCP Private Service Connect, or Azure Private Link, is also considered *Data out*. 3. Data inter-node -: *Data inter-node* accounts for all of the traffic sent between the components of the deployment. This includes the data sync between nodes of a cluster which is managed automatically by Elasticsearch cluster sharding. It also includes data related to search queries executed across multiple nodes of a cluster. Note that single-node Elasticsearch clusters typically have lower charges, but may still incur inter-node charges accounting for data exchanged with Kibana nodes or other nodes, such as machine learning or APM. +: *Data inter-node* accounts for all of the traffic sent between the components of the deployment. This includes the data sync between nodes of a cluster which is managed automatically by {{es}} cluster sharding. It also includes data related to search queries executed across multiple nodes of a cluster. Note that single-node {{es}} clusters typically have lower charges, but may still incur inter-node charges accounting for data exchanged with {{kib}} nodes or other nodes, such as machine learning or APM. We provide a free allowance of 100GB per month, which includes the sum of *data out* and *data inter-node*, across all deployments in the account. Once this threshold is passed, a charge is applied for any data transfer used in excess of the 100GB monthly free allowance. @@ -65,7 +65,7 @@ The exact root cause of unusual data transfer is not always something we can ide ## Storage [storage] -Storage costs are tied to the cost of storing the backup snapshots in the underlying IaaS object store, such as AWS S3, Google Cloud GCS or Azure Storage. These storage costs are *not* for the disk storage that persists the Elasticsearch indices, as that is already included in the [RAM Hours](#ram-hours). 
+Storage costs are tied to the cost of storing the backup snapshots in the underlying IaaS object store, such as AWS S3, Google Cloud GCS or Azure Storage. These storage costs are *not* for the disk storage that persists the {{es}} indices, as that is already included in the [RAM Hours](#ram-hours). As is common with Cloud providers, we meter and bill snapshot storage using two dimensions: @@ -92,9 +92,9 @@ As is common with Cloud providers, we meter and bill snapshot storage using two Snapshots in {{ech}} save data incrementally at each snapshot event. This means that the effective snapshot size may be larger than the size of the current indices. The snapshot size increases as data is added or updated in the cluster, and deletions do not reduce the snapshot size until the snapshot containing that data is removed. -API requests are executed every time a snapshot is taken or restored, affecting usage costs. In the event that you have any automated processes that use the Elasticsearch API to create or restore snapshots, these should be set so as to avoid unexpected charges. +API requests are executed every time a snapshot is taken or restored, affecting usage costs. In the event that you have any automated processes that use the {{es}} API to create or restore snapshots, these should be set so as to avoid unexpected charges. -You can use Kibana to configure a snapshot lifecycle management (SLM) policy to automate when snapshots are created and deleted, along with other options. To learn more, refer to the [Snapshot and Restore](../../tools/snapshot-and-restore/create-snapshots.md) documentation. +You can use {{kib}} to configure a snapshot lifecycle management (SLM) policy to automate when snapshots are created and deleted, along with other options. To learn more, refer to the [Snapshot and Restore](../../tools/snapshot-and-restore/create-snapshots.md) documentation. 
Note that reducing either the snapshot frequency or retention period limits the availability and the recency of available data to restore from. Your snapshot policy should be configured with both costs and data availability in mind in order to minimize the potential for loss of data. Note also that reducing snapshot frequency and retention will not necessarily decrease your storage costs significantly. For example, if your dataset is only growing over time, then the total amount of data stored across all of your snapshots will be equal to your cluster size, whether that’s split across 10 snapshots or 100. diff --git a/deploy-manage/cloud-organization/billing/elastic-observability-billing-dimensions.md b/deploy-manage/cloud-organization/billing/elastic-observability-billing-dimensions.md index 343fbbe5d7..07b695c89a 100644 --- a/deploy-manage/cloud-organization/billing/elastic-observability-billing-dimensions.md +++ b/deploy-manage/cloud-organization/billing/elastic-observability-billing-dimensions.md @@ -15,7 +15,7 @@ Your monthly bill is based on the capabilities you use. When you use {{obs-serve * **Ingest** — Measured by the number of GB of log/event/info data that you send to your Observability project over the course of a month. * **Retention** — Measured by the total amount of ingested data stored in your Observability project. -Data volumes for ingest and retention are based on the fully enriched normalized data size at the end of the ingest pipeline, before Elasticsearch compression is performed, and will be higher than the volumes traditionally reported by Elasticsearch index size. In addition, these volumes might be larger than those reported by cloud provider proxy logs for data going into Elasticsearch. This allows you to have flexibility in choosing your preferred ingest architecture for enrichment, whether it’s through Elastic Agent, Logstash, OpenTelemetry, or collectors — with no impact on the cost. 
+Data volumes for ingest and retention are based on the fully enriched normalized data size at the end of the ingest pipeline, before {{es}} compression is performed, and will be higher than the volumes traditionally reported by {{es}} index size. In addition, these volumes might be larger than those reported by cloud provider proxy logs for data going into {{es}}. This allows you to have flexibility in choosing your preferred ingest architecture for enrichment, whether it’s through Elastic Agent, Logstash, OpenTelemetry, or collectors — with no impact on the cost. ## Synthetics [synthetics-billing] diff --git a/deploy-manage/cloud-organization/billing/elasticsearch-billing-dimensions.md b/deploy-manage/cloud-organization/billing/elasticsearch-billing-dimensions.md index e6833198f2..ac76bcec91 100644 --- a/deploy-manage/cloud-organization/billing/elasticsearch-billing-dimensions.md +++ b/deploy-manage/cloud-organization/billing/elasticsearch-billing-dimensions.md @@ -6,9 +6,9 @@ applies_to: serverless: all --- -# Elasticsearch billing dimensions [elasticsearch-billing] +# {{es}} billing dimensions [elasticsearch-billing] -Elasticsearch is priced based on consumption of the underlying infrastructure that supports your use case, with the performance characteristics you need. Measurements are in Virtual Compute Units (VCUs). Each VCU represents a fraction of RAM, CPU, and local disk for caching. +{{es}} is priced based on consumption of the underlying infrastructure that supports your use case, with the performance characteristics you need. Measurements are in Virtual Compute Units (VCUs). Each VCU represents a fraction of RAM, CPU, and local disk for caching. 
The number of VCUs you need is determined by: @@ -23,7 +23,7 @@ For detailed {{es-serverless}} project rates, see the [{{es-serverless}} pricing ## VCU types: Search, Indexing, and ML [elasticsearch-billing-information-about-the-vcu-types-search-ingest-and-ml] -Elasticsearch uses three VCU types: +{{es}} uses three VCU types: * **Indexing:** The VCUs used to index incoming documents. * **Search:** The VCUs used to return search results, with the latency and queries per second (QPS) you require. @@ -44,4 +44,4 @@ You can control costs using the following strategies: * **Machine learning trained model autoscaling:** Configure your trained model deployment to allow it to scale down to zero allocations when there are no active inference requests: * When starting or updating a trained model deployment, [Enable adaptive resources](../../autoscaling/trained-model-autoscaling.md#enabling-autoscaling-in-kibana-adaptive-resources) and set the VCU usage level to **Low**. - * When using the inference API for Elasticsearch or ELSER, [enable `adaptive_allocations`](../../autoscaling/trained-model-autoscaling.md#enabling-autoscaling-through-apis-adaptive-allocations). + * When using the inference API for {{es}} or ELSER, [enable `adaptive_allocations`](../../autoscaling/trained-model-autoscaling.md#enabling-autoscaling-through-apis-adaptive-allocations). diff --git a/deploy-manage/cloud-organization/billing/manage-subscription.md b/deploy-manage/cloud-organization/billing/manage-subscription.md index b1c3ca2607..581fb8ad63 100644 --- a/deploy-manage/cloud-organization/billing/manage-subscription.md +++ b/deploy-manage/cloud-organization/billing/manage-subscription.md @@ -66,7 +66,7 @@ You can [change your subscription level](/deploy-manage/cloud-organization/billi : Make sure that there are no applications that use the SQL [JDBC](/explore-analyze/query-filter/languages/sql-jdbc.md) or [ODBC](/explore-analyze/query-filter/languages/sql-odbc.md) clients. 
`Field-level or document-level security` -: Remove any user role configurations based on field or document access [through the API](/deploy-manage/users-roles/cluster-or-deployment-auth/controlling-access-at-document-field-level.md) or the Kibana [Roles](/deploy-manage/users-roles/cluster-or-deployment-auth/defining-roles.md) page. +: Remove any user role configurations based on field or document access [through the API](/deploy-manage/users-roles/cluster-or-deployment-auth/controlling-access-at-document-field-level.md) or the {{kib}} [Roles](/deploy-manage/users-roles/cluster-or-deployment-auth/defining-roles.md) page. ::::{note} After you have made your changes to the deployment, it can take up to one hour to clear the notification banner. diff --git a/deploy-manage/cloud-organization/billing/security-billing-dimensions.md b/deploy-manage/cloud-organization/billing/security-billing-dimensions.md index 601555be85..792dcd16ce 100644 --- a/deploy-manage/cloud-organization/billing/security-billing-dimensions.md +++ b/deploy-manage/cloud-organization/billing/security-billing-dimensions.md @@ -18,7 +18,7 @@ Your monthly bill is based on the capabilities you use. When you use Security An * **Ingest** — Measured by the number of GB of log/event/info data that you send to your Security project over the course of a month. * **Retention** — Measured by the total amount of ingested data stored in your Security project. -Data volumes for ingest and retention are based on the fully enriched normalized data size at the end of the ingest pipeline, before {{es}} compression is performed, and will be higher than the volumes traditionally reported by {{es}} index size. In addition, these volumes might be larger than those reported by cloud provider proxy logs for data going into Elasticsearch. 
This allows you to have flexibility in choosing your preferred ingest architecture for enrichment, whether it’s through {{agent}}, {{ls}}, OpenTelemetry, or collectors — with no impact on the cost. +Data volumes for ingest and retention are based on the fully enriched normalized data size at the end of the ingest pipeline, before {{es}} compression is performed, and will be higher than the volumes traditionally reported by {{es}} index size. In addition, these volumes might be larger than those reported by cloud provider proxy logs for data going into {{es}}. This allows you to have flexibility in choosing your preferred ingest architecture for enrichment, whether it’s through {{agent}}, {{ls}}, OpenTelemetry, or collectors — with no impact on the cost. ## Endpoint Protection [security-billing-endpoint-protection] diff --git a/deploy-manage/deploy.md b/deploy-manage/deploy.md index 0aea408a00..ebb96ff48d 100644 --- a/deploy-manage/deploy.md +++ b/deploy-manage/deploy.md @@ -69,7 +69,7 @@ Consider orchestration if you: - Need to build a multi-tenant platform (ECE) Orchestrators manage the lifecycle of your Elastic deployments but don't change how the core products work. When using an orchestrated deployment: -- You'll still use the same Elasticsearch and Kibana features and configurations +- You'll still use the same {{es}} and {{kib}} features and configurations - Most product documentation remains applicable - You can add other Elastic products as needed - The orchestrator handles operational tasks while you focus on using and configuring the products diff --git a/deploy-manage/deploy/_snippets/ecctl.md b/deploy-manage/deploy/_snippets/ecctl.md index 43f2bfe62d..a12502f432 100644 --- a/deploy-manage/deploy/_snippets/ecctl.md +++ b/deploy-manage/deploy/_snippets/ecctl.md @@ -1,4 +1,4 @@ -Elastic Cloud Control (ECCTL) is the command-line interface for {{ecloud}} APIs. It wraps typical operations commonly needed by operators within a single command line tool. 
+{{ecloud}} Control (ECCTL) is the command-line interface for {{ecloud}} APIs. It wraps typical operations commonly needed by operators within a single command line tool. ECCTL provides the following benefits: diff --git a/deploy-manage/deploy/_snippets/installation-order.md b/deploy-manage/deploy/_snippets/installation-order.md index 4819e02fa4..f5595b4cf8 100644 --- a/deploy-manage/deploy/_snippets/installation-order.md +++ b/deploy-manage/deploy/_snippets/installation-order.md @@ -5,7 +5,7 @@ If you're deploying the {{stack}} in a self-managed cluster, then install the {{ * [Logstash](logstash://reference/index.md) * [{{agent}}](/reference/fleet/index.md) or [Beats](beats://reference/index.md) * [APM](/solutions/observability/apps/application-performance-monitoring-apm.md) -* [Elasticsearch Hadoop](elasticsearch-hadoop://reference/index.md) +* [{{es}} Hadoop](elasticsearch-hadoop://reference/index.md) Installing in this order ensures that the components each product depends on are in place. diff --git a/deploy-manage/deploy/_snippets/stack-version-compatibility.md b/deploy-manage/deploy/_snippets/stack-version-compatibility.md index 04d400a082..0fe8644575 100644 --- a/deploy-manage/deploy/_snippets/stack-version-compatibility.md +++ b/deploy-manage/deploy/_snippets/stack-version-compatibility.md @@ -1,3 +1,3 @@ -When installing the {{stack}}, you must use the same version across the entire stack. For example, if you are using {{es}} {{stack-version}}, you install Beats {{stack-version}}, APM Server {{stack-version}}, Elasticsearch Hadoop {{stack-version}}, {{kib}} {{stack-version}}, and Logstash {{stack-version}}. +When installing the {{stack}}, you must use the same version across the entire stack. For example, if you are using {{es}} {{stack-version}}, you install Beats {{stack-version}}, APM Server {{stack-version}}, {{es}} Hadoop {{stack-version}}, {{kib}} {{stack-version}}, and Logstash {{stack-version}}. 
If you’re upgrading an existing installation, see [](/deploy-manage/upgrade.md) for information about how to ensure compatibility with {{stack-version}}. \ No newline at end of file diff --git a/deploy-manage/deploy/cloud-enterprise.md b/deploy-manage/deploy/cloud-enterprise.md index eb789b6492..2793ca03f7 100644 --- a/deploy-manage/deploy/cloud-enterprise.md +++ b/deploy-manage/deploy/cloud-enterprise.md @@ -7,7 +7,7 @@ mapped_pages: - https://www.elastic.co/guide/en/cloud-enterprise/current/Elastic-Cloud-Enterprise-overview.html --- -# Elastic Cloud Enterprise [Elastic-Cloud-Enterprise-overview] +# {{ece}} [Elastic-Cloud-Enterprise-overview] {{ece}} (ECE) is an Elastic self-managed solution for deploying, orchestrating, and managing {{es}} clusters at scale. It provides a centralized platform that allows organizations to run {{es}}, {{kib}}, and other {{stack}} components across multiple machines. diff --git a/deploy-manage/deploy/cloud-enterprise/access-kibana.md b/deploy-manage/deploy/cloud-enterprise/access-kibana.md index 0cf330bb65..ca32356e40 100644 --- a/deploy-manage/deploy/cloud-enterprise/access-kibana.md +++ b/deploy-manage/deploy/cloud-enterprise/access-kibana.md @@ -7,36 +7,36 @@ mapped_pages: - https://www.elastic.co/guide/en/cloud-enterprise/current/ece-manage-kibana.html --- -# Access Kibana [ece-access-kibana] +# Access {{kib}} [ece-access-kibana] -Kibana is an open source analytics and visualization platform designed to search, view, and interact with data stored in Elasticsearch indices. +{{kib}} is an open source analytics and visualization platform designed to search, view, and interact with data stored in {{es}} indices. ::::{tip} -Most deployment templates include a Kibana instance, but if it wasn’t part of the initial deployment you can [customize your deployment components](./customize-deployment.md) to add {{kib}}. 
+Most deployment templates include a {{kib}} instance, but if it wasn’t part of the initial deployment you can [customize your deployment components](./customize-deployment.md) to add {{kib}}. :::: -To access Kibana: +To access {{kib}}: 1. [Log into the Cloud UI](../../../deploy-manage/deploy/cloud-enterprise/log-into-cloud-ui.md). 2. On the **Deployments** page, select your deployment. Narrow the list by name, ID, or choose from several other filters. To further define the list, use a combination of filters. -3. Under **Applications**, select the Kibana **Open** link and wait for Kibana to open. +3. Under **Applications**, select the {{kib}} **Open** link and wait for {{kib}} to open. ::::{note} - Both ports 443 and 9243 can be used to access Kibana. SSO only works with 9243 on older deployments, where you will see an option in the Cloud UI to migrate the default to port 443. In addition, any version upgrade will automatically migrate the default port to 443. + Both ports 443 and 9243 can be used to access {{kib}}. SSO only works with 9243 on older deployments, where you will see an option in the Cloud UI to migrate the default to port 443. In addition, any version upgrade will automatically migrate the default port to 443. :::: -4. Log into Kibana. Single sign-on (SSO) is enabled between your {{ece}} account and the Kibana instance. If you’re logged in already, then Kibana opens without requiring you to log in again. However, if your token has expired, choose from one of these methods to log in: +4. Log into {{kib}}. Single sign-on (SSO) is enabled between your {{ece}} account and the {{kib}} instance. If you’re logged in already, then {{kib}} opens without requiring you to log in again. However, if your token has expired, choose from one of these methods to log in: - * Select **Login with Cloud**. 
You’ll need to log in with an [ECE account](/deploy-manage/users-roles/cloud-enterprise-orchestrator/manage-users-roles.md) credentials and then you’ll be redirected to Kibana.
+    * Select **Login with Cloud**. You’ll need to log in with your [ECE account](/deploy-manage/users-roles/cloud-enterprise-orchestrator/manage-users-roles.md) credentials and then you’ll be redirected to {{kib}}.
    * Log in with the `elastic` superuser. The password was provided when you created your cluster and [can be reset](../../../deploy-manage/users-roles/cluster-or-deployment-auth/built-in-users.md).
-    * Log in with [any users you created in Kibana already](/deploy-manage/users-roles/cluster-or-deployment-auth/user-roles.md).
+    * Log in with [any users you created in {{kib}} already](/deploy-manage/users-roles/cluster-or-deployment-auth/user-roles.md).

    ::::{tip}
-    On AWS and not able to access Kibana? [Check if you need to update your endpoint URL first](../../../troubleshoot/deployments/cloud-enterprise/common-issues.md#ece-aws-private-ip).
+    On AWS and not able to access {{kib}}? [Check if you need to update your endpoint URL first](../../../troubleshoot/deployments/cloud-enterprise/common-issues.md#ece-aws-private-ip).
    ::::

-In production systems, you might need to control what Elasticsearch data users can access through Kibana, so you need create credentials that can be used to access the necessary Elasticsearch resources. This means granting read access to the necessary indexes, as well as access to update the `.kibana` index. Refer to [](/deploy-manage/users-roles/cluster-or-deployment-auth.md) for more information.
+In production systems, you might need to control what {{es}} data users can access through {{kib}}, so you need to create credentials that can be used to access the necessary {{es}} resources. This means granting read access to the necessary indexes, as well as access to update the `.kibana` index.
Refer to [](/deploy-manage/users-roles/cluster-or-deployment-auth.md) for more information. diff --git a/deploy-manage/deploy/cloud-enterprise/add-custom-bundles-plugins.md b/deploy-manage/deploy/cloud-enterprise/add-custom-bundles-plugins.md index d8ff85c74f..c5438ba870 100644 --- a/deploy-manage/deploy/cloud-enterprise/add-custom-bundles-plugins.md +++ b/deploy-manage/deploy/cloud-enterprise/add-custom-bundles-plugins.md @@ -19,7 +19,7 @@ ECE allows you to add custom plugins or external files as bundled ZIP files to y Follow these steps to configure custom bundles and plugins to your {{es}} clusters, making them available to all {{es}} instances: -* Update your Elasticsearch cluster using the [advanced configuration editor](../../../deploy-manage/deploy/cloud-enterprise/advanced-cluster-configuration.md): +* Update your {{es}} cluster using the [advanced configuration editor](../../../deploy-manage/deploy/cloud-enterprise/advanced-cluster-configuration.md): * For bundles, modify the `resources.elasticsearch.plan.elasticsearch.user_bundles` JSON attribute. * For plugins, modify the `resources.elasticsearch.plan.elasticsearch.user_plugins` JSON attribute. @@ -34,7 +34,7 @@ Here are some examples of custom bundles you can add: ## Add custom plugins to your deployment [ece-add-custom-plugin] -Custom plugins can include the official Elasticsearch plugins not provided with Elastic Cloud Enterprise, any of the community-sourced plugins, or plugins that you write yourself. +Custom plugins can include the official {{es}} plugins not provided with {{ece}}, any of the community-sourced plugins, or plugins that you write yourself. 1. [Log into the Cloud UI](../../../deploy-manage/deploy/cloud-enterprise/log-into-cloud-ui.md). 2. From the **Deployments** page, select your deployment. @@ -77,7 +77,7 @@ Custom plugins can include the official Elasticsearch plugins not provided with } ``` 1. The URL for the plugin must be always available. 
Make sure you host the plugin artifacts internally in a highly available environment. The URL must use the scheme `http` or `https`
-    2. The version must match exactly your Elasticsearch version, such as `7.17.1`. Wildcards (*) are not allowed.
+    2. The version must exactly match your {{es}} version, such as `7.17.1`. Wildcards (*) are not allowed.

5. Save your changes.
6. To verify that all nodes have the plugins installed, use one of these commands: `GET /_nodes/plugins?filter_path=nodes.*.plugins` or `GET _cat/plugins?v`
@@ -87,8 +87,8 @@ Custom plugins can include the official Elasticsearch plugins not provided with

This example adds a custom LDAP bundle for deployment level role-based access control (RBAC). To set platform level RBAC, check [](../../../deploy-manage/users-roles/cloud-enterprise-orchestrator/manage-users-roles.md).

-1. Prepare a custom bundle as a ZIP file that contains your keystore file with the private key and certificate inside of a `truststore` folder [in the same way that you would on Elastic Cloud](/deploy-manage/deploy/elastic-cloud/upload-custom-plugins-bundles.md). This bundle allows all Elasticsearch containers to access the same keystore file through your `ssl.truststore` settings.
-2. In the [advanced configuration editor](../../../deploy-manage/deploy/cloud-enterprise/advanced-cluster-configuration.md), update your new Elasticsearch cluster with the custom bundle you have just created. Modify the `user_bundles` JSON attribute of **each** Elasticsearch instance type as shown in the following example:
+1. Prepare a custom bundle as a ZIP file that contains your keystore file with the private key and certificate inside of a `truststore` folder [in the same way that you would on {{ecloud}}](/deploy-manage/deploy/elastic-cloud/upload-custom-plugins-bundles.md). This bundle allows all {{es}} containers to access the same keystore file through your `ssl.truststore` settings.
+2.
In the [advanced configuration editor](../../../deploy-manage/deploy/cloud-enterprise/advanced-cluster-configuration.md), update your new {{es}} cluster with the custom bundle you have just created. Modify the `user_bundles` JSON attribute of **each** {{es}} instance type as shown in the following example: ```sh { @@ -132,9 +132,9 @@ In this example, we assume the Identity Provider does not publish its SAML metad 1. Prepare a ZIP file with a custom bundle that contains your Identity Provider’s metadata (`metadata.xml`). Place the file inside a `saml` folder within the ZIP (`saml/metadata.xml`). - This bundle will allow all Elasticsearch containers to access the metadata file. + This bundle will allow all {{es}} containers to access the metadata file. -2. In the [advanced configuration editor](../../../deploy-manage/deploy/cloud-enterprise/advanced-cluster-configuration.md), update your Elasticsearch cluster configuration with the bundle you prepared in the previous step. Modify the `user_bundles` JSON attribute of **each** Elasticsearch instance type as shown in the following example: +2. In the [advanced configuration editor](../../../deploy-manage/deploy/cloud-enterprise/advanced-cluster-configuration.md), update your {{es}} cluster configuration with the bundle you prepared in the previous step. Modify the `user_bundles` JSON attribute of **each** {{es}} instance type as shown in the following example: ```text { @@ -182,7 +182,7 @@ In this example, we assume the Identity Provider does not publish its SAML metad ## Example: Custom JVM trust store bundle [ece-add-custom-bundle-example-cacerts] -If you are using SSL certificates signed by non-public certificate authorities, Elasticsearch is not able to communicate with the services using those certificates unless you import a custom JVM trust store containing the certificates of your signing authority into your Elastic Cloud Enterprise installation. 
You’ll need the trust store to access snapshot repositories like Minio, for your Elastic Cloud Enterprise proxy, or to reindex from remote. +If you are using SSL certificates signed by non-public certificate authorities, {{es}} is not able to communicate with the services using those certificates unless you import a custom JVM trust store containing the certificates of your signing authority into your {{ece}} installation. You’ll need the trust store to access snapshot repositories like Minio, for your {{ece}} proxy, or to reindex from remote. To import a JVM trust store: @@ -194,7 +194,7 @@ To import a JVM trust store: openssl s_client -connect -showcerts <1> ``` - 1. The server address (name and port number) of the service that you want Elasticsearch to be able to access. This command prints the entire certificate chain to `stdout`. You can choose a certificate at any level to be added to the trust store. + 1. The server address (name and port number) of the service that you want {{es}} to be able to access. This command prints the entire certificate chain to `stdout`. You can choose a certificate at any level to be added to the trust store. 2. Save it to a file with as a PEM extension. 3. Locate your JRE’s default trust store, and copy it to the current directory: @@ -237,7 +237,7 @@ To import a JVM trust store: A bundle may contain other contents beyond the trust store if you prefer, but we recommend creating separate bundles for different purposes. :::: -3. In the [advanced configuration editor](../../../deploy-manage/deploy/cloud-enterprise/advanced-cluster-configuration.md), update your Elasticsearch cluster configuration with the bundle you prepared in the previous step. Modify the `user_bundles` JSON attribute of **each** Elasticsearch instance type as shown in the following example: +3. 
In the [advanced configuration editor](../../../deploy-manage/deploy/cloud-enterprise/advanced-cluster-configuration.md), update your {{es}} cluster configuration with the bundle you prepared in the previous step. Modify the `user_bundles` JSON attribute of **each** {{es}} instance type as shown in the following example:

```sh
{
@@ -260,9 +260,9 @@ To import a JVM trust store:
...
```
    1. The URL for the bundle ZIP file must be always available. Make sure you host the plugin artefacts internally in a highly available environment.
-    2. Wildcards are allowed here, since the certificates are independent from the Elasticsearch version.
+    2. Wildcards are allowed here, since the certificates are independent from the {{es}} version.

-4. (Optional) If you prefer to use a different file name and/or password for the trust store, you also need to add an additional configuration section to the cluster metadata before adding the bundle. This configuration should be added to the `Elasticsearch cluster data` section of the [advanced configuration](./advanced-cluster-configuration.md) page:
+4. (Optional) If you prefer to use a different file name and/or password for the trust store, you also need to add an additional configuration section to the cluster metadata before adding the bundle. This configuration should be added to the `Elasticsearch cluster data` section of the [advanced configuration](./advanced-cluster-configuration.md) page:

```sh
"jvm_trust_store": {
@@ -292,7 +292,7 @@ To import a JVM trust store:
```
    2. Copy the ZIP file to a webserver that is reachable from any allocator in your environment.
-3.
In the [advanced configuration editor](../../../deploy-manage/deploy/cloud-enterprise/advanced-cluster-configuration.md), update your {{es}} cluster configuration with the bundle you prepared in the previous step. Modify the `user_bundles` JSON attribute of **each** {{es}} instance type as shown in the following example. ```sh { @@ -342,7 +342,7 @@ To import a JVM trust store: ``` 2. Copy the ZIP file to a webserver that is reachable from any allocator in your environment. -3. In the [advanced configuration editor](../../../deploy-manage/deploy/cloud-enterprise/advanced-cluster-configuration.md), update your Elasticsearch cluster configuration with the bundle you prepared in the previous step. Modify the `user_bundles` JSON attribute of **each** Elasticsearch instance type as shown in the following example. +3. In the [advanced configuration editor](../../../deploy-manage/deploy/cloud-enterprise/advanced-cluster-configuration.md), update your {{es}} cluster configuration with the bundle you prepared in the previous step. Modify the `user_bundles` JSON attribute of **each** {{es}} instance type as shown in the following example. ```sh { diff --git a/deploy-manage/deploy/cloud-enterprise/advanced-cluster-configuration.md b/deploy-manage/deploy/cloud-enterprise/advanced-cluster-configuration.md index a3c4df4dc0..79c8c7c6eb 100644 --- a/deploy-manage/deploy/cloud-enterprise/advanced-cluster-configuration.md +++ b/deploy-manage/deploy/cloud-enterprise/advanced-cluster-configuration.md @@ -17,11 +17,11 @@ To edit the cluster configuration directly: Narrow the list by name, ID, or choose from several other filters. To further define the list, use a combination of filters. -3. From your deployment menu, go to the **Edit** page then go to the bottom of the page and select **advanced Elasticsearch configuration**. +3. From your deployment menu, go to the **Edit** page then go to the bottom of the page and select **advanced {{es}} configuration**. 4. 
Edit the configuration: - * The contents of the **Deployment configuration** section describe your current configuration, such as the current capacity, the node count, the installed Elasticsearch version, and so forth. You can manually edit the configuration. - * The **Elasticsearch cluster data** section describes additional parts of your cluster, such as its name, whether snapshots are enabled, security information, and whether Kibana is enabled. + * The contents of the **Deployment configuration** section describe your current configuration, such as the current capacity, the node count, the installed {{es}} version, and so forth. You can manually edit the configuration. + * The **{{es}} cluster data** section describes additional parts of your cluster, such as its name, whether snapshots are enabled, security information, and whether {{kib}} is enabled. 5. Select **Save** for the sections that you modified to apply the new configuration. You should receive a message that your new configuration is successful. diff --git a/deploy-manage/deploy/cloud-enterprise/air-gapped-install.md b/deploy-manage/deploy/cloud-enterprise/air-gapped-install.md index 21d56760dd..2a4a010176 100644 --- a/deploy-manage/deploy/cloud-enterprise/air-gapped-install.md +++ b/deploy-manage/deploy/cloud-enterprise/air-gapped-install.md @@ -18,7 +18,7 @@ Before you start, you must: * Follow the same prerequisites described in [](./install.md#ece-install-prerequisites). This includes [](./identify-deployment-scenario.md) and [](./prepare-environment.md) steps. * [Configure your operating system](./configure-operating-system.md) in all ECE hosts. -* Be part of the `docker` group to run the installation script. You should not install Elastic Cloud Enterprise as the `root` user. +* Be part of the `docker` group to run the installation script. You should not install {{ece}} as the `root` user. 
* Set up and run a local copy of the Elastic Package Repository, otherwise your deployments with APM server and Elastic agent won’t work. Refer to the [Running EPR in air-gapped environments](/reference/fleet/air-gapped.md#air-gapped-diy-epr) documentation. When you are ready to install ECE, you can proceed: diff --git a/deploy-manage/deploy/cloud-enterprise/alternative-install-ece-with-ansible.md b/deploy-manage/deploy/cloud-enterprise/alternative-install-ece-with-ansible.md index adb2887f43..587aa6f4eb 100644 --- a/deploy-manage/deploy/cloud-enterprise/alternative-install-ece-with-ansible.md +++ b/deploy-manage/deploy/cloud-enterprise/alternative-install-ece-with-ansible.md @@ -9,7 +9,7 @@ navigation_title: Ansible playbook # Alternative: Install ECE with Ansible [ece-configure-ansible] -If you already use Ansible in your business for provisioning, configuration management, and application deployment, you can use the ECE Ansible playbook to get up and running with Elastic Cloud Enterprise faster, on any cloud provider. +If you already use Ansible in your business for provisioning, configuration management, and application deployment, you can use the ECE Ansible playbook to get up and running with {{ece}} faster, on any cloud provider. Please note that the ECE Ansible playbook is a community project, supported by Elastic, available on GitHub: [elastic/ansible-elastic-cloud-enterprise](https://github.com/elastic/ansible-elastic-cloud-enterprise). Elastic welcomes all community contributions to the repository and will validate any changes on a best-effort basis. 
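+
+The playbook is driven by a standard Ansible inventory of your ECE hosts. As an illustrative sketch only — the actual group names, variables, and playbook entry point are defined in the repository's README and may change between releases:
+
+```sh
+# Hypothetical invocation: inventory.yml lists your ECE hosts, and
+# site.yml stands in for whatever entry-point playbook the repository documents.
+ansible-playbook -i inventory.yml site.yml
+```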
diff --git a/deploy-manage/deploy/cloud-enterprise/assign-roles-to-hosts.md b/deploy-manage/deploy/cloud-enterprise/assign-roles-to-hosts.md index 21a2045f68..b8db916ceb 100644 --- a/deploy-manage/deploy/cloud-enterprise/assign-roles-to-hosts.md +++ b/deploy-manage/deploy/cloud-enterprise/assign-roles-to-hosts.md @@ -8,20 +8,20 @@ mapped_pages: # Assign roles to hosts [ece-change-roles] -Assigning roles might be required after you [install Elastic Cloud Enterprise on hosts](install.md) to make sure the new hosts can be used for their intended purpose and to remove roles from the initial host to implement a recommended ECE installation. Similarly, if you need more processing capacity for Elasticsearch nodes in your deployment, change the role of a new runner to `allocator` to add its capacity to your installation. +Assigning roles might be required after you [install {{ece}} on hosts](install.md) to make sure the new hosts can be used for their intended purpose and to remove roles from the initial host to implement a recommended ECE installation. Similarly, if you need more processing capacity for {{es}} nodes in your deployment, change the role of a new runner to `allocator` to add its capacity to your installation. These steps describe how to assign roles from the Cloud UI. For automation purposes, assigning roles with a [token you generate](generate-roles-tokens.md) is preferred. -Each Elastic Cloud Enterprise runner can take on several roles: +Each {{ece}} runner can take on several roles: `allocator` -: Allocates the available computing resources to Elasticsearch nodes or Kibana instances. In larger installations, a majority of the machines will be allocators. +: Allocates the available computing resources to {{es}} nodes or {{kib}} instances. In larger installations, a majority of the machines will be allocators. `coordinator` : Serves as a distributed coordination system and resource scheduler. 
`proxy` -: Manages communication between a user and an Elasticsearch or Kibana instance. +: Manages communication between a user and an {{es}} or {{kib}} instance. `director` : Manages the ZooKeeper datastore. This role is typically shared with the coordinator role. In production deployments it can be separated from a coordinator. @@ -33,7 +33,7 @@ Once the `director` role is assigned to a runner, the Zookeeper service starts o Each role is associated with a set of Docker containers that provide the specific functionality. -There are some additional roles shown in the Cloud UI, such as the [beats-runner](/reference/glossary/index.md#glossary-beats-runner) and [services-forwarder](/reference/glossary/index.md#glossary-services-forwarder) roles, that are required by Elastic Cloud Enterprise and that you cannot modify. +There are some additional roles shown in the Cloud UI, such as the [beats-runner](/reference/glossary/index.md#glossary-beats-runner) and [services-forwarder](/reference/glossary/index.md#glossary-services-forwarder) roles, that are required by {{ece}} and that you cannot modify. To assign roles to hosts: @@ -47,5 +47,5 @@ To assign roles to hosts: 3. To update the roles, select the host IP address and then choose **Manage roles** from the **Manage host** menu. 4. Select the role assignments for the host and choose **Update roles**. -Elastic Cloud Enterprise automatically starts managing the node in its new role and makes it available for use. +{{ece}} automatically starts managing the node in its new role and makes it available for use. 
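Role tokens remain the preferred automation path, but if you script host setup against the ECE RESTful API instead, the runner-roles endpoint accepts a body along these lines (the exact endpoint path and schema are assumptions to verify against the API reference for your ECE version):

```json
{
  "roles": [
    { "role_name": "allocator" }
  ]
}
```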
diff --git a/deploy-manage/deploy/cloud-enterprise/ce-add-support-for-node-roles-autoscaling.md b/deploy-manage/deploy/cloud-enterprise/ce-add-support-for-node-roles-autoscaling.md index ddf9e27587..6a303798ec 100644 --- a/deploy-manage/deploy/cloud-enterprise/ce-add-support-for-node-roles-autoscaling.md +++ b/deploy-manage/deploy/cloud-enterprise/ce-add-support-for-node-roles-autoscaling.md @@ -9,7 +9,7 @@ mapped_pages: # Updating custom templates to support node_roles and autoscaling [ce-add-support-for-node-roles-and-autoscaling] -Templates created in older versions of ECE should be updated in order to take advantage of new Elastic Cloud Enterprise features, such as [Data tiers](../../../manage-data/lifecycle/data-tiers.md), and [Deployment autoscaling](../../autoscaling.md). By updating these templates we also ensure forward compatibility with future Elastic Cloud Enterprise versions that will require certain fields such as `node_roles` and `id` to be present in the deployment configuration. +Templates created in older versions of ECE should be updated in order to take advantage of new {{ece}} features, such as [Data tiers](../../../manage-data/lifecycle/data-tiers.md) and [Deployment autoscaling](../../autoscaling.md). By updating these templates we also ensure forward compatibility with future {{ece}} versions that will require certain fields such as `node_roles` and `id` to be present in the deployment configuration. ::::{note} System owned deployment templates are automatically updated during the ECE upgrade process to support both data tiers with `node_roles` and autoscaling. However, custom templates that you created must be manually updated by following the steps in this guide.
@@ -17,11 +17,11 @@ System owned deployment templates are automatically updated during the ECE upgra ## Adding support for node_roles [ece_adding_support_for_node_roles] -The `node_roles` field defines the roles that an Elasticsearch topology element can have, which is used in place of `node_type` when a new feature such as autoscaling is enabled, or when a new data tier is added. This field is supported on [Elastic stack versions 7.10 and above](cloud://reference/cloud-enterprise/changes-to-index-allocation-api.md). +The `node_roles` field defines the roles that an {{es}} topology element can have, which is used in place of `node_type` when a new feature such as autoscaling is enabled, or when a new data tier is added. This field is supported on [{{stack}} versions 7.10 and above](cloud://reference/cloud-enterprise/changes-to-index-allocation-api.md). -There are a number of fields that need to be added to each Elasticsearch node in order to support `node_roles`: +There are a number of fields that need to be added to each {{es}} node in order to support `node_roles`: -* **id**: Unique identifier of the topology element. This field, along with the `node_roles`, identifies an Elasticsearch topology element. +* **id**: Unique identifier of the topology element. This field, along with the `node_roles`, identifies an {{es}} topology element. * **node_roles**: The list of node roles. Allowable roles are: `master`, `ingest`, `ml`, `data_hot`, `data_content`, `data_warm`, `data_cold`, `data_frozen`, `remote_cluster_client`, and `transform`. For details, check [Node roles](elasticsearch://reference/elasticsearch/configuration-reference/node-settings.md#node-roles). * **topology_element_control**: Controls for the topology element. 
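Putting the three fields together, a single `hot_content` topology element carrying them might look like the following sketch (the size and role list here are illustrative placeholders; take the authoritative values from the reference example in this guide):

```json
{
  "id": "hot_content",
  "node_roles": [
    "master",
    "ingest",
    "data_hot",
    "data_content",
    "remote_cluster_client",
    "transform"
  ],
  "topology_element_control": {
    "min": {
      "value": 1024,
      "resource": "memory"
    }
  }
}
```

The `id` and `node_roles` pair identifies the element, while `topology_element_control` constrains how small it may be sized.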
@@ -313,7 +313,7 @@ The following example is based on the `default` system owned deployment template :::: -In the reference example there are seven different *Elasticsearch topology elements*: `hot_content`, `warm`, `cold`, `frozen`, `coordinating`, `master`, and `ml`. These names map to the `id` field of each topology element. In addition, this template contains four different *resources*: `elasticsearch`, `kibana`, `apm`, and `enterprise_search`. +In the reference example there are seven different *{{es}} topology elements*: `hot_content`, `warm`, `cold`, `frozen`, `coordinating`, `master`, and `ml`. These names map to the `id` field of each topology element. In addition, this template contains four different *resources*: `elasticsearch`, `kibana`, `apm`, and `enterprise_search`. ### Requirements [ece_requirements] @@ -326,17 +326,17 @@ To add support for `node_roles`, the deployment template has to meet the followi ::::{note} :name: ece-ce-valid-topology-element-ids - The IDs `hot_content`, `warm`, `cold`, `frozen`, `coordinating`, `master`, and `ml` are the **only** ones supported in an Elasticsearch topology element. In addition, there may not be topology elements with duplicate IDs inside the `elasticsearch` resource. + The IDs `hot_content`, `warm`, `cold`, `frozen`, `coordinating`, `master`, and `ml` are the **only** ones supported in an {{es}} topology element. In addition, there may not be topology elements with duplicate IDs inside the `elasticsearch` resource. :::: * Each topology element contains the `id`, `node_roles`, and `topology_element_control` fields. -It is also recommended that all Elasticsearch topology elements have the `node_attributes` field. This field can be useful in ILM policies, when creating a deployment using a version below 7.10, that does not support `node_roles`. +It is also recommended that all {{es}} topology elements have the `node_attributes` field. 
This field can be useful in ILM policies when creating a deployment with a version below 7.10, which does not support `node_roles`. Except for the `id` and `node_roles`, all fields can be configured by the user. Also, the topology elements must contain the exact same `id` and `node_roles` that are present in the reference example. ::::{note} -Although it is required for the template to contain all resources and topology elements, it is possible to disable certain components by setting their `size.value` (and `topology_element_control.size` in the case of the Elasticsearch topology elements) to `0`. +Although it is required for the template to contain all resources and topology elements, it is possible to disable certain components by setting their `size.value` (and `topology_element_control.size` in the case of the {{es}} topology elements) to `0`. :::: @@ -345,23 +345,23 @@ Although it is required for the template to contain all resources and topology e To update a custom deployment template: -1. Add the `id`, `node_roles`, `node_attributes`, and `topology_element_control` fields to the existing Elasticsearch topology elements. Keep in mind that these fields must match the Elasticsearch topology element in question: +1. Add the `id`, `node_roles`, `node_attributes`, and `topology_element_control` fields to the existing {{es}} topology elements. Keep in mind that these fields must match the {{es}} topology element in question: * If the `id` of the topology elements in the existing templates already match any of the seven mentioned in the requirements, then simply add the `node_roles` and `topology_element_control` to those elements, based on the reference example. - * Otherwise, map each of the existing topology elements to one of the seven Elasticsearch topology elements, based on their `node_type`, and add the fields accordingly.
+ * Otherwise, map each of the existing topology elements to one of the seven {{es}} topology elements, based on their `node_type`, and add the fields accordingly. 2. Add the `elasticsearch` topology elements that are missing. 3. Add the `resources` that are missing. ::::{note} -It is recommended to add the `id` field to each Elasticsearch topology element in the deployment plan, before updating the template. This can be performed either through a deployment update API request or using the deployment **Advanced edit** page. Refer to the [note above](#ece-ce-valid-topology-element-ids) to understand which values are available for the `id` field. +It is recommended to add the `id` field to each {{es}} topology element in the deployment plan, before updating the template. This can be performed either through a deployment update API request or using the deployment **Advanced edit** page. Refer to the [note above](#ece-ce-valid-topology-element-ids) to understand which values are available for the `id` field. :::: ### Example [ece-ce-add-support-to-node-roles-example] -The existing template contains three Elasticsearch topology elements and two resources (`elasticsearch` and `kibana`). +The existing template contains three {{es}} topology elements and two resources (`elasticsearch` and `kibana`). ::::{dropdown} Custom example without support for `node_roles` ```json @@ -465,9 +465,9 @@ The existing template contains three Elasticsearch topology elements and two res :::: -In this case we can match the three existing Elasticsearch topology elements to `hot_content`, `master`, and `ml`, respectively, based on their `node_type` field. Therefore, we can simply add the `id`, `node_roles`, `topology_element_control`, and `node_attributes` that are already associated with these topology elements in the reference example. +In this case we can match the three existing {{es}} topology elements to `hot_content`, `master`, and `ml`, respectively, based on their `node_type` field. 
Therefore, we can simply add the `id`, `node_roles`, `topology_element_control`, and `node_attributes` that are already associated with these topology elements in the reference example. -Then, it is only necessary to add the four Elasticsearch topology elements (`warm`, `cold`, `frozen`, and `coordinating`) and two resources (`apm` and `enterprise_search`) that are missing. These fields can also be added based on the reference example. +Then, it is only necessary to add the four {{es}} topology elements (`warm`, `cold`, `frozen`, and `coordinating`) and two resources (`apm` and `enterprise_search`) that are missing. These fields can also be added based on the reference example. After adding support for `node_roles`, the resulting deployment template should look similar to the following: @@ -757,13 +757,13 @@ After adding support for `node_roles`, the resulting deployment template should ## Adding support for autoscaling [ece_adding_support_for_autoscaling] -After adding support for `node_roles` we can then update the template to support autoscaling. Autoscaling is used to automatically adjust the available resources in the deployments. Currently, this feature is available for Elasticsearch data tiers and machine learning node in [Elastic stack versions 7.11 and above](../../autoscaling.md). +After adding support for `node_roles` we can then update the template to support autoscaling. Autoscaling is used to automatically adjust the available resources in the deployments. Currently, this feature is available for {{es}} data tiers and machine learning nodes in [{{stack}} versions 7.11 and above](../../autoscaling.md). There are a number of autoscaling fields that need to be added in order to support autoscaling: -* **autoscaling_min**: The default minimum size of an Elasticsearch topology element when autoscaling is enabled. This setting is currently available only for machine learning nodes, since these are the only nodes that support scaling down.
-* **autoscaling_max**: The default maximum size of an Elasticsearch topology element when autoscaling is enabled. This setting is currently available only for data tiers and machine learning nodes, since these are the only nodes that support scaling up. -* **autoscaling_enabled**: When set to `true`, autoscaling is enabled by default on an Elasticsearch cluster. +* **autoscaling_min**: The default minimum size of an {{es}} topology element when autoscaling is enabled. This setting is currently available only for machine learning nodes, since these are the only nodes that support scaling down. +* **autoscaling_max**: The default maximum size of an {{es}} topology element when autoscaling is enabled. This setting is currently available only for data tiers and machine learning nodes, since these are the only nodes that support scaling up. +* **autoscaling_enabled**: When set to `true`, autoscaling is enabled by default on an {{es}} cluster. ::::{note} These fields represent the default settings for the deployment. However, autoscaling can be enabled/disabled and the maximum and minimum autoscaling sizes can be adjusted in the deployment settings. @@ -1094,7 +1094,7 @@ If necessary, the values chosen for each field can be based on the reference exa To update a custom deployment template: -1. Add the `autoscaling_min` and `autoscaling_max` fields to the Elasticsearch topology elements (check [Autoscaling through the API](../../autoscaling/autoscaling-in-ece-and-ech.md#ec-autoscaling-api-example)). +1. Add the `autoscaling_min` and `autoscaling_max` fields to the {{es}} topology elements (check [Autoscaling through the API](../../autoscaling/autoscaling-in-ece-and-ech.md#ec-autoscaling-api-example)). 2. Add the `autoscaling_enabled` fields to the `elasticsearch` resource. Set this field to `true` in case you want autoscaling enabled by default, and to `false` otherwise. 
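As a hedged sketch of steps 1 and 2, an `ml` topology element with autoscaling defaults could carry the following, with `autoscaling_enabled` set to `true` on the `elasticsearch` resource itself (the sizes are placeholders, not recommendations):

```json
{
  "id": "ml",
  "node_roles": ["ml", "remote_cluster_client"],
  "autoscaling_min": {
    "value": 0,
    "resource": "memory"
  },
  "autoscaling_max": {
    "value": 61440,
    "resource": "memory"
  }
}
```

Because machine learning is the only tier that supports scaling down, it carries both bounds; a data tier would declare only `autoscaling_max`.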
@@ -1736,7 +1736,7 @@ Having added support for `node_roles` and autoscaling to your custom template, i After the template is updated, you can start [creating new deployments](create-deployment.md) or [migrating existing ones to `node_roles`](#ece-migrating-a-deployment-to-node-roles). -Although `node_roles` and autoscaling are only available in more recent Elastic stack versions, an updated template can still be used with deployments that have versions below 7.10. In these cases, the data tiers and autoscaling features will only take effect once the deployment is upgraded to versions 7.10 and 7.11, respectively. +Although `node_roles` and autoscaling are only available in more recent {{stack}} versions, an updated template can still be used with deployments that have versions below 7.10. In these cases, the data tiers and autoscaling features will only take effect once the deployment is upgraded to versions 7.10 and 7.11, respectively. ## Migrating a deployment to `node_roles` [ece-migrating-a-deployment-to-node-roles] @@ -1753,7 +1753,7 @@ If you do not intend to perform any of these actions, the migration can only be 1. Go to the deployment **Edit** page. 2. Get the deployment update payload by clicking **Equivalent API request** at the bottom of the page. -3. Update the payload by replacing `node_type` with `node_roles` in each Elasticsearch topology element. To know which `node_roles` to add to each topology element, refer to the [custom template example](#ece-ce-add-support-to-node-roles-example) where support for `node_roles` is added. +3. Update the payload by replacing `node_type` with `node_roles` in each {{es}} topology element. To know which `node_roles` to add to each topology element, refer to the [custom template example](#ece-ce-add-support-to-node-roles-example) where support for `node_roles` is added. 4. Send a `PUT` request with the updated deployment payload to conclude the migration. 
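For instance, a hot tier element that previously declared the legacy booleans would swap them for a role list roughly as follows (hypothetical fragments; take the exact `node_roles` for each element from the custom template example referenced in step 3):

```json
{ "node_type": { "master": true, "data": true, "ingest": true } }
```

becomes

```json
{ "node_roles": ["master", "ingest", "data_hot", "data_content", "remote_cluster_client", "transform"] }
```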
Check the [Update Deployment](https://www.elastic.co/docs/api/doc/cloud-enterprise/operation/operation-update-deployment) API documentation for more details. **Using the Advanced edit:** @@ -1765,7 +1765,7 @@ To follow this approach you need to have administrator privileges. 1. Go to the deployment **Edit** page. 2. Click **Advanced edit** at the bottom of the page. -3. Update the **Deployment configuration** by replacing `node_type` with `node_roles` in each Elasticsearch topology element. To know which `node_roles` to add to each topology element, refer to the [custom template example](#ece-ce-add-support-to-node-roles-example) where support for `node_roles` is added. +3. Update the **Deployment configuration** by replacing `node_type` with `node_roles` in each {{es}} topology element. To know which `node_roles` to add to each topology element, refer to the [custom template example](#ece-ce-add-support-to-node-roles-example) where support for `node_roles` is added. 4. Click **Save** to conclude the migration. ::::{warning} diff --git a/deploy-manage/deploy/cloud-enterprise/change-ece-api-url.md b/deploy-manage/deploy/cloud-enterprise/change-ece-api-url.md index 958d83b017..240979f66e 100644 --- a/deploy-manage/deploy/cloud-enterprise/change-ece-api-url.md +++ b/deploy-manage/deploy/cloud-enterprise/change-ece-api-url.md @@ -8,7 +8,7 @@ mapped_pages: # Change the ECE API URL [ece-config-api-base-url] -You can configure the HTTPS URL used to access the ECE API. You can specify either a DNS host name or an IP address. The primary use for this is to enable [single sign-on](../../users-roles/cloud-enterprise-orchestrator/configure-sso-for-deployments.md) on your deployments, so you can log into Kibana automatically once logged in to ECE. +You can configure the HTTPS URL used to access the ECE API. You can specify either a DNS host name or an IP address. 
The primary use for this is to enable [single sign-on](../../users-roles/cloud-enterprise-orchestrator/configure-sso-for-deployments.md) on your deployments, so you can log into {{kib}} automatically once logged in to ECE. To change the ECE API URL in the Cloud UI: diff --git a/deploy-manage/deploy/cloud-enterprise/change-endpoint-urls.md b/deploy-manage/deploy/cloud-enterprise/change-endpoint-urls.md index ccaf2d0891..73cf0b3631 100644 --- a/deploy-manage/deploy/cloud-enterprise/change-endpoint-urls.md +++ b/deploy-manage/deploy/cloud-enterprise/change-endpoint-urls.md @@ -25,13 +25,13 @@ To change endpoints in the Cloud UI: 1. [Log into the Cloud UI](log-into-cloud-ui.md). 2. From the **Platform** menu, select **Settings**. -3. Specify the deployment domain name (`CNAME`) value for your cluster and Kibana endpoints. +3. Specify the deployment domain name (`CNAME`) value for your cluster and {{kib}} endpoints. 4. Select **Update Deployment endpoints**. The new endpoint becomes effective immediately. To find your deployment endpoints, select a deployment and review the information on the **Elasticsearch** and **Kibana** pages. ::::{tip} -If you install Elastic Cloud Enterprise on AWS, you likely need to modify the cluster endpoint. To learn more, check [Endpoint URLs Inaccessible on AWS](../../../troubleshoot/deployments/cloud-enterprise/common-issues.md#ece-aws-private-ip). +If you install {{ece}} on AWS, you likely need to modify the cluster endpoint. To learn more, check [Endpoint URLs Inaccessible on AWS](../../../troubleshoot/deployments/cloud-enterprise/common-issues.md#ece-aws-private-ip). 
:::: ::::{tip} diff --git a/deploy-manage/deploy/cloud-enterprise/configure-allocator-affinity.md b/deploy-manage/deploy/cloud-enterprise/configure-allocator-affinity.md index d224c8a661..ba7f24762f 100644 --- a/deploy-manage/deploy/cloud-enterprise/configure-allocator-affinity.md +++ b/deploy-manage/deploy/cloud-enterprise/configure-allocator-affinity.md @@ -8,12 +8,12 @@ mapped_pages: # Configure allocator affinity [ece-configuring-allocator-affinity] -One of the benefits of the ECE platform is its robust deployment instance distribution logic that maximizes the utilization of the underlying resources you deploy the Elastic Stack on. In ECE 2.4 and later, you can customize how Elastic Stack deployments get distributed across the available set of allocators in your ECE installation, which is known as *allocator affinity*. +One of the benefits of the ECE platform is its robust deployment instance distribution logic that maximizes the utilization of the underlying resources you deploy the {{stack}} on. In ECE 2.4 and later, you can customize how {{stack}} deployments get distributed across the available set of allocators in your ECE installation, which is known as *allocator affinity*. ## Before you begin [ece_before_you_begin_6] -Configuring allocator affinity is an optional post-installation task that changes the behavior of Elastic Cloud Enterprise. If you do not explicitly set an affinity strategy, all instances use the [`fill-anti-affinity`](#fill-anti-affinity) strategy by default. +Configuring allocator affinity is an optional post-installation task that changes the behavior of {{ece}}. If you do not explicitly set an affinity strategy, all instances use the [`fill-anti-affinity`](#fill-anti-affinity) strategy by default. To follow these steps, you must be familiar with using the ECE RESTful API. 
The API examples in this topic use HTTPS, which requires that you have a [TLS certificate already installed](../../security/secure-your-elastic-cloud-enterprise-installation/manage-security-certificates.md). For testing purposes only, you can specify the `-k` option to turn off certificate verification, as shown in our examples, or use HTTP over port 12400 until you get your TLS certificate sorted out. @@ -26,7 +26,7 @@ $$$fill-anti-affinity$$$`fill-anti-affinity` (default) : Prefers to create instances of the same deployment on separate allocators, if available. This strategy tries to fill an already used allocator in a given zone before moving on to the next one, but it will prioritize separating instances of the same deployment onto different allocators. The strategy strikes a good balance between utilization and fault tolerance, as it minimizes the impact on any given deployment in case of a host failure. This strategy is the default for ECE 2.3 and later. `fill` -: Similar to the previous strategy, optimizes for maximum utilization of already used allocators before expanding to new, unused ones. Because this strategy makes sure that existing resources are fully utilized before requiring new ones to be provisioned, it is especially useful when running ECE on cloud environments where you typically pay only for provisioned capacity. With this strategy, new Elasticsearch nodes and Kibana instances for a deployment are created on the least empty allocator in a given zone, even if multiple instances end up on the same allocator, making sure to fill it first before moving on to the next allocator in that zone. The trade-off is that you potentially give up host-level high availability (HA) if an allocator gets filled with multiple instances from the same deployment. This strategy was the default for ECE 2.2 and earlier. +: Similar to the previous strategy, optimizes for maximum utilization of already used allocators before expanding to new, unused ones. 
Because this strategy makes sure that existing resources are fully utilized before requiring new ones to be provisioned, it is especially useful when running ECE on cloud environments where you typically pay only for provisioned capacity. With this strategy, new {{es}} nodes and {{kib}} instances for a deployment are created on the least empty allocator in a given zone, even if multiple instances end up on the same allocator, making sure to fill it first before moving on to the next allocator in that zone. The trade-off is that you potentially give up host-level high availability (HA) if an allocator gets filled with multiple instances from the same deployment. This strategy was the default for ECE 2.2 and earlier. `distribute` : This strategy optimizes for distributing the deployment instances as evenly as possible across all available resources in a given availability zone, creating new deployment instances on the least used allocators. This strategy is useful in scenarios where the hardware resources are already provisioned, typically in on-premise datacenters, and you want to use as much of them as possible. Available in ECE 2.4 and later. diff --git a/deploy-manage/deploy/cloud-enterprise/configure-deployment-templates.md b/deploy-manage/deploy/cloud-enterprise/configure-deployment-templates.md index ebc80e6503..f66a48cc18 100644 --- a/deploy-manage/deploy/cloud-enterprise/configure-deployment-templates.md +++ b/deploy-manage/deploy/cloud-enterprise/configure-deployment-templates.md @@ -8,18 +8,18 @@ mapped_pages: # Manage deployment templates [ece-configuring-ece-templates] -Deployment templates combine components of the Elastic Stack, such as Elasticsearch nodes and Kibana instances, for different use cases. Compared to a one-size-fits-all approach to deploying the Elastic Stack, templates provide much greater flexibility and ensure that your deployments have the resources they need to support your use cases.
To make the most of deployment templates, you must configure ECE for them. +Deployment templates combine components of the {{stack}}, such as {{es}} nodes and {{kib}} instances, for different use cases. Compared to a one-size-fits-all approach to deploying the {{stack}}, templates provide much greater flexibility and ensure that your deployments have the resources they need to support your use cases. To make the most of deployment templates, you must configure ECE for them. After installing or upgrading to ECE version 2.0 or later: -1. [Tag your allocators](ece-configuring-ece-tag-allocators.md) to tell ECE what kind of hardware you have available for Elastic Stack deployments. -2. [Edit the default instance configurations](ece-configuring-ece-instance-configurations-edit.md) to match components of the Elastic Stack to your tagged allocators. +1. [Tag your allocators](ece-configuring-ece-tag-allocators.md) to tell ECE what kind of hardware you have available for {{stack}} deployments. +2. [Edit the default instance configurations](ece-configuring-ece-instance-configurations-edit.md) to match components of the {{stack}} to your tagged allocators. -If you do not perform these steps, Elastic Cloud Enterprise will behave just as it did in versions before 2.0 and deploy the Elastic Stack wherever there is space on allocators. +If you do not perform these steps, {{ece}} will behave just as it did in versions before 2.0 and deploy the {{stack}} wherever there is space on allocators. Have a use case that isn’t addressed by the ECE default templates? You can also: -* [Create your own instance configurations](ece-configuring-ece-instance-configurations-create.md) to match components of the Elastic Stack to allocators, tailoring what resources they get and what sizes they support. 
+* [Create your own instance configurations](ece-configuring-ece-instance-configurations-create.md) to match components of the {{stack}} to allocators, tailoring what resources they get and what sizes they support. * [Create your own deployment templates](ece-configuring-ece-create-templates.md) to solve your own use cases better. @@ -31,10 +31,10 @@ Allocator tag : Indicates what kind of hardware resources you have available. Used by instance configurations to find suitable allocators. Instance configuration -: Matches components of the Elastic Stack to allocators for deployment and tailors how memory and storage resources get sized relative to each other, and what sizes are available. Used as a building block for deployment templates. +: Matches components of the {{stack}} to allocators for deployment and tailors how memory and storage resources get sized relative to each other, and what sizes are available. Used as a building block for deployment templates. Deployment template -: Solves a specific use case with the Elastic Stack, such as a search or a logging use case. ECE provides some deployment templates out of the box to get you started, or you can create your own. +: Solves a specific use case with the {{stack}}, such as a search or a logging use case. ECE provides some deployment templates out of the box to get you started, or you can create your own. diff --git a/deploy-manage/deploy/cloud-enterprise/configure-host-rhel.md b/deploy-manage/deploy/cloud-enterprise/configure-host-rhel.md index 177b09a0ba..c2c2779794 100644 --- a/deploy-manage/deploy/cloud-enterprise/configure-host-rhel.md +++ b/deploy-manage/deploy/cloud-enterprise/configure-host-rhel.md @@ -268,7 +268,7 @@ Verify that required traffic is allowed. Check the [Networking prerequisites](ec sudo install -o elastic -g elastic -d -m 700 /mnt/data ``` -21. As a sudoers user, modify the entry for the XFS volume in the `/etc/fstab` file to add `pquota,prjquota`. 
The default filesystem path used by Elastic Cloud Enterprise is `/mnt/data`. +21. As a sudoers user, modify the entry for the XFS volume in the `/etc/fstab` file to add `pquota,prjquota`. The default filesystem path used by {{ece}} is `/mnt/data`. ::::{note} Replace `/dev/nvme1n1` in the following example with the corresponding device on your host, and add this example configuration as a single line to `/etc/fstab`. @@ -322,7 +322,7 @@ Verify that required traffic is allowed. Check the [Networking prerequisites](ec vm.max_map_count=262144 # enable forwarding so the Docker networking works as expected net.ipv4.ip_forward=1 - # Decrease the maximum number of TCP retransmissions to 5 as recommended for Elasticsearch TCP retransmission timeout. + # Decrease the maximum number of TCP retransmissions to 5 as recommended for {{es}} TCP retransmission timeout. # See /deploy-manage/deploy/self-managed/system-config-tcpretries.md net.ipv4.tcp_retries2=5 # Make sure the host doesn't swap too early diff --git a/deploy-manage/deploy/cloud-enterprise/configure-host-suse.md b/deploy-manage/deploy/cloud-enterprise/configure-host-suse.md index 5fd99dd82b..e0f85efc5e 100644 --- a/deploy-manage/deploy/cloud-enterprise/configure-host-suse.md +++ b/deploy-manage/deploy/cloud-enterprise/configure-host-suse.md @@ -87,9 +87,9 @@ Make sure to use a combination of Linux distribution and Docker version that is ## Set up XFS on SLES [ece-xfs-setup-sles12] -XFS is required to support disk space quotas for Elasticsearch data directories. Some Linux distributions such as RHEL and Rocky Linux already provide XFS as the default file system. On SLES 12 and 15, you need to set up an XFS file system and have quotas enabled. +XFS is required to support disk space quotas for {{es}} data directories. Some Linux distributions such as RHEL and Rocky Linux already provide XFS as the default file system. On SLES 12 and 15, you need to set up an XFS file system and have quotas enabled. 
-Disk space quotas set a limit on the amount of disk space an Elasticsearch cluster node can use. Currently, quotas are calculated by a static ratio of 1:32, which means that for every 1 GB of RAM a cluster is given, a cluster node is allowed to consume 32 GB of disk space. +Disk space quotas set a limit on the amount of disk space an {{es}} cluster node can use. Currently, quotas are calculated by a static ratio of 1:32, which means that for every 1 GB of RAM a cluster is given, a cluster node is allowed to consume 32 GB of disk space. ::::{note} Using LVM, `mdadm`, or a combination of the two for block device management is possible, but the configuration is not covered here, nor is it provided as part of supporting ECE. @@ -159,7 +159,7 @@ You must use XFS and have quotas enabled on all allocators, otherwise disk usage vm.max_map_count=262144 # enable forwarding so the Docker networking works as expected net.ipv4.ip_forward=1 - # Decrease the maximum number of TCP retransmissions to 5 as recommended for Elasticsearch TCP retransmission timeout. + # Decrease the maximum number of TCP retransmissions to 5 as recommended for {{es}} TCP retransmission timeout. # See https://www.elastic.co/guide/en/elasticsearch/reference/current/system-config-tcpretries.html net.ipv4.tcp_retries2=5 # Make sure the host doesn't swap too early @@ -168,7 +168,7 @@ You must use XFS and have quotas enabled on all allocators, otherwise disk usage ``` ::::{important} - The `net.ipv4.tcp_retries2` setting applies to all TCP connections and affects the reliability of communication with systems other than Elasticsearch clusters too. If your clusters communicate with external systems over a low quality network then you may need to select a higher value for `net.ipv4.tcp_retries2`. + The `net.ipv4.tcp_retries2` setting applies to all TCP connections and affects the reliability of communication with systems other than {{es}} clusters too. 
If your clusters communicate with external systems over a low quality network then you may need to select a higher value for `net.ipv4.tcp_retries2`. :::: @@ -181,7 +181,7 @@ You must use XFS and have quotas enabled on all allocators, otherwise disk usage 4. Adjust the system limits. - Add the following configuration values to the `/etc/security/limits.conf` file. These values are derived from our experience with the Elastic Cloud hosted offering and should be used for ECE as well. + Add the following configuration values to the `/etc/security/limits.conf` file. These values are derived from our experience with the {{ecloud}} hosted offering and should be used for ECE as well. ::::{tip} If you are using a user name other than `elastic`, adjust the configuration values accordingly. diff --git a/deploy-manage/deploy/cloud-enterprise/configure-host-ubuntu.md b/deploy-manage/deploy/cloud-enterprise/configure-host-ubuntu.md index 37d4362d31..00c416fdde 100644 --- a/deploy-manage/deploy/cloud-enterprise/configure-host-ubuntu.md +++ b/deploy-manage/deploy/cloud-enterprise/configure-host-ubuntu.md @@ -63,12 +63,12 @@ Docker 25 and higher are not compatible with ECE 3.7. ## Set up XFS quotas [ece-xfs-setup-ubuntu] -XFS is required to support disk space quotas for Elasticsearch data directories. Some Linux distributions such as RHEL and Rocky Linux already provide XFS as the default file system. On Ubuntu, you need to set up an XFS file system and have quotas enabled. +XFS is required to support disk space quotas for {{es}} data directories. Some Linux distributions such as RHEL and Rocky Linux already provide XFS as the default file system. On Ubuntu, you need to set up an XFS file system and have quotas enabled. -Disk space quotas set a limit on the amount of disk space an Elasticsearch cluster node can use. 
Currently, quotas are calculated by a static ratio of 1:32, which means that for every 1 GB of RAM a cluster is given, a cluster node is allowed to consume 32 GB of disk space. +Disk space quotas set a limit on the amount of disk space an {{es}} cluster node can use. Currently, quotas are calculated by a static ratio of 1:32, which means that for every 1 GB of RAM a cluster is given, a cluster node is allowed to consume 32 GB of disk space. ::::{note} -Using LVM, `mdadm`, or a combination of the two for block device management is possible, but the configuration is not covered here, and it is not supported by Elastic Cloud Enterprise. +Using LVM, `mdadm`, or a combination of the two for block device management is possible, but the configuration is not covered here, and it is not supported by {{ece}}. :::: @@ -91,7 +91,7 @@ You must use XFS and have quotas enabled on all allocators, otherwise disk usage sudo install -o $USER -g $USER -d -m 700 /mnt/data ``` -3. Add an entry to the `/etc/fstab` file for the new XFS volume. The default filesystem path used by Elastic Cloud Enterprise is `/mnt/data`. +3. Add an entry to the `/etc/fstab` file for the new XFS volume. The default filesystem path used by {{ece}} is `/mnt/data`. ```sh /dev/xvdg1 /mnt/data xfs defaults,nofail,x-systemd.automount,prjquota,pquota 0 2 @@ -136,7 +136,7 @@ You must use XFS and have quotas enabled on all allocators, otherwise disk usage vm.max_map_count=262144 # enable forwarding so the Docker networking works as expected net.ipv4.ip_forward=1 - # Decrease the maximum number of TCP retransmissions to 5 as recommended for Elasticsearch TCP retransmission timeout. + # Decrease the maximum number of TCP retransmissions to 5 as recommended for {{es}} TCP retransmission timeout. 
# See https://www.elastic.co/guide/en/elasticsearch/reference/current/system-config-tcpretries.html net.ipv4.tcp_retries2=5 # Make sure the host doesn't swap too early @@ -145,7 +145,7 @@ You must use XFS and have quotas enabled on all allocators, otherwise disk usage ``` ::::{important} - The `net.ipv4.tcp_retries2` setting applies to all TCP connections and affects the reliability of communication with systems other than Elasticsearch clusters too. If your clusters communicate with external systems over a low quality network then you may need to select a higher value for `net.ipv4.tcp_retries2`. + The `net.ipv4.tcp_retries2` setting applies to all TCP connections and affects the reliability of communication with systems other than {{es}} clusters too. If your clusters communicate with external systems over a low quality network then you may need to select a higher value for `net.ipv4.tcp_retries2`. :::: @@ -157,7 +157,7 @@ You must use XFS and have quotas enabled on all allocators, otherwise disk usage 4. Adjust the system limits. - Add the following configuration values to the `/etc/security/limits.conf` file. These values are derived from our experience with the Elastic Cloud hosted offering and should be used for Elastic Cloud Enterprise as well. + Add the following configuration values to the `/etc/security/limits.conf` file. These values are derived from our experience with the {{ecloud}} hosted offering and should be used for {{ece}} as well. ::::{tip} If you are using a user name other than `elastic`, adjust the configuration values accordingly. @@ -311,4 +311,4 @@ You can specify `--log-opt max-size` and `--log-opt max-file` to define the Dock If the command returns `Docker Root Dir: /var/lib/docker`, then you need to troubleshoot the previous configuration steps until the Docker settings are applied successfully before continuing with the installation process. 
For more information, check [Custom Docker daemon options](https://docs.docker.com/engine/admin/systemd/#/custom-docker-daemon-options) in the Docker documentation. -8. Repeat these steps on other hosts that you want to use with Elastic Cloud Enterprise or follow the steps in the next section to start installing Elastic Cloud Enterprise. +8. Repeat these steps on other hosts that you want to use with {{ece}} or follow the steps in the next section to start installing {{ece}}. diff --git a/deploy-manage/deploy/cloud-enterprise/configure-operating-system.md b/deploy-manage/deploy/cloud-enterprise/configure-operating-system.md index 6cf39a8f64..55ed61ca09 100644 --- a/deploy-manage/deploy/cloud-enterprise/configure-operating-system.md +++ b/deploy-manage/deploy/cloud-enterprise/configure-operating-system.md @@ -9,7 +9,7 @@ mapped_pages: # Configure your operating system [ece-configure-os] -Before installing Elastic Cloud Enterprise, you have to prepare your hosts with one of the following Linux distributions: +Before installing {{ece}}, you have to prepare your hosts with one of the following Linux distributions: * [Ubuntu 20.04 LTS (Focal Fossa) and Ubuntu 22.04 LTS (Jammy Jellyfish)](configure-host-ubuntu.md) * [Red Hat Enterprise Linux (RHEL) 8 and 9](configure-host-rhel.md) diff --git a/deploy-manage/deploy/cloud-enterprise/configure.md b/deploy-manage/deploy/cloud-enterprise/configure.md index 12ed8b7040..9c88666568 100644 --- a/deploy-manage/deploy/cloud-enterprise/configure.md +++ b/deploy-manage/deploy/cloud-enterprise/configure.md @@ -16,7 +16,7 @@ Now that you have {{ece}} up and running, take a look at some of the additional * [Assign roles to hosts](../../../deploy-manage/deploy/cloud-enterprise/assign-roles-to-hosts.md) - Make sure new hosts can be used for their intended purpose after you install ECE on them. 
* [System deployments configuration](system-deployments-configuration.md) - Best practices for ECE system deployments to ensure a highly available and resilient setup. * [Configure deployment templates](configure-deployment-templates.md) – Define the resources, topology, hardware, and configurations that will be applied to your deployments. -* [Manage Elastic Stack versions](./manage-elastic-stack-versions.md) - Upload or remove Elastic Stack packs. +* [Manage {{stack}} versions](./manage-elastic-stack-versions.md) - Upload or remove {{stack}} packs. * [Change the ECE API URL](./change-ece-api-url.md) - Configure the HTTPS URL used to access the ECE API. * [Change endpoint URLs](change-endpoint-urls.md) - Configure the URLs to access {{es}} and {{kib}} deployments to match your [domain name](./ece-wildcard-dns.md) and [proxy certificate](../../security/secure-your-elastic-cloud-enterprise-installation/manage-security-certificates.md). * [Enable custom endpoint aliases](./enable-custom-endpoint-aliases.md) - This feature allows to use aliases in the endpoint URLs instead of cluster UUIDs. @@ -25,12 +25,12 @@ Other sections of the documentation describe important ECE features to consider: * [Configure ECE users and roles](../../users-roles/cloud-enterprise-orchestrator.md) - Manage authentication and authorization at ECE platform level. * [Manage security certificates](../../security/secure-your-elastic-cloud-enterprise-installation/manage-security-certificates.md) - Configure Cloud UI and Proxy TLS/SSL certificates. -* [Manage licenses](../../license/manage-your-license-in-ece.md) - Keep Elastic Cloud Enterprise current with a valid license. -* [Manage snapshot repositories](../../tools/snapshot-and-restore/cloud-enterprise.md) - To back up your Elasticsearch clusters automatically, you need to configure a snapshot repository. +* [Manage licenses](../../license/manage-your-license-in-ece.md) - Keep {{ece}} current with a valid license. 
+* [Manage snapshot repositories](../../tools/snapshot-and-restore/cloud-enterprise.md) - To back up your {{es}} clusters automatically, you need to configure a snapshot repository. ## Advanced configuration procedures -* [Configure allocator affinity](configure-allocator-affinity.md) - Determine how ECE distributes your Elastic Stack deployments across allocators. +* [Configure allocator affinity](configure-allocator-affinity.md) - Determine how ECE distributes your {{stack}} deployments across allocators. * [Change allocator disconnect timeout](change-allocator-disconnect-timeout.md) - Configure how long ECE waits before considering allocators to be disconnected. * [Migrate ECE to Podman hosts](./migrate-ece-to-podman-hosts.md) - If you are running a Docker based installation and you need to migrate to Podman. * [Migrate ECE on Podman hosts to SELinux in enforcing mode](../../security/secure-your-elastic-cloud-enterprise-installation/migrate-ece-on-podman-hosts-to-selinux-enforce.md) - Migrate ECE to SELinux in `enforcing` mode using Podman. @@ -41,6 +41,6 @@ Refer to [ECE maintenance](../../maintenance/ece.md) for important maintenance a * [Scale out your installation](../../../deploy-manage/maintenance/ece/scale-out-installation.md) - Need to add more capacity? Here’s how. * [Enable maintenance mode](../../../deploy-manage/maintenance/ece/enable-maintenance-mode.md) - Perform administrative actions on allocators safely by putting them into maintenance mode first. -* [Move nodes from allocators](../../../deploy-manage/maintenance/ece/move-nodes-instances-from-allocators.md) - Moves all Elasticsearch clusters and Kibana instances to another allocator, so that the allocator is no longer used for handling user requests. 
+* [Move nodes from allocators](../../../deploy-manage/maintenance/ece/move-nodes-instances-from-allocators.md) - Moves all {{es}} clusters and {{kib}} instances to another allocator, so that the allocator is no longer used for handling user requests. * [Perform host maintenance](../../../deploy-manage/maintenance/ece/perform-ece-hosts-maintenance.md) - Apply operating system patches and other maintenance to hosts safely without removing them from your ECE installation. * [Delete hosts](../../../deploy-manage/maintenance/ece/delete-ece-hosts.md) - Remove a host from your ECE installation, either because it is no longer needed or because it is faulty. diff --git a/deploy-manage/deploy/cloud-enterprise/connect-elasticsearch.md b/deploy-manage/deploy/cloud-enterprise/connect-elasticsearch.md index 1d1de5714a..4ab9b768c1 100644 --- a/deploy-manage/deploy/cloud-enterprise/connect-elasticsearch.md +++ b/deploy-manage/deploy/cloud-enterprise/connect-elasticsearch.md @@ -8,7 +8,7 @@ mapped_pages: - https://www.elastic.co/guide/en/cloud-enterprise/current/ece-connect.html --- -# Connect to Elasticsearch [ece-connect] +# Connect to {{es}} [ece-connect] This section explains how to connect client applications to an {{es}} deployment running on ECE. You can use the [endpoint URL](#ece-connect-endpoint) available in the deployment UI, or the [Cloud ID](#ece-cloud-id) for a simplified setup with compatible clients such as Beats and Logstash. @@ -103,12 +103,12 @@ To get started with Metricbeat and {{ece}}: 2. [Create a new deployment](create-deployment.md) and copy down the password for the `elastic` user. 3. On the deployment overview page, copy down the Cloud ID. 4. Set up the Beat of your choice, such as [Metricbeat](beats://reference/metricbeat/metricbeat-installation-configuration.md). -5. [Configure the Beat output to send to Elastic Cloud](beats://reference/metricbeat/configure-cloud-id.md). +5. 
[Configure the Beat output to send to {{ecloud}}](beats://reference/metricbeat/configure-cloud-id.md). ::::{note} Make sure you replace the values for `cloud.id` and `cloud.auth` with your own information. :::: -6. Open Kibana and explore! +6. Open {{kib}} and explore! -Metricbeat creates a data view (formerly *index pattern*) with defined fields, searches, visualizations, and dashboards that you can start exploring in Kibana. Look for information related to system metrics, such as CPU usage, utilization rates for memory and disk, and details for processes. \ No newline at end of file +Metricbeat creates a data view (formerly *index pattern*) with defined fields, searches, visualizations, and dashboards that you can start exploring in {{kib}}. Look for information related to system metrics, such as CPU usage, utilization rates for memory and disk, and details for processes. \ No newline at end of file diff --git a/deploy-manage/deploy/cloud-enterprise/create-deployment.md b/deploy-manage/deploy/cloud-enterprise/create-deployment.md index 631f477afc..d9ae750e23 100644 --- a/deploy-manage/deploy/cloud-enterprise/create-deployment.md +++ b/deploy-manage/deploy/cloud-enterprise/create-deployment.md @@ -8,7 +8,7 @@ mapped_pages: # Create a deployment -An ECE deployment is a fully managed Elastic Stack environment running on {{ece}}. It includes {{es}}, {{kib}}, and optional features like Machine Learning or an Integrations (Fleet & APM) Server. +An ECE deployment is a fully managed {{stack}} environment running on {{ece}}. It includes {{es}}, {{kib}}, and optional features like Machine Learning or an Integrations (Fleet & APM) Server. Each deployment is based on a [deployment template](./deployment-templates.md), which defines its resources, default topology, scaling policies, and available features. Deployments can be customized based on workload requirements, snapshot settings, and security configurations. @@ -41,7 +41,7 @@ To create a deployment in ECE: 7. 
Select **Create deployment**. It takes a few minutes before your deployment gets created.

-    While you're waiting, you will be prompted to save the admin credentials for your deployment, which grant superuser access to Elasticsearch. Write down the password for the `elastic` user and keep it somewhere safe. These credentials also help you [add data using Kibana](../../../manage-data/ingest.md). If you need to refresh these credentials, you can [reset the password](../../../deploy-manage/users-roles/cluster-or-deployment-auth/manage-elastic-user-cloud.md) at any time.
+    While you're waiting, you will be prompted to save the admin credentials for your deployment, which grant superuser access to {{es}}. Write down the password for the `elastic` user and keep it somewhere safe. These credentials also help you [add data using {{kib}}](../../../manage-data/ingest.md). If you need to refresh these credentials, you can [reset the password](../../../deploy-manage/users-roles/cluster-or-deployment-auth/manage-elastic-user-cloud.md) at any time.

8. Once the deployment is ready, select **Continue** to open the deployment’s main page.
diff --git a/deploy-manage/deploy/cloud-enterprise/default-system-deployment-versions.md b/deploy-manage/deploy/cloud-enterprise/default-system-deployment-versions.md
index 9ac6e41fdc..236227a33a 100644
--- a/deploy-manage/deploy/cloud-enterprise/default-system-deployment-versions.md
+++ b/deploy-manage/deploy/cloud-enterprise/default-system-deployment-versions.md
@@ -8,7 +8,7 @@ mapped_pages:

# Default system deployment versions [ece-system-deployment-versions]

-When installing or upgrading {{ece}}, its associated system deployments run by default on the Elastic Stack versions listed in the following table.
+When installing or upgrading {{ece}}, its associated system deployments run by default on the {{stack}} versions listed in the following table.

Note that since version 2.7.0, system deployments are automatically upgraded when upgrading {{ece}}.
Don’t attempt to upgrade system deployments manually. Instead, follow the [ECE upgrade instructions](../../upgrade/orchestrator/upgrade-cloud-enterprise.md). diff --git a/deploy-manage/deploy/cloud-enterprise/deploy-an-orchestrator.md b/deploy-manage/deploy/cloud-enterprise/deploy-an-orchestrator.md index 51624138a7..0829c2c44d 100644 --- a/deploy-manage/deploy/cloud-enterprise/deploy-an-orchestrator.md +++ b/deploy-manage/deploy/cloud-enterprise/deploy-an-orchestrator.md @@ -5,7 +5,7 @@ applies_to: --- # Deploy an orchestrator -Elastic Cloud Enterprise (ECE) provides a centralized platform that allows organizations to run Elasticsearch, Kibana, and other Elastic Stack components across multiple machines, whether in a private or public cloud, virtual machines, or your own premises. +{{ece}} (ECE) provides a centralized platform that allows organizations to run {{es}}, {{kib}}, and other {{stack}} components across multiple machines, whether in a private or public cloud, virtual machines, or your own premises. ::::{note} This section focuses on deploying the ECE orchestrator. If you want to deploy {{es}}, {{kib}} or other {{stack}} applications on ECE, refer to [](./working-with-deployments.md). diff --git a/deploy-manage/deploy/cloud-enterprise/deploy-large-installation.md b/deploy-manage/deploy/cloud-enterprise/deploy-large-installation.md index b4930e3ed7..a9cb06d72b 100644 --- a/deploy-manage/deploy/cloud-enterprise/deploy-large-installation.md +++ b/deploy-manage/deploy/cloud-enterprise/deploy-large-installation.md @@ -27,7 +27,7 @@ This type of installation is recommended for deployments with significant overal ## Important considerations [ece_before_you_start_3] -Note that the large-sized Elastic Cloud Enterprise installation separates the allocator and proxy roles from the director and coordinator roles (ECE management services). 
+Note that the large-sized {{ece}} installation separates the allocator and proxy roles from the director and coordinator roles (ECE management services). **Check the recommended JVM Heap sizes** @@ -50,7 +50,7 @@ Make sure you have completed all prerequisites and environment preparations desc ## Installation steps [ece_installation_steps_3] -1. Install Elastic Cloud Enterprise on the first host to start a new installation with your first availability zone. This first host holds all roles to help bootstrap the rest of the installation, but you will remove some of its roles in a later step. +1. Install {{ece}} on the first host to start a new installation with your first availability zone. This first host holds all roles to help bootstrap the rest of the installation, but you will remove some of its roles in a later step. ```sh bash <(curl -fsSL https://download.elastic.co/cloud/elastic-cloud-enterprise.sh) install --availability-zone MY_ZONE-1 --memory-settings '{"runner":{"xms":"1G","xmx":"1G"},"allocator":{"xms":"4G","xmx":"4G"},"zookeeper":{"xms":"24G","xmx":"24G"},"director":{"xms":"1G","xmx":"1G"},"constructor":{"xms":"4G","xmx":"4G"},"admin-console":{"xms":"24G","xmx":"24G"}}' @@ -64,7 +64,7 @@ Make sure you have completed all prerequisites and environment preparations desc curl -k -H 'Content-Type: application/json' -u admin:PASSWORD https://localhost:12443/api/v1/platform/configuration/security/enrollment-tokens -d '{ "persistent": false, "roles": ["director", "coordinator", "proxy"] }' ``` -3. Install Elastic Cloud Enterprise on a second and third host, placing them into a second and a third availability zone, and assign them the `director` and `coordinator` roles. Do not assign the `allocator` or the `proxy` role, as these hosts should not handle or route any user requests. Make sure you include the coordinator host IP information from step 1 and the new roles token from step 2. +3. 
Install {{ece}} on a second and third host, placing them into a second and a third availability zone, and assign them the `director` and `coordinator` roles. Do not assign the `allocator` or the `proxy` role, as these hosts should not handle or route any user requests. Make sure you include the coordinator host IP information from step 1 and the new roles token from step 2. ```sh bash <(curl -fsSL https://download.elastic.co/cloud/elastic-cloud-enterprise.sh) install --coordinator-host HOST_IP --roles-token 'MY_TOKEN' --roles "director,coordinator" --availability-zone MY_ZONE-2 --memory-settings '{"runner":{"xms":"1G","xmx":"1G"},"zookeeper":{"xms":"24G","xmx":"24G"},"director":{"xms":"1G","xmx":"1G"},"constructor":{"xms":"4G","xmx":"4G"},"admin-console":{"xms":"24G","xmx":"24G"}}' @@ -74,7 +74,7 @@ Make sure you have completed all prerequisites and environment preparations desc bash <(curl -fsSL https://download.elastic.co/cloud/elastic-cloud-enterprise.sh) install --coordinator-host HOST_IP --roles-token 'MY_TOKEN' --roles "director,coordinator" --availability-zone MY_ZONE-3 --memory-settings '{"runner":{"xms":"1G","xmx":"1G"},"zookeeper":{"xms":"24G","xmx":"24G"},"director":{"xms":"1G","xmx":"1G"},"constructor":{"xms":"4G","xmx":"4G"},"admin-console":{"xms":"24G","xmx":"24G"}}' ``` -4. To handle the Elasticsearch and Kibana workload, install Elastic Cloud Enterprise on three or more hosts, distributing them evenly across the existing three availability zones, or on however many hosts you think you need initially, and assign them the `allocator` role. Make sure you include the coordinator host IP information and allocator roles token from step 1. +4. To handle the {{es}} and {{kib}} workload, install {{ece}} on three or more hosts, distributing them evenly across the existing three availability zones, or on however many hosts you think you need initially, and assign them the `allocator` role. 
Make sure you include the coordinator host IP information and allocator roles token from step 1.

    ```sh
    bash <(curl -fsSL https://download.elastic.co/cloud/elastic-cloud-enterprise.sh) install --coordinator-host HOST_IP --roles-token 'ALLOCATOR_TOKEN' --roles "allocator" --availability-zone MY_ZONE-1 --memory-settings '{"runner":{"xms":"1G","xmx":"1G"},"allocator":{"xms":"4G","xmx":"4G"}}'
@@ -84,7 +84,7 @@ Make sure you have completed all prerequisites and environment preparations desc
    bash <(curl -fsSL https://download.elastic.co/cloud/elastic-cloud-enterprise.sh) install --coordinator-host HOST_IP --roles-token 'ALLOCATOR_TOKEN' --roles "allocator" --availability-zone MY_ZONE-3 --memory-settings '{"runner":{"xms":"1G","xmx":"1G"},"allocator":{"xms":"4G","xmx":"4G"}}'
    ```

-5. To handle the routing of user requests to Elasticsearch, install Elastic Cloud Enterprise on a three additional hosts, distributing them evenly across the existing three availability zones, and assign them the `proxy` role. Do not assign any other roles, as these hosts should only route user requests. Make sure you include the coordinator host IP information from step 1 and the new roles token from step 2.
+5. To handle the routing of user requests to {{es}}, install {{ece}} on three additional hosts, distributing them evenly across the existing three availability zones, and assign them the `proxy` role. Do not assign any other roles, as these hosts should only route user requests. Make sure you include the coordinator host IP information from step 1 and the new roles token from step 2.
```sh bash <(curl -fsSL https://download.elastic.co/cloud/elastic-cloud-enterprise.sh) install --coordinator-host HOST_IP --roles-token 'MY_TOKEN' --roles "proxy" --availability-zone MY_ZONE-1 --memory-settings '{"runner":{"xms":"1G","xmx":"1G"}}' diff --git a/deploy-manage/deploy/cloud-enterprise/deploy-medium-installation.md b/deploy-manage/deploy/cloud-enterprise/deploy-medium-installation.md index a0c574ef8a..d38d3f6eca 100644 --- a/deploy-manage/deploy/cloud-enterprise/deploy-medium-installation.md +++ b/deploy-manage/deploy/cloud-enterprise/deploy-medium-installation.md @@ -22,7 +22,7 @@ This type of installation is recommended for many production setups. You need: ## Important considerations [ece_before_you_start_2] * Monitor the load on proxies and make sure the volume of user requests routed by the proxies does not affect the resources available to the ECE management services. -* Note that the medium-sized Elastic Cloud Enterprise installation separates the allocator from the director and coordinator roles (ECE management services) and the proxy roles. +* Note that the medium-sized {{ece}} installation separates the allocator from the director and coordinator roles (ECE management services) and the proxy roles. **Check the recommended JVM Heap sizes** @@ -45,7 +45,7 @@ Make sure you have completed all prerequisites and environment preparations desc ## Installation steps [ece_installation_steps_2] -1. Install Elastic Cloud Enterprise on the first host to start a new installation with your first availability zone. This first host holds all roles to help bootstrap the rest of the installation, but you will remove some of its roles in a later step. +1. Install {{ece}} on the first host to start a new installation with your first availability zone. This first host holds all roles to help bootstrap the rest of the installation, but you will remove some of its roles in a later step. 
```sh bash <(curl -fsSL https://download.elastic.co/cloud/elastic-cloud-enterprise.sh) install --availability-zone MY_ZONE-1 --memory-settings '{"runner":{"xms":"1G","xmx":"1G"},"allocator":{"xms":"4G","xmx":"4G"},"zookeeper":{"xms":"8G","xmx":"8G"},"director":{"xms":"1G","xmx":"1G"},"constructor":{"xms":"4G","xmx":"4G"},"admin-console":{"xms":"8G","xmx":"8G"}}' @@ -59,7 +59,7 @@ Make sure you have completed all prerequisites and environment preparations desc curl -k -H 'Content-Type: application/json' -u admin:PASSWORD https://localhost:12443/api/v1/platform/configuration/security/enrollment-tokens -d '{ "persistent": false, "roles": ["director", "coordinator", "proxy"] }' ``` -3. Install Elastic Cloud Enterprise on a second and third host, placing them into a second and a third availability zone, and assign them the `director`, `coordinator`, and `proxy` roles. Do not assign the `allocator` role, as these hosts should not handle any user requests. Make sure you include the coordinator host IP information from step 1 and the new roles token from step 2. +3. Install {{ece}} on a second and third host, placing them into a second and a third availability zone, and assign them the `director`, `coordinator`, and `proxy` roles. Do not assign the `allocator` role, as these hosts should not handle any user requests. Make sure you include the coordinator host IP information from step 1 and the new roles token from step 2. 
```sh bash <(curl -fsSL https://download.elastic.co/cloud/elastic-cloud-enterprise.sh) install --coordinator-host HOST_IP --roles-token 'MY_TOKEN' --roles "director,coordinator,proxy" --availability-zone MY_ZONE-2 --memory-settings '{"runner":{"xms":"1G","xmx":"1G"},"zookeeper":{"xms":"8G","xmx":"8G"},"director":{"xms":"1G","xmx":"1G"},"constructor":{"xms":"4G","xmx":"4G"},"admin-console":{"xms":"8G","xmx":"8G"}}' @@ -69,7 +69,7 @@ Make sure you have completed all prerequisites and environment preparations desc bash <(curl -fsSL https://download.elastic.co/cloud/elastic-cloud-enterprise.sh) install --coordinator-host HOST_IP --roles-token 'MY_TOKEN' --roles "director,coordinator,proxy" --availability-zone MY_ZONE-3 --memory-settings '{"runner":{"xms":"1G","xmx":"1G"},"zookeeper":{"xms":"8G","xmx":"8G"},"director":{"xms":"1G","xmx":"1G"},"constructor":{"xms":"4G","xmx":"4G"},"admin-console":{"xms":"8G","xmx":"8G"}}' ``` -4. To handle the Elasticsearch and Kibana workloads, install Elastic Cloud Enterprise on a fourth, fifth, and sixth host, distributing them evenly across the existing three availability zones and assign them the `allocator` role. Make sure you include the coordinator host IP information and allocator roles token from step 1. +4. To handle the {{es}} and {{kib}} workloads, install {{ece}} on a fourth, fifth, and sixth host, distributing them evenly across the existing three availability zones and assign them the `allocator` role. Make sure you include the coordinator host IP information and allocator roles token from step 1. 
```sh bash <(curl -fsSL https://download.elastic.co/cloud/elastic-cloud-enterprise.sh) install --coordinator-host HOST_IP --roles-token 'ALLOCATOR_TOKEN' --roles "allocator" --availability-zone MY_ZONE-1 --memory-settings '{"runner":{"xms":"1G","xmx":"1G"},"allocator":{"xms":"4G","xmx":"4G"}}' diff --git a/deploy-manage/deploy/cloud-enterprise/deploy-small-installation.md b/deploy-manage/deploy/cloud-enterprise/deploy-small-installation.md index 981796ebab..6192a87d8b 100644 --- a/deploy-manage/deploy/cloud-enterprise/deploy-small-installation.md +++ b/deploy-manage/deploy/cloud-enterprise/deploy-small-installation.md @@ -46,7 +46,7 @@ Make sure you have completed all prerequisites and environment preparations desc ## Installation steps [ece_installation_steps] -1. Install Elastic Cloud Enterprise on the first host to start a new installation with your first availability zone. This first host holds all roles to help bootstrap the rest of the installation. +1. Install {{ece}} on the first host to start a new installation with your first availability zone. This first host holds all roles to help bootstrap the rest of the installation. ```sh bash <(curl -fsSL https://download.elastic.co/cloud/elastic-cloud-enterprise.sh) install --availability-zone MY_ZONE-1 --memory-settings '{"runner":{"xms":"1G","xmx":"1G"},"allocator":{"xms":"4G","xmx":"4G"},"zookeeper":{"xms":"4G","xmx":"4G"},"director":{"xms":"1G","xmx":"1G"},"constructor":{"xms":"4G","xmx":"4G"},"admin-console":{"xms":"4G","xmx":"4G"}}' @@ -60,7 +60,7 @@ Make sure you have completed all prerequisites and environment preparations desc curl -k -H 'Content-Type: application/json' -u admin:PASSWORD https://localhost:12443/api/v1/platform/configuration/security/enrollment-tokens -d '{ "persistent": false, "roles": ["director", "coordinator", "proxy", "allocator"] }' ``` -3. 
Install Elastic Cloud Enterprise on a second and third host, placing them into a second and a third availability zone, and assign them the same roles and memory settings as the first host. Make sure you include the coordinator host IP information from step 1 and the new roles token from step 2. +3. Install {{ece}} on a second and third host, placing them into a second and a third availability zone, and assign them the same roles and memory settings as the first host. Make sure you include the coordinator host IP information from step 1 and the new roles token from step 2. ```sh bash <(curl -fsSL https://download.elastic.co/cloud/elastic-cloud-enterprise.sh) install --coordinator-host HOST_IP --roles-token 'MY_TOKEN' --roles "director,coordinator,proxy,allocator" --availability-zone MY_ZONE-2 --memory-settings '{"runner":{"xms":"1G","xmx":"1G"},"allocator":{"xms":"4G","xmx":"4G"},"zookeeper":{"xms":"4G","xmx":"4G"},"director":{"xms":"1G","xmx":"1G"},"constructor":{"xms":"4G","xmx":"4G"},"admin-console":{"xms":"4G","xmx":"4G"}}' diff --git a/deploy-manage/deploy/cloud-enterprise/ece-architecture.md b/deploy-manage/deploy/cloud-enterprise/ece-architecture.md index 2036fd3010..82c9d79150 100644 --- a/deploy-manage/deploy/cloud-enterprise/ece-architecture.md +++ b/deploy-manage/deploy/cloud-enterprise/ece-architecture.md @@ -9,14 +9,14 @@ mapped_pages: # Service-oriented architecture [ece-architecture] -Elastic Cloud Enterprise has a service-oriented architecture that lets you: +{{ece}} has a service-oriented architecture that lets you: * Scale each service separately, with different reliability and performance requirements. * Access services through the API. * Deploy each service independently in its own Docker container. 
:::{image} /deploy-manage/images/cloud-enterprise-ece-architecture.png -:alt: Elastic Cloud Enterprise high level architecture +:alt: {{ece}} high level architecture ::: ## Control plane [ece_control_plane] @@ -51,18 +51,18 @@ Provide web and API access for administrators to manage and monitor the ECE inst ## Proxies [ece_proxies] -* Handle user requests, mapping deployment IDs that are passed in request URLs for the container to the actual Elasticsearch cluster nodes and other instances. The association of deployment IDs to a container is stored in ZooKeeper, cached by the proxies. In the event of ZooKeeper downtime, the platform can still service the requests to existing deployments by using the cache. -* Keep track of the state and availability of zones, if you have a highly available Elasticsearch cluster. If one of the zones goes down, the proxy will not route any requests there. +* Handle user requests, mapping deployment IDs that are passed in request URLs for the container to the actual {{es}} cluster nodes and other instances. The association of deployment IDs to a container is stored in ZooKeeper, cached by the proxies. In the event of ZooKeeper downtime, the platform can still service the requests to existing deployments by using the cache. +* Keep track of the state and availability of zones, if you have a highly available {{es}} cluster. If one of the zones goes down, the proxy will not route any requests there. * Help with no-downtime scaling and upgrades. Before performing an upgrade, a snapshot is taken, and data is migrated to the new nodes. When the migration is complete, a proxy switches the traffic to the new nodes and disconnects the old ones. * Multiple proxies are usually configured behind a load balancer to ensure that the system remains available. ## Allocators [ece-architecture-allocators] -* Run on all the machines that host Elasticsearch nodes and Kibana instances. +* Run on all the machines that host {{es}} nodes and {{kib}} instances. 
* Control the lifecycle of cluster nodes by: - * Creating new containers and starting Elasticsearch nodes when requested + * Creating new containers and starting {{es}} nodes when requested * Restarting a node if it becomes unresponsive * Removing a node if it is no longer needed @@ -74,7 +74,7 @@ Services are deployed as Docker containers, which simplifies the operational eff * **Shares of resources** - Each cluster node is run within a Docker container to make sure that all of the nodes have access to a guaranteed share of host resources. This mitigates the *noisy neighbor effect* where one busy deployment can overwhelm the entire host. The CPU resources are relative to the size of the Elasticsearch cluster they get assigned to. For example, a cluster with 32GB of RAM gets assigned twice as many CPU resources as a cluster with 16GB of RAM. + Each cluster node is run within a Docker container to make sure that all of the nodes have access to a guaranteed share of host resources. This mitigates the *noisy neighbor effect* where one busy deployment can overwhelm the entire host. The CPU resources are relative to the size of the {{es}} cluster they get assigned to. For example, a cluster with 32GB of RAM gets assigned twice as many CPU resources as a cluster with 16GB of RAM. 
* **Better security** diff --git a/deploy-manage/deploy/cloud-enterprise/ece-configure-templates-index-management.md b/deploy-manage/deploy/cloud-enterprise/ece-configure-templates-index-management.md index ff260f4899..6cfd2dabb0 100644 --- a/deploy-manage/deploy/cloud-enterprise/ece-configure-templates-index-management.md +++ b/deploy-manage/deploy/cloud-enterprise/ece-configure-templates-index-management.md @@ -9,7 +9,7 @@ mapped_pages: # Configure index management for deployment templates [ece-configure-templates-index-management] -If you create a deployment template that includes more than one data configuration, you must also specify how Elastic Cloud Enterprise should manage indices for your users when they create their deployments. For time-series use cases such as logging, metrics, and APM, providing a template that enables index management ensures that data is being stored in the most cost-effective way possible as it ages. +If you create a deployment template that includes more than one data configuration, you must also specify how {{ece}} should manage indices for your users when they create their deployments. For time-series use cases such as logging, metrics, and APM, providing a template that enables index management ensures that data is being stored in the most cost-effective way possible as it ages. In a template that creates a hot-warm architecture, you can use index curation to specify where new indices are created initially and where they are moved to later on. However, index curation has been deprecated in favor of index lifecycle management, which offers additional features and more fine-grained control over indices. For instance, using ILM you can enable automatic roll-over of index aliases to new indices when existing indices become too large or too old, and you can set indices to be deleted when they are no longer useful. @@ -28,13 +28,13 @@ To configure index management when you create a deployment template: 1. 
On the **Index Management** page, configure the index curation methods that you want to be available when your users create deployments: Index lifecycle management - : Uses the ILM feature of the Elastic Stack that provides an integrated and streamlined way to manage time-based data, making it easier to follow best practices for managing your indices. Compared to index curation, ILM gives you more fine-grained control over the lifecycle of each index. + : Uses the ILM feature of the {{stack}} that provides an integrated and streamlined way to manage time-based data, making it easier to follow best practices for managing your indices. Compared to index curation, ILM gives you more fine-grained control over the lifecycle of each index. To configure index lifecycle management: 1. Specify the node attributes for your data configurations. - Node attributes are simple key-value pairs, such as `node_type: hot`, `node_type: warm`, and `node_type: cold`. These node attributes add defining metadata attributes to each data configuration in your template that tell your users what they can be used for. What you define here should help guide your users when they set up their index lifecycle management policy in Kibana, such as a hot-warm policy. + Node attributes are simple key-value pairs, such as `node_type: hot`, `node_type: warm`, and `node_type: cold`. These node attributes add defining metadata attributes to each data configuration in your template that tell your users what they can be used for. What you define here should help guide your users when they set up their index lifecycle management policy in {{kib}}, such as a hot-warm policy. 1. Specify an attribute key-value pair in the **Node attributes** field, with the key and value separated by a colon. 2. Repeat the previous step until you have added all the node attributes that you want to be available to your users when they create an index lifecycle policy later on. 
@@ -43,7 +43,7 @@ To configure index management when you create a deployment template:
Index curation
: Creates new indices on hot nodes first and moves them to warm nodes later on, based on the data views (formerly *index patterns*) you specify. Also manages replica counts for you, so that all shards of an index can fit on the right data nodes. Compared to index lifecycle management, index curation for time-based indices supports only one action, to move indices from nodes on one data configuration to another, but it is more straightforward to set up initially and all setup can be done directly from the Cloud UI.
- If your user need to delete indices once they are no longer useful to them, they can run [Curator](curator://reference/index.md) on-premise to manage indices for Elasticsearch clusters hosted on Elastic Cloud Enterprise.
+ If your users need to delete indices once they are no longer useful to them, they can run [Curator](curator://reference/index.md) on-premise to manage indices for {{es}} clusters hosted on {{ece}}.
To configure index curation:
diff --git a/deploy-manage/deploy/cloud-enterprise/ece-configuring-ece-create-templates.md b/deploy-manage/deploy/cloud-enterprise/ece-configuring-ece-create-templates.md
index b5a6fae738..b05662eff3 100644
--- a/deploy-manage/deploy/cloud-enterprise/ece-configuring-ece-create-templates.md
+++ b/deploy-manage/deploy/cloud-enterprise/ece-configuring-ece-create-templates.md
@@ -9,14 +9,14 @@
# Create deployment templates [ece-configuring-ece-create-templates]
-Elastic Cloud Enterprise comes with some deployment templates already built in, but you can create new deployment templates to address particular use cases that you might have.
+{{ece}} comes with some deployment templates already built in, but you can create new deployment templates to address particular use cases that you might have.
-For example: You might decide to create a new deployment template, if you have a specific search use case that requires Elasticsearch data nodes in a specific configuration that also includes machine learning for anomaly detection. If you need to create these deployments fairly frequently, you can create a deployment template once and deploy it as many times as you like. Or, create a single template for both your test and production deployments to ensure they are exactly the same.
+For example: You might decide to create a new deployment template if you have a specific search use case that requires {{es}} data nodes in a specific configuration that also includes machine learning for anomaly detection. If you need to create these deployments fairly frequently, you can create a deployment template once and deploy it as many times as you like. Or, create a single template for both your test and production deployments to ensure they are exactly the same.
## Before you begin [ece_before_you_begin_3]
-Before you start creating your own deployment templates, you should have: [tagged your allocators](ece-configuring-ece-tag-allocators.md) to tell ECE what kind of hardware you have available for Elastic Stack deployments. If the default instance configurations don’t provide what you need, you might also need to [create your own instance configurations](ece-configuring-ece-instance-configurations-create.md) first.
+Before you start creating your own deployment templates, you should have [tagged your allocators](ece-configuring-ece-tag-allocators.md) to tell ECE what kind of hardware you have available for {{stack}} deployments. If the default instance configurations don’t provide what you need, you might also need to [create your own instance configurations](ece-configuring-ece-instance-configurations-create.md) first.
## Create deployment templates in the UI [ece-configuring-ece-create-templates-ui] @@ -63,7 +63,7 @@ Before you start creating your own deployment templates, you should have: [tagge :alt: screencapture of the "Availability zones" radio buttons ::: - * Add user settings to configure how Elasticsearch and other components run. Check [Editing your user settings](edit-stack-settings.md) for details about what settings are available. + * Add user settings to configure how {{es}} and other components run. Check [Editing your user settings](edit-stack-settings.md) for details about what settings are available. :::{image} /deploy-manage/images/cloud-enterprise-ece-create-template-user-settings.png :alt: screencapture of the "User settings" expandable section @@ -73,12 +73,12 @@ Before you start creating your own deployment templates, you should have: [tagge If a data tier or component is not required for your particular use case, you can simply set its initial size per zone to `0`. You can enable a tier or component anytime you need it just by scaling up the size. If autoscaling is enabled, data tiers and machine learning nodes are sized up automatically when they’re needed. For example, when you configure your first machine learning job, ML nodes are enabled by the autoscaling process. Similarly, if you choose to create a cold data phase as part of your deployment’s index lifecycle management (ILM) policy, a cold data node is enabled automatically without your needing to configure it. 8. Select **Manage indices**. -9. On this page you can [configure index management](ece-configure-templates-index-management.md) by assigning attributes to each of the data nodes in the deployment template. In Kibana, you can configure an index lifecycle management (ILM) policy, based on the node attributes, to control how data moves across the nodes in your deployment. +9. 
On this page you can [configure index management](ece-configure-templates-index-management.md) by assigning attributes to each of the data nodes in the deployment template. In {{kib}}, you can configure an index lifecycle management (ILM) policy, based on the node attributes, to control how data moves across the nodes in your deployment.
10. Select **Stack features**.
11. You can select a [snapshot repository](../../tools/snapshot-and-restore/cloud-enterprise.md) to be used by default for deployment backups.
-12. You can choose to [enable logging and monitoring](../../monitor/stack-monitoring/ece-ech-stack-monitoring.md) by default, so that deployment logs and metrics are send to a dedicated monitoring deployment, and so that additional log types, retention options, and Kibana visualizations are available on all deployments created using this template.
+12. You can choose to [enable logging and monitoring](../../monitor/stack-monitoring/ece-ech-stack-monitoring.md) by default, so that deployment logs and metrics are sent to a dedicated monitoring deployment, and so that additional log types, retention options, and {{kib}} visualizations are available on all deployments created using this template.
13. Select **Extensions**.
-14. Select any Elasticsearch extensions that you would like to be available automatically to all deployments created using the template.
+14. Select any {{es}} extensions that you would like to be available automatically to all deployments created using the template.
15. Select **Save and create template**.
@@ -92,7 +92,7 @@ Before you start creating your own deployment templates, you should have: [tagge
2. Post the JSON for your new deployment template.
- The following example creates a deployment template that defaults to a highly available Elasticsearch cluster with 4 GB per hot node, a 16 GB machine learning node, 3 dedicated master nodes of 1 GB each, a 1 GB Kibana instance, and a 1 GB dedicated coordinating node that is tasked with handling and coordinating all incoming requests for the cluster. Elasticsearch and Kibana use the default instance configurations, but the machine learning node is based on the custom instance configuration in our previous example. + The following example creates a deployment template that defaults to a highly available {{es}} cluster with 4 GB per hot node, a 16 GB machine learning node, 3 dedicated master nodes of 1 GB each, a 1 GB {{kib}} instance, and a 1 GB dedicated coordinating node that is tasked with handling and coordinating all incoming requests for the cluster. {{es}} and {{kib}} use the default instance configurations, but the machine learning node is based on the custom instance configuration in our previous example. ```sh curl -k -X POST -H "Authorization: ApiKey $ECE_API_KEY" https://$COORDINATOR_HOST:12443/api/v1/deployments/templates?region=ece-region -H 'content-type: application/json' -d '{ @@ -403,7 +403,7 @@ Before you start creating your own deployment templates, you should have: [tagge ::::{note} -When specifying `node_roles` in the Elasticsearch plan of the deployment template, the template must contain all resource types and all Elasticsearch tiers. The deployment template must contain exactly one entry for each resource type. It must have one Elasticsearch, one Kibana, and one APM. On top of that, it must also include all supported Elasticsearch tiers in the Elasticsearch plan. The supported tiers are identified by the IDs `hot_content`, `warm`, `cold`, `frozen`, `master`, `coordinating` and `ml`. +When specifying `node_roles` in the {{es}} plan of the deployment template, the template must contain all resource types and all {{es}} tiers. 
The deployment template must contain exactly one entry for each resource type. It must have one {{es}}, one {{kib}}, and one APM. On top of that, it must also include all supported {{es}} tiers in the {{es}} plan. The supported tiers are identified by the IDs `hot_content`, `warm`, `cold`, `frozen`, `master`, `coordinating` and `ml`. :::: diff --git a/deploy-manage/deploy/cloud-enterprise/ece-configuring-ece-instance-configurations-create.md b/deploy-manage/deploy/cloud-enterprise/ece-configuring-ece-instance-configurations-create.md index b5401110aa..30a92ee5f7 100644 --- a/deploy-manage/deploy/cloud-enterprise/ece-configuring-ece-instance-configurations-create.md +++ b/deploy-manage/deploy/cloud-enterprise/ece-configuring-ece-instance-configurations-create.md @@ -8,12 +8,12 @@ mapped_pages: # Create instance configurations [ece-configuring-ece-instance-configurations-create] -If you plan to [create your own templates](ece-configuring-ece-create-templates.md) and the [default instance configurations](./ece-configuring-ece-instance-configurations-default.md) that ship with ECE don’t quite suit your purpose, it’s generally easier and safer to create your own custom instance configurations first. Instance configurations match components of the Elastic Stack to allocators and tailor how memory and storage resources get sized relative to each other, and what sizes are available. +If you plan to [create your own templates](ece-configuring-ece-create-templates.md) and the [default instance configurations](./ece-configuring-ece-instance-configurations-default.md) that ship with ECE don’t quite suit your purpose, it’s generally easier and safer to create your own custom instance configurations first. Instance configurations match components of the {{stack}} to allocators and tailor how memory and storage resources get sized relative to each other, and what sizes are available. 
## Before you begin [ece_before_you_begin_2] -Before you start creating your own instance configurations, you should have [tagged your allocators](ece-configuring-ece-tag-allocators.md) to tell ECE what kind of hardware you have available for Elastic Stack deployments. If you do not tag your allocators, templates that use these instance configurations will deploy wherever there is space rather than on specific allocators. +Before you start creating your own instance configurations, you should have [tagged your allocators](ece-configuring-ece-tag-allocators.md) to tell ECE what kind of hardware you have available for {{stack}} deployments. If you do not tag your allocators, templates that use these instance configurations will deploy wherever there is space rather than on specific allocators. ## Create an instance configuration in the UI [ece_create_an_instance_configuration_in_the_ui] @@ -35,13 +35,13 @@ Before you start creating your own instance configurations, you should have [tag 3. Check the list of allocators that get matched by your query: - * If you are satisfied that your query matches all the allocators where the component(s) of the Elastic Stack can be deployed, move on to the next step. + * If you are satisfied that your query matches all the allocators where the component(s) of the {{stack}} can be deployed, move on to the next step. * If you need to refine your query further, continue to adjust your outer or inner clauses. If you are unsure what to do, we recommend keeping your initial query simple. You can always refine the query later on by re-editing the instance configuration. 5. Select **Instance types**. -6. Pick the products and features of the Elastic Stack that can get deployed on the allocators you identified in the previous step. For products such as Elasticsearch, you can also select some additional options, such as the specific node types that can be deployed. +6. 
Pick the products and features of the {{stack}} that can get deployed on the allocators you identified in the previous step. For products such as {{es}}, you can also select some additional options, such as the specific node types that can be deployed.
- Note that not all combinations of Elasticsearch node types are available. You can create either a general purpose Elasticsearch node that includes all three of data, master, and coordinating, or a dedicated node that includes any one of these types. Machine learning is also available as a separate instance type.
+ Note that not all combinations of {{es}} node types are available. You can create either a general purpose {{es}} node that includes all three of data, master, and coordinating, or a dedicated node that includes any one of these types. Machine learning is also available as a separate instance type.
7. Select **Sizes**.
8. Adjust how memory and storage resources get sized relative to each other and set the available sizes, including the default size. Size your instance configuration so that it will use the available memory and storage on your allocators efficiently, without leaving hardware resources unused. Keep in mind that very small sizes might not provide adequate performance for some use cases.
@@ -135,7 +135,7 @@ Before you start creating your own instance configurations, you should have [tag
}'
```
- 1. Note, that not all combinations of Elasticsearch node types are allowed here. You can create either a general purpose Elasticsearch node that includes all three of `data`, `master`, and `ingest`, or a dedicated node, that includes any one of these types or `ml`.
+ 1. Note that not all combinations of {{es}} node types are allowed here. You can create either a general purpose {{es}} node that includes all three of `data`, `master`, and `ingest`, or a dedicated node that includes any one of these types or `ml`.
After you have created your new instance configuration, you can use it when you [create new deployment templates](ece-configuring-ece-create-templates.md) or when you edit existing ones. diff --git a/deploy-manage/deploy/cloud-enterprise/ece-configuring-ece-instance-configurations-default.md b/deploy-manage/deploy/cloud-enterprise/ece-configuring-ece-instance-configurations-default.md index caba12ad81..c8ecec2984 100644 --- a/deploy-manage/deploy/cloud-enterprise/ece-configuring-ece-instance-configurations-default.md +++ b/deploy-manage/deploy/cloud-enterprise/ece-configuring-ece-instance-configurations-default.md @@ -8,7 +8,7 @@ mapped_pages: # Default instance configurations [ece-configuring-ece-instance-configurations-default] -Elastic Cloud Enterprise ships with a number of default instance configurations: +{{ece}} ships with a number of default instance configurations: | Instance configuration | Instance types / node types | Default size | Memory sizes | Memory to storage multiplier | | --- | --- | --- | --- | --- | @@ -21,7 +21,7 @@ Elastic Cloud Enterprise ships with a number of default instance configurations: | `data.highstorage` | Data, Master, Coordinating | 2 GB | 1, 2, 4, 8, 16, 32, 64 | 64 | | `enterprise.search` | Application server, Connector, Worker | 4 GB | 2, 4, 8 | 2 | | `integrations.server` | Integrations Server | 512 MB | 512, 1, 2, 4, 8 | 4 | -| `kibana` | Kibana | 1 GB | 1, 2, 4, 8 | 4 | +| `kibana` | {{kib}} | 1 GB | 1, 2, 4, 8 | 4 | | `master` | Master | 1 GB | 1, 2, 4, 8, 16, 32, 64 | 4 | | `ml` | Machine Learning | 1 GB | 1, 2, 4, 8, 16, 32, 64 | 4 | diff --git a/deploy-manage/deploy/cloud-enterprise/ece-configuring-ece-instance-configurations-edit.md b/deploy-manage/deploy/cloud-enterprise/ece-configuring-ece-instance-configurations-edit.md index 4ffbbcdb0a..1421f8d8fa 100644 --- a/deploy-manage/deploy/cloud-enterprise/ece-configuring-ece-instance-configurations-edit.md +++ 
b/deploy-manage/deploy/cloud-enterprise/ece-configuring-ece-instance-configurations-edit.md @@ -8,15 +8,15 @@ mapped_pages: # Edit instance configurations [ece-configuring-ece-instance-configurations-edit] -Instance configurations enable components of the Elastic Stack to be matched to allocators for a specific use case. The matching is accomplished by defining a query that filters possible allocators based on their tags. For existing instance configurations, you can edit the query to change how allocators get matched, which in turn changes what components of the Elastic Stack get hosted on the matching allocators when creating or changing a deployment. +Instance configurations enable components of the {{stack}} to be matched to allocators for a specific use case. The matching is accomplished by defining a query that filters possible allocators based on their tags. For existing instance configurations, you can edit the query to change how allocators get matched, which in turn changes what components of the {{stack}} get hosted on the matching allocators when creating or changing a deployment. You might need to edit instance configurations under the following circumstances: -* After you upgrade to or install Elastic Cloud Enterprise 2.0 or later and [have tagged your allocators](ece-configuring-ece-tag-allocators.md), to indicate how you want to use these tagged allocators. +* After you upgrade to or install {{ece}} 2.0 or later and [have tagged your allocators](ece-configuring-ece-tag-allocators.md), to indicate how you want to use these tagged allocators. * If tagged allocators in your ECE installation are not being used as expected when you create or change deployments. Editing an instance configuration affects all deployments that depend on it, and tagged allocators that do not get matched by an instance configuration will not be used. If this happens, you can edit your instance configurations to create less restrictive queries. 
::::{tip}
-If you edit instance configurations, so that they match fewer allocators, instances of the Elastic Stack that were previously matched to those allocators might be relocated. Keep this in mind when making queries more restrictive.
+If you edit instance configurations so that they match fewer allocators, instances of the {{stack}} that were previously matched to those allocators might be relocated. Keep this in mind when making queries more restrictive.
::::
## Steps [ece_steps]
@@ -26,21 +26,21 @@ If you edit instance configurations, so that they match fewer allocators, instan
3. Select the **Instance configurations** tab to check the [default instance configurations](./ece-configuring-ece-instance-configurations-default.md) that ship with ECE.
4. Choose one of the instance configurations and select **Edit instance configuration**.
- For example: Select to edit the `data.default` default instance configuration, so that you can specify where Elasticsearch data nodes for incoming data should be deployed. In a hot-warm architecture, this will determine where your hot data gets sent to.
+ For example: Select to edit the `data.default` default instance configuration, so that you can specify where {{es}} data nodes for incoming data should be deployed. In a hot-warm architecture, this will determine where your hot data gets sent to.
5. In the **Input** section, construct a query that filters on specific allocator tags. The following steps assume that no query exists, as is the case when you edit the [default instance configurations](./ece-configuring-ece-instance-configurations-default.md) for the first time after installing ECE version 2.0 or later. You can also edit an existing query by modifying the inner and outer clauses.
::::{tip}
- An *outer clause* ANDs or ORs your main filtering criteria. You use outer clauses to find the allocators that you tagged earlier. An *inner clause* modifies an outer clause and let’s you refine your filtering criteria further. If you are unsure how the process works, try searching on some of the allocator tags that you added and check how the query results change. If you are editing the `data.default` instance configuration, you want your query to return all allocators on which Elasticsearch data nodes for incoming data can be placed.
+ An *outer clause* ANDs or ORs your main filtering criteria. You use outer clauses to find the allocators that you tagged earlier. An *inner clause* modifies an outer clause and lets you refine your filtering criteria further. If you are unsure how the process works, try searching on some of the allocator tags that you added and check how the query results change. If you are editing the `data.default` instance configuration, you want your query to return all allocators on which {{es}} data nodes for incoming data can be placed.
::::
1. Select **And** or **Or** to add a first outer clause.
2. Enter a key-value pair in the **Key** and **Value** fields that you previously [tagged your allocators](ece-configuring-ece-tag-allocators.md) with.
- For example: Enter `SSD` and `true`, if you tagged your allocators with this tag, or enter whatever tag you are using to identify allocators that can host Elasticsearch data nodes for incoming data.
+ For example: Enter `SSD` and `true` if you tagged your allocators with this tag, or enter whatever tag you are using to identify allocators that can host {{es}} data nodes for incoming data.
:::{image} /deploy-manage/images/cloud-enterprise-ece-query-ui.png
:alt: Creating a query that filters on allocator tags
@@ -48,7 +48,7 @@ If you edit instance configurations, so that they match fewer allocators, instan
3. Check the list of allocators that get matched by your query:
- * If you are satisfied that your query matches all the allocators where the component(s) of the Elastic Stack can be deployed, move on to the next step.
For the `data.default` instance configuration, this means all the allocators where Elasticsearch data nodes for incoming data should be deployed, for example. + * If you are satisfied that your query matches all the allocators where the component(s) of the {{stack}} can be deployed, move on to the next step. For the `data.default` instance configuration, this means all the allocators where {{es}} data nodes for incoming data should be deployed, for example. * If you need to refine your query further, continue to adjust your outer or inner clauses. If you are unsure what to do, keep your initial query simple. You can always refine the query later on by re-editing the instance configuration. 6. Select **Save changes**. diff --git a/deploy-manage/deploy/cloud-enterprise/ece-configuring-ece-tag-allocators.md b/deploy-manage/deploy/cloud-enterprise/ece-configuring-ece-tag-allocators.md index 94f676b21b..d3c3585315 100644 --- a/deploy-manage/deploy/cloud-enterprise/ece-configuring-ece-tag-allocators.md +++ b/deploy-manage/deploy/cloud-enterprise/ece-configuring-ece-tag-allocators.md @@ -8,7 +8,7 @@ mapped_pages: # Tag your allocators [ece-configuring-ece-tag-allocators] -You tag allocators to indicate what kind of hardware you have available. These tags matter, because they enable instance configurations to filter on allocators where components of the Elastic Stack should get deployed. Without tags, instance configurations will allow Elastic Stack components to get deployed wherever there is space on allocators. With tags, instance configurations can filter out those allocators that are best suited to deploy specific components of the Elastic Stack and make sure they get deployed there. +You tag allocators to indicate what kind of hardware you have available. These tags matter, because they enable instance configurations to filter on allocators where components of the {{stack}} should get deployed. 
Without tags, instance configurations will allow {{stack}} components to get deployed wherever there is space on allocators. With tags, instance configurations can filter out those allocators that are best suited to deploy specific components of the {{stack}} and make sure they get deployed there. Allocator tags are a simple way of characterizing the hardware resources that you have in your ECE installation, such as: @@ -19,12 +19,12 @@ Allocator tags are a simple way of characterizing the hardware resources that yo You should tag your allocators under the following circumstances: -* After you upgrade to or install Elastic Cloud Enterprise 2.0 or later, to characterize what kind of hardware you have available in your installation. +* After you upgrade to or install {{ece}} 2.0 or later, to characterize what kind of hardware you have available in your installation. * Before you create your own instance configurations and your own deployment templates, to indicate what hardware resources you can work with. * After you add new allocators to your installation, to indicate what kind of hardware resources they provide. ::::{tip} -You can also delete tags, if you have no more use for them. Keep in mind that removing tags from allocators can in turn affect what allocators get matched. Removing a tag might prompt ECE to move instances of the Elastic Stack to other allocators. +You can also delete tags, if you have no more use for them. Keep in mind that removing tags from allocators can in turn affect what allocators get matched. Removing a tag might prompt ECE to move instances of the {{stack}} to other allocators. :::: @@ -44,7 +44,7 @@ $$$allocator-sample-tags$$$Tags are simple key-value pairs. A small sampling of `instanceFamily: i3`, `instanceFamily: m5` : Indicates the host type, used extensively on {{ech}} to identify hosts with specific hardware characteristics. 
If you run your own hardware on-premise and have standardized on several specific host configurations, you could use similar tags. If you are deploying ECE on another cloud platform, you could use the instance type or machine type names from your provider. -Avoid tags that describe a particular use case or an Elastic Stack component you plan to run on these allocators. Examples of tags to avoid include `elasticsearch: false` or `kibana: true`. You should define the intended use at the level of instance configurations instead and tag your allocators only to describe hardware characteristics. +Avoid tags that describe a particular use case or an {{stack}} component you plan to run on these allocators. Examples of tags to avoid include `elasticsearch: false` or `kibana: true`. You should define the intended use at the level of instance configurations instead and tag your allocators only to describe hardware characteristics. ::::{tip} If you have an allocator that meets several criteria, such as an allocator with multi-purpose hardware, consider assigning it a single tag that identifies its multipurpose view, such as the `instanceFamily: i3` example mentioned earlier. While it is not wrong to assign multiple tags to an allocator, filtering on the allocator when you create or edit instance configurations will be simpler with a single tag. diff --git a/deploy-manage/deploy/cloud-enterprise/ece-hardware-prereq.md b/deploy-manage/deploy/cloud-enterprise/ece-hardware-prereq.md index 8a34c97de8..e190ec20de 100644 --- a/deploy-manage/deploy/cloud-enterprise/ece-hardware-prereq.md +++ b/deploy-manage/deploy/cloud-enterprise/ece-hardware-prereq.md @@ -35,7 +35,7 @@ ECE installations with **spinning disks** are not supported when you run allocat | **Medium deployment**2 | 32 GB RAM | 32 GB RAM | 16 GB RAM | 256 GB RAM
| | **Large deployment**3 | 128 GB RAM | 128 GB RAM | 16 GB RAM | 256 GB RAM
|
-1 Allocators must be sized to support your Elasticsearch clusters and Kibana instances. We recommend host machines that provide between 128 GB and 256 GB of memory. While smaller hosts might not pack larger Elasticsearch clusters and Kibana instances as efficiently, larger hosts might provide fewer CPU resources per GB of RAM on average. For example, running 64 * 2GB nodes on a 128GB host with 16 vCPUs means that each node will get 2/128 of the total CPU time. This is 1/4 core on average, and might not be sufficient. We recommend inspecting both what is the expected number and size of the nodes you plan to run on your hosts in order to understand which hardware will work best in your environment.
+1 Allocators must be sized to support your {{es}} clusters and {{kib}} instances. We recommend host machines that provide between 128 GB and 256 GB of memory. While smaller hosts might not pack larger {{es}} clusters and {{kib}} instances as efficiently, larger hosts might provide fewer CPU resources per GB of RAM on average. For example, running 64 * 2GB nodes on a 128GB host with 16 vCPUs means that each node will get 2/128 of the total CPU time. This is 1/4 core on average, and might not be sufficient. We recommend evaluating both the expected number and size of the nodes you plan to run on your hosts to understand which hardware will work best in your environment.
2 For high availability, requires three hosts each of the capacities indicated, spread across three availability zones.
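The per-node CPU arithmetic in footnote 1 can be checked with a quick back-of-the-envelope calculation. The sketch below assumes CPU time is shared in proportion to each instance's RAM, which is the simplification the footnote uses:

```sh
# Per-node CPU share when CPU time is divided in proportion to RAM.
# 64 nodes of 2 GB each fill a 128 GB host that has 16 vCPUs.
host_ram_gb=128
host_vcpus=16
node_ram_gb=2
awk -v r="$node_ram_gb" -v R="$host_ram_gb" -v c="$host_vcpus" \
  'BEGIN { printf "%.2f vCPU per node\n", (r / R) * c }'
```

A quarter of a vCPU per node may be too little for anything beyond light workloads, which is why the footnote recommends weighing node count and node size together.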
diff --git a/deploy-manage/deploy/cloud-enterprise/ece-include-additional-kibana-plugin.md b/deploy-manage/deploy/cloud-enterprise/ece-include-additional-kibana-plugin.md index 97964752d0..9ba95caf0b 100644 --- a/deploy-manage/deploy/cloud-enterprise/ece-include-additional-kibana-plugin.md +++ b/deploy-manage/deploy/cloud-enterprise/ece-include-additional-kibana-plugin.md @@ -1,5 +1,5 @@ --- -navigation_title: Include additional Kibana plugins +navigation_title: Include additional {{kib}} plugins applies_to: deployment: ece: all @@ -7,31 +7,31 @@ mapped_pages: - https://www.elastic.co/guide/en/cloud-enterprise/current/ece-include-additional-kibana-plugin.html --- -# Include additional Kibana plugins [ece-include-additional-kibana-plugin] +# Include additional {{kib}} plugins [ece-include-additional-kibana-plugin] -In certain cases you may choose to expand the Kibana Docker image included in an Elastic Stack pack to include one or more additional plugins that are not bundled in the image by default. Plugins can extend the features included in Kibana, for example to include specialized visualizations. Adding plugins allows you to tailor your ECE deployments that include Kibana to suit your specific use cases. +In certain cases you may choose to expand the {{kib}} Docker image included in an {{stack}} pack to include one or more additional plugins that are not bundled in the image by default. Plugins can extend the features included in {{kib}}, for example to include specialized visualizations. Adding plugins allows you to tailor your ECE deployments that include {{kib}} to suit your specific use cases. The process involves two main steps: -1. [Extend an existing Kibana Docker image to include the additional plugins.](#ece-create-modified-docker-image) -2. [Update the Elastic Stack pack included in your ECE installation to point to your modified Docker image.](#ece-modify-stack-pack) +1. 
[Extend an existing {{kib}} Docker image to include the additional plugins.](#ece-create-modified-docker-image)
+2. [Update the {{stack}} pack included in your ECE installation to point to your modified Docker image.](#ece-modify-stack-pack)
## Before you begin [ece_before_you_begin_5]
Note the following restrictions:
-* These instructions have been tested for Elastic Stack versions starting with 6.7.0 and may not work for earlier versions.
-* Plugins that you bundle yourself to be included in the Elastic Stack are not covered by Elastic Customer Support and include no guarantee from Elastic.
-* After uploading a modified version of an Elastic Stack pack, if you reapply the original stack the changes will be lost and new Kibana instances will use the original Docker image provided by Elastic.
+* These instructions have been tested for {{stack}} versions starting with 6.7.0 and may not work for earlier versions.
+* Plugins that you bundle yourself to be included in the {{stack}} are not covered by Elastic Customer Support and come with no guarantee from Elastic.
+* After uploading a modified version of an {{stack}} pack, if you reapply the original stack pack, the changes will be lost and new {{kib}} instances will use the original Docker image provided by Elastic.
* The Dockerfile used in this example includes an optimization process that is relatively expensive and may require a machine with several GB of RAM to run successfully.
-## Extend a Kibana Docker image to include additional plugins [ece-create-modified-docker-image]
+## Extend a {{kib}} Docker image to include additional plugins [ece-create-modified-docker-image]
-This example runs a Dockerfile to install the [analyze_api_ui plugin](https://github.com/johtani/analyze-api-ui-plugin) or [kibana-enhanced-table](https://github.com/fbaligand/kibana-enhanced-table) into different versions of Kibana Docker image.
The contents of the Dockerfile varies depending on the version of the Elastic Stack pack that you want to modify.
+This example runs a Dockerfile to install the [analyze_api_ui plugin](https://github.com/johtani/analyze-api-ui-plugin) or [kibana-enhanced-table](https://github.com/fbaligand/kibana-enhanced-table) into different versions of the {{kib}} Docker image. The contents of the Dockerfile vary depending on the version of the {{stack}} pack that you want to modify.
-1. Choose a directory on your ECE installation and save the Dockerfile code for your Elastic Stack version as a file named `Dockerfile`.
+1. Choose a directory on your ECE installation and save the Dockerfile code for your {{stack}} version as a file named `Dockerfile`.
```sh
FROM docker.elastic.co/cloud-release/kibana-cloud:8.13.1
@@ -47,7 +47,7 @@ This example runs a Dockerfile to install the [analyze_api_ui plugin](https://gi
* The plugin name and version number
::::{important}
- When you modify a Kibana Docker image, make sure you maintain the original image structure and only add the additional plugins.
+ When you modify a {{kib}} Docker image, make sure you maintain the original image structure and only add the additional plugins.
::::
3. Build the modified Docker image, specifying an image name and version number. If you are using your own Docker repository, the `docker.elastic.co/cloud-assets` section must match your specific configuration. The image build process can take several minutes.
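As an illustration of the build step, the command might look like the following. The tag `docker.elastic.co/cloud-assets/kibana-with-plugin:8.13.1` is the example name passed to `change-kibana-image.sh` later on this page; adjust the repository and tag to your own setup:

```sh
# Run from the directory that contains your Dockerfile.
# The image name and tag are example values; match the tag to the
# Elastic Stack version in the Dockerfile's FROM line.
docker build -t docker.elastic.co/cloud-assets/kibana-with-plugin:8.13.1 .
```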
@@ -73,11 +73,11 @@ This example runs a Dockerfile to install the [analyze_api_ui plugin](https://gi
-## Modify the Elastic Stack pack to point to your modified image [ece-modify-stack-pack]
+## Modify the {{stack}} pack to point to your modified image [ece-modify-stack-pack]
-Follow these steps to update the Elastic Stack pack zip files in your ECE setup to point to your modified Docker image:
+Follow these steps to update the {{stack}} pack zip files in your ECE setup to point to your modified Docker image:
-1. Download to a local directory the [Elastic Stack pack](manage-elastic-stack-versions.md) that you want to modify.
+1. Download to a local directory the [{{stack}} pack](manage-elastic-stack-versions.md) that you want to modify.
2. Save the following bash script with the name `change-kibana-image.sh`:
```sh
@@ -85,7 +85,7 @@ Follow these steps to update the Elastic Stack pack zip files in your ECE setup
set -eo pipefail
- # Repack a stackpack to modify the Kibana image it points to
+ # Repack a stackpack to modify the Kibana image it points to
NO_COLOR='\033[0m'
ERROR_COLOR='\033[1;31m'
@@ -138,17 +138,17 @@ Follow these steps to update the Elastic Stack pack zip files in your ECE setup
sudo chmod 755 change-kibana-image.sh
```
-4. Run the script to update the Elastic Stack pack, where `FILE_PATH` is the location where you downloaded the Elastic Stack pack zip file:
+4. Run the script to update the {{stack}} pack, where `FILE_PATH` is the location where you downloaded the {{stack}} pack zip file:
```sh
./change-kibana-image.sh FILE_PATH/8.13.1.zip docker.elastic.co/cloud-assets/kibana-with-plugin:8.13.1
```
-5. Upload the modified Elastic Stack pack to your ECE installation:
+5. Upload the modified {{stack}} pack to your ECE installation:
1. [Log into the Cloud UI](log-into-cloud-ui.md).
- 2. Go to **Platform** and then **Elastic Stack**.
- 3. Select **Upload Elastic Stack pack** to add the new Elastic Stack pack or replace an existing one.
You can create a new deployment using the new or updated Elastic stack pack. When you launch Kibana the additional plugin is available.
+ 2. Go to **Platform** and then **{{stack}}**.
+ 3. Select **Upload {{stack}} pack** to add the new {{stack}} pack or replace an existing one. You can create a new deployment using the new or updated {{stack}} pack. When you launch {{kib}}, the additional plugin is available.
diff --git a/deploy-manage/deploy/cloud-enterprise/ece-install-offline-images.md b/deploy-manage/deploy/cloud-enterprise/ece-install-offline-images.md
index c07e936a50..5d563e22be 100644
--- a/deploy-manage/deploy/cloud-enterprise/ece-install-offline-images.md
+++ b/deploy-manage/deploy/cloud-enterprise/ece-install-offline-images.md
@@ -8,11 +8,11 @@ mapped_pages:
# Available Docker images [ece-install-offline-images]
-Versions of the Elastic Stack, containing Elasticsearch, Kibana, and other products, are available as downloadable Docker images.
+Versions of the {{stack}}, containing {{es}}, {{kib}}, and other products, are available as downloadable Docker images.
-The first table contains the stack versions that shipped with the 4.0 version of Elastic Cloud Enterprise. You can also check the [most recent stack packs and Docker images](#ece-recent-download-list), which might have released after the 4.0 version of ECE, as well as the [full list of available stack packs and Docker images](#ece-full-download-list).
+The first table contains the stack versions that shipped with the 4.0 version of {{ece}}. You can also check the [most recent stack packs and Docker images](#ece-recent-download-list), which might have been released after the 4.0 version of ECE, as well as the [full list of available stack packs and Docker images](#ece-full-download-list).
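For offline installations, the listed images are typically pulled on a machine with internet access, exported to an archive, and then loaded on the air-gapped ECE hosts. A minimal sketch using standard Docker commands (illustrative; substitute the image and version you need from the tables on this page):

```sh
# On a machine with internet access: pull and archive one image.
docker pull docker.elastic.co/cloud-release/elasticsearch-cloud-ess:9.0.0
docker save -o elasticsearch-cloud-ess-9.0.0.tar \
  docker.elastic.co/cloud-release/elasticsearch-cloud-ess:9.0.0

# On the offline ECE host: load the archived image.
docker load -i elasticsearch-cloud-ess-9.0.0.tar
```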
-| Docker images included with Elastic Cloud Enterprise 4.0.0 | +| Docker images included with {{ece}} 4.0.0 | | --- | | docker.elastic.co/cloud-enterprise/elastic-cloud-enterprise:4.0.0 | | docker.elastic.co/cloud-release/elasticsearch-cloud-ess:8.18.0 | @@ -23,12 +23,12 @@ The first table contains the stack versions that shipped with the 4.0 version of | docker.elastic.co/cloud-release/kibana-cloud:9.0.0 | | docker.elastic.co/cloud-release/elastic-agent-cloud:9.0.0 | -$$$ece-all-stacks$$$Additional Elastic Stack versions are available as Docker images that you can use with ECE. For offline installations, you need to download both the Elastic Stack pack and the Docker images for the same version. +$$$ece-all-stacks$$$Additional {{stack}} versions are available as Docker images that you can use with ECE. For offline installations, you need to download both the {{stack}} pack and the Docker images for the same version. -To learn more about adding the stack pack to ECE, check [Manage Elastic Stack Versions](manage-elastic-stack-versions.md). +To learn more about adding the stack pack to ECE, check [Manage {{stack}} Versions](manage-elastic-stack-versions.md). -## Most recent Elastic Stack packs and Docker images [ece-recent-download-list] +## Most recent {{stack}} packs and Docker images [ece-recent-download-list] :::{important} Enterprise Search is not available in versions 9.0+. @@ -36,1156 +36,1156 @@ Enterprise Search is not available in versions 9.0+. 
| Required downloads | Minimum required ECE version | | --- | --- | -| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 9.0.0](https://download.elastic.co/cloud-enterprise/versions/9.0.0.zip) | ECE 4.0.0 | +| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 9.0.0](https://download.elastic.co/cloud-enterprise/versions/9.0.0.zip) | ECE 4.0.0 | | docker.elastic.co/cloud-release/elasticsearch-cloud-ess:9.0.0 | ECE 4.0.0 | | docker.elastic.co/cloud-release/kibana-cloud:9.0.0 | ECE 4.0.0 | | docker.elastic.co/cloud-release/elastic-agent-cloud:9.0.0 | ECE 4.0.0 | | docker.elastic.co/cloud-release/enterprise-search-cloud:9.0.0 | ECE 4.0.0 | | | | -| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.18.0](https://download.elastic.co/cloud-enterprise/versions/8.18.0.zip) | ECE 3.0.0
(+ Docker 20.10.10+ required for 8.16+) | +| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.18.0](https://download.elastic.co/cloud-enterprise/versions/8.18.0.zip) | ECE 3.0.0
(+ Docker 20.10.10+ required for 8.16+) | | docker.elastic.co/cloud-release/elasticsearch-cloud-ess:8.18.0 | ECE 3.0.0
(+ Docker 20.10.10+ required for 8.16+) | | docker.elastic.co/cloud-release/kibana-cloud:8.18.0 | ECE 3.0.0 | | docker.elastic.co/cloud-release/elastic-agent-cloud:8.18.0 | ECE 3.0.0 | | docker.elastic.co/cloud-release/enterprise-search-cloud:8.18.0 | ECE 3.0.0 | | | | -| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.17.27](https://download.elastic.co/cloud-enterprise/versions/7.17.27.zip) | ECE 2.2.2 | +| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.17.27](https://download.elastic.co/cloud-enterprise/versions/7.17.27.zip) | ECE 2.2.2 | | docker.elastic.co/cloud-assets/elasticsearch:7.17.27-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/kibana:7.17.27-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/apm:7.17.27-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/enterprise-search:7.17.27-0 | ECE 2.6.0 | | | | -| [Elasticsearch, Kibana, and APM stack pack: 6.8.23](https://download.elastic.co/cloud-enterprise/versions/6.8.23.zip) | ECE 1.1.4 | +| [{{es}}, {{kib}}, and APM stack pack: 6.8.23](https://download.elastic.co/cloud-enterprise/versions/6.8.23.zip) | ECE 1.1.4 | | docker.elastic.co/cloud-assets/elasticsearch:6.8.23-0 | ECE 1.1.4 | | docker.elastic.co/cloud-assets/kibana:6.8.23-0 | ECE 1.1.4 | | docker.elastic.co/cloud-assets/apm:6.8.23-0 | ECE 2.1.0 | | | | -## All available Elastic Stack packs and Docker images [ece-full-download-list] +## All available {{stack}} packs and Docker images [ece-full-download-list] ::::{dropdown} Expand to view the full list | Required downloads | Minimum required ECE version | | --- | --- | -| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.17.4](https://download.elastic.co/cloud-enterprise/versions/8.17.4.zip) | ECE 3.0.0
(+ docker 20.10.10+ required for 8.16+) | +| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.17.4](https://download.elastic.co/cloud-enterprise/versions/8.17.4.zip) | ECE 3.0.0
(+ Docker 20.10.10+ required for 8.16+) | | docker.elastic.co/cloud-release/elasticsearch-cloud-ess:8.17.4 | ECE 3.0.0
(+ Docker 20.10.10+ required for 8.16+) | | docker.elastic.co/cloud-release/kibana-cloud:8.17.4 | ECE 3.0.0 | | docker.elastic.co/cloud-release/elastic-agent-cloud:8.17.4 | ECE 3.0.0 | | docker.elastic.co/cloud-release/enterprise-search-cloud:8.17.4 | ECE 3.0.0 | | | | -| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.17.3](https://download.elastic.co/cloud-enterprise/versions/8.17.3.zip) | ECE 3.0.0
(+ docker 20.10.10+ required for 8.16+) | +| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.17.3](https://download.elastic.co/cloud-enterprise/versions/8.17.3.zip) | ECE 3.0.0
(+ Docker 20.10.10+ required for 8.16+) | | docker.elastic.co/cloud-release/elasticsearch-cloud-ess:8.17.3 | ECE 3.0.0
(+ Docker 20.10.10+ required for 8.16+) | | docker.elastic.co/cloud-release/kibana-cloud:8.17.3 | ECE 3.0.0 | | docker.elastic.co/cloud-release/elastic-agent-cloud:8.17.3 | ECE 3.0.0 | | docker.elastic.co/cloud-release/enterprise-search-cloud:8.17.3 | ECE 3.0.0 | | | | -| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.17.2](https://download.elastic.co/cloud-enterprise/versions/8.17.2.zip) | ECE 3.0.0
(+ docker 20.10.10+ required for 8.16+) | +| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.17.2](https://download.elastic.co/cloud-enterprise/versions/8.17.2.zip) | ECE 3.0.0
(+ Docker 20.10.10+ required for 8.16+) | | docker.elastic.co/cloud-release/elasticsearch-cloud-ess:8.17.2 | ECE 3.0.0
(+ Docker 20.10.10+ required for 8.16+) | | docker.elastic.co/cloud-release/kibana-cloud:8.17.2 | ECE 3.0.0 | | docker.elastic.co/cloud-release/elastic-agent-cloud:8.17.2 | ECE 3.0.0 | | docker.elastic.co/cloud-release/enterprise-search-cloud:8.17.2 | ECE 3.0.0 | | | | -| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.17.1](https://download.elastic.co/cloud-enterprise/versions/8.17.1.zip) | ECE 3.0.0
(+ docker 20.10.10+ required for 8.16+) | +| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.17.1](https://download.elastic.co/cloud-enterprise/versions/8.17.1.zip) | ECE 3.0.0
(+ Docker 20.10.10+ required for 8.16+) | | docker.elastic.co/cloud-release/elasticsearch-cloud-ess:8.17.1 | ECE 3.0.0
(+ Docker 20.10.10+ required for 8.16+) | | docker.elastic.co/cloud-release/kibana-cloud:8.17.1 | ECE 3.0.0 | | docker.elastic.co/cloud-release/elastic-agent-cloud:8.17.1 | ECE 3.0.0 | | docker.elastic.co/cloud-release/enterprise-search-cloud:8.17.1 | ECE 3.0.0 | | | | -| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.17.0](https://download.elastic.co/cloud-enterprise/versions/8.17.0.zip) | ECE 3.0.0
(+ docker 20.10.10+ required for 8.16+) | +| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.17.0](https://download.elastic.co/cloud-enterprise/versions/8.17.0.zip) | ECE 3.0.0
(+ Docker 20.10.10+ required for 8.16+) | | docker.elastic.co/cloud-release/elasticsearch-cloud-ess:8.17.0 | ECE 3.0.0
(+ Docker 20.10.10+ required for 8.16+) | | docker.elastic.co/cloud-release/kibana-cloud:8.17.0 | ECE 3.0.0 | | docker.elastic.co/cloud-release/elastic-agent-cloud:8.17.0 | ECE 3.0.0 | | docker.elastic.co/cloud-release/enterprise-search-cloud:8.17.0 | ECE 3.0.0 | | | | -| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.16.3](https://download.elastic.co/cloud-enterprise/versions/8.16.3.zip) | ECE 3.0.0
(+ Docker 20.10.10+ required for 8.16+) | +| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.16.3](https://download.elastic.co/cloud-enterprise/versions/8.16.3.zip) | ECE 3.0.0
(+ Docker 20.10.10+ required for 8.16+) | | docker.elastic.co/cloud-release/elasticsearch-cloud-ess:8.16.3 | ECE 3.0.0
(+ Docker 20.10.10+ required for 8.16+) | | docker.elastic.co/cloud-release/kibana-cloud:8.16.3 | ECE 3.0.0 | | docker.elastic.co/cloud-release/elastic-agent-cloud:8.16.3 | ECE 3.0.0 | | docker.elastic.co/cloud-release/enterprise-search-cloud:8.16.3 | ECE 3.0.0 | | | | -| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.16.2](https://download.elastic.co/cloud-enterprise/versions/8.16.2.zip) | ECE 3.0.0
(+ Docker 20.10.10+ required for 8.16+) | +| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.16.2](https://download.elastic.co/cloud-enterprise/versions/8.16.2.zip) | ECE 3.0.0
(+ Docker 20.10.10+ required for 8.16+) | | docker.elastic.co/cloud-release/elasticsearch-cloud-ess:8.16.2 | ECE 3.0.0
(+ Docker 20.10.10+ required for 8.16+) | | docker.elastic.co/cloud-release/kibana-cloud:8.16.2 | ECE 3.0.0 | | docker.elastic.co/cloud-release/elastic-agent-cloud:8.16.2 | ECE 3.0.0 | | docker.elastic.co/cloud-release/enterprise-search-cloud:8.16.2 | ECE 3.0.0 | | | | -| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.16.1](https://download.elastic.co/cloud-enterprise/versions/8.16.1.zip) | ECE 3.0.0
(+ Docker 20.10.10+ required for 8.16+) | +| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.16.1](https://download.elastic.co/cloud-enterprise/versions/8.16.1.zip) | ECE 3.0.0
(+ Docker 20.10.10+ required for 8.16+) | | docker.elastic.co/cloud-release/elasticsearch-cloud-ess:8.16.1 | ECE 3.0.0
(+ Docker 20.10.10+ required for 8.16+) | | docker.elastic.co/cloud-release/kibana-cloud:8.16.1 | ECE 3.0.0 | | docker.elastic.co/cloud-release/elastic-agent-cloud:8.16.1 | ECE 3.0.0 | | docker.elastic.co/cloud-release/enterprise-search-cloud:8.16.1 | ECE 3.0.0 | | | | -| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.16.0](https://download.elastic.co/cloud-enterprise/versions/8.16.0.zip) | ECE 3.0.0
(+ Docker 20.10.10+ required for 8.16+) | +| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.16.0](https://download.elastic.co/cloud-enterprise/versions/8.16.0.zip) | ECE 3.0.0
(+ Docker 20.10.10+ required for 8.16+) | | docker.elastic.co/cloud-release/elasticsearch-cloud-ess:8.16.0 | ECE 3.0.0
(+ Docker 20.10.10+ required for 8.16+) | | docker.elastic.co/cloud-release/kibana-cloud:8.16.0 | ECE 3.0.0 | | docker.elastic.co/cloud-release/elastic-agent-cloud:8.16.0 | ECE 3.0.0 | | docker.elastic.co/cloud-release/enterprise-search-cloud:8.16.0 | ECE 3.0.0 | | | | -| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.15.5](https://download.elastic.co/cloud-enterprise/versions/8.15.5.zip) | ECE 3.0.0 | +| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.15.5](https://download.elastic.co/cloud-enterprise/versions/8.15.5.zip) | ECE 3.0.0 | | docker.elastic.co/cloud-release/elasticsearch-cloud-ess:8.15.5 | ECE 3.0.0 | | docker.elastic.co/cloud-release/kibana-cloud:8.15.5 | ECE 3.0.0 | | docker.elastic.co/cloud-release/elastic-agent-cloud:8.15.5 | ECE 3.0.0 | | docker.elastic.co/cloud-release/enterprise-search-cloud:8.15.5 | ECE 3.0.0 | | | | -| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.15.4](https://download.elastic.co/cloud-enterprise/versions/8.15.4.zip) | ECE 3.0.0 | +| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.15.4](https://download.elastic.co/cloud-enterprise/versions/8.15.4.zip) | ECE 3.0.0 | | docker.elastic.co/cloud-release/elasticsearch-cloud-ess:8.15.4 | ECE 3.0.0 | | docker.elastic.co/cloud-release/kibana-cloud:8.15.4 | ECE 3.0.0 | | docker.elastic.co/cloud-release/elastic-agent-cloud:8.15.4 | ECE 3.0.0 | | docker.elastic.co/cloud-release/enterprise-search-cloud:8.15.4 | ECE 3.0.0 | | | | -| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.15.3](https://download.elastic.co/cloud-enterprise/versions/8.15.3.zip) | ECE 3.0.0 | +| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.15.3](https://download.elastic.co/cloud-enterprise/versions/8.15.3.zip) | ECE 3.0.0 | | docker.elastic.co/cloud-release/elasticsearch-cloud-ess:8.15.3 | ECE 3.0.0 | | docker.elastic.co/cloud-release/kibana-cloud:8.15.3 | ECE 3.0.0 | | docker.elastic.co/cloud-release/elastic-agent-cloud:8.15.3 | 
ECE 3.0.0 | | docker.elastic.co/cloud-release/enterprise-search-cloud:8.15.3 | ECE 3.0.0 | | | | -| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.15.2](https://download.elastic.co/cloud-enterprise/versions/8.15.2.zip) | ECE 3.0.0 | +| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.15.2](https://download.elastic.co/cloud-enterprise/versions/8.15.2.zip) | ECE 3.0.0 | | docker.elastic.co/cloud-release/elasticsearch-cloud-ess:8.15.2 | ECE 3.0.0 | | docker.elastic.co/cloud-release/kibana-cloud:8.15.2 | ECE 3.0.0 | | docker.elastic.co/cloud-release/elastic-agent-cloud:8.15.2 | ECE 3.0.0 | | docker.elastic.co/cloud-release/enterprise-search-cloud:8.15.2 | ECE 3.0.0 | | | | -| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.15.1](https://download.elastic.co/cloud-enterprise/versions/8.15.1.zip) | ECE 3.0.0 | +| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.15.1](https://download.elastic.co/cloud-enterprise/versions/8.15.1.zip) | ECE 3.0.0 | | docker.elastic.co/cloud-release/elasticsearch-cloud-ess:8.15.1 | ECE 3.0.0 | | docker.elastic.co/cloud-release/kibana-cloud:8.15.1 | ECE 3.0.0 | | docker.elastic.co/cloud-release/elastic-agent-cloud:8.15.1 | ECE 3.0.0 | | docker.elastic.co/cloud-release/enterprise-search-cloud:8.15.1 | ECE 3.0.0 | | | | -| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.15.0](https://download.elastic.co/cloud-enterprise/versions/8.15.0.zip) | ECE 3.0.0 | +| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.15.0](https://download.elastic.co/cloud-enterprise/versions/8.15.0.zip) | ECE 3.0.0 | | docker.elastic.co/cloud-release/elasticsearch-cloud-ess:8.15.0 | ECE 3.0.0 | | docker.elastic.co/cloud-release/kibana-cloud:8.15.0 | ECE 3.0.0 | | docker.elastic.co/cloud-release/elastic-agent-cloud:8.15.0 | ECE 3.0.0 | | docker.elastic.co/cloud-release/enterprise-search-cloud:8.15.0 | ECE 3.0.0 | | | | -| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 
8.14.3](https://download.elastic.co/cloud-enterprise/versions/8.14.3.zip) | ECE 3.0.0 |
+| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.14.3](https://download.elastic.co/cloud-enterprise/versions/8.14.3.zip) | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elasticsearch-cloud-ess:8.14.3 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/kibana-cloud:8.14.3 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elastic-agent-cloud:8.14.3 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/enterprise-search-cloud:8.14.3 | ECE 3.0.0 |
| | |
-| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.14.2](https://download.elastic.co/cloud-enterprise/versions/8.14.2.zip) | ECE 3.0.0 |
+| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.14.2](https://download.elastic.co/cloud-enterprise/versions/8.14.2.zip) | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elasticsearch-cloud-ess:8.14.2 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/kibana-cloud:8.14.2 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elastic-agent-cloud:8.14.2 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/enterprise-search-cloud:8.14.2 | ECE 3.0.0 |
| | |
-| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.14.1](https://download.elastic.co/cloud-enterprise/versions/8.14.1.zip) | ECE 3.0.0 |
+| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.14.1](https://download.elastic.co/cloud-enterprise/versions/8.14.1.zip) | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elasticsearch-cloud-ess:8.14.1 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/kibana-cloud:8.14.1 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elastic-agent-cloud:8.14.1 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/enterprise-search-cloud:8.14.1 | ECE 3.0.0 |
| | |
-| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.14.0](https://download.elastic.co/cloud-enterprise/versions/8.14.0.zip) | ECE 3.0.0 |
+| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.14.0](https://download.elastic.co/cloud-enterprise/versions/8.14.0.zip) | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elasticsearch-cloud-ess:8.14.0 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/kibana-cloud:8.14.0 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elastic-agent-cloud:8.14.0 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/enterprise-search-cloud:8.14.0 | ECE 3.0.0 |
| | |
-| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.13.4](https://download.elastic.co/cloud-enterprise/versions/8.13.4.zip) | ECE 3.0.0 |
+| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.13.4](https://download.elastic.co/cloud-enterprise/versions/8.13.4.zip) | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elasticsearch-cloud-ess:8.13.4 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/kibana-cloud:8.13.4 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elastic-agent-cloud:8.13.4 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/enterprise-search-cloud:8.13.4 | ECE 3.0.0 |
| | |
-| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.13.3](https://download.elastic.co/cloud-enterprise/versions/8.13.3.zip) | ECE 3.0.0 |
+| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.13.3](https://download.elastic.co/cloud-enterprise/versions/8.13.3.zip) | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elasticsearch-cloud-ess:8.13.3 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/kibana-cloud:8.13.3 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elastic-agent-cloud:8.13.3 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/enterprise-search-cloud:8.13.3 | ECE 3.0.0 |
| | |
-| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.13.2](https://download.elastic.co/cloud-enterprise/versions/8.13.2.zip) | ECE 3.0.0 |
+| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.13.2](https://download.elastic.co/cloud-enterprise/versions/8.13.2.zip) | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elasticsearch-cloud-ess:8.13.2 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/kibana-cloud:8.13.2 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elastic-agent-cloud:8.13.2 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/enterprise-search-cloud:8.13.2 | ECE 3.0.0 |
| | |
-| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.13.1](https://download.elastic.co/cloud-enterprise/versions/8.13.1.zip) | ECE 3.0.0 |
+| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.13.1](https://download.elastic.co/cloud-enterprise/versions/8.13.1.zip) | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elasticsearch-cloud-ess:8.13.1 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/kibana-cloud:8.13.1 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elastic-agent-cloud:8.13.1 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/enterprise-search-cloud:8.13.1 | ECE 3.0.0 |
| | |
-| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.13.0](https://download.elastic.co/cloud-enterprise/versions/8.13.0.zip) | ECE 3.0.0 |
+| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.13.0](https://download.elastic.co/cloud-enterprise/versions/8.13.0.zip) | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elasticsearch-cloud-ess:8.13.0 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/kibana-cloud:8.13.0 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elastic-agent-cloud:8.13.0 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/enterprise-search-cloud:8.13.0 | ECE 3.0.0 |
| | |
-| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.12.2](https://download.elastic.co/cloud-enterprise/versions/8.12.2.zip) | ECE 3.0.0 |
+| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.12.2](https://download.elastic.co/cloud-enterprise/versions/8.12.2.zip) | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elasticsearch-cloud-ess:8.12.2 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/kibana-cloud:8.12.2 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elastic-agent-cloud:8.12.2 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/enterprise-search-cloud:8.12.2 | ECE 3.0.0 |
| | |
-| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.12.1](https://download.elastic.co/cloud-enterprise/versions/8.12.1.zip) | ECE 3.0.0 |
+| [ {{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.12.1](https://download.elastic.co/cloud-enterprise/versions/8.12.1.zip) | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elasticsearch-cloud-ess:8.12.1 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/kibana-cloud:8.12.1 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elastic-agent-cloud:8.12.1 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/enterprise-search-cloud:8.12.1 | ECE 3.0.0 |
| | |
-| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.12.0](https://download.elastic.co/cloud-enterprise/versions/8.12.0.zip) | ECE 3.0.0 |
+| [ {{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.12.0](https://download.elastic.co/cloud-enterprise/versions/8.12.0.zip) | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elasticsearch-cloud-ess:8.12.0 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/kibana-cloud:8.12.0 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elastic-agent-cloud:8.12.0 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/enterprise-search-cloud:8.12.0 | ECE 3.0.0 |
| | |
-| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.11.4](https://download.elastic.co/cloud-enterprise/versions/8.11.4.zip) | ECE 3.0.0 |
+| [ {{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.11.4](https://download.elastic.co/cloud-enterprise/versions/8.11.4.zip) | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elasticsearch-cloud-ess:8.11.4 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/kibana-cloud:8.11.4 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elastic-agent-cloud:8.11.4 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/enterprise-search-cloud:8.11.4 | ECE 3.0.0 |
| | |
-| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.11.3](https://download.elastic.co/cloud-enterprise/versions/8.11.3.zip) | ECE 3.0.0 |
+| [ {{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.11.3](https://download.elastic.co/cloud-enterprise/versions/8.11.3.zip) | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elasticsearch-cloud-ess:8.11.3 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/kibana-cloud:8.11.3 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elastic-agent-cloud:8.11.3 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/enterprise-search-cloud:8.11.3 | ECE 3.0.0 |
| | |
-| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.11.2](https://download.elastic.co/cloud-enterprise/versions/8.11.2.zip) | ECE 3.0.0 |
+| [ {{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.11.2](https://download.elastic.co/cloud-enterprise/versions/8.11.2.zip) | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elasticsearch-cloud-ess:8.11.2 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/kibana-cloud:8.11.2 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elastic-agent-cloud:8.11.2 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/enterprise-search-cloud:8.11.2 | ECE 3.0.0 |
| | |
-| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.11.1](https://download.elastic.co/cloud-enterprise/versions/8.11.1.zip) | ECE 3.0.0 |
+| [ {{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.11.1](https://download.elastic.co/cloud-enterprise/versions/8.11.1.zip) | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elasticsearch-cloud-ess:8.11.1 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/kibana-cloud:8.11.1 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elastic-agent-cloud:8.11.1 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/enterprise-search-cloud:8.11.1 | ECE 3.0.0 |
| | |
-| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.11.0](https://download.elastic.co/cloud-enterprise/versions/8.11.0.zip) | ECE 3.0.0 |
+| [ {{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.11.0](https://download.elastic.co/cloud-enterprise/versions/8.11.0.zip) | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elasticsearch-cloud-ess:8.11.0 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/kibana-cloud:8.11.0 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elastic-agent-cloud:8.11.0 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/enterprise-search-cloud:8.11.0 | ECE 3.0.0 |
| | |
-| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.10.4](https://download.elastic.co/cloud-enterprise/versions/8.10.4.zip) | ECE 3.0.0 |
+| [ {{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.10.4](https://download.elastic.co/cloud-enterprise/versions/8.10.4.zip) | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elasticsearch-cloud-ess:8.10.4 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/kibana-cloud:8.10.4 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elastic-agent-cloud:8.10.4 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/enterprise-search-cloud:8.10.4 | ECE 3.0.0 |
| | |
-| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.10.3](https://download.elastic.co/cloud-enterprise/versions/8.10.3.zip) | ECE 3.0.0 |
+| [ {{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.10.3](https://download.elastic.co/cloud-enterprise/versions/8.10.3.zip) | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elasticsearch-cloud-ess:8.10.3 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/kibana-cloud:8.10.3 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elastic-agent-cloud:8.10.3 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/enterprise-search-cloud:8.10.3 | ECE 3.0.0 |
| | |
-| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.10.2](https://download.elastic.co/cloud-enterprise/versions/8.10.2.zip) | ECE 3.0.0 |
+| [ {{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.10.2](https://download.elastic.co/cloud-enterprise/versions/8.10.2.zip) | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elasticsearch-cloud-ess:8.10.2 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/kibana-cloud:8.10.2 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elastic-agent-cloud:8.10.2 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/enterprise-search-cloud:8.10.2 | ECE 3.0.0 |
| | |
-| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.10.1](https://download.elastic.co/cloud-enterprise/versions/8.10.1.zip) | ECE 3.0.0 |
+| [ {{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.10.1](https://download.elastic.co/cloud-enterprise/versions/8.10.1.zip) | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elasticsearch-cloud-ess:8.10.1 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/kibana-cloud:8.10.1 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elastic-agent-cloud:8.10.1 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/enterprise-search-cloud:8.10.1 | ECE 3.0.0 |
| | |
-| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.10.0](https://download.elastic.co/cloud-enterprise/versions/8.10.0.zip) | ECE 3.0.0 |
+| [ {{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.10.0](https://download.elastic.co/cloud-enterprise/versions/8.10.0.zip) | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elasticsearch-cloud-ess:8.10.0 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/kibana-cloud:8.10.0 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elastic-agent-cloud:8.10.0 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/enterprise-search-cloud:8.10.0 | ECE 3.0.0 |
| | |
-| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.9.2](https://download.elastic.co/cloud-enterprise/versions/8.9.2.zip) | ECE 3.0.0 |
+| [ {{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.9.2](https://download.elastic.co/cloud-enterprise/versions/8.9.2.zip) | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elasticsearch-cloud-ess:8.9.2 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/kibana-cloud:8.9.2 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elastic-agent-cloud:8.9.2 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/enterprise-search-cloud:8.9.2 | ECE 3.0.0 |
| | |
-| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.9.1](https://download.elastic.co/cloud-enterprise/versions/8.9.1.zip) | ECE 3.0.0 |
+| [ {{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.9.1](https://download.elastic.co/cloud-enterprise/versions/8.9.1.zip) | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elasticsearch-cloud-ess:8.9.1 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/kibana-cloud:8.9.1 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elastic-agent-cloud:8.9.1 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/enterprise-search-cloud:8.9.1 | ECE 3.0.0 |
| | |
-| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.9.0](https://download.elastic.co/cloud-enterprise/versions/8.9.0.zip) | ECE 3.0.0 |
+| [ {{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.9.0](https://download.elastic.co/cloud-enterprise/versions/8.9.0.zip) | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elasticsearch-cloud-ess:8.9.0 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/kibana-cloud:8.9.0 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elastic-agent-cloud:8.9.0 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/enterprise-search-cloud:8.9.0 | ECE 3.0.0 |
| | |
-| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.8.2](https://download.elastic.co/cloud-enterprise/versions/8.8.2.zip) | ECE 3.0.0 |
+| [ {{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.8.2](https://download.elastic.co/cloud-enterprise/versions/8.8.2.zip) | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elasticsearch-cloud-ess:8.8.2 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/kibana-cloud:8.8.2 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elastic-agent-cloud:8.8.2 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/enterprise-search-cloud:8.8.2 | ECE 3.0.0 |
| | |
-| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.8.1](https://download.elastic.co/cloud-enterprise/versions/8.8.1.zip) | ECE 3.0.0 |
+| [ {{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.8.1](https://download.elastic.co/cloud-enterprise/versions/8.8.1.zip) | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elasticsearch-cloud-ess:8.8.1 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/kibana-cloud:8.8.1 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elastic-agent-cloud:8.8.1 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/enterprise-search-cloud:8.8.1 | ECE 3.0.0 |
| | |
-| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.8.0](https://download.elastic.co/cloud-enterprise/versions/8.8.0.zip) | ECE 3.0.0 |
+| [ {{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.8.0](https://download.elastic.co/cloud-enterprise/versions/8.8.0.zip) | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elasticsearch-cloud-ess:8.8.0 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/kibana-cloud:8.8.0 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elastic-agent-cloud:8.8.0 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/enterprise-search-cloud:8.8.0 | ECE 3.0.0 |
| | |
-| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.7.1](https://download.elastic.co/cloud-enterprise/versions/8.7.1.zip) | ECE 3.0.0 |
+| [ {{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.7.1](https://download.elastic.co/cloud-enterprise/versions/8.7.1.zip) | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elasticsearch-cloud-ess:8.7.1 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/kibana-cloud:8.7.1 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elastic-agent-cloud:8.7.1 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/enterprise-search-cloud:8.7.1 | ECE 3.0.0 |
| | |
-| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.7.0](https://download.elastic.co/cloud-enterprise/versions/8.7.0.zip) | ECE 3.0.0 |
+| [ {{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.7.0](https://download.elastic.co/cloud-enterprise/versions/8.7.0.zip) | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elasticsearch-cloud-ess:8.7.0 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/kibana-cloud:8.7.0 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elastic-agent-cloud:8.7.0 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/enterprise-search-cloud:8.7.0 | ECE 3.0.0 |
| | |
-| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.6.2](https://download.elastic.co/cloud-enterprise/versions/8.6.2.zip) | ECE 3.0.0 |
+| [ {{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.6.2](https://download.elastic.co/cloud-enterprise/versions/8.6.2.zip) | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elasticsearch-cloud-ess:8.6.2 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/kibana-cloud:8.6.2 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elastic-agent-cloud:8.6.2 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/enterprise-search-cloud:8.6.2 | ECE 3.0.0 |
| | |
-| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.6.1](https://download.elastic.co/cloud-enterprise/versions/8.6.1.zip) | ECE 3.0.0 |
+| [ {{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.6.1](https://download.elastic.co/cloud-enterprise/versions/8.6.1.zip) | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elasticsearch-cloud-ess:8.6.1 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/kibana-cloud:8.6.1 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elastic-agent-cloud:8.6.1 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/enterprise-search-cloud:8.6.1 | ECE 3.0.0 |
| | |
-| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.6.0](https://download.elastic.co/cloud-enterprise/versions/8.6.0.zip) | ECE 3.0.0 |
+| [ {{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.6.0](https://download.elastic.co/cloud-enterprise/versions/8.6.0.zip) | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elasticsearch-cloud-ess:8.6.0-2 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/kibana-cloud:8.6.0 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elastic-agent-cloud:8.6.0 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/enterprise-search-cloud:8.6.0 | ECE 3.0.0 |
| | |
-| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.5.3](https://download.elastic.co/cloud-enterprise/versions/8.5.3.zip) | ECE 3.0.0 |
+| [ {{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.5.3](https://download.elastic.co/cloud-enterprise/versions/8.5.3.zip) | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elasticsearch-cloud-ess:8.5.3 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/kibana-cloud:8.5.3 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elastic-agent-cloud:8.5.3 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/enterprise-search-cloud:8.5.3 | ECE 3.0.0 |
| | |
-| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.5.2](https://download.elastic.co/cloud-enterprise/versions/8.5.2.zip) | ECE 3.0.0 |
+| [ {{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.5.2](https://download.elastic.co/cloud-enterprise/versions/8.5.2.zip) | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elasticsearch-cloud-ess:8.5.2 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/kibana-cloud:8.5.2 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elastic-agent-cloud:8.5.2 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/enterprise-search-cloud:8.5.2 | ECE 3.0.0 |
| | |
-| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.5.1](https://download.elastic.co/cloud-enterprise/versions/8.5.1.zip) | ECE 3.0.0 |
+| [ {{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.5.1](https://download.elastic.co/cloud-enterprise/versions/8.5.1.zip) | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elasticsearch-cloud-ess:8.5.1 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/kibana-cloud:8.5.1 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elastic-agent-cloud:8.5.1 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/enterprise-search-cloud:8.5.1 | ECE 3.0.0 |
| | |
-| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.5.0](https://download.elastic.co/cloud-enterprise/versions/8.5.0.zip) | ECE 3.0.0 |
+| [ {{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.5.0](https://download.elastic.co/cloud-enterprise/versions/8.5.0.zip) | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elasticsearch-cloud-ess:8.5.0 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/kibana-cloud:8.5.0 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elastic-agent-cloud:8.5.0 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/enterprise-search-cloud:8.5.0 | ECE 3.0.0 |
| | |
-| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.4.3](https://download.elastic.co/cloud-enterprise/versions/8.4.3.zip) | ECE 3.0.0 |
+| [ {{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.4.3](https://download.elastic.co/cloud-enterprise/versions/8.4.3.zip) | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elasticsearch-cloud-ess:8.4.3 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/kibana-cloud:8.4.3 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elastic-agent-cloud:8.4.3 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/enterprise-search-cloud:8.4.3 | ECE 3.0.0 |
| | |
-| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.4.2](https://download.elastic.co/cloud-enterprise/versions/8.4.2.zip) | ECE 3.0.0 |
+| [ {{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.4.2](https://download.elastic.co/cloud-enterprise/versions/8.4.2.zip) | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elasticsearch-cloud-ess:8.4.2 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/kibana-cloud:8.4.2 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elastic-agent-cloud:8.4.2 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/enterprise-search-cloud:8.4.2 | ECE 3.0.0 |
| | |
-| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.4.1](https://download.elastic.co/cloud-enterprise/versions/8.4.1.zip) | ECE 3.0.0 |
+| [ {{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.4.1](https://download.elastic.co/cloud-enterprise/versions/8.4.1.zip) | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elasticsearch-cloud-ess:8.4.1 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/kibana-cloud:8.4.1 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elastic-agent-cloud:8.4.1 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/enterprise-search-cloud:8.4.1 | ECE 3.0.0 |
| | |
-| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.4.0](https://download.elastic.co/cloud-enterprise/versions/8.4.0.zip) | ECE 3.0.0 |
+| [ {{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.4.0](https://download.elastic.co/cloud-enterprise/versions/8.4.0.zip) | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elasticsearch-cloud-ess:8.4.0 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/kibana-cloud:8.4.0 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elastic-agent-cloud:8.4.0 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/enterprise-search-cloud:8.4.0 | ECE 3.0.0 |
| | |
-| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.3.3](https://download.elastic.co/cloud-enterprise/versions/8.3.3.zip) | ECE 3.0.0 |
+| [ {{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.3.3](https://download.elastic.co/cloud-enterprise/versions/8.3.3.zip) | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elasticsearch-cloud-ess:8.3.3 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/kibana-cloud:8.3.3 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elastic-agent-cloud:8.3.3 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/enterprise-search-cloud:8.3.3 | ECE 3.0.0 |
| | |
-| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.3.2](https://download.elastic.co/cloud-enterprise/versions/8.3.2.zip) | ECE 3.0.0 |
+| [ {{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.3.2](https://download.elastic.co/cloud-enterprise/versions/8.3.2.zip) | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elasticsearch-cloud-ess:8.3.2 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/kibana-cloud:8.3.2 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elastic-agent-cloud:8.3.2 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/enterprise-search-cloud:8.3.2 | ECE 3.0.0 |
| | |
-| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.3.1](https://download.elastic.co/cloud-enterprise/versions/8.3.1.zip) | ECE 3.0.0 |
+| [ {{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.3.1](https://download.elastic.co/cloud-enterprise/versions/8.3.1.zip) | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elasticsearch-cloud-ess:8.3.1 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/kibana-cloud:8.3.1 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elastic-agent-cloud:8.3.1 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/enterprise-search-cloud:8.3.1 | ECE 3.0.0 |
| | |
-| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.3.0](https://download.elastic.co/cloud-enterprise/versions/8.3.0.zip) | ECE 3.0.0 |
+| [ {{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.3.0](https://download.elastic.co/cloud-enterprise/versions/8.3.0.zip) | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elasticsearch-cloud-ess:8.3.0 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/kibana-cloud:8.3.0 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elastic-agent-cloud:8.3.0 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/enterprise-search-cloud:8.3.0 | ECE 3.0.0 |
| | |
-| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.2.3](https://download.elastic.co/cloud-enterprise/versions/8.2.3.zip) | ECE 3.0.0 |
+| [ {{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.2.3](https://download.elastic.co/cloud-enterprise/versions/8.2.3.zip) | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elasticsearch-cloud-ess:8.2.3 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/kibana-cloud:8.2.3 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elastic-agent-cloud:8.2.3 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/enterprise-search-cloud:8.2.3 | ECE 3.0.0 |
| | |
-| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.2.2](https://download.elastic.co/cloud-enterprise/versions/8.2.2.zip) | ECE 3.0.0 |
+| [ {{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.2.2](https://download.elastic.co/cloud-enterprise/versions/8.2.2.zip) | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elasticsearch-cloud-ess:8.2.2 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/kibana-cloud:8.2.2 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elastic-agent-cloud:8.2.2 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/enterprise-search-cloud:8.2.2 | ECE 3.0.0 |
| | |
-| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.2.1](https://download.elastic.co/cloud-enterprise/versions/8.2.1.zip) | ECE 3.0.0 |
+| [ {{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.2.1](https://download.elastic.co/cloud-enterprise/versions/8.2.1.zip) | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elasticsearch-cloud-ess:8.2.1 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/kibana-cloud:8.2.1 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elastic-agent-cloud:8.2.1 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/enterprise-search-cloud:8.2.1 | ECE 3.0.0 |
| | |
-| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.2.0](https://download.elastic.co/cloud-enterprise/versions/8.2.0.zip) | ECE 3.0.0 |
+| [ {{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.2.0](https://download.elastic.co/cloud-enterprise/versions/8.2.0.zip) | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elasticsearch-cloud-ess:8.2.0 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/kibana-cloud:8.2.0 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elastic-agent-cloud:8.2.0 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/enterprise-search-cloud:8.2.0 | ECE 3.0.0 |
| | |
-| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.1.3](https://download.elastic.co/cloud-enterprise/versions/8.1.3.zip) | ECE 3.0.0 |
+| [ {{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.1.3](https://download.elastic.co/cloud-enterprise/versions/8.1.3.zip) | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elasticsearch-cloud-ess:8.1.3 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/kibana-cloud:8.1.3 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elastic-agent-cloud:8.1.3 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/enterprise-search-cloud:8.1.3 | ECE 3.0.0 |
| | |
-| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.1.2](https://download.elastic.co/cloud-enterprise/versions/8.1.2.zip) | ECE 3.0.0 |
+| [ {{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.1.2](https://download.elastic.co/cloud-enterprise/versions/8.1.2.zip) | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elasticsearch-cloud-ess:8.1.2 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/kibana-cloud:8.1.2 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elastic-agent-cloud:8.1.2 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/enterprise-search-cloud:8.1.2 | ECE 3.0.0 |
| | |
-| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.1.1](https://download.elastic.co/cloud-enterprise/versions/8.1.1.zip) | ECE 3.0.0 |
+| [ {{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.1.1](https://download.elastic.co/cloud-enterprise/versions/8.1.1.zip) | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elasticsearch-cloud-ess:8.1.1 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/kibana-cloud:8.1.1 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elastic-agent-cloud:8.1.1 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/enterprise-search-cloud:8.1.1 | ECE 3.0.0 |
| | |
-| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.1.0](https://download.elastic.co/cloud-enterprise/versions/8.1.0.zip) | ECE 3.0.0 |
+| [ {{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.1.0](https://download.elastic.co/cloud-enterprise/versions/8.1.0.zip) | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elasticsearch-cloud-ess:8.1.0 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/kibana-cloud:8.1.0 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elastic-agent-cloud:8.1.0 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/enterprise-search-cloud:8.1.0 | ECE 3.0.0 |
| | |
-| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.0.1](https://download.elastic.co/cloud-enterprise/versions/8.0.1.zip) | ECE 3.0.0 |
+| [ {{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.0.1](https://download.elastic.co/cloud-enterprise/versions/8.0.1.zip) | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elasticsearch-cloud-ess:8.0.1 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/kibana-cloud:8.0.1 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elastic-agent-cloud:8.0.1 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/enterprise-search-cloud:8.0.1 | ECE 3.0.0 |
| | |
-| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.0.0](https://download.elastic.co/cloud-enterprise/versions/8.0.0.zip) | ECE 3.0.0 |
+| [ {{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.0.0](https://download.elastic.co/cloud-enterprise/versions/8.0.0.zip) | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elasticsearch-cloud-ess:8.0.0 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/kibana-cloud:8.0.0 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/elastic-agent-cloud:8.0.0 | ECE 3.0.0 |
| docker.elastic.co/cloud-release/enterprise-search-cloud:8.0.0 | ECE 3.0.0 |
| | |
-| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.17.27](https://download.elastic.co/cloud-enterprise/versions/7.17.27.zip) | ECE 2.2.2 |
+| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.17.27](https://download.elastic.co/cloud-enterprise/versions/7.17.27.zip) | ECE 2.2.2 |
| docker.elastic.co/cloud-assets/elasticsearch:7.17.27-0 | ECE 2.2.2 |
| docker.elastic.co/cloud-assets/kibana:7.17.27-0 | ECE 2.2.2 |
| docker.elastic.co/cloud-assets/apm:7.17.27-0 | ECE 2.2.2 |
| docker.elastic.co/cloud-assets/enterprise-search:7.17.27-0 | ECE 2.6.0 |
| | |
-| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.17.26](https://download.elastic.co/cloud-enterprise/versions/7.17.26.zip) | ECE 2.2.2 |
+| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.17.26](https://download.elastic.co/cloud-enterprise/versions/7.17.26.zip) | ECE 2.2.2 |
| docker.elastic.co/cloud-assets/elasticsearch:7.17.26-0 | ECE 2.2.2 |
| docker.elastic.co/cloud-assets/kibana:7.17.26-0 | ECE 2.2.2 |
| docker.elastic.co/cloud-assets/apm:7.17.26-0 | ECE 2.2.2 |
| docker.elastic.co/cloud-assets/enterprise-search:7.17.26-0 | ECE 2.6.0 |
| | |
-| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.17.25](https://download.elastic.co/cloud-enterprise/versions/7.17.25.zip) | ECE 2.2.2 |
+| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.17.25](https://download.elastic.co/cloud-enterprise/versions/7.17.25.zip) | ECE 2.2.2 |
| docker.elastic.co/cloud-assets/elasticsearch:7.17.25-0 | ECE 2.2.2 |
| docker.elastic.co/cloud-assets/kibana:7.17.25-0 | ECE 2.2.2 |
| docker.elastic.co/cloud-assets/apm:7.17.25-0 | ECE 2.2.2 |
| docker.elastic.co/cloud-assets/enterprise-search:7.17.25-0 | ECE 2.6.0 |
| | |
-| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.17.24](https://download.elastic.co/cloud-enterprise/versions/7.17.24.zip) | ECE 2.2.2 |
+| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.17.24](https://download.elastic.co/cloud-enterprise/versions/7.17.24.zip) | ECE 2.2.2 |
| docker.elastic.co/cloud-assets/elasticsearch:7.17.24-0 | ECE 2.2.2 |
| docker.elastic.co/cloud-assets/kibana:7.17.24-0 | ECE 2.2.2 |
| docker.elastic.co/cloud-assets/apm:7.17.24-0 | ECE 2.2.2 |
| docker.elastic.co/cloud-assets/enterprise-search:7.17.24-0 | ECE 2.6.0 |
| | |
-| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.17.23](https://download.elastic.co/cloud-enterprise/versions/7.17.23.zip) | ECE 2.2.2 |
+| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.17.23](https://download.elastic.co/cloud-enterprise/versions/7.17.23.zip) | ECE 2.2.2 |
| docker.elastic.co/cloud-assets/elasticsearch:7.17.23-0 | ECE 2.2.2 |
| docker.elastic.co/cloud-assets/kibana:7.17.23-0 | ECE 2.2.2 |
| docker.elastic.co/cloud-assets/apm:7.17.23-0 | ECE 2.2.2 |
| docker.elastic.co/cloud-assets/enterprise-search:7.17.23-0 | ECE 2.6.0 |
| | |
-| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.17.22](https://download.elastic.co/cloud-enterprise/versions/7.17.22.zip) | ECE 2.2.2 |
+| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.17.22](https://download.elastic.co/cloud-enterprise/versions/7.17.22.zip) | ECE 2.2.2 |
| docker.elastic.co/cloud-assets/elasticsearch:7.17.22-0 | ECE 2.2.2 |
| docker.elastic.co/cloud-assets/kibana:7.17.22-0 | ECE 2.2.2 |
| docker.elastic.co/cloud-assets/apm:7.17.22-0 | ECE 2.2.2 |
| docker.elastic.co/cloud-assets/enterprise-search:7.17.22-0 | ECE 2.6.0 |
| | |
-| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.17.21](https://download.elastic.co/cloud-enterprise/versions/7.17.21.zip) | ECE 2.2.2 |
+| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.17.21](https://download.elastic.co/cloud-enterprise/versions/7.17.21.zip) | ECE 2.2.2 |
| docker.elastic.co/cloud-assets/elasticsearch:7.17.21-0 | ECE 2.2.2 |
| docker.elastic.co/cloud-assets/kibana:7.17.21-0 | ECE 2.2.2 |
| docker.elastic.co/cloud-assets/apm:7.17.21-0 | ECE 2.2.2 |
| docker.elastic.co/cloud-assets/enterprise-search:7.17.21-0 | ECE 2.6.0 |
| | |
-| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.17.20](https://download.elastic.co/cloud-enterprise/versions/7.17.20.zip) | ECE 2.2.2 |
+| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.17.20](https://download.elastic.co/cloud-enterprise/versions/7.17.20.zip) | ECE 2.2.2 |
| docker.elastic.co/cloud-assets/elasticsearch:7.17.20-0 | ECE 2.2.2 |
| docker.elastic.co/cloud-assets/kibana:7.17.20-0 | ECE 2.2.2 |
| docker.elastic.co/cloud-assets/apm:7.17.20-0 | ECE 2.2.2 |
| docker.elastic.co/cloud-assets/enterprise-search:7.17.20-0 | ECE 2.6.0 |
| | |
-| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.17.19](https://download.elastic.co/cloud-enterprise/versions/7.17.19.zip) | ECE 2.2.2 |
+| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.17.19](https://download.elastic.co/cloud-enterprise/versions/7.17.19.zip) | ECE 2.2.2 |
| docker.elastic.co/cloud-assets/elasticsearch:7.17.19-0 | ECE 2.2.2 |
| docker.elastic.co/cloud-assets/kibana:7.17.19-0 | ECE 2.2.2 |
| docker.elastic.co/cloud-assets/apm:7.17.19-0 | ECE 2.2.2 |
| docker.elastic.co/cloud-assets/enterprise-search:7.17.19-0 | ECE 2.6.0 |
| | |
-| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.17.18](https://download.elastic.co/cloud-enterprise/versions/7.17.18.zip) | ECE 2.2.2 |
+| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.17.18](https://download.elastic.co/cloud-enterprise/versions/7.17.18.zip) | ECE 2.2.2 |
| docker.elastic.co/cloud-assets/elasticsearch:7.17.18-0 | ECE 2.2.2 |
| docker.elastic.co/cloud-assets/kibana:7.17.18-0 | ECE 2.2.2 |
| docker.elastic.co/cloud-assets/apm:7.17.18-0 | ECE 2.2.2 |
| docker.elastic.co/cloud-assets/enterprise-search:7.17.18-0 | ECE 2.6.0 |
| | |
-| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.17.17](https://download.elastic.co/cloud-enterprise/versions/7.17.17.zip) | ECE 2.2.2 |
+| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.17.17](https://download.elastic.co/cloud-enterprise/versions/7.17.17.zip) | ECE 2.2.2 |
| docker.elastic.co/cloud-assets/elasticsearch:7.17.17-0 | ECE 2.2.2 |
| docker.elastic.co/cloud-assets/kibana:7.17.17-0 | ECE 2.2.2 |
| docker.elastic.co/cloud-assets/apm:7.17.17-0 | ECE 2.2.2 |
| docker.elastic.co/cloud-assets/enterprise-search:7.17.17-0 | ECE 2.6.0 |
| | |
-| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.17.16](https://download.elastic.co/cloud-enterprise/versions/7.17.16.zip) | ECE 2.2.2 |
+| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.17.16](https://download.elastic.co/cloud-enterprise/versions/7.17.16.zip) | ECE 2.2.2 |
| docker.elastic.co/cloud-assets/elasticsearch:7.17.16-0 | ECE 2.2.2 |
| docker.elastic.co/cloud-assets/kibana:7.17.16-0 | ECE 2.2.2 |
| docker.elastic.co/cloud-assets/apm:7.17.16-0 | ECE 2.2.2 |
| docker.elastic.co/cloud-assets/enterprise-search:7.17.16-0 | ECE 2.6.0 |
| | |
-| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.17.15](https://download.elastic.co/cloud-enterprise/versions/7.17.15.zip) | ECE 2.2.2 |
+| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.17.15](https://download.elastic.co/cloud-enterprise/versions/7.17.15.zip) | ECE 2.2.2 |
| docker.elastic.co/cloud-assets/elasticsearch:7.17.15-0 | ECE 2.2.2 |
| docker.elastic.co/cloud-assets/kibana:7.17.15-0 | ECE 2.2.2 |
| docker.elastic.co/cloud-assets/apm:7.17.15-0 | ECE 2.2.2 |
| docker.elastic.co/cloud-assets/enterprise-search:7.17.15-0 | ECE 2.6.0 |
| | |
-| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.17.14](https://download.elastic.co/cloud-enterprise/versions/7.17.14.zip) | ECE 2.2.2 |
+| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.17.14](https://download.elastic.co/cloud-enterprise/versions/7.17.14.zip) | ECE 2.2.2 |
| docker.elastic.co/cloud-assets/elasticsearch:7.17.14-0 | ECE 2.2.2 |
| docker.elastic.co/cloud-assets/kibana:7.17.14-0 | ECE 2.2.2 |
| docker.elastic.co/cloud-assets/apm:7.17.14-0 | ECE 2.2.2 |
|
docker.elastic.co/cloud-assets/enterprise-search:7.17.14-0 | ECE 2.6.0 | | | | -| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.17.13](https://download.elastic.co/cloud-enterprise/versions/7.17.13.zip) | ECE 2.2.2 | +| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.17.13](https://download.elastic.co/cloud-enterprise/versions/7.17.13.zip) | ECE 2.2.2 | | docker.elastic.co/cloud-assets/elasticsearch:7.17.13-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/kibana:7.17.13-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/apm:7.17.13-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/enterprise-search:7.17.13-0 | ECE 2.6.0 | | | | -| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.17.12](https://download.elastic.co/cloud-enterprise/versions/7.17.12.zip) | ECE 2.2.2 | +| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.17.12](https://download.elastic.co/cloud-enterprise/versions/7.17.12.zip) | ECE 2.2.2 | | docker.elastic.co/cloud-assets/elasticsearch:7.17.12-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/kibana:7.17.12-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/apm:7.17.12-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/enterprise-search:7.17.12-0 | ECE 2.6.0 | | | | -| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.17.11](https://download.elastic.co/cloud-enterprise/versions/7.17.11.zip) | ECE 2.2.2 | +| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.17.11](https://download.elastic.co/cloud-enterprise/versions/7.17.11.zip) | ECE 2.2.2 | | docker.elastic.co/cloud-assets/elasticsearch:7.17.11-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/kibana:7.17.11-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/apm:7.17.11-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/enterprise-search:7.17.11-0 | ECE 2.6.0 | | | | -| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.17.10](https://download.elastic.co/cloud-enterprise/versions/7.17.10.zip) | ECE 2.2.2 | +| [{{es}}, 
{{kib}}, APM, and Enterprise Search stack pack: 7.17.10](https://download.elastic.co/cloud-enterprise/versions/7.17.10.zip) | ECE 2.2.2 | | docker.elastic.co/cloud-assets/elasticsearch:7.17.10-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/kibana:7.17.10-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/apm:7.17.10-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/enterprise-search:7.17.10-0 | ECE 2.6.0 | | | | -| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.17.9](https://download.elastic.co/cloud-enterprise/versions/7.17.9.zip) | ECE 2.2.2 | +| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.17.9](https://download.elastic.co/cloud-enterprise/versions/7.17.9.zip) | ECE 2.2.2 | | docker.elastic.co/cloud-assets/elasticsearch:7.17.9-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/kibana:7.17.9-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/apm:7.17.9-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/enterprise-search:7.17.9-0 | ECE 2.6.0 | | | | -| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.17.8](https://download.elastic.co/cloud-enterprise/versions/7.17.8.zip) | ECE 2.2.2 | +| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.17.8](https://download.elastic.co/cloud-enterprise/versions/7.17.8.zip) | ECE 2.2.2 | | docker.elastic.co/cloud-assets/elasticsearch:7.17.8-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/kibana:7.17.8-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/apm:7.17.8-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/enterprise-search:7.17.8-0 | ECE 2.6.0 | | | | -| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.17.7](https://download.elastic.co/cloud-enterprise/versions/7.17.7.zip) | ECE 2.2.2 | +| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.17.7](https://download.elastic.co/cloud-enterprise/versions/7.17.7.zip) | ECE 2.2.2 | | docker.elastic.co/cloud-assets/elasticsearch:7.17.7-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/kibana:7.17.7-0 | ECE 2.2.2 | 
| docker.elastic.co/cloud-assets/apm:7.17.7-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/enterprise-search:7.17.7-0 | ECE 2.6.0 | | | | -| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.17.6](https://download.elastic.co/cloud-enterprise/versions/7.17.6.zip) | ECE 2.2.2 | +| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.17.6](https://download.elastic.co/cloud-enterprise/versions/7.17.6.zip) | ECE 2.2.2 | | docker.elastic.co/cloud-assets/elasticsearch:7.17.6-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/kibana:7.17.6-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/apm:7.17.6-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/enterprise-search:7.17.6-0 | ECE 2.6.0 | | | | -| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.17.5](https://download.elastic.co/cloud-enterprise/versions/7.17.5.zip) | ECE 2.2.2 | +| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.17.5](https://download.elastic.co/cloud-enterprise/versions/7.17.5.zip) | ECE 2.2.2 | | docker.elastic.co/cloud-assets/elasticsearch:7.17.5-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/kibana:7.17.5-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/apm:7.17.5-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/enterprise-search:7.17.5-0 | ECE 2.6.0 | | | | -| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.17.4](https://download.elastic.co/cloud-enterprise/versions/7.17.4.zip) | ECE 2.2.2 | +| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.17.4](https://download.elastic.co/cloud-enterprise/versions/7.17.4.zip) | ECE 2.2.2 | | docker.elastic.co/cloud-assets/elasticsearch:7.17.4-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/kibana:7.17.4-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/apm:7.17.4-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/enterprise-search:7.17.4-0 | ECE 2.6.0 | | | | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 
7.17.3](https://download.elastic.co/cloud-enterprise/versions/7.17.3.zip) | ECE 2.2.2 | +| [ {{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.17.3](https://download.elastic.co/cloud-enterprise/versions/7.17.3.zip) | ECE 2.2.2 | | docker.elastic.co/cloud-assets/elasticsearch:7.17.3-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/kibana:7.17.3-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/apm:7.17.3-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/enterprise-search:7.17.3-0 | ECE 2.6.0 | | | | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.17.2](https://download.elastic.co/cloud-enterprise/versions/7.17.2.zip) | ECE 2.2.2 | +| [ {{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.17.2](https://download.elastic.co/cloud-enterprise/versions/7.17.2.zip) | ECE 2.2.2 | | docker.elastic.co/cloud-assets/elasticsearch:7.17.2-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/kibana:7.17.2-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/apm:7.17.2-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/enterprise-search:7.17.2-0 | ECE 2.6.0 | | | | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.17.1](https://download.elastic.co/cloud-enterprise/versions/7.17.1.zip) | ECE 2.2.2 | +| [ {{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.17.1](https://download.elastic.co/cloud-enterprise/versions/7.17.1.zip) | ECE 2.2.2 | | docker.elastic.co/cloud-assets/elasticsearch:7.17.1-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/kibana:7.17.1-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/apm:7.17.1-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/enterprise-search:7.17.1-0 | ECE 2.6.0 | | | | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.17.0](https://download.elastic.co/cloud-enterprise/versions/7.17.0.zip) | ECE 2.2.2 | +| [ {{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.17.0](https://download.elastic.co/cloud-enterprise/versions/7.17.0.zip) | ECE 2.2.2 | | 
docker.elastic.co/cloud-assets/elasticsearch:7.17.0-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/kibana:7.17.0-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/apm:7.17.0-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/enterprise-search:7.17.0-0 | ECE 2.6.0 | | | | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.16.3](https://download.elastic.co/cloud-enterprise/versions/7.16.3.zip) | ECE 2.2.2 | +| [ {{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.16.3](https://download.elastic.co/cloud-enterprise/versions/7.16.3.zip) | ECE 2.2.2 | | docker.elastic.co/cloud-assets/elasticsearch:7.16.3-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/kibana:7.16.3-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/apm:7.16.3-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/enterprise-search:7.16.3-0 | ECE 2.6.0 | | | | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.16.2](https://download.elastic.co/cloud-enterprise/versions/7.16.2.zip) | ECE 2.2.2 | +| [ {{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.16.2](https://download.elastic.co/cloud-enterprise/versions/7.16.2.zip) | ECE 2.2.2 | | docker.elastic.co/cloud-assets/elasticsearch:7.16.2-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/kibana:7.16.2-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/apm:7.16.2-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/enterprise-search:7.16.2-0 | ECE 2.6.0 | | | | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.16.1](https://download.elastic.co/cloud-enterprise/versions/7.16.1.zip) | ECE 2.2.2 | +| [ {{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.16.1](https://download.elastic.co/cloud-enterprise/versions/7.16.1.zip) | ECE 2.2.2 | | docker.elastic.co/cloud-assets/elasticsearch:7.16.1-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/kibana:7.16.1-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/apm:7.16.1-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/enterprise-search:7.16.1-0 | ECE 2.6.0 | | | 
| -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.16.0](https://download.elastic.co/cloud-enterprise/versions/7.16.0.zip) | ECE 2.2.2 | +| [ {{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.16.0](https://download.elastic.co/cloud-enterprise/versions/7.16.0.zip) | ECE 2.2.2 | | docker.elastic.co/cloud-assets/elasticsearch:7.16.0-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/kibana:7.16.0-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/apm:7.16.0-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/enterprise-search:7.16.0-0 | ECE 2.6.0 | | | | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.15.2](https://download.elastic.co/cloud-enterprise/versions/7.15.2.zip) | ECE 2.2.2 | +| [ {{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.15.2](https://download.elastic.co/cloud-enterprise/versions/7.15.2.zip) | ECE 2.2.2 | | docker.elastic.co/cloud-assets/elasticsearch:7.15.2-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/kibana:7.15.2-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/apm:7.15.2-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/enterprise-search:7.15.2-0 | ECE 2.6.0 | | | | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.15.1](https://download.elastic.co/cloud-enterprise/versions/7.15.1.zip) | ECE 2.2.2 | +| [ {{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.15.1](https://download.elastic.co/cloud-enterprise/versions/7.15.1.zip) | ECE 2.2.2 | | docker.elastic.co/cloud-assets/elasticsearch:7.15.1-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/kibana:7.15.1-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/apm:7.15.1-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/enterprise-search:7.15.1-0 | ECE 2.6.0 | | | | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.15.0](https://download.elastic.co/cloud-enterprise/versions/7.15.0.zip) | ECE 2.2.2 | +| [ {{es}}, {{kib}}, APM, and Enterprise Search stack pack: 
7.15.0](https://download.elastic.co/cloud-enterprise/versions/7.15.0.zip) | ECE 2.2.2 | | docker.elastic.co/cloud-assets/elasticsearch:7.15.0-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/kibana:7.15.0-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/apm:7.15.0-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/enterprise-search:7.15.0-0 | ECE 2.6.0 | | | | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.14.2](https://download.elastic.co/cloud-enterprise/versions/7.14.2.zip) | ECE 2.2.2 | +| [ {{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.14.2](https://download.elastic.co/cloud-enterprise/versions/7.14.2.zip) | ECE 2.2.2 | | docker.elastic.co/cloud-assets/elasticsearch:7.14.2-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/kibana:7.14.2-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/apm:7.14.2-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/enterprise-search:7.14.2-0 | ECE 2.6.0 | | | | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.14.1](https://download.elastic.co/cloud-enterprise/versions/7.14.1.zip) | ECE 2.2.2 | +| [ {{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.14.1](https://download.elastic.co/cloud-enterprise/versions/7.14.1.zip) | ECE 2.2.2 | | docker.elastic.co/cloud-assets/elasticsearch:7.14.1-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/kibana:7.14.1-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/apm:7.14.1-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/enterprise-search:7.14.1-0 | ECE 2.6.0 | | | | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.14.0](https://download.elastic.co/cloud-enterprise/versions/7.14.0.zip) | ECE 2.2.2 | +| [ {{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.14.0](https://download.elastic.co/cloud-enterprise/versions/7.14.0.zip) | ECE 2.2.2 | | docker.elastic.co/cloud-assets/elasticsearch:7.14.0-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/kibana:7.14.0-1 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/apm:7.14.0-0 | 
ECE 2.2.2 | | docker.elastic.co/cloud-assets/enterprise-search:7.14.0-0 | ECE 2.6.0 | | | | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.13.4](https://download.elastic.co/cloud-enterprise/versions/7.13.4.zip) | ECE 2.2.2 | +| [ {{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.13.4](https://download.elastic.co/cloud-enterprise/versions/7.13.4.zip) | ECE 2.2.2 | | docker.elastic.co/cloud-assets/elasticsearch:7.13.4-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/kibana:7.13.4-1 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/apm:7.13.4-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/enterprise-search:7.13.4-0 | ECE 2.6.0 | | | | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.13.3](https://download.elastic.co/cloud-enterprise/versions/7.13.3.zip) | ECE 2.2.2 | +| [ {{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.13.3](https://download.elastic.co/cloud-enterprise/versions/7.13.3.zip) | ECE 2.2.2 | | docker.elastic.co/cloud-assets/elasticsearch:7.13.3-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/kibana:7.13.3-1 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/apm:7.13.3-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/enterprise-search:7.13.3-0 | ECE 2.6.0 | | | | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.13.2](https://download.elastic.co/cloud-enterprise/versions/7.13.2.zip) | ECE 2.2.2 | +| [ {{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.13.2](https://download.elastic.co/cloud-enterprise/versions/7.13.2.zip) | ECE 2.2.2 | | docker.elastic.co/cloud-assets/elasticsearch:7.13.2-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/kibana:7.13.2-1 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/apm:7.13.2-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/enterprise-search:7.13.2-0 | ECE 2.6.0 | | | | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.13.1](https://download.elastic.co/cloud-enterprise/versions/7.13.1.zip) | ECE 2.2.2 | +| [ {{es}}, {{kib}}, 
APM, and Enterprise Search stack pack: 7.13.1](https://download.elastic.co/cloud-enterprise/versions/7.13.1.zip) | ECE 2.2.2 | | docker.elastic.co/cloud-assets/elasticsearch:7.13.1-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/kibana:7.13.1-1 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/apm:7.13.1-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/enterprise-search:7.13.1-0 | ECE 2.6.0 | | | | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.13.0](https://download.elastic.co/cloud-enterprise/versions/7.13.0.zip) | ECE 2.2.2 | +| [ {{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.13.0](https://download.elastic.co/cloud-enterprise/versions/7.13.0.zip) | ECE 2.2.2 | | docker.elastic.co/cloud-assets/elasticsearch:7.13.0-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/kibana:7.13.0-1 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/apm:7.13.0-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/enterprise-search:7.13.0-0 | ECE 2.6.0 | | | | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.12.1](https://download.elastic.co/cloud-enterprise/versions/7.12.1.zip) | ECE 2.2.2 | +| [ {{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.12.1](https://download.elastic.co/cloud-enterprise/versions/7.12.1.zip) | ECE 2.2.2 | | docker.elastic.co/cloud-assets/elasticsearch:7.12.1-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/kibana:7.12.1-1 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/apm:7.12.1-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/enterprise-search:7.12.1-0 | ECE 2.6.0 | | | | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.12.0](https://download.elastic.co/cloud-enterprise/versions/7.12.0.zip) | ECE 2.2.2 | +| [ {{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.12.0](https://download.elastic.co/cloud-enterprise/versions/7.12.0.zip) | ECE 2.2.2 | | docker.elastic.co/cloud-assets/elasticsearch:7.12.0-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/kibana:7.12.0-1 | ECE 2.2.2 | | 
docker.elastic.co/cloud-assets/apm:7.12.0-0 | ECE 2.2.2 | | docker.elastic.co/enterprise-search/enterprise-search:7.12.0 | ECE 2.6.0 | | | | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.11.2](https://download.elastic.co/cloud-enterprise/versions/7.11.2.zip) | ECE 2.2.2 | +| [ {{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.11.2](https://download.elastic.co/cloud-enterprise/versions/7.11.2.zip) | ECE 2.2.2 | | docker.elastic.co/cloud-assets/elasticsearch:7.11.2-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/kibana:7.11.2-1 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/apm:7.11.2-0 | ECE 2.2.2 | | docker.elastic.co/enterprise-search/enterprise-search:7.11.2 | ECE 2.6.0 | | | | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.11.1](https://download.elastic.co/cloud-enterprise/versions/7.11.1.zip) | ECE 2.2.2 | +| [ {{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.11.1](https://download.elastic.co/cloud-enterprise/versions/7.11.1.zip) | ECE 2.2.2 | | docker.elastic.co/cloud-assets/elasticsearch:7.11.1-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/kibana:7.11.1-1 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/apm:7.11.1-0 | ECE 2.2.2 | | docker.elastic.co/enterprise-search/enterprise-search:7.11.1 | ECE 2.6.0 | | | | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.11.0](https://download.elastic.co/cloud-enterprise/versions/7.11.0.zip) | ECE 2.2.2 | +| [ {{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.11.0](https://download.elastic.co/cloud-enterprise/versions/7.11.0.zip) | ECE 2.2.2 | | docker.elastic.co/cloud-assets/elasticsearch:7.11.0-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/kibana:7.11.0-1 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/apm:7.11.0-0 | ECE 2.2.2 | | docker.elastic.co/enterprise-search/enterprise-search:7.11.0 | ECE 2.6.0 | | | | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 
7.10.2](https://download.elastic.co/cloud-enterprise/versions/7.10.2.zip) | ECE 2.2.2 | +| [ {{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.10.2](https://download.elastic.co/cloud-enterprise/versions/7.10.2.zip) | ECE 2.2.2 | | docker.elastic.co/cloud-assets/elasticsearch:7.10.2-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/kibana:7.10.2-1 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/apm:7.10.2-0 | ECE 2.2.2 | | docker.elastic.co/enterprise-search/enterprise-search:7.10.2 | ECE 2.6.0 | | | | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.10.1](https://download.elastic.co/cloud-enterprise/versions/7.10.1.zip) | ECE 2.2.2 | +| [ {{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.10.1](https://download.elastic.co/cloud-enterprise/versions/7.10.1.zip) | ECE 2.2.2 | | docker.elastic.co/cloud-assets/elasticsearch:7.10.1-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/kibana:7.10.1-1 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/apm:7.10.1-0 | ECE 2.2.2 | | docker.elastic.co/enterprise-search/enterprise-search:7.10.1 | ECE 2.6.0 | | | | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.10.0](https://download.elastic.co/cloud-enterprise/versions/7.10.0.zip) | ECE 2.2.2 | +| [ {{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.10.0](https://download.elastic.co/cloud-enterprise/versions/7.10.0.zip) | ECE 2.2.2 | | docker.elastic.co/cloud-assets/elasticsearch:7.10.0-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/kibana:7.10.0-1 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/apm:7.10.0-0 | ECE 2.2.2 | | docker.elastic.co/enterprise-search/enterprise-search:7.10.0 | ECE 2.6.0 | | | | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.9.3](https://download.elastic.co/cloud-enterprise/versions/7.9.3.zip) | ECE 2.2.2 | +| [ {{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.9.3](https://download.elastic.co/cloud-enterprise/versions/7.9.3.zip) | ECE 2.2.2 | | 
docker.elastic.co/cloud-assets/elasticsearch:7.9.3-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/kibana:7.9.3-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/apm:7.9.3-0 | ECE 2.2.2 | | docker.elastic.co/enterprise-search/enterprise-search:7.9.3 | ECE 2.6.0 | | | | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.9.2](https://download.elastic.co/cloud-enterprise/versions/7.9.2.zip) | ECE 2.2.2 | +| [ {{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.9.2](https://download.elastic.co/cloud-enterprise/versions/7.9.2.zip) | ECE 2.2.2 | | docker.elastic.co/cloud-assets/elasticsearch:7.9.2-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/kibana:7.9.2-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/apm:7.9.2-0 | ECE 2.2.2 | | docker.elastic.co/enterprise-search/enterprise-search:7.9.2 | ECE 2.6.0 | | | | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.9.1](https://download.elastic.co/cloud-enterprise/versions/7.9.1.zip) | ECE 2.2.2 | +| [ {{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.9.1](https://download.elastic.co/cloud-enterprise/versions/7.9.1.zip) | ECE 2.2.2 | | docker.elastic.co/cloud-assets/elasticsearch:7.9.1-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/kibana:7.9.1-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/apm:7.9.1-0 | ECE 2.2.2 | | docker.elastic.co/enterprise-search/enterprise-search:7.9.1 | ECE 2.6.0 | | | | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.9.0](https://download.elastic.co/cloud-enterprise/versions/7.9.0.zip) | ECE 2.2.2 | +| [ {{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.9.0](https://download.elastic.co/cloud-enterprise/versions/7.9.0.zip) | ECE 2.2.2 | | docker.elastic.co/cloud-assets/elasticsearch:7.9.0-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/kibana:7.9.0-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/apm:7.9.0-0 | ECE 2.2.2 | | docker.elastic.co/enterprise-search/enterprise-search:7.9.0 | ECE 2.6.0 | | | | -| [ 
Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.8.1](https://download.elastic.co/cloud-enterprise/versions/7.8.1.zip) | ECE 2.2.2 | +| [ {{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.8.1](https://download.elastic.co/cloud-enterprise/versions/7.8.1.zip) | ECE 2.2.2 | | docker.elastic.co/cloud-assets/elasticsearch:7.8.1-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/kibana:7.8.1-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/apm:7.8.1-0 | ECE 2.2.2 | | docker.elastic.co/enterprise-search/enterprise-search:7.8.1 | ECE 2.6.0 | | | | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.8.0](https://download.elastic.co/cloud-enterprise/versions/7.8.0.zip) | ECE 2.2.2 | +| [ {{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.8.0](https://download.elastic.co/cloud-enterprise/versions/7.8.0.zip) | ECE 2.2.2 | | docker.elastic.co/cloud-assets/elasticsearch:7.8.0-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/kibana:7.8.0-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/apm:7.8.0-0 | ECE 2.2.2 | | docker.elastic.co/enterprise-search/enterprise-search:7.8.0-0 | ECE 2.6.0 | | | | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.7.1](https://download.elastic.co/cloud-enterprise/versions/7.7.1.zip) | ECE 2.2.2 | +| [ {{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.7.1](https://download.elastic.co/cloud-enterprise/versions/7.7.1.zip) | ECE 2.2.2 | | docker.elastic.co/cloud-assets/elasticsearch:7.7.1-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/kibana:7.7.1-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/apm:7.7.1-0 | ECE 2.2.2 | | docker.elastic.co/enterprise-search/enterprise-search:7.7.1-0 | ECE 2.6.0 | | | | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.7.0](https://download.elastic.co/cloud-enterprise/versions/7.7.0.zip) | ECE 2.2.2 | +| [ {{es}}, {{kib}}, APM, and Enterprise Search stack pack: 
7.7.0](https://download.elastic.co/cloud-enterprise/versions/7.7.0.zip) | ECE 2.2.2 | | docker.elastic.co/cloud-assets/elasticsearch:7.7.0-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/kibana:7.7.0-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/apm:7.7.0-0 | ECE 2.2.2 | | docker.elastic.co/enterprise-search/enterprise-search:7.7.0-0 | ECE 2.6.0 | | | | -| [ Elasticsearch, Kibana, APM, and App Search stack pack: 7.6.2](https://download.elastic.co/cloud-enterprise/versions/7.6.2.zip) | ECE 2.2.2 | +| [ {{es}}, {{kib}}, APM, and App Search stack pack: 7.6.2](https://download.elastic.co/cloud-enterprise/versions/7.6.2.zip) | ECE 2.2.2 | | docker.elastic.co/cloud-assets/elasticsearch:7.6.2-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/kibana:7.6.2-1 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/apm:7.6.2-0 | ECE 2.2.2 | | docker.elastic.co/app-search/app-search:7.6.2 | ECE 2.4.0 | | | | -| [ Elasticsearch, Kibana, APM, and App Search stack pack: 7.6.1](https://download.elastic.co/cloud-enterprise/versions/7.6.1.zip) | ECE 2.2.2 | +| [ {{es}}, {{kib}}, APM, and App Search stack pack: 7.6.1](https://download.elastic.co/cloud-enterprise/versions/7.6.1.zip) | ECE 2.2.2 | | docker.elastic.co/cloud-assets/elasticsearch:7.6.1-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/kibana:7.6.2-1 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/apm:7.6.1-0 | ECE 2.2.2 | | docker.elastic.co/app-search/app-search:7.6.1 | ECE 2.4.0 | | | | -| [ Elasticsearch, Kibana, APM, and App Search stack pack: 7.6.0](https://download.elastic.co/cloud-enterprise/versions/7.6.0.zip) | ECE 2.2.2 | +| [ {{es}}, {{kib}}, APM, and App Search stack pack: 7.6.0](https://download.elastic.co/cloud-enterprise/versions/7.6.0.zip) | ECE 2.2.2 | | docker.elastic.co/cloud-assets/elasticsearch:7.6.0-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/kibana:7.6.2-1 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/apm:7.6.0-0 | ECE 2.2.2 | | docker.elastic.co/app-search/app-search:7.6.0 | ECE 2.4.0 | | | | -| [ 
Elasticsearch, Kibana, APM, and App Search stack pack: 7.5.2](https://download.elastic.co/cloud-enterprise/versions/7.5.2.zip) | ECE 2.2.2 | +| [ {{es}}, {{kib}}, APM, and App Search stack pack: 7.5.2](https://download.elastic.co/cloud-enterprise/versions/7.5.2.zip) | ECE 2.2.2 | | docker.elastic.co/cloud-assets/elasticsearch:7.5.2-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/kibana:7.5.2-1 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/apm:7.5.2-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/app-search:7.5.2-0 | ECE 2.4.0 | | | | -| [ Elasticsearch, Kibana, APM, and App Search stack pack: 7.5.1](https://download.elastic.co/cloud-enterprise/versions/7.5.1.zip) | ECE 2.2.2 | +| [ {{es}}, {{kib}}, APM, and App Search stack pack: 7.5.1](https://download.elastic.co/cloud-enterprise/versions/7.5.1.zip) | ECE 2.2.2 | | docker.elastic.co/cloud-assets/elasticsearch:7.5.1-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/kibana:7.5.2-1 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/apm:7.5.1-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/app-search:7.5.1-0 | ECE 2.4.0 | | | | -| [Elasticsearch, Kibana, APM, and App Search stack pack: 7.5.0](https://download.elastic.co/cloud-enterprise/versions/7.5.0.zip) | ECE 2.2.2 | +| [{{es}}, {{kib}}, APM, and App Search stack pack: 7.5.0](https://download.elastic.co/cloud-enterprise/versions/7.5.0.zip) | ECE 2.2.2 | | docker.elastic.co/cloud-assets/elasticsearch:7.5.0-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/kibana:7.5.2-1 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/apm:7.5.0-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/app-search:7.5.0-0 | ECE 2.4.0 | | | | -| [Elasticsearch, Kibana, APM, and App Search stack pack: 7.4.2](https://download.elastic.co/cloud-enterprise/versions/7.4.2.zip) | ECE 2.2.2 | +| [{{es}}, {{kib}}, APM, and App Search stack pack: 7.4.2](https://download.elastic.co/cloud-enterprise/versions/7.4.2.zip) | ECE 2.2.2 | | docker.elastic.co/cloud-assets/elasticsearch:7.4.2-0 | ECE 2.2.2 | | 
docker.elastic.co/cloud-assets/kibana:7.4.2-1 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/apm:7.4.2-0 | ECE 2.2.2 | | docker.elastic.co/app-search/app-search:7.4.2 | ECE 2.4.0 | | | | -| [Elasticsearch, Kibana, APM, and App Search stack pack: 7.4.1](https://download.elastic.co/cloud-enterprise/versions/7.4.1.zip) | ECE 2.2.2 | +| [{{es}}, {{kib}}, APM, and App Search stack pack: 7.4.1](https://download.elastic.co/cloud-enterprise/versions/7.4.1.zip) | ECE 2.2.2 | | docker.elastic.co/cloud-assets/elasticsearch:7.4.1-1 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/kibana:7.4.2-1 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/apm:7.4.1-1 | ECE 2.2.2 | | docker.elastic.co/app-search/app-search:7.4.1 | ECE 2.4.0 | | | | -| [Elasticsearch, Kibana, APM, and App Search stack pack: 7.4.0](https://download.elastic.co/cloud-enterprise/versions/7.4.0.zip) | ECE 2.2.2 | +| [{{es}}, {{kib}}, APM, and App Search stack pack: 7.4.0](https://download.elastic.co/cloud-enterprise/versions/7.4.0.zip) | ECE 2.2.2 | | docker.elastic.co/cloud-assets/elasticsearch:7.4.0-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/kibana:7.4.2-1 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/apm:7.4.0-0 | ECE 2.2.2 | | docker.elastic.co/app-search/app-search:7.4.0 | ECE 2.4.0 | | | | -| [Elasticsearch, Kibana, and APM stack pack: 7.3.2](https://download.elastic.co/cloud-enterprise/versions/7.3.2.zip) | ECE 2.2.2 | +| [{{es}}, {{kib}}, and APM stack pack: 7.3.2](https://download.elastic.co/cloud-enterprise/versions/7.3.2.zip) | ECE 2.2.2 | | docker.elastic.co/cloud-assets/elasticsearch:7.3.2-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/kibana:7.3.2-1 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/apm:7.3.2-0 | ECE 2.2.2 | | | | -| [Elasticsearch, Kibana, and APM stack pack: 7.3.1](https://download.elastic.co/cloud-enterprise/versions/7.3.1.zip) | ECE 2.2.2 | +| [{{es}}, {{kib}}, and APM stack pack: 7.3.1](https://download.elastic.co/cloud-enterprise/versions/7.3.1.zip) | ECE 2.2.2 | | 
docker.elastic.co/cloud-assets/elasticsearch:7.3.1-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/kibana:7.3.2-1 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/apm:7.3.1-0 | ECE 2.2.2 | | | | -| [Elasticsearch, Kibana, and APM stack pack: 7.3.0](https://download.elastic.co/cloud-enterprise/versions/7.3.0.zip) | ECE 2.2.2 | +| [{{es}}, {{kib}}, and APM stack pack: 7.3.0](https://download.elastic.co/cloud-enterprise/versions/7.3.0.zip) | ECE 2.2.2 | | docker.elastic.co/cloud-assets/elasticsearch:7.3.0-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/kibana:7.3.2-1 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/apm:7.3.0-0 | ECE 2.2.2 | | | | -| [Elasticsearch, Kibana, and APM stack pack: 7.2.1](https://download.elastic.co/cloud-enterprise/versions/7.2.1.zip) | ECE 2.2.2 | +| [{{es}}, {{kib}}, and APM stack pack: 7.2.1](https://download.elastic.co/cloud-enterprise/versions/7.2.1.zip) | ECE 2.2.2 | | docker.elastic.co/cloud-assets/elasticsearch:7.2.1-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/kibana:7.2.1-1 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/apm:7.2.1-0 | ECE 2.2.2 | | | | -| [Elasticsearch, Kibana, and APM stack pack: 7.2.0](https://download.elastic.co/cloud-enterprise/versions/7.2.0.zip) | ECE 2.2.2 | +| [{{es}}, {{kib}}, and APM stack pack: 7.2.0](https://download.elastic.co/cloud-enterprise/versions/7.2.0.zip) | ECE 2.2.2 | | docker.elastic.co/cloud-assets/elasticsearch:7.2.0-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/kibana:7.2.1-1 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/apm:7.2.0-0 | ECE 2.2.2 | | | | -| [Elasticsearch, Kibana, and APM stack pack: 7.1.1](https://download.elastic.co/cloud-enterprise/versions/7.1.1.zip) | ECE 2.2.2 | +| [{{es}}, {{kib}}, and APM stack pack: 7.1.1](https://download.elastic.co/cloud-enterprise/versions/7.1.1.zip) | ECE 2.2.2 | | docker.elastic.co/cloud-assets/elasticsearch:7.1.1-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/kibana:7.1.1-1 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/apm:7.1.1-0 | 
ECE 2.2.2 | | | | -| [Elasticsearch, Kibana, and APM stack pack: 7.1.0](https://download.elastic.co/cloud-enterprise/versions/7.1.0.zip) | ECE 2.2.2 | +| [{{es}}, {{kib}}, and APM stack pack: 7.1.0](https://download.elastic.co/cloud-enterprise/versions/7.1.0.zip) | ECE 2.2.2 | | docker.elastic.co/cloud-assets/elasticsearch:7.1.0-0 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/kibana:7.1.1-1 | ECE 2.2.2 | | docker.elastic.co/cloud-assets/apm:7.1.0-0 | ECE 2.2.2 | | | | -| [Elasticsearch, Kibana, and APM stack pack: 7.0.1](https://download.elastic.co/cloud-enterprise/versions/7.0.1.zip) | ECE 2.2.0 | +| [{{es}}, {{kib}}, and APM stack pack: 7.0.1](https://download.elastic.co/cloud-enterprise/versions/7.0.1.zip) | ECE 2.2.0 | | docker.elastic.co/cloud-assets/elasticsearch:7.0.1-0 | ECE 2.2.0 | | docker.elastic.co/cloud-assets/kibana:7.0.1-1 | ECE 2.2.0 | | docker.elastic.co/cloud-assets/apm:7.0.1-0 | ECE 2.2.0 | | | | -| [Elasticsearch, Kibana, and APM stack pack: 7.0.0](https://download.elastic.co/cloud-enterprise/versions/7.0.0.zip) | ECE 2.2.0 | +| [{{es}}, {{kib}}, and APM stack pack: 7.0.0](https://download.elastic.co/cloud-enterprise/versions/7.0.0.zip) | ECE 2.2.0 | | docker.elastic.co/cloud-assets/elasticsearch:7.0.0-0 | ECE 2.2.0 | | docker.elastic.co/cloud-assets/kibana:7.0.1-1 | ECE 2.2.0 | | docker.elastic.co/cloud-assets/apm:7.0.0-0 | ECE 2.2.0 | | | | -| [Elasticsearch, Kibana, and APM stack pack: 6.8.23](https://download.elastic.co/cloud-enterprise/versions/6.8.23.zip) | ECE 1.1.4 | +| [{{es}}, {{kib}}, and APM stack pack: 6.8.23](https://download.elastic.co/cloud-enterprise/versions/6.8.23.zip) | ECE 1.1.4 | | docker.elastic.co/cloud-assets/elasticsearch:6.8.23-0 | ECE 1.1.4 | | docker.elastic.co/cloud-assets/kibana:6.8.23-0 | ECE 1.1.4 | | docker.elastic.co/cloud-assets/apm:6.8.23-0 | ECE 2.1.0 | | | | -| [Elasticsearch, Kibana, and APM stack pack: 6.8.22](https://download.elastic.co/cloud-enterprise/versions/6.8.22.zip) | ECE 1.1.4 | +| [{{es}}, 
{{kib}}, and APM stack pack: 6.8.22](https://download.elastic.co/cloud-enterprise/versions/6.8.22.zip) | ECE 1.1.4 | | docker.elastic.co/cloud-assets/elasticsearch:6.8.22-0 | ECE 1.1.4 | | docker.elastic.co/cloud-assets/kibana:6.8.22-0 | ECE 1.1.4 | | docker.elastic.co/cloud-assets/apm:6.8.22-0 | ECE 2.1.0 | | | | -| [Elasticsearch, Kibana, and APM stack pack: 6.8.21](https://download.elastic.co/cloud-enterprise/versions/6.8.21.zip) | ECE 1.1.4 | +| [{{es}}, {{kib}}, and APM stack pack: 6.8.21](https://download.elastic.co/cloud-enterprise/versions/6.8.21.zip) | ECE 1.1.4 | | docker.elastic.co/cloud-assets/elasticsearch:6.8.21-0 | ECE 1.1.4 | | docker.elastic.co/cloud-assets/kibana:6.8.21-0 | ECE 1.1.4 | | docker.elastic.co/cloud-assets/apm:6.8.21-0 | ECE 2.1.0 | | | | -| [Elasticsearch, Kibana, and APM stack pack: 6.8.20](https://download.elastic.co/cloud-enterprise/versions/6.8.20.zip) | ECE 1.1.4 | +| [{{es}}, {{kib}}, and APM stack pack: 6.8.20](https://download.elastic.co/cloud-enterprise/versions/6.8.20.zip) | ECE 1.1.4 | | docker.elastic.co/cloud-assets/elasticsearch:6.8.20-0 | ECE 1.1.4 | | docker.elastic.co/cloud-assets/kibana:6.8.20-0 | ECE 1.1.4 | | docker.elastic.co/cloud-assets/apm:6.8.20-0 | ECE 2.1.0 | | | | -| [Elasticsearch, Kibana, and APM stack pack: 6.8.19](https://download.elastic.co/cloud-enterprise/versions/6.8.19.zip) | ECE 1.1.4 | +| [{{es}}, {{kib}}, and APM stack pack: 6.8.19](https://download.elastic.co/cloud-enterprise/versions/6.8.19.zip) | ECE 1.1.4 | | docker.elastic.co/cloud-assets/elasticsearch:6.8.19-0 | ECE 1.1.4 | | docker.elastic.co/cloud-assets/kibana:6.8.19-0 | ECE 1.1.4 | | docker.elastic.co/cloud-assets/apm:6.8.19-0 | ECE 2.1.0 | | | | -| [Elasticsearch, Kibana, and APM stack pack: 6.8.18](https://download.elastic.co/cloud-enterprise/versions/6.8.18.zip) | ECE 1.1.4 | +| [{{es}}, {{kib}}, and APM stack pack: 6.8.18](https://download.elastic.co/cloud-enterprise/versions/6.8.18.zip) | ECE 1.1.4 | | 
docker.elastic.co/cloud-assets/elasticsearch:6.8.18-0 | ECE 1.1.4 | | docker.elastic.co/cloud-assets/kibana:6.8.18-0 | ECE 1.1.4 | | docker.elastic.co/cloud-assets/apm:6.8.18-0 | ECE 2.1.0 | | | | -| [Elasticsearch, Kibana, and APM stack pack: 6.8.17](https://download.elastic.co/cloud-enterprise/versions/6.8.17.zip) | ECE 1.1.4 | +| [{{es}}, {{kib}}, and APM stack pack: 6.8.17](https://download.elastic.co/cloud-enterprise/versions/6.8.17.zip) | ECE 1.1.4 | | docker.elastic.co/cloud-assets/elasticsearch:6.8.17-0 | ECE 1.1.4 | | docker.elastic.co/cloud-assets/kibana:6.8.17-0 | ECE 1.1.4 | | docker.elastic.co/cloud-assets/apm:6.8.17-0 | ECE 2.1.0 | | | | -| [Elasticsearch, Kibana, and APM stack pack: 6.8.16](https://download.elastic.co/cloud-enterprise/versions/6.8.16.zip) | ECE 1.1.4 | +| [{{es}}, {{kib}}, and APM stack pack: 6.8.16](https://download.elastic.co/cloud-enterprise/versions/6.8.16.zip) | ECE 1.1.4 | | docker.elastic.co/cloud-assets/elasticsearch:6.8.16-0 | ECE 1.1.4 | | docker.elastic.co/cloud-assets/kibana:6.8.16-0 | ECE 1.1.4 | | docker.elastic.co/cloud-assets/apm:6.8.16-0 | ECE 2.1.0 | | | | -| [Elasticsearch, Kibana, and APM stack pack: 6.8.15](https://download.elastic.co/cloud-enterprise/versions/6.8.15.zip) | ECE 1.1.4 | +| [{{es}}, {{kib}}, and APM stack pack: 6.8.15](https://download.elastic.co/cloud-enterprise/versions/6.8.15.zip) | ECE 1.1.4 | | docker.elastic.co/cloud-assets/elasticsearch:6.8.15-0 | ECE 1.1.4 | | docker.elastic.co/cloud-assets/kibana:6.8.15-0 | ECE 1.1.4 | | docker.elastic.co/cloud-assets/apm:6.8.15-0 | ECE 2.1.0 | | | | -| [Elasticsearch, Kibana, and APM stack pack: 6.8.14](https://download.elastic.co/cloud-enterprise/versions/6.8.14.zip) | ECE 1.1.4 | +| [{{es}}, {{kib}}, and APM stack pack: 6.8.14](https://download.elastic.co/cloud-enterprise/versions/6.8.14.zip) | ECE 1.1.4 | | docker.elastic.co/cloud-assets/elasticsearch:6.8.14-0 | ECE 1.1.4 | | docker.elastic.co/cloud-assets/kibana:6.8.14-0 | ECE 1.1.4 | | 
docker.elastic.co/cloud-assets/apm:6.8.14-0 | ECE 2.1.0 | | | | -| [Elasticsearch, Kibana, and APM stack pack: 6.8.13](https://download.elastic.co/cloud-enterprise/versions/6.8.13.zip) | ECE 1.1.4 | +| [{{es}}, {{kib}}, and APM stack pack: 6.8.13](https://download.elastic.co/cloud-enterprise/versions/6.8.13.zip) | ECE 1.1.4 | | docker.elastic.co/cloud-assets/elasticsearch:6.8.13-0 | ECE 1.1.4 | | docker.elastic.co/cloud-assets/kibana:6.8.13-0 | ECE 1.1.4 | | docker.elastic.co/cloud-assets/apm:6.8.13-0 | ECE 2.1.0 | | | | -| [Elasticsearch, Kibana, and APM stack pack: 6.8.12](https://download.elastic.co/cloud-enterprise/versions/6.8.12.zip) | ECE 1.1.4 | +| [{{es}}, {{kib}}, and APM stack pack: 6.8.12](https://download.elastic.co/cloud-enterprise/versions/6.8.12.zip) | ECE 1.1.4 | | docker.elastic.co/cloud-assets/elasticsearch:6.8.12-0 | ECE 1.1.4 | | docker.elastic.co/cloud-assets/kibana:6.8.12-0 | ECE 1.1.4 | | docker.elastic.co/cloud-assets/apm:6.8.12-0 | ECE 2.1.0 | | | | -| [Elasticsearch, Kibana, and APM stack pack: 6.8.11](https://download.elastic.co/cloud-enterprise/versions/6.8.11.zip) | ECE 1.1.4 | +| [{{es}}, {{kib}}, and APM stack pack: 6.8.11](https://download.elastic.co/cloud-enterprise/versions/6.8.11.zip) | ECE 1.1.4 | | docker.elastic.co/cloud-assets/elasticsearch:6.8.11-0 | ECE 1.1.4 | | docker.elastic.co/cloud-assets/kibana:6.8.11-0 | ECE 1.1.4 | | docker.elastic.co/cloud-assets/apm:6.8.11-0 | ECE 2.1.0 | | | | -| [Elasticsearch, Kibana, and APM stack pack: 6.8.10](https://download.elastic.co/cloud-enterprise/versions/6.8.10.zip) | ECE 1.1.4 | +| [{{es}}, {{kib}}, and APM stack pack: 6.8.10](https://download.elastic.co/cloud-enterprise/versions/6.8.10.zip) | ECE 1.1.4 | | docker.elastic.co/cloud-assets/elasticsearch:6.8.10-0 | ECE 1.1.4 | | docker.elastic.co/cloud-assets/kibana:6.8.10-0 | ECE 1.1.4 | | docker.elastic.co/cloud-assets/apm:6.8.10-0 | ECE 2.1.0 | | | | -| [Elasticsearch, Kibana, and APM stack pack: 
6.8.9](https://download.elastic.co/cloud-enterprise/versions/6.8.9.zip) | ECE 1.1.4 | +| [{{es}}, {{kib}}, and APM stack pack: 6.8.9](https://download.elastic.co/cloud-enterprise/versions/6.8.9.zip) | ECE 1.1.4 | | docker.elastic.co/cloud-assets/elasticsearch:6.8.9-0 | ECE 1.1.4 | | docker.elastic.co/cloud-assets/kibana:6.8.9-0 | ECE 1.1.4 | | docker.elastic.co/cloud-assets/apm:6.8.9-0 | ECE 2.1.0 | | | | -| [Elasticsearch, Kibana, and APM stack pack: 6.8.8](https://download.elastic.co/cloud-enterprise/versions/6.8.8.zip) | ECE 1.1.4 | +| [{{es}}, {{kib}}, and APM stack pack: 6.8.8](https://download.elastic.co/cloud-enterprise/versions/6.8.8.zip) | ECE 1.1.4 | | docker.elastic.co/cloud-assets/elasticsearch:6.8.8-0 | ECE 1.1.4 | | docker.elastic.co/cloud-assets/kibana:6.8.9-0 | ECE 1.1.4 | | docker.elastic.co/cloud-assets/apm:6.8.8-0 | ECE 2.1.0 | | | | -| [Elasticsearch, Kibana, and APM stack pack: 6.8.7](https://download.elastic.co/cloud-enterprise/versions/6.8.7.zip) | ECE 1.1.4 | +| [{{es}}, {{kib}}, and APM stack pack: 6.8.7](https://download.elastic.co/cloud-enterprise/versions/6.8.7.zip) | ECE 1.1.4 | | docker.elastic.co/cloud-assets/elasticsearch:6.8.7-0 | ECE 1.1.4 | | docker.elastic.co/cloud-assets/kibana:6.8.9-0 | ECE 1.1.4 | | docker.elastic.co/cloud-assets/apm:6.8.7-0 | ECE 2.1.0 | | | | -| [Elasticsearch, Kibana, and APM stack pack: 6.8.6](https://download.elastic.co/cloud-enterprise/versions/6.8.6.zip) | ECE 1.1.4 | +| [{{es}}, {{kib}}, and APM stack pack: 6.8.6](https://download.elastic.co/cloud-enterprise/versions/6.8.6.zip) | ECE 1.1.4 | | docker.elastic.co/cloud-assets/elasticsearch:6.8.6-0 | ECE 1.1.4 | | docker.elastic.co/cloud-assets/kibana:6.8.9-0 | ECE 1.1.4 | | docker.elastic.co/cloud-assets/apm:6.8.6-0 | ECE 2.1.0 | | | | -| [Elasticsearch, Kibana, and APM stack pack: 6.8.5](https://download.elastic.co/cloud-enterprise/versions/6.8.5.zip) | ECE 1.1.4 | +| [{{es}}, {{kib}}, and APM stack pack: 
6.8.5](https://download.elastic.co/cloud-enterprise/versions/6.8.5.zip) | ECE 1.1.4 | | docker.elastic.co/cloud-assets/elasticsearch:6.8.5-0 | ECE 1.1.4 | | docker.elastic.co/cloud-assets/kibana:6.8.9-0 | ECE 1.1.4 | | docker.elastic.co/cloud-assets/apm:6.8.5-0 | ECE 2.1.0 | | | | -| [Elasticsearch, Kibana, and APM stack pack: 6.8.4](https://download.elastic.co/cloud-enterprise/versions/6.8.4.zip) | ECE 1.1.4 | +| [{{es}}, {{kib}}, and APM stack pack: 6.8.4](https://download.elastic.co/cloud-enterprise/versions/6.8.4.zip) | ECE 1.1.4 | | docker.elastic.co/cloud-assets/elasticsearch:6.8.4-0 | ECE 1.1.4 | | docker.elastic.co/cloud-assets/kibana:6.8.9-0 | ECE 1.1.4 | | docker.elastic.co/cloud-assets/apm:6.8.4-0 | ECE 2.1.0 | | | | -| [Elasticsearch, Kibana, and APM stack pack: 6.8.3](https://download.elastic.co/cloud-enterprise/versions/6.8.3.zip) | ECE 1.1.4 | +| [{{es}}, {{kib}}, and APM stack pack: 6.8.3](https://download.elastic.co/cloud-enterprise/versions/6.8.3.zip) | ECE 1.1.4 | | docker.elastic.co/cloud-assets/elasticsearch:6.8.3-0 | ECE 1.1.4 | | docker.elastic.co/cloud-assets/kibana:6.8.9-0 | ECE 1.1.4 | | docker.elastic.co/cloud-assets/apm:6.8.3-0 | ECE 2.1.0 | | | | -| [Elasticsearch, Kibana, and APM stack pack: 6.8.2](https://download.elastic.co/cloud-enterprise/versions/6.8.2.zip) | ECE 1.1.4 | +| [{{es}}, {{kib}}, and APM stack pack: 6.8.2](https://download.elastic.co/cloud-enterprise/versions/6.8.2.zip) | ECE 1.1.4 | | docker.elastic.co/cloud-assets/elasticsearch:6.8.2-0 | ECE 1.1.4 | | docker.elastic.co/cloud-assets/kibana:6.8.9-0 | ECE 1.1.4 | | docker.elastic.co/cloud-assets/apm:6.8.2-0 | ECE 2.1.0 | | | | -| [Elasticsearch, Kibana, and APM stack pack: 6.8.1](https://download.elastic.co/cloud-enterprise/versions/6.8.1.zip) | ECE 1.1.4 | +| [{{es}}, {{kib}}, and APM stack pack: 6.8.1](https://download.elastic.co/cloud-enterprise/versions/6.8.1.zip) | ECE 1.1.4 | | docker.elastic.co/cloud-assets/elasticsearch:6.8.1-0 | ECE 1.1.4 | | 
docker.elastic.co/cloud-assets/kibana:6.8.9-0 | ECE 1.1.4 | | docker.elastic.co/cloud-assets/apm:6.8.1-0 | ECE 2.1.0 | | | | -| [Elasticsearch, Kibana, and APM stack pack: 6.8.0](https://download.elastic.co/cloud-enterprise/versions/6.8.0.zip) | ECE 1.1.4 | +| [{{es}}, {{kib}}, and APM stack pack: 6.8.0](https://download.elastic.co/cloud-enterprise/versions/6.8.0.zip) | ECE 1.1.4 | | docker.elastic.co/cloud-assets/elasticsearch:6.8.0-0 | ECE 1.1.4 | | docker.elastic.co/cloud-assets/kibana:6.8.9-0 | ECE 1.1.4 | | docker.elastic.co/cloud-assets/apm:6.8.0-0 | ECE 2.1.0 | | | | -| [Elasticsearch, Kibana, and APM stack pack: 6.7.2](https://download.elastic.co/cloud-enterprise/versions/6.7.2.zip) | ECE 1.1.4 | +| [{{es}}, {{kib}}, and APM stack pack: 6.7.2](https://download.elastic.co/cloud-enterprise/versions/6.7.2.zip) | ECE 1.1.4 | | docker.elastic.co/cloud-assets/elasticsearch:6.7.2-0 | ECE 1.1.4 | | docker.elastic.co/cloud-assets/kibana:6.7.2-1 | ECE 1.1.4 | | docker.elastic.co/cloud-assets/apm:6.7.2-0 | ECE 2.1.0 | | | | -| [Elasticsearch, Kibana, and APM stack pack: 6.7.1](https://download.elastic.co/cloud-enterprise/versions/6.7.1.zip) | ECE 1.1.4 | +| [{{es}}, {{kib}}, and APM stack pack: 6.7.1](https://download.elastic.co/cloud-enterprise/versions/6.7.1.zip) | ECE 1.1.4 | | docker.elastic.co/cloud-assets/elasticsearch:6.7.1-0 | ECE 1.1.4 | | docker.elastic.co/cloud-assets/kibana:6.7.2-1 | ECE 1.1.4 | | docker.elastic.co/cloud-assets/apm:6.7.1-0 | ECE 2.1.0 | | | | -| [Elasticsearch, Kibana, and APM stack pack: 6.7.0](https://download.elastic.co/cloud-enterprise/versions/6.7.0.zip) | ECE 1.1.4 | +| [{{es}}, {{kib}}, and APM stack pack: 6.7.0](https://download.elastic.co/cloud-enterprise/versions/6.7.0.zip) | ECE 1.1.4 | | docker.elastic.co/cloud-assets/elasticsearch:6.7.0-0 | ECE 1.1.4 | | docker.elastic.co/cloud-assets/kibana:6.7.2-1 | ECE 1.1.4 | | docker.elastic.co/cloud-assets/apm:6.7.0-0 | ECE 2.1.0 | | | | -| [Elasticsearch, Kibana, and APM stack pack: 
6.6.2](https://download.elastic.co/cloud-enterprise/versions/6.6.2.zip) | ECE 1.1.4 | +| [{{es}}, {{kib}}, and APM stack pack: 6.6.2](https://download.elastic.co/cloud-enterprise/versions/6.6.2.zip) | ECE 1.1.4 | | docker.elastic.co/cloud-assets/elasticsearch:6.6.2-0 | ECE 1.1.4 | | docker.elastic.co/cloud-assets/kibana:6.6.2-1 | ECE 1.1.4 | | docker.elastic.co/cloud-assets/apm:6.6.2-0 | ECE 2.1.0 | | | | -| [Elasticsearch, Kibana, and APM stack pack: 6.6.1](https://download.elastic.co/cloud-enterprise/versions/6.6.1.zip) | ECE 1.1.4 | +| [{{es}}, {{kib}}, and APM stack pack: 6.6.1](https://download.elastic.co/cloud-enterprise/versions/6.6.1.zip) | ECE 1.1.4 | | docker.elastic.co/cloud-assets/elasticsearch:6.6.1-0 | ECE 1.1.4 | | docker.elastic.co/cloud-assets/kibana:6.6.2-1 | ECE 1.1.4 | | docker.elastic.co/cloud-assets/apm:6.6.1-0 | ECE 2.1.0 | | | | -| [Elasticsearch, Kibana, and APM stack pack: 6.6.0](https://download.elastic.co/cloud-enterprise/versions/6.6.0.zip) | ECE 1.1.4 | +| [{{es}}, {{kib}}, and APM stack pack: 6.6.0](https://download.elastic.co/cloud-enterprise/versions/6.6.0.zip) | ECE 1.1.4 | | docker.elastic.co/cloud-assets/elasticsearch:6.6.0-0 | ECE 1.1.4 | | docker.elastic.co/cloud-assets/kibana:6.6.2-1 | ECE 1.1.4 | | docker.elastic.co/cloud-assets/apm:6.6.0-0 | ECE 2.1.0 | | | | -| [Elasticsearch, Kibana, and APM stack pack: 6.5.4](https://download.elastic.co/cloud-enterprise/versions/6.5.4.zip) | ECE 1.1.4 | +| [{{es}}, {{kib}}, and APM stack pack: 6.5.4](https://download.elastic.co/cloud-enterprise/versions/6.5.4.zip) | ECE 1.1.4 | | docker.elastic.co/cloud-assets/elasticsearch:6.5.4-0 | ECE 1.1.4 | | docker.elastic.co/cloud-assets/kibana:6.5.4-2 | ECE 1.1.4 | | docker.elastic.co/cloud-assets/apm:6.5.4-0 | ECE 2.1.0 | | | | -| [Elasticsearch, Kibana, and APM stack pack: 6.5.3](https://download.elastic.co/cloud-enterprise/versions/6.5.3.zip) | ECE 1.1.4 | +| [{{es}}, {{kib}}, and APM stack pack: 
6.5.3](https://download.elastic.co/cloud-enterprise/versions/6.5.3.zip) | ECE 1.1.4 | | docker.elastic.co/cloud-assets/elasticsearch:6.5.3-0 | ECE 1.1.4 | | docker.elastic.co/cloud-assets/kibana:6.5.4-2 | ECE 1.1.4 | | docker.elastic.co/cloud-assets/apm:6.5.3-0 | ECE 2.1.0 | | | | -| [Elasticsearch, Kibana, and APM stack pack: 6.5.2](https://download.elastic.co/cloud-enterprise/versions/6.5.2.zip) | ECE 1.1.4 | +| [{{es}}, {{kib}}, and APM stack pack: 6.5.2](https://download.elastic.co/cloud-enterprise/versions/6.5.2.zip) | ECE 1.1.4 | | docker.elastic.co/cloud-assets/elasticsearch:6.5.2-0 | ECE 1.1.4 | | docker.elastic.co/cloud-assets/kibana:6.5.4-2 | ECE 1.1.4 | | docker.elastic.co/cloud-assets/apm:6.5.2-0 | ECE 2.1.0 | | | | -| [Elasticsearch, Kibana, and APM stack pack: 6.5.1](https://download.elastic.co/cloud-enterprise/versions/6.5.1.zip) | ECE 1.1.4 | +| [{{es}}, {{kib}}, and APM stack pack: 6.5.1](https://download.elastic.co/cloud-enterprise/versions/6.5.1.zip) | ECE 1.1.4 | | docker.elastic.co/cloud-assets/elasticsearch:6.5.1-0 | ECE 1.1.4 | | docker.elastic.co/cloud-assets/kibana:6.5.4-2 | ECE 1.1.4 | | docker.elastic.co/cloud-assets/apm:6.5.1-0 | ECE 2.1.0 | | | | -| [Elasticsearch, Kibana, and APM stack pack: 6.5.0](https://download.elastic.co/cloud-enterprise/versions/6.5.0.zip) | ECE 1.1.4 | +| [{{es}}, {{kib}}, and APM stack pack: 6.5.0](https://download.elastic.co/cloud-enterprise/versions/6.5.0.zip) | ECE 1.1.4 | | docker.elastic.co/cloud-assets/elasticsearch:6.5.0-0 | ECE 1.1.4 | | docker.elastic.co/cloud-assets/kibana:6.5.4-2 | ECE 1.1.4 | | docker.elastic.co/cloud-assets/apm:6.5.0-0 | ECE 2.1.0 | | | | -| [Elasticsearch, Kibana, and APM stack pack: 6.4.3](https://download.elastic.co/cloud-enterprise/versions/6.4.3.zip) | ECE 1.1.4 | +| [{{es}}, {{kib}}, and APM stack pack: 6.4.3](https://download.elastic.co/cloud-enterprise/versions/6.4.3.zip) | ECE 1.1.4 | | docker.elastic.co/cloud-assets/elasticsearch:6.4.3-0 | ECE 1.1.4 | | 
docker.elastic.co/cloud-assets/kibana:6.4.3-0 | ECE 1.1.4 | | docker.elastic.co/cloud-assets/apm:6.4.3-0 | ECE 2.1.0 | | | | -| [Elasticsearch, Kibana, and APM stack pack: 6.4.2](https://download.elastic.co/cloud-enterprise/versions/6.4.2.zip) | ECE 1.1.4 | +| [{{es}}, {{kib}}, and APM stack pack: 6.4.2](https://download.elastic.co/cloud-enterprise/versions/6.4.2.zip) | ECE 1.1.4 | | docker.elastic.co/cloud-assets/elasticsearch:6.4.2-0 | ECE 1.1.4 | | docker.elastic.co/cloud-assets/kibana:6.4.2-0 | ECE 1.1.4 | | docker.elastic.co/cloud-assets/apm:6.4.2-0 | ECE 2.1.0 | | | | -| [Elasticsearch, Kibana, and APM stack pack: 6.4.1](https://download.elastic.co/cloud-enterprise/versions/6.4.1.zip) | ECE 1.1.4 | +| [{{es}}, {{kib}}, and APM stack pack: 6.4.1](https://download.elastic.co/cloud-enterprise/versions/6.4.1.zip) | ECE 1.1.4 | | docker.elastic.co/cloud-assets/elasticsearch:6.4.1-0 | ECE 1.1.4 | | docker.elastic.co/cloud-assets/kibana:6.4.1-0 | ECE 1.1.4 | | docker.elastic.co/cloud-assets/apm:6.4.1-0 | ECE 2.1.0 | | | | -| [Elasticsearch, Kibana, and APM stack pack: 6.4.0](https://download.elastic.co/cloud-enterprise/versions/6.4.0.zip) | ECE 1.1.4 | +| [{{es}}, {{kib}}, and APM stack pack: 6.4.0](https://download.elastic.co/cloud-enterprise/versions/6.4.0.zip) | ECE 1.1.4 | | docker.elastic.co/cloud-assets/elasticsearch:6.4.0-0 | ECE 1.1.4 | | docker.elastic.co/cloud-assets/kibana:6.4.0-0 | ECE 1.1.4 | | docker.elastic.co/cloud-assets/apm:6.4.0-0 | ECE 2.1.0 | | | | -| [Elasticsearch, Kibana, and APM stack pack: 6.3.2](https://download.elastic.co/cloud-enterprise/versions/6.3.2.zip) | ECE 1.1.4 | +| [{{es}}, {{kib}}, and APM stack pack: 6.3.2](https://download.elastic.co/cloud-enterprise/versions/6.3.2.zip) | ECE 1.1.4 | | docker.elastic.co/cloud-assets/elasticsearch:6.3.2-0 | ECE 1.1.4 | | docker.elastic.co/cloud-assets/kibana:6.3.2-0 | ECE 1.1.4 | | docker.elastic.co/cloud-assets/apm:6.3.2-0 | ECE 2.1.0 | | | | -| [Elasticsearch, Kibana, and APM stack pack: 
6.3.1](https://download.elastic.co/cloud-enterprise/versions/6.3.1.zip) | ECE 1.1.4 | +| [{{es}}, {{kib}}, and APM stack pack: 6.3.1](https://download.elastic.co/cloud-enterprise/versions/6.3.1.zip) | ECE 1.1.4 | | docker.elastic.co/cloud-assets/elasticsearch:6.3.1-0 | ECE 1.1.4 | | docker.elastic.co/cloud-assets/kibana:6.3.1-0 | ECE 1.1.4 | | docker.elastic.co/cloud-assets/apm:6.3.1-0 | ECE 2.1.0 | | | | -| [Elasticsearch, Kibana, and APM stack pack: 6.3.0](https://download.elastic.co/cloud-enterprise/versions/6.3.0.zip) | ECE 1.1.4 | +| [{{es}}, {{kib}}, and APM stack pack: 6.3.0](https://download.elastic.co/cloud-enterprise/versions/6.3.0.zip) | ECE 1.1.4 | | docker.elastic.co/cloud-assets/elasticsearch:6.3.0-0 | ECE 1.1.4 | | docker.elastic.co/cloud-assets/kibana:6.3.0-0 | ECE 1.1.4 | | docker.elastic.co/cloud-assets/apm:6.3.0-0 | ECE 2.1.0 | | | | -| [Elasticsearch and Kibana stack pack: 6.2.4](https://download.elastic.co/cloud-enterprise/versions/6.2.4.zip) | ECE 1.1.2 | +| [{{es}} and {{kib}} stack pack: 6.2.4](https://download.elastic.co/cloud-enterprise/versions/6.2.4.zip) | ECE 1.1.2 | | docker.elastic.co/cloud-assets/elasticsearch:6.2.4-0 | ECE 1.1.2 | | docker.elastic.co/cloud-assets/kibana:6.2.4-0 | ECE 1.1.2 | | | | -| [Elasticsearch and Kibana stack pack: 6.2.3](https://download.elastic.co/cloud-enterprise/versions/6.2.3.zip) | ECE 1.1.2 | +| [{{es}} and {{kib}} stack pack: 6.2.3](https://download.elastic.co/cloud-enterprise/versions/6.2.3.zip) | ECE 1.1.2 | | docker.elastic.co/cloud-assets/elasticsearch:6.2.3-0 | ECE 1.1.2 | | docker.elastic.co/cloud-assets/kibana:6.2.3-0 | ECE 1.1.2 | | | | -| [Elasticsearch and Kibana stack pack: 6.2.2](https://download.elastic.co/cloud-enterprise/versions/6.2.2.zip) | ECE 1.1.2 | +| [{{es}} and {{kib}} stack pack: 6.2.2](https://download.elastic.co/cloud-enterprise/versions/6.2.2.zip) | ECE 1.1.2 | | docker.elastic.co/cloud-assets/elasticsearch:6.2.2-0 | ECE 1.1.2 | | docker.elastic.co/cloud-assets/kibana:6.2.2-0 
| ECE 1.1.2 | | | | -| [Elasticsearch and Kibana stack pack: 6.1.4](https://download.elastic.co/cloud-enterprise/versions/6.1.4.zip) | ECE 1.1.2 | +| [{{es}} and {{kib}} stack pack: 6.1.4](https://download.elastic.co/cloud-enterprise/versions/6.1.4.zip) | ECE 1.1.2 | | docker.elastic.co/cloud-assets/elasticsearch:6.1.4-0 | ECE 1.1.2 | | docker.elastic.co/cloud-assets/kibana:6.1.4-0 | ECE 1.1.2 | | | | -| [Elasticsearch and Kibana stack pack: 6.1.3](https://download.elastic.co/cloud-enterprise/versions/6.1.3.zip) | ECE 1.1.2 | +| [{{es}} and {{kib}} stack pack: 6.1.3](https://download.elastic.co/cloud-enterprise/versions/6.1.3.zip) | ECE 1.1.2 | | docker.elastic.co/cloud-assets/elasticsearch:6.1.3-0 | ECE 1.1.2 | | docker.elastic.co/cloud-assets/kibana:6.1.3-0 | ECE 1.1.2 | | | | -| [Elasticsearch and Kibana stack pack: 6.0.1](https://download.elastic.co/cloud-enterprise/versions/6.0.1.zip) | ECE 1.1.2 | +| [{{es}} and {{kib}} stack pack: 6.0.1](https://download.elastic.co/cloud-enterprise/versions/6.0.1.zip) | ECE 1.1.2 | | docker.elastic.co/cloud-assets/elasticsearch:6.0.1-0 | ECE 1.1.2 | | docker.elastic.co/cloud-assets/kibana:6.0.1-0 | ECE 1.1.2 | | | | -| [Elasticsearch and Kibana stack pack: 6.0.0](https://download.elastic.co/cloud-enterprise/versions/6.0.0.zip) | ECE 1.1.0 | +| [{{es}} and {{kib}} stack pack: 6.0.0](https://download.elastic.co/cloud-enterprise/versions/6.0.0.zip) | ECE 1.1.0 | | docker.elastic.co/cloud-assets/elasticsearch:6.0.0-0 | ECE 1.1.0 | | docker.elastic.co/cloud-assets/kibana:6.0.0-0 | ECE 1.1.0 | | | | -| [Elasticsearch and Kibana stack pack: 5.6.16](https://download.elastic.co/cloud-enterprise/versions/5.6.16.zip) | ECE 1.1.0 | +| [{{es}} and {{kib}} stack pack: 5.6.16](https://download.elastic.co/cloud-enterprise/versions/5.6.16.zip) | ECE 1.1.0 | | docker.elastic.co/cloud-assets/elasticsearch:5.6.16-0 | ECE 1.1.0 | | docker.elastic.co/cloud-assets/kibana:5.6.16-0 | ECE 1.1.0 | | | | -| [Elasticsearch and Kibana stack pack: 
2.4.6](https://download.elastic.co/cloud-enterprise/versions/2.4.6.zip) | ECE 1.0.0 | +| [{{es}} and {{kib}} stack pack: 2.4.6](https://download.elastic.co/cloud-enterprise/versions/2.4.6.zip) | ECE 1.0.0 | | docker.elastic.co/cloud-assets/elasticsearch:2.4.6-1 | ECE 1.0.0 | | docker.elastic.co/cloud-assets/kibana:4.6.6-2 | ECE 1.0.0 | | | | -| [Elasticsearch and Kibana stack pack: 2.4.5](https://download.elastic.co/cloud-enterprise/versions/2.4.5.zip) | ECE 1.0.0 | +| [{{es}} and {{kib}} stack pack: 2.4.5](https://download.elastic.co/cloud-enterprise/versions/2.4.5.zip) | ECE 1.0.0 | | docker.elastic.co/cloud-assets/elasticsearch:2.4.5-1 | ECE 1.0.0 | | docker.elastic.co/cloud-assets/kibana:4.6.4-0 | ECE 1.0.0 | diff --git a/deploy-manage/deploy/cloud-enterprise/ece-install-offline-no-registry.md b/deploy-manage/deploy/cloud-enterprise/ece-install-offline-no-registry.md index 54d9e696e0..d4abfb047e 100644 --- a/deploy-manage/deploy/cloud-enterprise/ece-install-offline-no-registry.md +++ b/deploy-manage/deploy/cloud-enterprise/ece-install-offline-no-registry.md @@ -11,7 +11,7 @@ mapped_pages: To perform an offline installation without a private Docker registry, you have to download the available Docker Images on each host. -1. On an internet-connected host that has Docker installed, download the [Available Docker Images](ece-install-offline-images.md). Note that for ECE version 4.0, if you want to use Elastic Stack version 9.0 in your deployments, you need to download and make available both the version 8.x and version 9.x Docker images (the version 8.x images are required for system deployments). +1. On an internet-connected host that has Docker installed, download the [Available Docker Images](ece-install-offline-images.md). 
Note that for ECE version 4.0, if you want to use {{stack}} version 9.0 in your deployments, you need to download and make available both the version 8.x and version 9.x Docker images (the version 8.x images are required for system deployments). ```sh docker pull docker.elastic.co/cloud-enterprise/elastic-cloud-enterprise:4.0.0 @@ -24,10 +24,10 @@ To perform an offline installation without a private Docker registry, you have t docker pull docker.elastic.co/cloud-release/elastic-agent-cloud:9.0.0 ``` - For example, for Elastic Cloud Enterprise 4.0.0 and the Elastic Stack versions it shipped with, you need: + For example, for {{ece}} 4.0.0 and the {{stack}} versions it shipped with, you need: - * Elastic Cloud Enterprise 4.0.0 - * Elasticsearch 9.0.0, Kibana 9.0.0, and APM 9.0.0 + * {{ece}} 4.0.0 + * {{es}} 9.0.0, {{kib}} 9.0.0, and APM 9.0.0 2. Create .tar files of the images: @@ -42,7 +42,7 @@ To perform an offline installation without a private Docker registry, you have t docker save -o apm.9.0.0.tar docker.elastic.co/cloud-release/elastic-agent-cloud:9.0.0 ``` -3. Copy the .tar files to a location on your network where they are available to each host where you plan to install Elastic Cloud Enterprise. Alternatively, you can copy the .tar files to each host directly. +3. Copy the .tar files to a location on your network where they are available to each host where you plan to install {{ece}}. Alternatively, you can copy the .tar files to each host directly. 4. On each host, load the images into Docker, replacing `FILE_PATH` with the correct path to the .tar files: ```sh @@ -63,7 +63,7 @@ To perform an offline installation without a private Docker registry, you have t curl -L -O https://download.elastic.co/cloud/elastic-cloud-enterprise.sh ``` -7.
Copy the installation script to each host where you plan to install {{ece}} or make it available on your network. 8. Invoke the installation script on each host: ::::{note} diff --git a/deploy-manage/deploy/cloud-enterprise/ece-install-offline-with-registry.md b/deploy-manage/deploy/cloud-enterprise/ece-install-offline-with-registry.md index cf8acd0ba2..a244e8cc0d 100644 --- a/deploy-manage/deploy/cloud-enterprise/ece-install-offline-with-registry.md +++ b/deploy-manage/deploy/cloud-enterprise/ece-install-offline-with-registry.md @@ -17,7 +17,7 @@ Installing ECE on multiple hosts with your own registry server is simpler, becau As part of the ECE [high availability](ece-ha.md) strategy, it’s a good idea to make sure that your Docker registry server is available to all ECE allocators, so that it can continue to be accessed in the event of a network partition or zone outage. Allocators attempting to start instances requiring Docker images that have not yet been pulled from a custom Docker registry will fail to start if the registry is unavailable. :::: -2. On an internet-connected host that has Docker installed, download the [Available Docker Images](ece-install-offline-images.md) and push them to your private Docker registry. Note that for ECE version 4.0, if you want to use Elastic Stack version 9.0 in your deployments, you need to download and make available both the version 8.x and version 9.x Docker images. +2. On an internet-connected host that has Docker installed, download the [Available Docker Images](ece-install-offline-images.md) and push them to your private Docker registry. Note that for ECE version 4.0, if you want to use {{stack}} version 9.0 in your deployments, you need to download and make available both the version 8.x and version 9.x Docker images. 
```sh docker pull docker.elastic.co/cloud-enterprise/elastic-cloud-enterprise:4.0.0 @@ -30,10 +30,10 @@ Installing ECE on multiple hosts with your own registry server is simpler, becau docker pull docker.elastic.co/cloud-release/elastic-agent-cloud:9.0.0 ``` - For example, for Elastic Cloud Enterprise 4.0.0 and the Elastic Stack versions it shipped with, you need: + For example, for {{ece}} 4.0.0 and the {{stack}} versions it shipped with, you need: - * Elastic Cloud Enterprise 4.0.0 - * Elasticsearch 9.0.0, Kibana 9.0.0, APM 9.0.0 + * {{ece}} 4.0.0 + * {{es}} 9.0.0, {{kib}} 9.0.0, APM 9.0.0 :::{important} Enterprise Search is not available in versions 9.0+. @@ -71,7 +71,7 @@ Installing ECE on multiple hosts with your own registry server is simpler, becau curl -L -O https://download.elastic.co/cloud/elastic-cloud-enterprise.sh ``` -6. Copy the installation script to each host where you plan to install Elastic Cloud Enterprise or make it available on your network. +6. Copy the installation script to each host where you plan to install {{ece}} or make it available on your network. 7. Invoke the installation script on each host with the `--docker-registry REGISTRY` parameter, replacing `REGISTRY` with your actual registry URL (for example `my.private.repo:5000`): diff --git a/deploy-manage/deploy/cloud-enterprise/ece-integrations-server-api-example.md b/deploy-manage/deploy/cloud-enterprise/ece-integrations-server-api-example.md index f4515ada77..ba9f320a28 100644 --- a/deploy-manage/deploy/cloud-enterprise/ece-integrations-server-api-example.md +++ b/deploy-manage/deploy/cloud-enterprise/ece-integrations-server-api-example.md @@ -8,14 +8,14 @@ mapped_pages: # Enable Integrations Server through the API [ece-integrations-server-api-example] -This example demonstrates how to use the Elastic Cloud Enterprise RESTful API to create a deployment with Integrations Server enabled. 
+This example demonstrates how to use the {{ece}} RESTful API to create a deployment with Integrations Server enabled. For more information on how to manage Integrations Server from the UI, check [Manage your Integrations Server](manage-integrations-server.md) ## Requirements [ece_requirements_4] -Integrations Server can be enabled only on new deployments, starting with Elastic Stack version 8.0. +Integrations Server can be enabled only on new deployments, starting with {{stack}} version 8.0. It’s possible to enable Integrations Server on an existing deployment with version 8.0 only if [APM & Fleet Server](https://www.elastic.co/guide/en/cloud-enterprise/current/ece-manage-apm-and-fleet.html) hasn’t been previously enabled on the deployment. diff --git a/deploy-manage/deploy/cloud-enterprise/ece-load-balancers.md b/deploy-manage/deploy/cloud-enterprise/ece-load-balancers.md index 63340ca37a..9a1173ff22 100644 --- a/deploy-manage/deploy/cloud-enterprise/ece-load-balancers.md +++ b/deploy-manage/deploy/cloud-enterprise/ece-load-balancers.md @@ -13,16 +13,16 @@ mapped_pages: Use the following recommendations when configuring your load balancer: * **High availability**: The exact number of load balancers depends on the utilization rate for your clusters. In a highly available installation, use at least two load balancers for each availability zone in your installation. -* **Inbound ports**: Load balancers require that inbound traffic is open on the ports used by Elasticsearch, Kibana, and the transport client. +* **Inbound ports**: Load balancers require that inbound traffic is open on the ports used by {{es}}, {{kib}}, and the transport client. * **X-found-cluster**: ECE proxy uses the header `X-found-cluster` to know which cluster’s UUID (Universally Unique Identifier) the traffic needs to be routed to. If the load balancer rewrites a URL, make sure the HTTP header `X-Found-Cluster` gets added. For example: `X-found-cluster: d59109b8d542c5c4845679e597810796`. 
* **X-Forwarded-For**: Configure load balancers to strip inbound `X-Forwarded-For` headers and to replace them with the client source IP as seen by the load balancer. This is required to prevent clients from spoofing their IP addresses. {{ece}} uses `X-Forwarded-For` for logging client IP addresses and, if you have implemented IP filtering, for traffic management. * **HTTP**: Use *HTTP mode* for ports 9200/9243 (HTTP traffic to clusters) and also for ports 12400/12443 (adminconsole traffic). * **TCP**: Use *TCP mode* for ports 9300/9343 (transport client traffic to clusters) and the load balancer should enable the proxy protocol support. * **TCP**: Use *TCP mode* for port 9400 for TLS authenticated passthrough between clusters for cross-cluster search (CCS) and replication (CCR), if used. The load balancer should **not** enable the proxy protocol support. * **TCP**: Use *HTTP mode* for port 9443 for API key authenticated traffic between clusters for cross-cluster search (CCS) and replication (CCR), if used. Make sure that all load balancers or proxies sending this traffic to deployments hosted on {{ece}} are sending HTTP/1.1 traffic. -* **Deployment traffic and Admin traffic**: Create separate load balancers for Deployment traffic (Elasticsearch and Kibana traffic) and Admin traffic (Cloud UI Console and Admin API). This separation allows you to migrate to a large installation topology without reconfiguring or creating an additional load balancer. +* **Deployment traffic and Admin traffic**: Create separate load balancers for Deployment traffic ({{es}} and {{kib}} traffic) and Admin traffic (Cloud UI Console and Admin API). This separation allows you to migrate to a large installation topology without reconfiguring or creating an additional load balancer. * **Traffic across proxies**: Balance traffic evenly across all proxies. 
Proxies are constantly updated with the internal routing information on how to direct requests to clusters on allocators that are hosting their nodes across zones. Proxies prefer cluster nodes in their local zone and route requests primarily to nodes in their own zone. -* **Network**: Use network that is fast enough from a latency and throughput perspective to be considered local for the Elasticsearch clustering requirement. There shouldn’t be a major advantage in "preferring local" from a load balancer perspective (rather than a proxy perspective), it might even lead to potential hot spotting on specific proxies, so it should be avoided. +* **Network**: Use a network that is fast enough from a latency and throughput perspective to be considered local for the {{es}} clustering requirement. There shouldn’t be a major advantage in "preferring local" from a load balancer perspective (rather than a proxy perspective); it might even lead to potential hot spotting on specific proxies, so it should be avoided. * **TCP Timeout**: Use the default (or required) TCP timeout value from the cloud provider and do not to set a timeout for the load balancer. diff --git a/deploy-manage/deploy/cloud-enterprise/ece-manage-capacity.md b/deploy-manage/deploy/cloud-enterprise/ece-manage-capacity.md index a92bcc15ca..1c65c3d476 100644 --- a/deploy-manage/deploy/cloud-enterprise/ece-manage-capacity.md +++ b/deploy-manage/deploy/cloud-enterprise/ece-manage-capacity.md @@ -8,7 +8,7 @@ mapped_pages: # Manage your allocators capacity [ece-manage-capacity] -In {{ece}} (ECE), every host is a runner. Depending on the size of your platform, runners can have [one or more roles](ece-roles.md): Coordinator, director, proxy, and allocator. While planning the capacity of your ECE installation, you have to properly size the capacity for all roles. However, the allocator role deserves particular attention, as it hosts the Elasticsearch, Kibana, and APM nodes, and the relevant services.
+In {{ece}} (ECE), every host is a runner. Depending on the size of your platform, runners can have [one or more roles](ece-roles.md): Coordinator, director, proxy, and allocator. While planning the capacity of your ECE installation, you have to properly size the capacity for all roles. However, the allocator role deserves particular attention, as it hosts the {{es}}, {{kib}}, and APM nodes, and the relevant services. This section focuses on the allocator role, and explains how to plan its capacity in terms of memory, CPU, `processors` setting, and storage. @@ -85,15 +85,15 @@ Those percentages represent the upper limit of the % of the total CPU resources In addition to the [CPU quotas](#ece-alloc-cpu), the `processors` setting also plays a relevant role. -The allocated `processors` setting originates from Elasticsearch and is responsible for calculating your [thread pools](elasticsearch://reference/elasticsearch/configuration-reference/thread-pool-settings.md#node.processors). While the CPU quota defines the percentage of the total CPU resources of an allocator that are assigned to an instance, the allocated `processors` define how the thread pools are calculated in Elasticsearch, and therefore how many concurrent search and indexing requests an instance can process. In other words, the CPU ratio defines how fast a single task can be completed, while the `processors` setting defines how many different tasks can be completed at the same time. +The allocated `processors` setting originates from {{es}} and is responsible for calculating your [thread pools](elasticsearch://reference/elasticsearch/configuration-reference/thread-pool-settings.md#node.processors). While the CPU quota defines the percentage of the total CPU resources of an allocator that are assigned to an instance, the allocated `processors` define how the thread pools are calculated in {{es}}, and therefore how many concurrent search and indexing requests an instance can process. 
In other words, the CPU ratio defines how fast a single task can be completed, while the `processors` setting defines how many different tasks can be completed at the same time. -We rely on Elasticsearch and the `-XX:ActiveProcessorCount` JVM setting to automatically detect the allocated `processors`. +We rely on {{es}} and the `-XX:ActiveProcessorCount` JVM setting to automatically detect the allocated `processors`. -In earlier versions of ECE and Elasticsearch, the [Elasticsearch processors](elasticsearch://reference/elasticsearch/configuration-reference/thread-pool-settings.md#node.processors) setting was used to configure the allocated `processors` according to the following formula: +In earlier versions of ECE and {{es}}, the [{{es}} processors](elasticsearch://reference/elasticsearch/configuration-reference/thread-pool-settings.md#node.processors) setting was used to configure the allocated `processors` according to the following formula: `Math.min(16,Math.max(2,(16*instanceCapacity*1.0/1024/64).toInt))` -The following table gives an overview of the allocated `processors` that are used to calculate the Elasticsearch [thread pools](elasticsearch://reference/elasticsearch/configuration-reference/thread-pool-settings.md) based on the preceding formula: +The following table gives an overview of the allocated `processors` that are used to calculate the {{es}} [thread pools](elasticsearch://reference/elasticsearch/configuration-reference/thread-pool-settings.md) based on the preceding formula: | instance size | vCPU | | --- | --- | @@ -104,14 +104,14 @@ The following table gives an overview of the allocated `processors` that are use | 16384 | 4 | | 32768 | 8 | -This table also provides a rough indication of what the auto-detected value could be on newer versions of ECE and Elasticsearch. +This table also provides a rough indication of what the auto-detected value could be on newer versions of ECE and {{es}}. 
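Read as a sketch, the legacy formula quoted above translates directly to integer arithmetic (`.toInt` truncates, which integer division reproduces; the argument is the instance size in MB, as in the table):

```sh
# Sketch of the legacy allocated-processors formula quoted above:
#   Math.min(16, Math.max(2, (16 * instanceCapacity * 1.0 / 1024 / 64).toInt))
allocated_processors() {
  p=$(( 16 * $1 / 1024 / 64 ))
  if [ "$p" -lt 2 ]; then p=2; fi
  if [ "$p" -gt 16 ]; then p=16; fi
  echo "$p"
}

allocated_processors 16384   # 4, matching the table row above
allocated_processors 32768   # 8
```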
## Storage [ece-alloc-storage] ECE has specific [hardware prerequisites](ece-hardware-prereq.md) for storage. Disk space is consumed by system logs, container overhead, and deployment data. -The main factor for selecting a disk quota is the deployment data, that is, data from your Elasticsearch, Kibana, and APM nodes. The biggest portion of data is consumed by the Elasticsearch nodes. +The main factor for selecting a disk quota is the deployment data, that is, data from your {{es}}, {{kib}}, and APM nodes. The biggest portion of data is consumed by the {{es}} nodes. ::::{note} ECE uses [XFS](ece-software-prereq.md#ece-xfs) to enforce specific disk space quotas to control the disk consumption for the deployment nodes running on your allocator. diff --git a/deploy-manage/deploy/cloud-enterprise/ece-networking-prereq.md b/deploy-manage/deploy/cloud-enterprise/ece-networking-prereq.md index 394f5a015e..738631ef79 100644 --- a/deploy-manage/deploy/cloud-enterprise/ece-networking-prereq.md +++ b/deploy-manage/deploy/cloud-enterprise/ece-networking-prereq.md @@ -29,9 +29,9 @@ When there are multiple hosts for each role, the inbound networking and ports ca | --- | --- | --- | --- | | | All | 22 | Installation and troubleshooting SSH access only (TCP)
| | 2 | Coordinator | 12300/12343, 12400/12443 | Admin API access (HTTP/HTTPS)
| -| 3 | Proxy | 9200, 9243 | Elasticsearch REST API. 9200 is plain text and 9243 is with TLS, also required by load balancers
| -| 3 | Proxy | 9300, 9343 | Elasticsearch transport client. 9300 is plain text and 9343 is with TLS, also required by load balancers
| -| 3 | Proxy | 9400, 9443 | Elasticsearch Cross Cluster Search and Cross Cluster Replication with TLS authentication (9400) or API key authentication (9443), also required by load balancers. Can be blocked if [CCR/CCS](../../remote-clusters/ece-enable-ccs.md) is not used.
| +| 3 | Proxy | 9200, 9243 | {{es}} REST API. 9200 is plain text and 9243 is with TLS, also required by load balancers
| +| 3 | Proxy | 9300, 9343 | {{es}} transport client. 9300 is plain text and 9343 is with TLS, also required by load balancers
| +| 3 | Proxy | 9400, 9443 | {{es}} Cross Cluster Search and Cross Cluster Replication with TLS authentication (9400) or API key authentication (9443), also required by load balancers. Can be blocked if [CCR/CCS](../../remote-clusters/ece-enable-ccs.md) is not used.
| | 7 | Coordinator | 12400/12443 | Cloud UI console to API (HTTP/HTTPS)
| **Inbound traffic from other ECE hosts** @@ -41,13 +41,13 @@ When there are multiple hosts for each role, the inbound networking and ports ca | 1 | Director | 2112 | ZooKeeper ensemble discovery/joining (TCP)
| | 1 | Director | 12898-12908, 13898-13908 | ZooKeeper leader and election activity | | 4 | Director | 12191-12201 | Client forwarder to ZooKeeper, one port per director (TLS tunnels)
| -| 5 | Allocator | 19000-19999 | Elasticsearch node to node and Proxy to Elasticsearch for CCR/CCS (Node Transport 6.x+/TLS 6.x+)
| +| 5 | Allocator | 19000-19999 | {{es}} node to node and Proxy to {{es}} for CCR/CCS (Node Transport 6.x+/TLS 6.x+)
| | 7 | Coordinator | 22191-22195 | Connections to initial coordinator from allocators and proxies, one port per coordinator, up to five (TCP)
| -| 9 | Proxy | 9200/9243, 9300/9343 | Kibana and Elasticsearch (HTTPS)
| -| 10 | Allocator | 18000-18999 | Constructor to Elasticsearch cluster (HTTPS)
| +| 9 | Proxy | 9200/9243, 9300/9343 | {{kib}} and {{es}} (HTTPS)
| +| 10 | Allocator | 18000-18999 | Constructor to {{es}} cluster (HTTPS)
| | 11 | Allocator | 18000-18999/20000-20999 | Proxy to Elasticsearch/Kibana/APM Server instance (HTTPS/Transport Client 6.x+/TLS 6.x+)
| | | Allocator | 21000-21999 | APM Server (Instance Monitoring)
| -| 12 | Allocator | 23000-23999 | Elasticsearch node to node and Proxy to Elasticsearch for CCR/CCS using Remote Cluster Security
| +| 12 | Allocator | 23000-23999 | {{es}} node to node and Proxy to {{es}} for CCR/CCS using Remote Cluster Security
| | 13 | Allocator | 14000 | Proxy to Allocator service endpoint (HTTPS)
| | 14 | Proxy | 14043 | API to Proxy for Allocator service traffic (HTTPS)
| @@ -71,10 +71,10 @@ Outbound traffic must also permit connections to the [snapshot repositories](../ A typical ECE installation should be contained within a single data center. We recommend that ECE installations not span different data centers, due to variations in networking latency and bandwidth that cannot be controlled. -Installation of ECE across multiple data centers might be feasible with sufficiently low latency and high bandwidth, with some restrictions around what we can support. Based on our experience with our hosted Elastic Cloud service, the following is required: +Installation of ECE across multiple data centers might be feasible with sufficiently low latency and high bandwidth, with some restrictions around what we can support. Based on our experience with our hosted {{ecloud}} service, the following is required: * A typical network latency between the data centers of less than 10ms round-trip time during pings * A network bandwidth of at least 10 Gigabit -If you choose to deploy a single ECE installation across multiple data centers, you might need to contend with additional disruptions due to bandwidth or latency issues. Both ECE and Elasticsearch are designed to be resilient to networking issues, but this resiliency is intended to handle exceptions and should not be depended on as part of normal operations. If Elastic determines during a support case that an issue is related to an installation across multiple data centers, the recommended resolution will be to consolidate your installation into a single data center, with further support limited until consolidation is complete. +If you choose to deploy a single ECE installation across multiple data centers, you might need to contend with additional disruptions due to bandwidth or latency issues. Both ECE and {{es}} are designed to be resilient to networking issues, but this resiliency is intended to handle exceptions and should not be depended on as part of normal operations. 
If Elastic determines during a support case that an issue is related to an installation across multiple data centers, the recommended resolution will be to consolidate your installation into a single data center, with further support limited until consolidation is complete. diff --git a/deploy-manage/deploy/cloud-enterprise/ece-regional-deployment-aliases.md b/deploy-manage/deploy/cloud-enterprise/ece-regional-deployment-aliases.md index 9d0118dd31..d816875a73 100644 --- a/deploy-manage/deploy/cloud-enterprise/ece-regional-deployment-aliases.md +++ b/deploy-manage/deploy/cloud-enterprise/ece-regional-deployment-aliases.md @@ -8,7 +8,7 @@ mapped_pages: # Custom endpoint aliases [ece-regional-deployment-aliases] -Custom aliases for your deployment endpoints on Elastic Cloud Enterprise allow you to have predictable, human-readable URLs that can be shared easily. +Custom aliases for your deployment endpoints on {{ece}} allow you to have predictable, human-readable URLs that can be shared easily. ::::{important} Before setting up your custom alias, your platform administrator must enable the feature. Check [Enable custom endpoint aliases](enable-custom-endpoint-aliases.md) for more information. @@ -41,7 +41,7 @@ To remove an alias from your deployment, or if you want to re-assign an alias to ## Using the custom endpoint URL [ece-using-regional-deployment-alias] -To use your new custom endpoint URL to access your Elastic products, note that each has its own alias to use in place of the default application UUID. For example, if you configured the custom endpoint alias for your deployment to be `test-alias`, the corresponding alias for the Elasticsearch cluster in that deployment is `test-alias.es`. +To use your new custom endpoint URL to access your Elastic products, note that each has its own alias to use in place of the default application UUID. 
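As a sketch only (the deployment domain name below is an invented placeholder, and the exact endpoint format depends on your installation's configured deployment domain name), the application-specific host is composed from the alias, the application subdomain, and the deployment domain name:

```sh
# Sketch: compose an application-specific custom endpoint host from an
# alias and an application subdomain (es, kb, apm, ent).
# DEPLOYMENT_DOMAIN is hypothetical; substitute your own.
DEPLOYMENT_DOMAIN=cloud.example.com

alias_endpoint() {
  echo "$1.$2.$DEPLOYMENT_DOMAIN"
}

alias_endpoint test-alias es   # test-alias.es.cloud.example.com
alias_endpoint test-alias kb   # test-alias.kb.cloud.example.com
```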
For example, if you configured the custom endpoint alias for your deployment to be `test-alias`, the corresponding alias for the {{es}} cluster in that deployment is `test-alias.es`. ::::{note} You can get the application-specific custom endpoint alias by selecting **Copy endpoint** for that product. It should contain a subdomain for each application type, for example `es`, `kb`, `apm`, or `ent`. @@ -67,7 +67,7 @@ While the `TransportClient` is deprecated, your custom endpoint aliases still wo * As part of the host name: - Similar to HTTP, you can find the fully qualified host on the deployment overview page by selecting **Copy endpoint** next to Elasticsearch. Make sure to remove the unnecessary `https://` prefix as well as the trailing HTTP port. + Similar to HTTP, you can find the fully qualified host on the deployment overview page by selecting **Copy endpoint** next to {{es}}. Make sure to remove the unnecessary `https://` prefix as well as the trailing HTTP port. * As part of the **Settings**: diff --git a/deploy-manage/deploy/cloud-enterprise/ece-roles.md b/deploy-manage/deploy/cloud-enterprise/ece-roles.md index 241e35377f..933125abfb 100644 --- a/deploy-manage/deploy/cloud-enterprise/ece-roles.md +++ b/deploy-manage/deploy/cloud-enterprise/ece-roles.md @@ -8,7 +8,7 @@ mapped_pages: # Separation of roles [ece-roles] -The separation of roles is required to group components on ECE and prevent conflicting workloads. When you install Elastic Cloud Enterprise on the first host, it is assigned many different host roles: Allocator, coordinator, director, and proxy. This role assignment is required to bring up your initial deployments. In a production environment, some of these roles need to be separated, as their loads scale differently and can create conflicting demands when placed on the same hosts. 
There are also certain [security implications that are addressed by separating roles](../../security/secure-your-elastic-cloud-enterprise-installation.md#ece-securing-vectors). +The separation of roles is required to group components on ECE and prevent conflicting workloads. When you install {{ece}} on the first host, it is assigned many different host roles: Allocator, coordinator, director, and proxy. This role assignment is required to bring up your initial deployments. In a production environment, some of these roles need to be separated, as their loads scale differently and can create conflicting demands when placed on the same hosts. There are also certain [security implications that are addressed by separating roles](../../security/secure-your-elastic-cloud-enterprise-installation.md#ece-securing-vectors). Roles that should not be held by the same host: diff --git a/deploy-manage/deploy/cloud-enterprise/ece-software-prereq.md b/deploy-manage/deploy/cloud-enterprise/ece-software-prereq.md index 4eeb5fc253..a4055ebddc 100644 --- a/deploy-manage/deploy/cloud-enterprise/ece-software-prereq.md +++ b/deploy-manage/deploy/cloud-enterprise/ece-software-prereq.md @@ -18,14 +18,14 @@ To install ECE, make sure you prepare your environment with the following softwa ## Supported Linux kernel [ece-linux-kernel] -Elastic Cloud Enterprise requires 3.10.0-1160.31.1 or later on RHEL. +{{ece}} requires 3.10.0-1160.31.1 or later on RHEL. We recommend using kernel 4.15.x or later on Ubuntu. To check your kernel version, run `uname -r`. ::::{note} -Elastic Cloud Enterprise is not supported on Linux distributions that use [cgroups](https://man7.org/linux/man-pages/man7/cgroups.7.html) version 2. +{{ece}} is not supported on Linux distributions that use [cgroups](https://man7.org/linux/man-pages/man7/cgroups.7.html) version 2. 
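The two checks above can be sketched as a small preflight script: comparing `uname -r` against a minimum version with `sort -V`, and detecting a cgroups v2 (unified) hierarchy by the presence of `/sys/fs/cgroup/cgroup.controllers` — an assumption about the host layout that holds on typical systemd-based distributions:

```sh
#!/bin/sh
# Sketch: preflight checks derived from the prerequisites above.
kernel_at_least() {
  # True when version $2 sorts at or after minimum version $1.
  min=$1; have=${2%%-*}   # drop any "-generic"/"-1160..." suffix
  [ "$(printf '%s\n%s\n' "$min" "$have" | sort -V | head -n 1)" = "$min" ]
}

if kernel_at_least 4.15 "$(uname -r)"; then
  echo "kernel OK"
fi

# A pure cgroups v2 hierarchy exposes cgroup.controllers at its root.
if [ -f /sys/fs/cgroup/cgroup.controllers ]; then
  echo "cgroups v2 detected: not supported by ECE"
fi
```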
:::: @@ -54,7 +54,7 @@ ECE requires using a supported combination of Linux distribution and Docker or P ::::{note} -Elastic Cloud Enterprise does not support Amazon Linux. +{{ece}} does not support Amazon Linux. :::: @@ -70,9 +70,9 @@ free -h ## XFS [ece-xfs] -XFS is required if you want to use disk space quotas for Elasticsearch data directories. +XFS is required if you want to use disk space quotas for {{es}} data directories. -Disk space quotas set a limit on the amount of disk space an Elasticsearch cluster node can use. Currently, quotas are calculated by a static ratio of 1:32, which means that for every 1 GB of RAM a cluster is given, a cluster node is allowed to consume 32 GB of disk space. +Disk space quotas set a limit on the amount of disk space an {{es}} cluster node can use. Currently, quotas are calculated by a static ratio of 1:32, which means that for every 1 GB of RAM a cluster is given, a cluster node is allowed to consume 32 GB of disk space. ::::{important} You must use XFS and have quotas enabled on all allocators, otherwise disk usage won’t display correctly. diff --git a/deploy-manage/deploy/cloud-enterprise/ece-wildcard-dns.md b/deploy-manage/deploy/cloud-enterprise/ece-wildcard-dns.md index 16d9788ba2..9cc136bdad 100644 --- a/deploy-manage/deploy/cloud-enterprise/ece-wildcard-dns.md +++ b/deploy-manage/deploy/cloud-enterprise/ece-wildcard-dns.md @@ -12,9 +12,9 @@ mapped_pages: We do not recommend using `ip.es.io` for production systems. Please set up your own domain name and DNS resolver for production. We do not guarantee uptime with `ip.es.io`. :::: -By default, Elastic Cloud Enterprise uses the external `ip.es.io` service provided by Elastic to resolve virtual Elasticsearch cluster host names in compliance with RFC1918. The service works by resolving host names of the form `.ip.es.io` to ``. 
In the case of Elastic Cloud Enterprise, each cluster is assigned a virtual host name of the form `..ip.es.io:`, such as `6dfc65aae62341e18a8b7692dcc97186.10.8.156.132.ip.es.io:9243`. The `ip.es.io` service simply resolves the virtual host name of the cluster to the proxy address which is specified during installation, `10.8.156.132` in our example, so that client requests are sent to the proxy. The proxy then extracts the cluster ID from the virtual host name of the cluster and uses its internal routing table to route the request to the right allocator. +By default, {{ece}} uses the external `ip.es.io` service provided by Elastic to resolve virtual {{es}} cluster host names in compliance with RFC1918. The service works by resolving host names of the form `<proxy IP>.ip.es.io` to `<proxy IP>`. In the case of {{ece}}, each cluster is assigned a virtual host name of the form `<cluster UUID>.<proxy IP>.ip.es.io:<port>`, such as `6dfc65aae62341e18a8b7692dcc97186.10.8.156.132.ip.es.io:9243`. The `ip.es.io` service simply resolves the virtual host name of the cluster to the proxy address which is specified during installation, `10.8.156.132` in our example, so that client requests are sent to the proxy. The proxy then extracts the cluster ID from the virtual host name of the cluster and uses its internal routing table to route the request to the right allocator. -The `ip.es.io` service is provided to help you evaluate Elastic Cloud Enterprise without having to set up DNS records for your environment. You must set up a wildcard DNS record for your production system. You typically set up a wildcard DNS record that resolves to the proxy host or to a load balancer if you set up multiple proxies fronted by a load balancer. You can create both a wildcard DNS entry for your endpoints and a wildcard TLS/SSL certificate, so that you can create multiple clusters without the need for further DNS or TSL/SSL modifications.
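What the `ip.es.io` service does with a virtual host name can be sketched in two lines of shell parameter expansion, using the example host name above (port omitted, since DNS names carry no port):

```sh
# Sketch: recover the proxy address that ip.es.io resolves a virtual
# cluster host name to. The first label is the cluster ID; the remaining
# labels before .ip.es.io are the proxy IP.
proxy_ip_for() {
  rest=${1%.ip.es.io}   # drop the service suffix
  echo "${rest#*.}"     # drop the leading cluster ID label
}

proxy_ip_for 6dfc65aae62341e18a8b7692dcc97186.10.8.156.132.ip.es.io
# prints 10.8.156.132, the proxy address from the example above
```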
Simply configure your DNS to point to your load balancers and install your certificates on them, so that communication with the cluster is secure. +The `ip.es.io` service is provided to help you evaluate {{ece}} without having to set up DNS records for your environment. You must set up a wildcard DNS record for your production system. You typically set up a wildcard DNS record that resolves to the proxy host or to a load balancer if you set up multiple proxies fronted by a load balancer. You can create both a wildcard DNS entry for your endpoints and a wildcard TLS/SSL certificate, so that you can create multiple clusters without the need for further DNS or TLS/SSL modifications. Simply configure your DNS to point to your load balancers and install your certificates on them, so that communication with the cluster is secure. A wildcard certificate is enabled based on the deployment domain name. For more information on modifying the deployment domain name, check [Configure endpoints](change-endpoint-urls.md). The deployment domain name also determines the endpoint URLs that are displayed in the Cloud UI. diff --git a/deploy-manage/deploy/cloud-enterprise/edit-stack-settings-apm.md b/deploy-manage/deploy/cloud-enterprise/edit-stack-settings-apm.md index cbfbebd156..20a043329f 100644 --- a/deploy-manage/deploy/cloud-enterprise/edit-stack-settings-apm.md +++ b/deploy-manage/deploy/cloud-enterprise/edit-stack-settings-apm.md @@ -34,12 +34,12 @@ Users running {{stack}} versions 7.16 or 7.17 need to manually configure TLS. Th Pick one of the following options: 1. Upload and configure a publicly signed {{es}} TLS certificates. Check [Encrypt traffic in clusters with a self-managed Fleet Server](/reference/fleet/secure-connections.md) for details. -2. Change the {{es}} hosts where {{agent}}s send data from the default public URL, to the internal URL. In {{kib}}, navigate to **Fleet** and select the **Elastic Cloud agent policy**.
Click **Fleet settings** and update the {{es}} hosts URL. For example, if the current URL is `https://123abc.us-central1.gcp.foundit.no:9244`, change it to `http://123abc.containerhost:9244`. +2. Change the {{es}} hosts where {{agent}}s send data from the default public URL, to the internal URL. In {{kib}}, navigate to **Fleet** and select the **{{ecloud}} agent policy**. Click **Fleet settings** and update the {{es}} hosts URL. For example, if the current URL is `https://123abc.us-central1.gcp.foundit.no:9244`, change it to `http://123abc.containerhost:9244`. ## Edit standalone APM settings (legacy) [ece-edit-apm-standalone-settings-ece] -Elastic Cloud Enterprise supports most of the legacy APM settings. Through a YAML editor in the console, you can append your APM Server properties to the `apm-server.yml` file. Your changes to the configuration file are read on startup. +{{ece}} supports most of the legacy APM settings. Through a YAML editor in the console, you can append your APM Server properties to the `apm-server.yml` file. Your changes to the configuration file are read on startup. ::::{important} Be aware that some settings could break your cluster if set incorrectly and that the syntax might change between major versions. Before upgrading, be sure to review the full list of the [latest APM settings and syntax](/solutions/observability/apps/configure-apm-server.md). @@ -59,7 +59,7 @@ To change APM settings: 6. Select **Save changes**. ::::{note} -If a setting is not supported by Elastic Cloud Enterprise, you get an error message when you try to save. We suggest changing one setting with each save, so you know which one is not supported. +If a setting is not supported by {{ece}}, you get an error message when you try to save. We suggest changing one setting with each save, so you know which one is not supported. 
:::: diff --git a/deploy-manage/deploy/cloud-enterprise/edit-stack-settings-elasticsearch.md b/deploy-manage/deploy/cloud-enterprise/edit-stack-settings-elasticsearch.md index 4c517d2a18..e439acf349 100644 --- a/deploy-manage/deploy/cloud-enterprise/edit-stack-settings-elasticsearch.md +++ b/deploy-manage/deploy/cloud-enterprise/edit-stack-settings-elasticsearch.md @@ -7,12 +7,12 @@ mapped_pages: - https://www.elastic.co/guide/en/cloud-enterprise/current/ece-add-user-settings.html --- -# Add Elasticsearch user settings [ece-add-user-settings] +# Add {{es}} user settings [ece-add-user-settings] -Change how Elasticsearch runs by providing your own user settings. User settings are appended to the `elasticsearch.yml` configuration file for your cluster and provide custom configuration options. Elastic Cloud Enterprise supports many of the user settings for the version of Elasticsearch that your cluster is running. +Change how {{es}} runs by providing your own user settings. User settings are appended to the `elasticsearch.yml` configuration file for your cluster and provide custom configuration options. {{ece}} supports many of the user settings for the version of {{es}} that your cluster is running. ::::{note} -ECE blocks the configuration of certain settings that could break your cluster if misconfigured, including some zen discovery and security settings. For a list of settings that are generally safe in cloud environments, refer to the [Elasticsearch configuration reference](elasticsearch://reference/elasticsearch/configuration-reference/index.md). +ECE blocks the configuration of certain settings that could break your cluster if misconfigured, including some zen discovery and security settings. For a list of settings that are generally safe in cloud environments, refer to the [{{es}} configuration reference](elasticsearch://reference/elasticsearch/configuration-reference/index.md). 
:::: To change {{es}} user settings: @@ -28,7 +28,7 @@ To change {{es}} user settings: 6. Select **Save changes**. ::::{warning} - If you encounter the **Edit elasticsearch.yml** carets, be sure to make your changes on all Elasticsearch node types. + If you encounter the **Edit elasticsearch.yml** carets, be sure to make your changes on all {{es}} node types. :::: ## Example: enable email notifications from Gmail [ece_enable_email_notifications_from_gmail] @@ -36,5 +36,5 @@ To change {{es}} user settings: You can configure email notifications to Gmail for a user that you specify. For details, refer to [Configuring email actions](../../../explore-analyze/alerts-cases/watcher/actions-email.md). ::::{important} -Before you add the `xpack.notification.email*` setting in Elasticsearch user settings, make sure you add the account SMTP password to the keystore as a [secret value](../../../deploy-manage/security/secure-settings.md). +Before you add the `xpack.notification.email*` setting in {{es}} user settings, make sure you add the account SMTP password to the keystore as a [secret value](../../../deploy-manage/security/secure-settings.md). :::: diff --git a/deploy-manage/deploy/cloud-enterprise/edit-stack-settings-enterprise.md b/deploy-manage/deploy/cloud-enterprise/edit-stack-settings-enterprise.md index 103fb14f2e..16623c44fa 100644 --- a/deploy-manage/deploy/cloud-enterprise/edit-stack-settings-enterprise.md +++ b/deploy-manage/deploy/cloud-enterprise/edit-stack-settings-enterprise.md @@ -15,7 +15,7 @@ Enterprise Search is not available in {{stack}} 9.0+. Change how Enterprise Search runs by providing your own user settings. User settings are appended to the `ent-search.yml` configuration file for your instance and provide custom configuration options. 
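As a hedged aside on the `xpack.notification.email*` keystore note in the {{es}} user settings file above: the user settings involved look roughly like the sketch below. The account name `gmail_account` and the `<username>` placeholder are illustrative, and the exact keys for your version are in the linked email actions guide; the SMTP password itself goes into the keystore, not into this file.

```yaml
# Illustrative elasticsearch.yml user settings for Gmail notifications.
# The password is NOT set here; add it to the keystore first as the
# secure setting:
#   xpack.notification.email.account.gmail_account.smtp.secure_password
xpack.notification.email.account:
    gmail_account:
        profile: gmail
        smtp:
            auth: true
            starttls.enable: true
            host: smtp.gmail.com
            port: 587
            user: <username>
```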
-Refer to the [Configuration settings reference](https://www.elastic.co/guide/en/enterprise-search/current/configuration.html#configuration-file) in the Enterprise Search documentation for a full list of configuration settings. Settings supported on Elastic Cloud Enterprise are indicated by an {{ecloud}} icon (![logo cloud](https://doc-icons.s3.us-east-2.amazonaws.com/logo_cloud.svg "Supported on {{ecloud}}")). +Refer to the [Configuration settings reference](https://www.elastic.co/guide/en/enterprise-search/current/configuration.html#configuration-file) in the Enterprise Search documentation for a full list of configuration settings. Settings supported on {{ece}} are indicated by an {{ecloud}} icon (![logo cloud](https://doc-icons.s3.us-east-2.amazonaws.com/logo_cloud.svg "Supported on {{ecloud}}")). To add user settings: @@ -30,7 +30,7 @@ To add user settings: 6. Select **Save changes**. ::::{note} -If a setting is not supported by Elastic Cloud Enterprise, an error message displays when you try to save your settings. +If a setting is not supported by {{ece}}, an error message displays when you try to save your settings. :::: diff --git a/deploy-manage/deploy/cloud-enterprise/edit-stack-settings-kibana.md b/deploy-manage/deploy/cloud-enterprise/edit-stack-settings-kibana.md index 5484b911fd..90dde023e7 100644 --- a/deploy-manage/deploy/cloud-enterprise/edit-stack-settings-kibana.md +++ b/deploy-manage/deploy/cloud-enterprise/edit-stack-settings-kibana.md @@ -7,15 +7,15 @@ mapped_pages: - https://www.elastic.co/guide/en/cloud-enterprise/current/ece-manage-kibana-settings.html --- -# Add Kibana user settings [ece-manage-kibana-settings] +# Add {{kib}} user settings [ece-manage-kibana-settings] -Elastic Cloud Enterprise supports most of the standard Kibana settings. Through a YAML editor in the console, you can append Kibana properties to the `kibana.yml` file. Your changes to the configuration file are read on startup. 
+{{ece}} supports most of the standard {{kib}} settings. Through a YAML editor in the console, you can append {{kib}} properties to the `kibana.yml` file. Your changes to the configuration file are read on startup. ::::{important} -Be aware that some settings that could break your cluster if set incorrectly and that the syntax might change between major versions. Before upgrading, be sure to review the full list of the [latest Kibana settings and syntax](kibana://reference/configuration-reference/general-settings.md). +Be aware that some settings could break your cluster if set incorrectly and that the syntax might change between major versions. Before upgrading, be sure to review the full list of the [latest {{kib}} settings and syntax](kibana://reference/configuration-reference/general-settings.md). :::: -To change Kibana settings: +To change {{kib}} settings: 1. [Log into the Cloud UI](../../../deploy-manage/deploy/cloud-enterprise/log-into-cloud-ui.md). 2. On the **Deployments** page, select your deployment. @@ -27,15 +27,15 @@ To change Kibana settings: 5. Update the user settings. 6. Select **Save changes**. -Saving your changes initiates a configuration plan change that restarts Kibana automatically for you. +Saving your changes initiates a configuration plan change that restarts {{kib}} automatically for you. ::::{note} -If a setting is not supported by Elastic Cloud Enterprise, you will get an error message when you try to save. We suggest changing one setting with each save, so you know which one is not supported. +If a setting is not supported by {{ece}}, you will get an error message when you try to save. We suggest changing one setting with each save, so you know which one is not supported. :::: ::::{tip} -If you have a license from 2018 or earlier, you might receive a warning that your cluster license is about to expire. Don’t panic, it isn’t really. Elastic Cloud Enterprise manages the cluster licenses so that you don’t have to.
In rare cases, such as when a cluster is overloaded, it can take longer for Elastic Cloud Enterprise to reapply the cluster license. If you have a license from 2019 and later, you’ll receive a warning only when your full platform license is about to expire, which you’ll need to renew. +If you have a license from 2018 or earlier, you might receive a warning that your cluster license is about to expire. Don’t panic, it isn’t really about to expire: {{ece}} manages the cluster licenses so that you don’t have to. In rare cases, such as when a cluster is overloaded, it can take longer for {{ece}} to reapply the cluster license. If you have a license from 2019 and later, you’ll receive a warning only when your full platform license is about to expire, and you’ll need to renew it. :::: diff --git a/deploy-manage/deploy/cloud-enterprise/enable-custom-endpoint-aliases.md index d502a5d2e5..40d80680de 100644 --- a/deploy-manage/deploy/cloud-enterprise/enable-custom-endpoint-aliases.md +++ b/deploy-manage/deploy/cloud-enterprise/enable-custom-endpoint-aliases.md @@ -25,8 +25,8 @@ After installing or upgrading to version 2.10 or later: :::: - * For Elasticsearch, the certificate needs to allow for ***.es.** - * For Kibana, the certificate needs to allow for ***.kb.** + * For {{es}}, the certificate needs to allow for ***.es.** + * For {{kib}}, the certificate needs to allow for ***.kb.** * For APM, the certificate needs to allow for ***.apm.** * For Fleet, the certificate needs to allow for ***.fleet.** * For Universal Profiling, the certificate needs to allow for ***.profiling.** and ***.symbols.** diff --git a/deploy-manage/deploy/cloud-enterprise/fresh-installation-of-ece-using-podman-hosts.md index 4a5770dcf5..078f587b67 100644 --- a/deploy-manage/deploy/cloud-enterprise/fresh-installation-of-ece-using-podman-hosts.md +++
b/deploy-manage/deploy/cloud-enterprise/fresh-installation-of-ece-using-podman-hosts.md @@ -42,7 +42,7 @@ This section provides guidelines and recommendations to install ECE using a Podm 2. Add additional Podman hosts - Refer to the official [Install Elastic Cloud Enterprise on an additional host](install-ece-on-additional-hosts.md) and [ECE installation](./install-ece-procedures.md) documentation to adapt the command line parameters to your environment including fetching the role token. + Refer to the official [Install {{ece}} on an additional host](install-ece-on-additional-hosts.md) and [ECE installation](./install-ece-procedures.md) documentation to adapt the command line parameters to your environment, including fetching the role token. [JVM heap sizes](ece-jvm.md) describes recommended JVM options. diff --git a/deploy-manage/deploy/cloud-enterprise/generate-roles-tokens.md index c5c7c9f1a8..f9da468b59 100644 --- a/deploy-manage/deploy/cloud-enterprise/generate-roles-tokens.md +++ b/deploy-manage/deploy/cloud-enterprise/generate-roles-tokens.md @@ -33,13 +33,13 @@ Persistent token The permitted roles are the same as those you can [assign in the Cloud UI](./assign-roles-to-hosts.md): `allocator` -: Allocates the available computing resources to Elasticsearch nodes or Kibana instances. In larger installations, a majority of the machines will be allocators. +: Allocates the available computing resources to {{es}} nodes or {{kib}} instances. In larger installations, a majority of the machines will be allocators. `coordinator` : Serves as a distributed coordination system and resource scheduler. `proxy` -: Manages communication between a user and an Elasticsearch or Kibana instance. +: Manages communication between a user and an {{es}} or {{kib}} instance. `director` : Manages the ZooKeeper datastore. This role is typically shared with the coordinator role.
In production deployments it can be separated from a coordinator. diff --git a/deploy-manage/deploy/cloud-enterprise/identify-deployment-scenario.md b/deploy-manage/deploy/cloud-enterprise/identify-deployment-scenario.md index e333326ae2..e76d1f341a 100644 --- a/deploy-manage/deploy/cloud-enterprise/identify-deployment-scenario.md +++ b/deploy-manage/deploy/cloud-enterprise/identify-deployment-scenario.md @@ -47,7 +47,7 @@ This type of deployment is recommended for many production setups. You need: ::: * Monitor the load on proxies and make sure the volume of user requests routed by the proxies does not affect the resources available to the ECE management services. -* Note that the large-sized Elastic Cloud Enterprise installation separates the allocator and proxy roles from the director and coordinator roles (ECE management services). +* Note that the large-sized {{ece}} installation separates the allocator and proxy roles from the director and coordinator roles (ECE management services). You can proceed with this scenario and [install ECE](./install.md). @@ -69,6 +69,6 @@ This type of deployment is recommended for deployments with significant overall :alt: A large installation with nine to twelve hosts across three availability zones ::: -Note that the large-sized Elastic Cloud Enterprise installation separates the allocator and proxy roles from the director and coordinator roles (ECE management services). +Note that the large-sized {{ece}} installation separates the allocator and proxy roles from the director and coordinator roles (ECE management services). You can proceed with this scenario and [install ECE](./install.md). 
\ No newline at end of file diff --git a/deploy-manage/deploy/cloud-enterprise/install-ece-on-additional-hosts.md b/deploy-manage/deploy/cloud-enterprise/install-ece-on-additional-hosts.md index 562f4a55c0..ec34cff2e4 100644 --- a/deploy-manage/deploy/cloud-enterprise/install-ece-on-additional-hosts.md +++ b/deploy-manage/deploy/cloud-enterprise/install-ece-on-additional-hosts.md @@ -8,12 +8,12 @@ mapped_pages: # Install ECE on additional hosts [ece-installing-additional] -You can install Elastic Cloud Enterprise on additional hosts if you want: +You can install {{ece}} on additional hosts if you want: -* More processing capacity for Elasticsearch nodes in your deployment. You can add a host by installing Elastic Cloud Enterprise on it and then [assign the allocator role](assign-roles-to-hosts.md) in the Cloud UI. +* More processing capacity for {{es}} nodes in your deployment. You can add a host by installing {{ece}} on it and then [assign the allocator role](assign-roles-to-hosts.md) in the Cloud UI. * To [create a deployment](create-deployment.md) that is fault-tolerant, with enough resources available to support multiple availability zones. -To install Elastic Cloud Enterprise on additional hosts: +To install {{ece}} on additional hosts: 1. Download and run the installation script on each additional host. Include the `--coordinator-host HOST_IP` and `--roles-token 'TOKEN'` parameters provided to you when you installed on the first host, otherwise the new host will be rejected. As well, `VERSION_NAME` must match your current ECE installation version for the process to succeed. 
@@ -23,7 +23,7 @@ To install Elastic Cloud Enterprise on additional hosts: --roles-token 'TOKEN' --cloud-enterprise-version VERSION_NAME ``` - If you are creating a larger Elastic Cloud Enterprise installation: + If you are creating a larger {{ece}} installation: * Make your installation [fault tolerant or highly available](ece-ha.md) by determining the failure domain for each host and using the `--availability-zone ZONE_NAME` parameter to specify the name of an [availability zone](ece-ha.md). For production systems, hosts should go into three different availability zones. For example, including the parameter `--availability-zone ece-zone-1c` when you install on additional hosts will assign each host to availability zone `ece-zone-1c`. * To simplify the steps for assigning roles so that you do not have to change the roles in the Cloud UI later on, include the `--roles` parameter. For example, to bring up additional allocators to scale out your installation, specify the `--roles "allocator"` parameter. You do need to [generate a roles token](generate-roles-tokens.md) that has the right permissions for this to work; the token generated during the installation on the first host will not suffice. @@ -31,5 +31,5 @@ To install Elastic Cloud Enterprise on additional hosts: After installation completes, additional hosts come online with some roles assigned to them already. If you did not specify additional roles with the `--roles` parameter, you can [assign new roles to nodes](assign-roles-to-hosts.md) in the Cloud UI later. -For automation purposes, you can set up a DNS hostname for the coordinator host. Setting up a round robin CNAME should be enough to ensure that the value does not need to change in automation scripts. Any one coordinator can be used, including the initial coordinator (the first host you installed Elastic Cloud Enterprise on). +For automation purposes, you can set up a DNS hostname for the coordinator host. 
Setting up a round robin CNAME should be enough to ensure that the value does not need to change in automation scripts. Any one coordinator can be used, including the initial coordinator (the first host you installed {{ece}} on). diff --git a/deploy-manage/deploy/cloud-enterprise/install-ece-procedures.md b/deploy-manage/deploy/cloud-enterprise/install-ece-procedures.md index d1b7833df0..74a0b5536a 100644 --- a/deploy-manage/deploy/cloud-enterprise/install-ece-procedures.md +++ b/deploy-manage/deploy/cloud-enterprise/install-ece-procedures.md @@ -7,7 +7,7 @@ applies_to: # ECE installation procedures -Choose the guide for the Elastic Cloud Enterprise [deployment scenario](/deploy-manage/deploy/cloud-enterprise/identify-deployment-scenario.md) that best fits your business needs: +Choose the guide for the {{ece}} [deployment scenario](/deploy-manage/deploy/cloud-enterprise/identify-deployment-scenario.md) that best fits your business needs: * [Deploy a small installation](deploy-small-installation.md): For development, test, and small-scale use cases. * [Deploy a medium installation](deploy-medium-installation.md): For many production setups. diff --git a/deploy-manage/deploy/cloud-enterprise/install.md b/deploy-manage/deploy/cloud-enterprise/install.md index b9c60f5ed2..db31408105 100644 --- a/deploy-manage/deploy/cloud-enterprise/install.md +++ b/deploy-manage/deploy/cloud-enterprise/install.md @@ -11,11 +11,11 @@ mapped_pages: navigation_title: Install ECE --- -# Install Elastic Cloud Enterprise [ece-installing] +# Install {{ece}} [ece-installing] You can deploy {{ece}} (ECE) on public or private clouds, virtual machines, or on-premises. -In ECE, a host refers to any server, VM, or cloud instance where the ECE software is installed. An ECE platform consists of multiple hosts working together to orchestrate Elastic Stack applications. +In ECE, a host refers to any server, VM, or cloud instance where the ECE software is installed. 
An ECE platform consists of multiple hosts working together to orchestrate {{stack}} applications. For public cloud deployments, you can choose from the following providers: diff --git a/deploy-manage/deploy/cloud-enterprise/keep-track-of-deployment-activity.md b/deploy-manage/deploy/cloud-enterprise/keep-track-of-deployment-activity.md index cad919efd6..c1373f315b 100644 --- a/deploy-manage/deploy/cloud-enterprise/keep-track-of-deployment-activity.md +++ b/deploy-manage/deploy/cloud-enterprise/keep-track-of-deployment-activity.md @@ -38,7 +38,7 @@ Summary : A summary of what change was applied, when the change was performed, and how long it took. Applied by -: The user who submitted the configuration change. `System` indicates configuration changes initiated automatically by the Elastic Cloud Enterprise platform. +: The user who submitted the configuration change. `System` indicates configuration changes initiated automatically by the {{ece}} platform. Actions : Select **Details** for an expanded view of each step in the configuration change, including the start time, end time, and duration. You can select **Reapply** to re-run the configuration change. diff --git a/deploy-manage/deploy/cloud-enterprise/log-into-cloud-ui.md b/deploy-manage/deploy/cloud-enterprise/log-into-cloud-ui.md index 15cca25c89..cfa549fd8e 100644 --- a/deploy-manage/deploy/cloud-enterprise/log-into-cloud-ui.md +++ b/deploy-manage/deploy/cloud-enterprise/log-into-cloud-ui.md @@ -17,7 +17,7 @@ To access the Cloud UI in a web browser: https://FIRST_HOST:12443 ``` - Secure access through the HTTPS protocol is available with certificates generated during the installation of Elastic Cloud Enterprise, but will prompt you with a warning in your browser. To avoid this warning, you can add [your own TLS/SSL security certificates](../../security/secure-your-elastic-cloud-enterprise-installation/manage-security-certificates.md). 
If you are on AWS and can’t access the Cloud UI, [check if the URL points to a private IP address](../../../troubleshoot/deployments/cloud-enterprise/common-issues.md#ece-aws-private-ip). + Secure access through the HTTPS protocol is available with certificates generated during the installation of {{ece}}, but will prompt you with a warning in your browser. To avoid this warning, you can add [your own TLS/SSL security certificates](../../security/secure-your-elastic-cloud-enterprise-installation/manage-security-certificates.md). If you are on AWS and can’t access the Cloud UI, [check if the URL points to a private IP address](../../../troubleshoot/deployments/cloud-enterprise/common-issues.md#ece-aws-private-ip). 2. Log in as user `admin` with the credentials provided. 3. On your first login, agree to the software license agreement to continue. You can opt out of sharing some basic usage statistics with Elastic. [Here is what we collect.](statistics-collected-by-cloud-enterprise.md) @@ -25,6 +25,6 @@ To access the Cloud UI in a web browser: The Cloud UI displays the available deployments and some important information about them. Three deployments are always shown: * `admin-console-elasticsearch`: Backs the Cloud UI itself. -* `logging-and-metrics`: Collects logs and performance metrics for your ECE installation. You must not use this deployment to index monitoring data from your own Elasticsearch clusters or use it to index data from Beats and Logstash. Always create a separate, dedicated monitoring deployment for your own use. +* `logging-and-metrics`: Collects logs and performance metrics for your ECE installation. You must not use this deployment to index monitoring data from your own {{es}} clusters or use it to index data from Beats and Logstash. Always create a separate, dedicated monitoring deployment for your own use. * `security`: Stores all security-related configurations. 
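The Cloud UI endpoint in the log-into-cloud-ui diff above (port 12443, installer-generated certificates) is also what automation scripts talk to. A minimal sketch, assuming `FIRST_HOST` stays a placeholder and that you have not yet installed your own TLS certificates, in which case verification has to be relaxed, mirroring the browser warning the docs mention:

```python
import ssl

# Port 12443 is the Cloud UI port from the docs above.
CLOUD_UI_PORT = 12443

def cloud_ui_url(host: str, port: int = CLOUD_UI_PORT) -> str:
    """Build the Cloud UI URL for a coordinator host (e.g. FIRST_HOST)."""
    return f"https://{host}:{port}"

def installer_cert_context() -> ssl.SSLContext:
    # The certificates generated during ECE installation are not signed by a
    # public CA, so a script must either trust them explicitly or skip
    # verification. Prefer loading your own CA bundle once custom
    # TLS/SSL certificates are installed.
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    return ctx

print(cloud_ui_url("FIRST_HOST"))  # → https://FIRST_HOST:12443
```

Any coordinator host works here; as noted further down in this diff, a round robin CNAME for the coordinators keeps such scripts stable.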
diff --git a/deploy-manage/deploy/cloud-enterprise/manage-elastic-stack-versions.md b/deploy-manage/deploy/cloud-enterprise/manage-elastic-stack-versions.md index 52255db525..5ca5dc305e 100644 --- a/deploy-manage/deploy/cloud-enterprise/manage-elastic-stack-versions.md +++ b/deploy-manage/deploy/cloud-enterprise/manage-elastic-stack-versions.md @@ -6,255 +6,255 @@ mapped_pages: - https://www.elastic.co/guide/en/cloud-enterprise/current/ece-manage-elastic-stack.html --- -# Manage Elastic Stack versions [ece-manage-elastic-stack] +# Manage {{stack}} versions [ece-manage-elastic-stack] -Elastic Cloud Enterprise ships with a number of different versions of the Elastic Stack containing Elasticsearch and Kibana. Periodically, you might need to manage Elastic Stack versions for one of the following reasons: +{{ece}} ships with a number of different versions of the {{stack}} containing {{es}} and {{kib}}. Periodically, you might need to manage {{stack}} versions for one of the following reasons: -* To add new versions of the Elastic Stack as they become available -* To obtain information about existing Elastic Stack versions -* To update existing versions of the Elastic Stack -* To add the Elastic Stack versions that shipped with a version of ECE that you upgraded to +* To add new versions of the {{stack}} as they become available +* To obtain information about existing {{stack}} versions +* To update existing versions of the {{stack}} +* To add the {{stack}} versions that shipped with a version of ECE that you upgraded to -New or updated versions of the Elastic Stack must be prepared to work with Elastic Cloud Enterprise and are provided as packs that you can download. +New or updated versions of the {{stack}} must be prepared to work with {{ece}} and are provided as packs that you can download. ::::{important} -Elasticsearch 7.8 and later comes with Index Lifecycle Management (ILM) always enabled. 
Before upgrading to 7.8 or later, to avoid any unpredictable behavior it is important to configure hot-warm clusters on Elastic Cloud Enterprise with ILM rather than index curation. Check [migrate to index lifecycle management](../../../manage-data/lifecycle/index-lifecycle-management.md) for existing clusters, and [configure index management](https://www.elastic.co/guide/en/cloud-enterprise/current/ece-configure-index-management.html) for new clusters. +{{es}} 7.8 and later comes with Index Lifecycle Management (ILM) always enabled. Before upgrading to 7.8 or later, to avoid any unpredictable behavior it is important to configure hot-warm clusters on {{ece}} with ILM rather than index curation. Check [migrate to index lifecycle management](../../../manage-data/lifecycle/index-lifecycle-management.md) for existing clusters, and [configure index management](https://www.elastic.co/guide/en/cloud-enterprise/current/ece-configure-index-management.html) for new clusters. :::: -## Most recent Elastic Stack packs [ece_most_recent_elastic_stack_packs] +## Most recent {{stack}} packs [ece_most_recent_elastic_stack_packs] :::{important} Enterprise Search is not available in versions 9.0+. ::: -The following are the most recently released Elastic Stack packs for version 8.x, 7.x, and 6.x, respectively: +The following are the most recently released {{stack}} packs for version 8.x, 7.x, and 6.x, respectively: $$$ece-elastic-stack-stackpacks-recent$$$ | Required downloads | Minimum required ECE version | | --- | --- | -| [Elasticsearch, Kibana, and APM stack pack: 9.0.0](https://download.elastic.co/cloud-enterprise/versions/9.0.0.zip) | ECE 4.0.0 | -| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.18.0](https://download.elastic.co/cloud-enterprise/versions/8.18.0.zip) | ECE 3.0.0
(+ Docker 20.10.10+ required for 8.16+) | -| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.17.27](https://download.elastic.co/cloud-enterprise/versions/7.17.27.zip) | ECE 2.2.2 | -| [Elasticsearch, Kibana, and APM stack pack: 6.8.23](https://download.elastic.co/cloud-enterprise/versions/6.8.23.zip) | ECE 1.1.4 | +| [{{es}}, {{kib}}, and APM stack pack: 9.0.0](https://download.elastic.co/cloud-enterprise/versions/9.0.0.zip) | ECE 4.0.0 | +| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.18.0](https://download.elastic.co/cloud-enterprise/versions/8.18.0.zip) | ECE 3.0.0
(+ Docker 20.10.10+ required for 8.16+) | +| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.17.27](https://download.elastic.co/cloud-enterprise/versions/7.17.27.zip) | ECE 2.2.2 | +| [{{es}}, {{kib}}, and APM stack pack: 6.8.23](https://download.elastic.co/cloud-enterprise/versions/6.8.23.zip) | ECE 1.1.4 | -## All available Elastic Stack packs [ece-elastic-stack-stackpacks] +## All available {{stack}} packs [ece-elastic-stack-stackpacks] -Following is the full list of available packs containing Elastic Stack versions. Note that Enterprise Search was introduced with ECE 2.6.0 and requires that version or higher. +Following is the full list of available packs containing {{stack}} versions. Note that Enterprise Search was introduced with ECE 2.6.0 and requires that version or higher. ::::{dropdown} **Expand to view the full list** | Required downloads | Minimum required ECE version | | --- | --- | -| [Elasticsearch, Kibana, and APM stack pack: 9.0.0](https://download.elastic.co/cloud-enterprise/versions/9.0.0.zip) | ECE 4.0.0 | -| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.18.0](https://download.elastic.co/cloud-enterprise/versions/8.18.0.zip) | ECE 3.0.0
(+ docker 20.10.10+ required for 8.16+) | -| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.17.4](https://download.elastic.co/cloud-enterprise/versions/8.17.4.zip) | ECE 3.0.0
(+ docker 20.10.10+ required for 8.16+) | -| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.17.3](https://download.elastic.co/cloud-enterprise/versions/8.17.3.zip) | ECE 3.0.0
(+ docker 20.10.10+ required for 8.16+) | -| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.17.2](https://download.elastic.co/cloud-enterprise/versions/8.17.2.zip) | ECE 3.0.0
(+ docker 20.10.10+ required for 8.16+) | -| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.17.1](https://download.elastic.co/cloud-enterprise/versions/8.17.1.zip) | ECE 3.0.0
(+ docker 20.10.10+ required for 8.16+) | -| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.17.0](https://download.elastic.co/cloud-enterprise/versions/8.17.0.zip) | ECE 3.0.0
(+ docker 20.10.10+ required for 8.16+) | -| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.16.3](https://download.elastic.co/cloud-enterprise/versions/8.16.3.zip) | ECE 3.0.0
(+ Docker 20.10.10+ required for 8.16+) | -| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.16.2](https://download.elastic.co/cloud-enterprise/versions/8.16.2.zip) | ECE 3.0.0
(+ Docker 20.10.10+ required for 8.16+) | -| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.16.1](https://download.elastic.co/cloud-enterprise/versions/8.16.1.zip) | ECE 3.0.0
(+ Docker 20.10.10+ required for 8.16+) | -| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.16.0](https://download.elastic.co/cloud-enterprise/versions/8.16.0.zip) | ECE 3.0.0
(+ Docker 20.10.10+ required for 8.16+) | -| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.15.5](https://download.elastic.co/cloud-enterprise/versions/8.15.5.zip) | ECE 3.0.0 | -| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.15.4](https://download.elastic.co/cloud-enterprise/versions/8.15.4.zip) | ECE 3.0.0 | -| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.15.3](https://download.elastic.co/cloud-enterprise/versions/8.15.3.zip) | ECE 3.0.0 | -| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.15.2](https://download.elastic.co/cloud-enterprise/versions/8.15.2.zip) | ECE 3.0.0 | -| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.15.1](https://download.elastic.co/cloud-enterprise/versions/8.15.1.zip) | ECE 3.0.0 | -| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.15.0](https://download.elastic.co/cloud-enterprise/versions/8.15.0.zip) | ECE 3.0.0 | -| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.14.3](https://download.elastic.co/cloud-enterprise/versions/8.14.3.zip) | ECE 3.0.0 | -| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.14.2](https://download.elastic.co/cloud-enterprise/versions/8.14.2.zip) | ECE 3.0.0 | -| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.14.1](https://download.elastic.co/cloud-enterprise/versions/8.14.1.zip) | ECE 3.0.0 | -| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.14.0](https://download.elastic.co/cloud-enterprise/versions/8.14.0.zip) | ECE 3.0.0 | -| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.13.4](https://download.elastic.co/cloud-enterprise/versions/8.13.4.zip) | ECE 3.0.0 | -| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.13.3](https://download.elastic.co/cloud-enterprise/versions/8.13.3.zip) | ECE 3.0.0 | -| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 
8.13.2](https://download.elastic.co/cloud-enterprise/versions/8.13.2.zip) | ECE 3.0.0 | -| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.13.1](https://download.elastic.co/cloud-enterprise/versions/8.13.1.zip) | ECE 3.0.0 | -| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.13.0](https://download.elastic.co/cloud-enterprise/versions/8.13.0.zip) | ECE 3.0.0 | -| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.12.2](https://download.elastic.co/cloud-enterprise/versions/8.12.2.zip) | ECE 3.0.0 | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.12.1](https://download.elastic.co/cloud-enterprise/versions/8.12.1.zip) | ECE 3.0.0 | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.12.0](https://download.elastic.co/cloud-enterprise/versions/8.12.0.zip) | ECE 3.0.0 | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.11.4](https://download.elastic.co/cloud-enterprise/versions/8.11.4.zip) | ECE 3.0.0 | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.11.3](https://download.elastic.co/cloud-enterprise/versions/8.11.3.zip) | ECE 3.0.0 | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.11.2](https://download.elastic.co/cloud-enterprise/versions/8.11.2.zip) | ECE 3.0.0 | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.11.1](https://download.elastic.co/cloud-enterprise/versions/8.11.1.zip) | ECE 3.0.0 | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.11.0](https://download.elastic.co/cloud-enterprise/versions/8.11.0.zip) | ECE 3.0.0 | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.10.4](https://download.elastic.co/cloud-enterprise/versions/8.10.4.zip) | ECE 3.0.0 | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.10.3](https://download.elastic.co/cloud-enterprise/versions/8.10.3.zip) | ECE 3.0.0 | -| [ Elasticsearch, Kibana, APM, and Enterprise Search 
stack pack: 8.10.2](https://download.elastic.co/cloud-enterprise/versions/8.10.2.zip) | ECE 3.0.0 | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.10.1](https://download.elastic.co/cloud-enterprise/versions/8.10.1.zip) | ECE 3.0.0 | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.10.0](https://download.elastic.co/cloud-enterprise/versions/8.10.0.zip) | ECE 3.0.0 | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.9.2](https://download.elastic.co/cloud-enterprise/versions/8.9.2.zip) | ECE 3.0.0 | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.9.1](https://download.elastic.co/cloud-enterprise/versions/8.9.1.zip) | ECE 3.0.0 | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.9.0](https://download.elastic.co/cloud-enterprise/versions/8.9.0.zip) | ECE 3.0.0 | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.8.2](https://download.elastic.co/cloud-enterprise/versions/8.8.2.zip) | ECE 3.0.0 | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.8.1](https://download.elastic.co/cloud-enterprise/versions/8.8.1.zip) | ECE 3.0.0 | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.8.0](https://download.elastic.co/cloud-enterprise/versions/8.8.0.zip) | ECE 3.0.0 | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.7.1](https://download.elastic.co/cloud-enterprise/versions/8.7.1.zip) | ECE 3.0.0 | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.7.0](https://download.elastic.co/cloud-enterprise/versions/8.7.0.zip) | ECE 3.0.0 | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.6.2](https://download.elastic.co/cloud-enterprise/versions/8.6.2.zip) | ECE 3.0.0 | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.6.1](https://download.elastic.co/cloud-enterprise/versions/8.6.1.zip) | ECE 3.0.0 | -| [ Elasticsearch, Kibana, APM, and Enterprise Search 
stack pack: 8.6.0](https://download.elastic.co/cloud-enterprise/versions/8.6.0.zip) | ECE 3.0.0 | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.5.3](https://download.elastic.co/cloud-enterprise/versions/8.5.3.zip) | ECE 3.0.0 | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.5.2](https://download.elastic.co/cloud-enterprise/versions/8.5.2.zip) | ECE 3.0.0 | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.5.1](https://download.elastic.co/cloud-enterprise/versions/8.5.1.zip) | ECE 3.0.0 | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.5.0](https://download.elastic.co/cloud-enterprise/versions/8.5.0.zip) | ECE 3.0.0 | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.4.3](https://download.elastic.co/cloud-enterprise/versions/8.4.3.zip) | ECE 3.0.0 | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.4.2](https://download.elastic.co/cloud-enterprise/versions/8.4.2.zip) | ECE 3.0.0 | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.4.1](https://download.elastic.co/cloud-enterprise/versions/8.4.1.zip) | ECE 3.0.0 | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.4.0](https://download.elastic.co/cloud-enterprise/versions/8.4.0.zip) | ECE 3.0.0 | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.3.3](https://download.elastic.co/cloud-enterprise/versions/8.3.3.zip) | ECE 3.0.0 | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.3.2](https://download.elastic.co/cloud-enterprise/versions/8.3.2.zip) | ECE 3.0.0 | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.3.1](https://download.elastic.co/cloud-enterprise/versions/8.3.1.zip) | ECE 3.0.0 | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.3.0](https://download.elastic.co/cloud-enterprise/versions/8.3.0.zip) | ECE 3.0.0 | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack 
pack: 8.2.3](https://download.elastic.co/cloud-enterprise/versions/8.2.3.zip) | ECE 3.0.0 | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.2.2](https://download.elastic.co/cloud-enterprise/versions/8.2.2.zip) | ECE 3.0.0 | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.2.1](https://download.elastic.co/cloud-enterprise/versions/8.2.1.zip) | ECE 3.0.0 | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.2.0](https://download.elastic.co/cloud-enterprise/versions/8.2.0.zip) | ECE 3.0.0 | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.1.3](https://download.elastic.co/cloud-enterprise/versions/8.1.3.zip) | ECE 3.0.0 | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.1.2](https://download.elastic.co/cloud-enterprise/versions/8.1.2.zip) | ECE 3.0.0 | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.1.1](https://download.elastic.co/cloud-enterprise/versions/8.1.1.zip) | ECE 3.0.0 | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.1.0](https://download.elastic.co/cloud-enterprise/versions/8.1.0.zip) | ECE 3.0.0 | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.0.1](https://download.elastic.co/cloud-enterprise/versions/8.0.1.zip) | ECE 3.0.0 | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 8.0.0](https://download.elastic.co/cloud-enterprise/versions/8.0.0.zip) | ECE 3.0.0 | -| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.17.27](https://download.elastic.co/cloud-enterprise/versions/7.17.27.zip) | ECE 2.2.2 | -| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.17.26](https://download.elastic.co/cloud-enterprise/versions/7.17.26.zip) | ECE 2.2.2 | -| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.17.25](https://download.elastic.co/cloud-enterprise/versions/7.17.25.zip) | ECE 2.2.2 | -| [Elasticsearch, Kibana, APM, and Enterprise Search stack 
pack: 7.17.24](https://download.elastic.co/cloud-enterprise/versions/7.17.24.zip) | ECE 2.2.2 | -| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.17.23](https://download.elastic.co/cloud-enterprise/versions/7.17.23.zip) | ECE 2.2.2 | -| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.17.22](https://download.elastic.co/cloud-enterprise/versions/7.17.22.zip) | ECE 2.2.2 | -| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.17.21](https://download.elastic.co/cloud-enterprise/versions/7.17.21.zip) | ECE 2.2.2 | -| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.17.20](https://download.elastic.co/cloud-enterprise/versions/7.17.20.zip) | ECE 2.2.2 | -| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.17.19](https://download.elastic.co/cloud-enterprise/versions/7.17.19.zip) | ECE 2.2.2 | -| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.17.18](https://download.elastic.co/cloud-enterprise/versions/7.17.18.zip) | ECE 2.2.2 | -| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.17.17](https://download.elastic.co/cloud-enterprise/versions/7.17.17.zip) | ECE 2.2.2 | -| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.17.16](https://download.elastic.co/cloud-enterprise/versions/7.17.16.zip) | ECE 2.2.2 | -| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.17.15](https://download.elastic.co/cloud-enterprise/versions/7.17.15.zip) | ECE 2.2.2 | -| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.17.14](https://download.elastic.co/cloud-enterprise/versions/7.17.14.zip) | ECE 2.2.2 | -| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.17.13](https://download.elastic.co/cloud-enterprise/versions/7.17.13.zip) | ECE 2.2.2 | -| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.17.12](https://download.elastic.co/cloud-enterprise/versions/7.17.12.zip) | ECE 2.2.2 | -| [Elasticsearch, Kibana, APM, 
and Enterprise Search stack pack: 7.17.11](https://download.elastic.co/cloud-enterprise/versions/7.17.11.zip) | ECE 2.2.2 | -| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.17.10](https://download.elastic.co/cloud-enterprise/versions/7.17.10.zip) | ECE 2.2.2 | -| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.17.9](https://download.elastic.co/cloud-enterprise/versions/7.17.9.zip) | ECE 2.2.2 | -| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.17.8](https://download.elastic.co/cloud-enterprise/versions/7.17.8.zip) | ECE 2.2.2 | -| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.17.7](https://download.elastic.co/cloud-enterprise/versions/7.17.7.zip) | ECE 2.2.2 | -| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.17.6](https://download.elastic.co/cloud-enterprise/versions/7.17.6.zip) | ECE 2.2.2 | -| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.17.5](https://download.elastic.co/cloud-enterprise/versions/7.17.5.zip) | ECE 2.2.2 | -| [Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.17.4](https://download.elastic.co/cloud-enterprise/versions/7.17.4.zip) | ECE 2.2.2 | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.17.3](https://download.elastic.co/cloud-enterprise/versions/7.17.3.zip) | ECE 2.2.2 | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.17.2](https://download.elastic.co/cloud-enterprise/versions/7.17.2.zip) | ECE 2.2.2 | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.17.1](https://download.elastic.co/cloud-enterprise/versions/7.17.1.zip) | ECE 2.2.2 | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.17.0](https://download.elastic.co/cloud-enterprise/versions/7.17.0.zip) | ECE 2.2.2 | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.16.3](https://download.elastic.co/cloud-enterprise/versions/7.16.3.zip) | ECE 2.2.2 | -| [ Elasticsearch, 
Kibana, APM, and Enterprise Search stack pack: 7.16.2](https://download.elastic.co/cloud-enterprise/versions/7.16.2.zip) | ECE 2.2.2 | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.16.1](https://download.elastic.co/cloud-enterprise/versions/7.16.1.zip) | ECE 2.2.2 | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.16.0](https://download.elastic.co/cloud-enterprise/versions/7.16.0.zip) | ECE 2.2.2 | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.15.2](https://download.elastic.co/cloud-enterprise/versions/7.15.2.zip) | ECE 2.2.2 | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.15.1](https://download.elastic.co/cloud-enterprise/versions/7.15.1.zip) | ECE 2.2.2 | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.15.0](https://download.elastic.co/cloud-enterprise/versions/7.15.0.zip) | ECE 2.2.2 | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.14.2](https://download.elastic.co/cloud-enterprise/versions/7.14.2.zip) | ECE 2.2.2 | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.14.1](https://download.elastic.co/cloud-enterprise/versions/7.14.1.zip) | ECE 2.2.2 | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.14.0](https://download.elastic.co/cloud-enterprise/versions/7.14.0.zip) | ECE 2.2.2 | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.13.4](https://download.elastic.co/cloud-enterprise/versions/7.13.4.zip) | ECE 2.2.2 | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.13.3](https://download.elastic.co/cloud-enterprise/versions/7.13.3.zip) | ECE 2.2.2 | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.13.2](https://download.elastic.co/cloud-enterprise/versions/7.13.2.zip) | ECE 2.2.2 | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.13.1](https://download.elastic.co/cloud-enterprise/versions/7.13.1.zip) | ECE 2.2.2 | -| [ 
Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.13.0](https://download.elastic.co/cloud-enterprise/versions/7.13.0.zip) | ECE 2.2.2 | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.12.1](https://download.elastic.co/cloud-enterprise/versions/7.12.1.zip) | ECE 2.2.2 | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.12.0](https://download.elastic.co/cloud-enterprise/versions/7.12.0.zip) | ECE 2.2.2 | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.11.2](https://download.elastic.co/cloud-enterprise/versions/7.11.2.zip) | ECE 2.2.2 | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.11.1](https://download.elastic.co/cloud-enterprise/versions/7.11.1.zip) | ECE 2.2.2 | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.11.0](https://download.elastic.co/cloud-enterprise/versions/7.11.0.zip) | ECE 2.2.2 | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.10.2](https://download.elastic.co/cloud-enterprise/versions/7.10.2.zip) | ECE 2.2.2 | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.10.1](https://download.elastic.co/cloud-enterprise/versions/7.10.1.zip) | ECE 2.2.2 | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.10.0](https://download.elastic.co/cloud-enterprise/versions/7.10.0.zip) | ECE 2.2.2 | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.9.3](https://download.elastic.co/cloud-enterprise/versions/7.9.3.zip) | ECE 2.2.2 | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.9.2](https://download.elastic.co/cloud-enterprise/versions/7.9.2.zip) | ECE 2.2.2 | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.9.1](https://download.elastic.co/cloud-enterprise/versions/7.9.1.zip) | ECE 2.2.2 | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.9.0](https://download.elastic.co/cloud-enterprise/versions/7.9.0.zip) | ECE 2.2.2 
| -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.8.1](https://download.elastic.co/cloud-enterprise/versions/7.8.1.zip) | ECE 2.2.2 | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.8.0](https://download.elastic.co/cloud-enterprise/versions/7.8.0.zip) | ECE 2.2.2 | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.7.1](https://download.elastic.co/cloud-enterprise/versions/7.7.1.zip) | ECE 2.2.2 | -| [ Elasticsearch, Kibana, APM, and Enterprise Search stack pack: 7.7.0](https://download.elastic.co/cloud-enterprise/versions/7.7.0.zip) | ECE 2.2.2 | -| [ Elasticsearch, Kibana, APM, and App Search stack pack: 7.6.2](https://download.elastic.co/cloud-enterprise/versions/7.6.2.zip) | ECE 2.2.2 | -| [ Elasticsearch, Kibana, APM, and App Search stack pack: 7.6.1](https://download.elastic.co/cloud-enterprise/versions/7.6.1.zip) | ECE 2.2.2 | -| [ Elasticsearch, Kibana, APM, and App Search stack pack: 7.6.0](https://download.elastic.co/cloud-enterprise/versions/7.6.0.zip) | ECE 2.2.2 | -| [ Elasticsearch, Kibana, APM, and App Search stack pack: 7.5.2](https://download.elastic.co/cloud-enterprise/versions/7.5.2.zip) | ECE 2.2.2 | -| [ Elasticsearch, Kibana, APM, and App Search stack pack: 7.5.1](https://download.elastic.co/cloud-enterprise/versions/7.5.1.zip) | ECE 2.2.2 | -| [Elasticsearch, Kibana, APM, and App Search stack pack: 7.5.0](https://download.elastic.co/cloud-enterprise/versions/7.5.0.zip) | ECE 2.2.2 | -| [Elasticsearch, Kibana, APM, and App Search stack pack: 7.4.2](https://download.elastic.co/cloud-enterprise/versions/7.4.2.zip) | ECE 2.2.2 | -| [Elasticsearch, Kibana, APM, and App Search stack pack: 7.4.1](https://download.elastic.co/cloud-enterprise/versions/7.4.1.zip) | ECE 2.2.2 | -| [Elasticsearch, Kibana, APM, and App Search stack pack: 7.4.0](https://download.elastic.co/cloud-enterprise/versions/7.4.0.zip) | ECE 2.2.2 | -| [Elasticsearch, Kibana, and APM stack pack: 
7.3.2](https://download.elastic.co/cloud-enterprise/versions/7.3.2.zip) | ECE 2.2.2 | -| [Elasticsearch, Kibana, and APM stack pack: 7.3.1](https://download.elastic.co/cloud-enterprise/versions/7.3.1.zip) | ECE 2.2.2 | -| [Elasticsearch, Kibana, and APM stack pack: 7.3.0](https://download.elastic.co/cloud-enterprise/versions/7.3.0.zip) | ECE 2.2.2 | -| [Elasticsearch, Kibana, and APM stack pack: 7.2.1](https://download.elastic.co/cloud-enterprise/versions/7.2.1.zip) | ECE 2.2.2 | -| [Elasticsearch, Kibana, and APM stack pack: 7.2.0](https://download.elastic.co/cloud-enterprise/versions/7.2.0.zip) | ECE 2.2.2 | -| [Elasticsearch, Kibana, and APM stack pack: 7.1.1](https://download.elastic.co/cloud-enterprise/versions/7.1.1.zip) | ECE 2.2.2 | -| [Elasticsearch, Kibana, and APM stack pack: 7.1.0](https://download.elastic.co/cloud-enterprise/versions/7.1.0.zip) | ECE 2.2.2 | -| [Elasticsearch, Kibana, and APM stack pack: 7.0.1](https://download.elastic.co/cloud-enterprise/versions/7.0.1.zip) | ECE 2.2.0 | -| [Elasticsearch, Kibana, and APM stack pack: 7.0.0](https://download.elastic.co/cloud-enterprise/versions/7.0.0.zip) | ECE 2.2.0 | -| [Elasticsearch, Kibana, and APM stack pack: 6.8.23](https://download.elastic.co/cloud-enterprise/versions/6.8.23.zip) | ECE 1.1.4 | -| [Elasticsearch, Kibana, and APM stack pack: 6.8.22](https://download.elastic.co/cloud-enterprise/versions/6.8.22.zip) | ECE 1.1.4 | -| [Elasticsearch, Kibana, and APM stack pack: 6.8.21](https://download.elastic.co/cloud-enterprise/versions/6.8.21.zip) | ECE 1.1.4 | -| [Elasticsearch, Kibana, and APM stack pack: 6.8.20](https://download.elastic.co/cloud-enterprise/versions/6.8.20.zip) | ECE 1.1.4 | -| [Elasticsearch, Kibana, and APM stack pack: 6.8.19](https://download.elastic.co/cloud-enterprise/versions/6.8.19.zip) | ECE 1.1.4 | -| [Elasticsearch, Kibana, and APM stack pack: 6.8.18](https://download.elastic.co/cloud-enterprise/versions/6.8.18.zip) | ECE 1.1.4 | -| [Elasticsearch, Kibana, and APM 
stack pack: 6.8.17](https://download.elastic.co/cloud-enterprise/versions/6.8.17.zip) | ECE 1.1.4 | -| [Elasticsearch, Kibana, and APM stack pack: 6.8.16](https://download.elastic.co/cloud-enterprise/versions/6.8.16.zip) | ECE 1.1.4 | -| [Elasticsearch, Kibana, and APM stack pack: 6.8.15](https://download.elastic.co/cloud-enterprise/versions/6.8.15.zip) | ECE 1.1.4 | -| [Elasticsearch, Kibana, and APM stack pack: 6.8.14](https://download.elastic.co/cloud-enterprise/versions/6.8.14.zip) | ECE 1.1.4 | -| [Elasticsearch, Kibana, and APM stack pack: 6.8.13](https://download.elastic.co/cloud-enterprise/versions/6.8.13.zip) | ECE 1.1.4 | -| [Elasticsearch, Kibana, and APM stack pack: 6.8.12](https://download.elastic.co/cloud-enterprise/versions/6.8.12.zip) | ECE 1.1.4 | -| [Elasticsearch, Kibana, and APM stack pack: 6.8.11](https://download.elastic.co/cloud-enterprise/versions/6.8.11.zip) | ECE 1.1.4 | -| [Elasticsearch, Kibana, and APM stack pack: 6.8.10](https://download.elastic.co/cloud-enterprise/versions/6.8.10.zip) | ECE 1.1.4 | -| [Elasticsearch, Kibana, and APM stack pack: 6.8.9](https://download.elastic.co/cloud-enterprise/versions/6.8.9.zip) | ECE 1.1.4 | -| [Elasticsearch, Kibana, and APM stack pack: 6.8.8](https://download.elastic.co/cloud-enterprise/versions/6.8.8.zip) | ECE 1.1.4 | -| [Elasticsearch, Kibana, and APM stack pack: 6.8.7](https://download.elastic.co/cloud-enterprise/versions/6.8.7.zip) | ECE 1.1.4 | -| [Elasticsearch, Kibana, and APM stack pack: 6.8.6](https://download.elastic.co/cloud-enterprise/versions/6.8.6.zip) | ECE 1.1.4 | -| [Elasticsearch, Kibana, and APM stack pack: 6.8.5](https://download.elastic.co/cloud-enterprise/versions/6.8.5.zip) | ECE 1.1.4 | -| [Elasticsearch, Kibana, and APM stack pack: 6.8.4](https://download.elastic.co/cloud-enterprise/versions/6.8.4.zip) | ECE 1.1.4 | -| [Elasticsearch, Kibana, and APM stack pack: 6.8.3](https://download.elastic.co/cloud-enterprise/versions/6.8.3.zip) | ECE 1.1.4 | -| [Elasticsearch, 
Kibana, and APM stack pack: 6.8.2](https://download.elastic.co/cloud-enterprise/versions/6.8.2.zip) | ECE 1.1.4 | -| [Elasticsearch, Kibana, and APM stack pack: 6.8.1](https://download.elastic.co/cloud-enterprise/versions/6.8.1.zip) | ECE 1.1.4 | -| [Elasticsearch, Kibana, and APM stack pack: 6.8.0](https://download.elastic.co/cloud-enterprise/versions/6.8.0.zip) | ECE 1.1.4 | -| [Elasticsearch, Kibana, and APM stack pack: 6.7.2](https://download.elastic.co/cloud-enterprise/versions/6.7.2.zip) | ECE 1.1.4 | -| [Elasticsearch, Kibana, and APM stack pack: 6.7.1](https://download.elastic.co/cloud-enterprise/versions/6.7.1.zip) | ECE 1.1.4 | -| [Elasticsearch, Kibana, and APM stack pack: 6.7.0](https://download.elastic.co/cloud-enterprise/versions/6.7.0.zip) | ECE 1.1.4 | -| [Elasticsearch, Kibana, and APM stack pack: 6.6.2](https://download.elastic.co/cloud-enterprise/versions/6.6.2.zip) | ECE 1.1.4 | -| [Elasticsearch, Kibana, and APM stack pack: 6.6.1](https://download.elastic.co/cloud-enterprise/versions/6.6.1.zip) | ECE 1.1.4 | -| [Elasticsearch, Kibana, and APM stack pack: 6.6.0](https://download.elastic.co/cloud-enterprise/versions/6.6.0.zip) | ECE 1.1.4 | -| [Elasticsearch, Kibana, and APM stack pack: 6.5.4](https://download.elastic.co/cloud-enterprise/versions/6.5.4.zip) | ECE 1.1.4 | -| [Elasticsearch, Kibana, and APM stack pack: 6.5.3](https://download.elastic.co/cloud-enterprise/versions/6.5.3.zip) | ECE 1.1.4 | -| [Elasticsearch, Kibana, and APM stack pack: 6.5.2](https://download.elastic.co/cloud-enterprise/versions/6.5.2.zip) | ECE 1.1.4 | -| [Elasticsearch, Kibana, and APM stack pack: 6.5.1](https://download.elastic.co/cloud-enterprise/versions/6.5.1.zip) | ECE 1.1.4 | -| [Elasticsearch, Kibana, and APM stack pack: 6.5.0](https://download.elastic.co/cloud-enterprise/versions/6.5.0.zip) | ECE 1.1.4 | -| [Elasticsearch, Kibana, and APM stack pack: 6.4.3](https://download.elastic.co/cloud-enterprise/versions/6.4.3.zip) | ECE 1.1.4 | -| [Elasticsearch, 
Kibana, and APM stack pack: 6.4.2](https://download.elastic.co/cloud-enterprise/versions/6.4.2.zip) | ECE 1.1.4 | -| [Elasticsearch, Kibana, and APM stack pack: 6.4.1](https://download.elastic.co/cloud-enterprise/versions/6.4.1.zip) | ECE 1.1.4 | -| [Elasticsearch, Kibana, and APM stack pack: 6.4.0](https://download.elastic.co/cloud-enterprise/versions/6.4.0.zip) | ECE 1.1.4 | -| [Elasticsearch, Kibana, and APM stack pack: 6.3.2](https://download.elastic.co/cloud-enterprise/versions/6.3.2.zip) | ECE 1.1.4 | -| [Elasticsearch, Kibana, and APM stack pack: 6.3.1](https://download.elastic.co/cloud-enterprise/versions/6.3.1.zip) | ECE 1.1.4 | -| [Elasticsearch, Kibana, and APM stack pack: 6.3.0](https://download.elastic.co/cloud-enterprise/versions/6.3.0.zip) | ECE 1.1.4 | -| [Elasticsearch and Kibana stack pack: 6.2.4](https://download.elastic.co/cloud-enterprise/versions/6.2.4.zip) | ECE 1.1.2 | -| [Elasticsearch and Kibana stack pack: 6.2.3](https://download.elastic.co/cloud-enterprise/versions/6.2.3.zip) | ECE 1.1.2 | -| [Elasticsearch and Kibana stack pack: 6.2.2](https://download.elastic.co/cloud-enterprise/versions/6.2.2.zip) | ECE 1.1.2 | -| [Elasticsearch and Kibana stack pack: 6.1.4](https://download.elastic.co/cloud-enterprise/versions/6.1.4.zip) | ECE 1.1.2 | -| [Elasticsearch and Kibana stack pack: 6.1.3](https://download.elastic.co/cloud-enterprise/versions/6.1.3.zip) | ECE 1.1.2 | -| [Elasticsearch and Kibana stack pack: 6.0.1](https://download.elastic.co/cloud-enterprise/versions/6.0.1.zip) | ECE 1.1.2 | -| [Elasticsearch and Kibana stack pack: 6.0.0](https://download.elastic.co/cloud-enterprise/versions/6.0.0.zip) | ECE 1.1.0 | -| [Elasticsearch and Kibana stack pack: 5.6.16](https://download.elastic.co/cloud-enterprise/versions/5.6.16.zip) | ECE 1.1.0 | -| [Elasticsearch and Kibana stack pack: 2.4.6](https://download.elastic.co/cloud-enterprise/versions/2.4.6.zip) | ECE 1.0.0 | -| [Elasticsearch and Kibana stack pack: 
2.4.5](https://download.elastic.co/cloud-enterprise/versions/2.4.5.zip) | ECE 1.0.0 | +| [{{es}}, {{kib}}, and APM stack pack: 9.0.0](https://download.elastic.co/cloud-enterprise/versions/9.0.0.zip) | ECE 4.0.0 | +| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.18.0](https://download.elastic.co/cloud-enterprise/versions/8.18.0.zip) | ECE 3.0.0
(+ Docker 20.10.10+ required for 8.16+) | +| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.17.4](https://download.elastic.co/cloud-enterprise/versions/8.17.4.zip) | ECE 3.0.0
(+ Docker 20.10.10+ required for 8.16+) | +| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.17.3](https://download.elastic.co/cloud-enterprise/versions/8.17.3.zip) | ECE 3.0.0
(+ Docker 20.10.10+ required for 8.16+) | +| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.17.2](https://download.elastic.co/cloud-enterprise/versions/8.17.2.zip) | ECE 3.0.0
(+ Docker 20.10.10+ required for 8.16+) | +| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.17.1](https://download.elastic.co/cloud-enterprise/versions/8.17.1.zip) | ECE 3.0.0
(+ Docker 20.10.10+ required for 8.16+) | +| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.17.0](https://download.elastic.co/cloud-enterprise/versions/8.17.0.zip) | ECE 3.0.0
(+ Docker 20.10.10+ required for 8.16+) | +| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.16.3](https://download.elastic.co/cloud-enterprise/versions/8.16.3.zip) | ECE 3.0.0
(+ Docker 20.10.10+ required for 8.16+) | +| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.16.2](https://download.elastic.co/cloud-enterprise/versions/8.16.2.zip) | ECE 3.0.0
(+ Docker 20.10.10+ required for 8.16+) | +| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.16.1](https://download.elastic.co/cloud-enterprise/versions/8.16.1.zip) | ECE 3.0.0
(+ Docker 20.10.10+ required for 8.16+) | +| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.16.0](https://download.elastic.co/cloud-enterprise/versions/8.16.0.zip) | ECE 3.0.0
(+ Docker 20.10.10+ required for 8.16+) | +| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.15.5](https://download.elastic.co/cloud-enterprise/versions/8.15.5.zip) | ECE 3.0.0 | +| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.15.4](https://download.elastic.co/cloud-enterprise/versions/8.15.4.zip) | ECE 3.0.0 | +| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.15.3](https://download.elastic.co/cloud-enterprise/versions/8.15.3.zip) | ECE 3.0.0 | +| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.15.2](https://download.elastic.co/cloud-enterprise/versions/8.15.2.zip) | ECE 3.0.0 | +| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.15.1](https://download.elastic.co/cloud-enterprise/versions/8.15.1.zip) | ECE 3.0.0 | +| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.15.0](https://download.elastic.co/cloud-enterprise/versions/8.15.0.zip) | ECE 3.0.0 | +| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.14.3](https://download.elastic.co/cloud-enterprise/versions/8.14.3.zip) | ECE 3.0.0 | +| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.14.2](https://download.elastic.co/cloud-enterprise/versions/8.14.2.zip) | ECE 3.0.0 | +| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.14.1](https://download.elastic.co/cloud-enterprise/versions/8.14.1.zip) | ECE 3.0.0 | +| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.14.0](https://download.elastic.co/cloud-enterprise/versions/8.14.0.zip) | ECE 3.0.0 | +| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.13.4](https://download.elastic.co/cloud-enterprise/versions/8.13.4.zip) | ECE 3.0.0 | +| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.13.3](https://download.elastic.co/cloud-enterprise/versions/8.13.3.zip) | ECE 3.0.0 | +| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.13.2](https://download.elastic.co/cloud-enterprise/versions/8.13.2.zip) | ECE 3.0.0 | +| [{{es}}, {{kib}}, APM, and 
Enterprise Search stack pack: 8.13.1](https://download.elastic.co/cloud-enterprise/versions/8.13.1.zip) | ECE 3.0.0 | +| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.13.0](https://download.elastic.co/cloud-enterprise/versions/8.13.0.zip) | ECE 3.0.0 | +| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.12.2](https://download.elastic.co/cloud-enterprise/versions/8.12.2.zip) | ECE 3.0.0 | +| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.12.1](https://download.elastic.co/cloud-enterprise/versions/8.12.1.zip) | ECE 3.0.0 | +| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.12.0](https://download.elastic.co/cloud-enterprise/versions/8.12.0.zip) | ECE 3.0.0 | +| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.11.4](https://download.elastic.co/cloud-enterprise/versions/8.11.4.zip) | ECE 3.0.0 | +| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.11.3](https://download.elastic.co/cloud-enterprise/versions/8.11.3.zip) | ECE 3.0.0 | +| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.11.2](https://download.elastic.co/cloud-enterprise/versions/8.11.2.zip) | ECE 3.0.0 | +| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.11.1](https://download.elastic.co/cloud-enterprise/versions/8.11.1.zip) | ECE 3.0.0 | +| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.11.0](https://download.elastic.co/cloud-enterprise/versions/8.11.0.zip) | ECE 3.0.0 | +| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.10.4](https://download.elastic.co/cloud-enterprise/versions/8.10.4.zip) | ECE 3.0.0 | +| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.10.3](https://download.elastic.co/cloud-enterprise/versions/8.10.3.zip) | ECE 3.0.0 | +| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.10.2](https://download.elastic.co/cloud-enterprise/versions/8.10.2.zip) | ECE 3.0.0 | +| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 
8.10.1](https://download.elastic.co/cloud-enterprise/versions/8.10.1.zip) | ECE 3.0.0 | +| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.10.0](https://download.elastic.co/cloud-enterprise/versions/8.10.0.zip) | ECE 3.0.0 | +| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.9.2](https://download.elastic.co/cloud-enterprise/versions/8.9.2.zip) | ECE 3.0.0 | +| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.9.1](https://download.elastic.co/cloud-enterprise/versions/8.9.1.zip) | ECE 3.0.0 | +| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.9.0](https://download.elastic.co/cloud-enterprise/versions/8.9.0.zip) | ECE 3.0.0 | +| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.8.2](https://download.elastic.co/cloud-enterprise/versions/8.8.2.zip) | ECE 3.0.0 | +| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.8.1](https://download.elastic.co/cloud-enterprise/versions/8.8.1.zip) | ECE 3.0.0 | +| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.8.0](https://download.elastic.co/cloud-enterprise/versions/8.8.0.zip) | ECE 3.0.0 | +| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.7.1](https://download.elastic.co/cloud-enterprise/versions/8.7.1.zip) | ECE 3.0.0 | +| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.7.0](https://download.elastic.co/cloud-enterprise/versions/8.7.0.zip) | ECE 3.0.0 | +| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.6.2](https://download.elastic.co/cloud-enterprise/versions/8.6.2.zip) | ECE 3.0.0 | +| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.6.1](https://download.elastic.co/cloud-enterprise/versions/8.6.1.zip) | ECE 3.0.0 | +| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.6.0](https://download.elastic.co/cloud-enterprise/versions/8.6.0.zip) | ECE 3.0.0 | +| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.5.3](https://download.elastic.co/cloud-enterprise/versions/8.5.3.zip) | ECE 3.0.0 | +| [{{es}}, 
{{kib}}, APM, and Enterprise Search stack pack: 8.5.2](https://download.elastic.co/cloud-enterprise/versions/8.5.2.zip) | ECE 3.0.0 | +| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.5.1](https://download.elastic.co/cloud-enterprise/versions/8.5.1.zip) | ECE 3.0.0 | +| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.5.0](https://download.elastic.co/cloud-enterprise/versions/8.5.0.zip) | ECE 3.0.0 | +| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.4.3](https://download.elastic.co/cloud-enterprise/versions/8.4.3.zip) | ECE 3.0.0 | +| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.4.2](https://download.elastic.co/cloud-enterprise/versions/8.4.2.zip) | ECE 3.0.0 | +| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.4.1](https://download.elastic.co/cloud-enterprise/versions/8.4.1.zip) | ECE 3.0.0 | +| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.4.0](https://download.elastic.co/cloud-enterprise/versions/8.4.0.zip) | ECE 3.0.0 | +| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.3.3](https://download.elastic.co/cloud-enterprise/versions/8.3.3.zip) | ECE 3.0.0 | +| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.3.2](https://download.elastic.co/cloud-enterprise/versions/8.3.2.zip) | ECE 3.0.0 | +| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.3.1](https://download.elastic.co/cloud-enterprise/versions/8.3.1.zip) | ECE 3.0.0 | +| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.3.0](https://download.elastic.co/cloud-enterprise/versions/8.3.0.zip) | ECE 3.0.0 | +| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.2.3](https://download.elastic.co/cloud-enterprise/versions/8.2.3.zip) | ECE 3.0.0 | +| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.2.2](https://download.elastic.co/cloud-enterprise/versions/8.2.2.zip) | ECE 3.0.0 | +| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 
8.2.1](https://download.elastic.co/cloud-enterprise/versions/8.2.1.zip) | ECE 3.0.0 |
+| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.2.0](https://download.elastic.co/cloud-enterprise/versions/8.2.0.zip) | ECE 3.0.0 |
+| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.1.3](https://download.elastic.co/cloud-enterprise/versions/8.1.3.zip) | ECE 3.0.0 |
+| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.1.2](https://download.elastic.co/cloud-enterprise/versions/8.1.2.zip) | ECE 3.0.0 |
+| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.1.1](https://download.elastic.co/cloud-enterprise/versions/8.1.1.zip) | ECE 3.0.0 |
+| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.1.0](https://download.elastic.co/cloud-enterprise/versions/8.1.0.zip) | ECE 3.0.0 |
+| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.0.1](https://download.elastic.co/cloud-enterprise/versions/8.0.1.zip) | ECE 3.0.0 |
+| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 8.0.0](https://download.elastic.co/cloud-enterprise/versions/8.0.0.zip) | ECE 3.0.0 |
+| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.17.27](https://download.elastic.co/cloud-enterprise/versions/7.17.27.zip) | ECE 2.2.2 |
+| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.17.26](https://download.elastic.co/cloud-enterprise/versions/7.17.26.zip) | ECE 2.2.2 |
+| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.17.25](https://download.elastic.co/cloud-enterprise/versions/7.17.25.zip) | ECE 2.2.2 |
+| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.17.24](https://download.elastic.co/cloud-enterprise/versions/7.17.24.zip) | ECE 2.2.2 |
+| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.17.23](https://download.elastic.co/cloud-enterprise/versions/7.17.23.zip) | ECE 2.2.2 |
+| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.17.22](https://download.elastic.co/cloud-enterprise/versions/7.17.22.zip) | ECE 2.2.2 |
+| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.17.21](https://download.elastic.co/cloud-enterprise/versions/7.17.21.zip) | ECE 2.2.2 |
+| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.17.20](https://download.elastic.co/cloud-enterprise/versions/7.17.20.zip) | ECE 2.2.2 |
+| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.17.19](https://download.elastic.co/cloud-enterprise/versions/7.17.19.zip) | ECE 2.2.2 |
+| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.17.18](https://download.elastic.co/cloud-enterprise/versions/7.17.18.zip) | ECE 2.2.2 |
+| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.17.17](https://download.elastic.co/cloud-enterprise/versions/7.17.17.zip) | ECE 2.2.2 |
+| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.17.16](https://download.elastic.co/cloud-enterprise/versions/7.17.16.zip) | ECE 2.2.2 |
+| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.17.15](https://download.elastic.co/cloud-enterprise/versions/7.17.15.zip) | ECE 2.2.2 |
+| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.17.14](https://download.elastic.co/cloud-enterprise/versions/7.17.14.zip) | ECE 2.2.2 |
+| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.17.13](https://download.elastic.co/cloud-enterprise/versions/7.17.13.zip) | ECE 2.2.2 |
+| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.17.12](https://download.elastic.co/cloud-enterprise/versions/7.17.12.zip) | ECE 2.2.2 |
+| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.17.11](https://download.elastic.co/cloud-enterprise/versions/7.17.11.zip) | ECE 2.2.2 |
+| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.17.10](https://download.elastic.co/cloud-enterprise/versions/7.17.10.zip) | ECE 2.2.2 |
+| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.17.9](https://download.elastic.co/cloud-enterprise/versions/7.17.9.zip) | ECE 2.2.2 |
+| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.17.8](https://download.elastic.co/cloud-enterprise/versions/7.17.8.zip) | ECE 2.2.2 |
+| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.17.7](https://download.elastic.co/cloud-enterprise/versions/7.17.7.zip) | ECE 2.2.2 |
+| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.17.6](https://download.elastic.co/cloud-enterprise/versions/7.17.6.zip) | ECE 2.2.2 |
+| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.17.5](https://download.elastic.co/cloud-enterprise/versions/7.17.5.zip) | ECE 2.2.2 |
+| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.17.4](https://download.elastic.co/cloud-enterprise/versions/7.17.4.zip) | ECE 2.2.2 |
+| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.17.3](https://download.elastic.co/cloud-enterprise/versions/7.17.3.zip) | ECE 2.2.2 |
+| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.17.2](https://download.elastic.co/cloud-enterprise/versions/7.17.2.zip) | ECE 2.2.2 |
+| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.17.1](https://download.elastic.co/cloud-enterprise/versions/7.17.1.zip) | ECE 2.2.2 |
+| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.17.0](https://download.elastic.co/cloud-enterprise/versions/7.17.0.zip) | ECE 2.2.2 |
+| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.16.3](https://download.elastic.co/cloud-enterprise/versions/7.16.3.zip) | ECE 2.2.2 |
+| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.16.2](https://download.elastic.co/cloud-enterprise/versions/7.16.2.zip) | ECE 2.2.2 |
+| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.16.1](https://download.elastic.co/cloud-enterprise/versions/7.16.1.zip) | ECE 2.2.2 |
+| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.16.0](https://download.elastic.co/cloud-enterprise/versions/7.16.0.zip) | ECE 2.2.2 |
+| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.15.2](https://download.elastic.co/cloud-enterprise/versions/7.15.2.zip) | ECE 2.2.2 |
+| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.15.1](https://download.elastic.co/cloud-enterprise/versions/7.15.1.zip) | ECE 2.2.2 |
+| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.15.0](https://download.elastic.co/cloud-enterprise/versions/7.15.0.zip) | ECE 2.2.2 |
+| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.14.2](https://download.elastic.co/cloud-enterprise/versions/7.14.2.zip) | ECE 2.2.2 |
+| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.14.1](https://download.elastic.co/cloud-enterprise/versions/7.14.1.zip) | ECE 2.2.2 |
+| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.14.0](https://download.elastic.co/cloud-enterprise/versions/7.14.0.zip) | ECE 2.2.2 |
+| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.13.4](https://download.elastic.co/cloud-enterprise/versions/7.13.4.zip) | ECE 2.2.2 |
+| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.13.3](https://download.elastic.co/cloud-enterprise/versions/7.13.3.zip) | ECE 2.2.2 |
+| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.13.2](https://download.elastic.co/cloud-enterprise/versions/7.13.2.zip) | ECE 2.2.2 |
+| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.13.1](https://download.elastic.co/cloud-enterprise/versions/7.13.1.zip) | ECE 2.2.2 |
+| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.13.0](https://download.elastic.co/cloud-enterprise/versions/7.13.0.zip) | ECE 2.2.2 |
+| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.12.1](https://download.elastic.co/cloud-enterprise/versions/7.12.1.zip) | ECE 2.2.2 |
+| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.12.0](https://download.elastic.co/cloud-enterprise/versions/7.12.0.zip) | ECE 2.2.2 |
+| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.11.2](https://download.elastic.co/cloud-enterprise/versions/7.11.2.zip) | ECE 2.2.2 |
+| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.11.1](https://download.elastic.co/cloud-enterprise/versions/7.11.1.zip) | ECE 2.2.2 |
+| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.11.0](https://download.elastic.co/cloud-enterprise/versions/7.11.0.zip) | ECE 2.2.2 |
+| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.10.2](https://download.elastic.co/cloud-enterprise/versions/7.10.2.zip) | ECE 2.2.2 |
+| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.10.1](https://download.elastic.co/cloud-enterprise/versions/7.10.1.zip) | ECE 2.2.2 |
+| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.10.0](https://download.elastic.co/cloud-enterprise/versions/7.10.0.zip) | ECE 2.2.2 |
+| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.9.3](https://download.elastic.co/cloud-enterprise/versions/7.9.3.zip) | ECE 2.2.2 |
+| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.9.2](https://download.elastic.co/cloud-enterprise/versions/7.9.2.zip) | ECE 2.2.2 |
+| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.9.1](https://download.elastic.co/cloud-enterprise/versions/7.9.1.zip) | ECE 2.2.2 |
+| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.9.0](https://download.elastic.co/cloud-enterprise/versions/7.9.0.zip) | ECE 2.2.2 |
+| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.8.1](https://download.elastic.co/cloud-enterprise/versions/7.8.1.zip) | ECE 2.2.2 |
+| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.8.0](https://download.elastic.co/cloud-enterprise/versions/7.8.0.zip) | ECE 2.2.2 |
+| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.7.1](https://download.elastic.co/cloud-enterprise/versions/7.7.1.zip) | ECE 2.2.2 |
+| [{{es}}, {{kib}}, APM, and Enterprise Search stack pack: 7.7.0](https://download.elastic.co/cloud-enterprise/versions/7.7.0.zip) | ECE 2.2.2 |
+| [{{es}}, {{kib}}, APM, and App Search stack pack: 7.6.2](https://download.elastic.co/cloud-enterprise/versions/7.6.2.zip) | ECE 2.2.2 |
+| [{{es}}, {{kib}}, APM, and App Search stack pack: 7.6.1](https://download.elastic.co/cloud-enterprise/versions/7.6.1.zip) | ECE 2.2.2 |
+| [{{es}}, {{kib}}, APM, and App Search stack pack: 7.6.0](https://download.elastic.co/cloud-enterprise/versions/7.6.0.zip) | ECE 2.2.2 |
+| [{{es}}, {{kib}}, APM, and App Search stack pack: 7.5.2](https://download.elastic.co/cloud-enterprise/versions/7.5.2.zip) | ECE 2.2.2 |
+| [{{es}}, {{kib}}, APM, and App Search stack pack: 7.5.1](https://download.elastic.co/cloud-enterprise/versions/7.5.1.zip) | ECE 2.2.2 |
+| [{{es}}, {{kib}}, APM, and App Search stack pack: 7.5.0](https://download.elastic.co/cloud-enterprise/versions/7.5.0.zip) | ECE 2.2.2 |
+| [{{es}}, {{kib}}, APM, and App Search stack pack: 7.4.2](https://download.elastic.co/cloud-enterprise/versions/7.4.2.zip) | ECE 2.2.2 |
+| [{{es}}, {{kib}}, APM, and App Search stack pack: 7.4.1](https://download.elastic.co/cloud-enterprise/versions/7.4.1.zip) | ECE 2.2.2 |
+| [{{es}}, {{kib}}, APM, and App Search stack pack: 7.4.0](https://download.elastic.co/cloud-enterprise/versions/7.4.0.zip) | ECE 2.2.2 |
+| [{{es}}, {{kib}}, and APM stack pack: 7.3.2](https://download.elastic.co/cloud-enterprise/versions/7.3.2.zip) | ECE 2.2.2 |
+| [{{es}}, {{kib}}, and APM stack pack: 7.3.1](https://download.elastic.co/cloud-enterprise/versions/7.3.1.zip) | ECE 2.2.2 |
+| [{{es}}, {{kib}}, and APM stack pack: 7.3.0](https://download.elastic.co/cloud-enterprise/versions/7.3.0.zip) | ECE 2.2.2 |
+| [{{es}}, {{kib}}, and APM stack pack: 7.2.1](https://download.elastic.co/cloud-enterprise/versions/7.2.1.zip) | ECE 2.2.2 |
+| [{{es}}, {{kib}}, and APM stack pack: 7.2.0](https://download.elastic.co/cloud-enterprise/versions/7.2.0.zip) | ECE 2.2.2 |
+| [{{es}}, {{kib}}, and APM stack pack: 7.1.1](https://download.elastic.co/cloud-enterprise/versions/7.1.1.zip) | ECE 2.2.2 |
+| [{{es}}, {{kib}}, and APM stack pack: 7.1.0](https://download.elastic.co/cloud-enterprise/versions/7.1.0.zip) | ECE 2.2.2 |
+| [{{es}}, {{kib}}, and APM stack pack: 7.0.1](https://download.elastic.co/cloud-enterprise/versions/7.0.1.zip) | ECE 2.2.0 |
+| [{{es}}, {{kib}}, and APM stack pack: 7.0.0](https://download.elastic.co/cloud-enterprise/versions/7.0.0.zip) | ECE 2.2.0 |
+| [{{es}}, {{kib}}, and APM stack pack: 6.8.23](https://download.elastic.co/cloud-enterprise/versions/6.8.23.zip) | ECE 1.1.4 |
+| [{{es}}, {{kib}}, and APM stack pack: 6.8.22](https://download.elastic.co/cloud-enterprise/versions/6.8.22.zip) | ECE 1.1.4 |
+| [{{es}}, {{kib}}, and APM stack pack: 6.8.21](https://download.elastic.co/cloud-enterprise/versions/6.8.21.zip) | ECE 1.1.4 |
+| [{{es}}, {{kib}}, and APM stack pack: 6.8.20](https://download.elastic.co/cloud-enterprise/versions/6.8.20.zip) | ECE 1.1.4 |
+| [{{es}}, {{kib}}, and APM stack pack: 6.8.19](https://download.elastic.co/cloud-enterprise/versions/6.8.19.zip) | ECE 1.1.4 |
+| [{{es}}, {{kib}}, and APM stack pack: 6.8.18](https://download.elastic.co/cloud-enterprise/versions/6.8.18.zip) | ECE 1.1.4 |
+| [{{es}}, {{kib}}, and APM stack pack: 6.8.17](https://download.elastic.co/cloud-enterprise/versions/6.8.17.zip) | ECE 1.1.4 |
+| [{{es}}, {{kib}}, and APM stack pack: 6.8.16](https://download.elastic.co/cloud-enterprise/versions/6.8.16.zip) | ECE 1.1.4 |
+| [{{es}}, {{kib}}, and APM stack pack: 6.8.15](https://download.elastic.co/cloud-enterprise/versions/6.8.15.zip) | ECE 1.1.4 |
+| [{{es}}, {{kib}}, and APM stack pack: 6.8.14](https://download.elastic.co/cloud-enterprise/versions/6.8.14.zip) | ECE 1.1.4 |
+| [{{es}}, {{kib}}, and APM stack pack: 6.8.13](https://download.elastic.co/cloud-enterprise/versions/6.8.13.zip) | ECE 1.1.4 |
+| [{{es}}, {{kib}}, and APM stack pack: 6.8.12](https://download.elastic.co/cloud-enterprise/versions/6.8.12.zip) | ECE 1.1.4 |
+| [{{es}}, {{kib}}, and APM stack pack: 6.8.11](https://download.elastic.co/cloud-enterprise/versions/6.8.11.zip) | ECE 1.1.4 |
+| [{{es}}, {{kib}}, and APM stack pack: 6.8.10](https://download.elastic.co/cloud-enterprise/versions/6.8.10.zip) | ECE 1.1.4 |
+| [{{es}}, {{kib}}, and APM stack pack: 6.8.9](https://download.elastic.co/cloud-enterprise/versions/6.8.9.zip) | ECE 1.1.4 |
+| [{{es}}, {{kib}}, and APM stack pack: 6.8.8](https://download.elastic.co/cloud-enterprise/versions/6.8.8.zip) | ECE 1.1.4 |
+| [{{es}}, {{kib}}, and APM stack pack: 6.8.7](https://download.elastic.co/cloud-enterprise/versions/6.8.7.zip) | ECE 1.1.4 |
+| [{{es}}, {{kib}}, and APM stack pack: 6.8.6](https://download.elastic.co/cloud-enterprise/versions/6.8.6.zip) | ECE 1.1.4 |
+| [{{es}}, {{kib}}, and APM stack pack: 6.8.5](https://download.elastic.co/cloud-enterprise/versions/6.8.5.zip) | ECE 1.1.4 |
+| [{{es}}, {{kib}}, and APM stack pack: 6.8.4](https://download.elastic.co/cloud-enterprise/versions/6.8.4.zip) | ECE 1.1.4 |
+| [{{es}}, {{kib}}, and APM stack pack: 6.8.3](https://download.elastic.co/cloud-enterprise/versions/6.8.3.zip) | ECE 1.1.4 |
+| [{{es}}, {{kib}}, and APM stack pack: 6.8.2](https://download.elastic.co/cloud-enterprise/versions/6.8.2.zip) | ECE 1.1.4 |
+| [{{es}}, {{kib}}, and APM stack pack: 6.8.1](https://download.elastic.co/cloud-enterprise/versions/6.8.1.zip) | ECE 1.1.4 |
+| [{{es}}, {{kib}}, and APM stack pack: 6.8.0](https://download.elastic.co/cloud-enterprise/versions/6.8.0.zip) | ECE 1.1.4 |
+| [{{es}}, {{kib}}, and APM stack pack: 6.7.2](https://download.elastic.co/cloud-enterprise/versions/6.7.2.zip) | ECE 1.1.4 |
+| [{{es}}, {{kib}}, and APM stack pack: 6.7.1](https://download.elastic.co/cloud-enterprise/versions/6.7.1.zip) | ECE 1.1.4 |
+| [{{es}}, {{kib}}, and APM stack pack: 6.7.0](https://download.elastic.co/cloud-enterprise/versions/6.7.0.zip) | ECE 1.1.4 |
+| [{{es}}, {{kib}}, and APM stack pack: 6.6.2](https://download.elastic.co/cloud-enterprise/versions/6.6.2.zip) | ECE 1.1.4 |
+| [{{es}}, {{kib}}, and APM stack pack: 6.6.1](https://download.elastic.co/cloud-enterprise/versions/6.6.1.zip) | ECE 1.1.4 |
+| [{{es}}, {{kib}}, and APM stack pack: 6.6.0](https://download.elastic.co/cloud-enterprise/versions/6.6.0.zip) | ECE 1.1.4 |
+| [{{es}}, {{kib}}, and APM stack pack: 6.5.4](https://download.elastic.co/cloud-enterprise/versions/6.5.4.zip) | ECE 1.1.4 |
+| [{{es}}, {{kib}}, and APM stack pack: 6.5.3](https://download.elastic.co/cloud-enterprise/versions/6.5.3.zip) | ECE 1.1.4 |
+| [{{es}}, {{kib}}, and APM stack pack: 6.5.2](https://download.elastic.co/cloud-enterprise/versions/6.5.2.zip) | ECE 1.1.4 |
+| [{{es}}, {{kib}}, and APM stack pack: 6.5.1](https://download.elastic.co/cloud-enterprise/versions/6.5.1.zip) | ECE 1.1.4 |
+| [{{es}}, {{kib}}, and APM stack pack: 6.5.0](https://download.elastic.co/cloud-enterprise/versions/6.5.0.zip) | ECE 1.1.4 |
+| [{{es}}, {{kib}}, and APM stack pack: 6.4.3](https://download.elastic.co/cloud-enterprise/versions/6.4.3.zip) | ECE 1.1.4 |
+| [{{es}}, {{kib}}, and APM stack pack: 6.4.2](https://download.elastic.co/cloud-enterprise/versions/6.4.2.zip) | ECE 1.1.4 |
+| [{{es}}, {{kib}}, and APM stack pack: 6.4.1](https://download.elastic.co/cloud-enterprise/versions/6.4.1.zip) | ECE 1.1.4 |
+| [{{es}}, {{kib}}, and APM stack pack: 6.4.0](https://download.elastic.co/cloud-enterprise/versions/6.4.0.zip) | ECE 1.1.4 |
+| [{{es}}, {{kib}}, and APM stack pack: 6.3.2](https://download.elastic.co/cloud-enterprise/versions/6.3.2.zip) | ECE 1.1.4 |
+| [{{es}}, {{kib}}, and APM stack pack: 6.3.1](https://download.elastic.co/cloud-enterprise/versions/6.3.1.zip) | ECE 1.1.4 |
+| [{{es}}, {{kib}}, and APM stack pack: 6.3.0](https://download.elastic.co/cloud-enterprise/versions/6.3.0.zip) | ECE 1.1.4 |
+| [{{es}} and {{kib}} stack pack: 6.2.4](https://download.elastic.co/cloud-enterprise/versions/6.2.4.zip) | ECE 1.1.2 |
+| [{{es}} and {{kib}} stack pack: 6.2.3](https://download.elastic.co/cloud-enterprise/versions/6.2.3.zip) | ECE 1.1.2 |
+| [{{es}} and {{kib}} stack pack: 6.2.2](https://download.elastic.co/cloud-enterprise/versions/6.2.2.zip) | ECE 1.1.2 |
+| [{{es}} and {{kib}} stack pack: 6.1.4](https://download.elastic.co/cloud-enterprise/versions/6.1.4.zip) | ECE 1.1.2 |
+| [{{es}} and {{kib}} stack pack: 6.1.3](https://download.elastic.co/cloud-enterprise/versions/6.1.3.zip) | ECE 1.1.2 |
+| [{{es}} and {{kib}} stack pack: 6.0.1](https://download.elastic.co/cloud-enterprise/versions/6.0.1.zip) | ECE 1.1.2 |
+| [{{es}} and {{kib}} stack pack: 6.0.0](https://download.elastic.co/cloud-enterprise/versions/6.0.0.zip) | ECE 1.1.0 |
+| [{{es}} and {{kib}} stack pack: 5.6.16](https://download.elastic.co/cloud-enterprise/versions/5.6.16.zip) | ECE 1.1.0 |
+| [{{es}} and {{kib}} stack pack: 2.4.6](https://download.elastic.co/cloud-enterprise/versions/2.4.6.zip) | ECE 1.0.0 |
+| [{{es}} and {{kib}} stack pack: 2.4.5](https://download.elastic.co/cloud-enterprise/versions/2.4.5.zip) | ECE 1.0.0 |
::::
::::{tip}
-For *offline* or *air-gapped* installations, additional steps are required to add Elastic Stack packs, as these packs do not contain any Docker images. After downloading a stack pack, you also need to pull and load the Docker images that match the Elastic Stack version. To learn more about what Docker images you need and about pulling and loading Docker images, check [Install ECE offline](air-gapped-install.md).
+For *offline* or *air-gapped* installations, additional steps are required to add {{stack}} packs, as these packs do not contain any Docker images. After downloading a stack pack, you also need to pull and load the Docker images that match the {{stack}} version. To learn more about what Docker images you need and about pulling and loading Docker images, check [Install ECE offline](air-gapped-install.md).
::::
@@ -264,20 +264,20 @@ For *offline* or *air-gapped* installations, additional steps are required to ad
The examples shown all use HTTPS over port 12443, which requires that you have [a TLS certificate configured](../../security/secure-your-elastic-cloud-enterprise-installation/manage-security-certificates.md). Alternatively, you can specify the `-k` option to turn off certificate verification, as shown in our examples, or use HTTP over port 12400.
-## Get Elastic Stack information [ece_get_elastic_stack_information]
+## Get {{stack}} information [ece_get_elastic_stack_information]
-You can obtain information about existing Elastic Stack versions that are available in your installation through the Cloud UI or through the command line.
+You can obtain information about existing {{stack}} versions that are available in your installation through the Cloud UI or through the command line.
-To obtain information about available Elastic Stack versions through the Cloud UI:
+To obtain information about available {{stack}} versions through the Cloud UI:
1. [Log into the Cloud UI](log-into-cloud-ui.md).
-2. From the **Platform** menu, select **Elastic Stack**.
+2. From the **Platform** menu, select **{{stack}}**.
3. Select the version that you want.
- The available Elastic Stack versions are shown. More detailed information about Docker images, plugins, and related Kibana versions are also available for each Elasticsearch version.
+ The available {{stack}} versions are shown. More detailed information about Docker images, plugins, and related {{kib}} versions are also available for each {{es}} version.
-To obtain information about available Elastic Stack versions through the command line:
+To obtain information about available {{stack}} versions through the command line:
```sh
curl -X GET -u USER:PASSWORD https://COORDINATOR_HOST:12443/api/v1/stack/versions
```
@@ -351,25 +351,25 @@ curl -X GET -u admin:4Z52y8Gq7PrxMDy47ipJPSh4ozBMynOGa9HWxcy2D3j https://10.56.1
You can also query for a specific version with a URI such as `https://COORDINATOR_HOST:12443/api/v1/stack/versions/5.3.2`, for example.
-## Add Elastic Stack packs [ece-manage-elastic-stack-add]
+## Add {{stack}} packs [ece-manage-elastic-stack-add]
-You can add new Elastic Stack packs to your installation through the Cloud UI, through the Elastic Cloud Enterprise installation script, or through the RESTful API.
+You can add new {{stack}} packs to your installation through the Cloud UI, through the {{ece}} installation script, or through the RESTful API.
-To add a new Elastic Stack pack from the Cloud UI:
+To add a new {{stack}} pack from the Cloud UI:
-1. Download the Elastic Stack version that you want.
+1. Download the {{stack}} version that you want.
2. [Log into the Cloud UI](log-into-cloud-ui.md).
-3. From the **Platform** menu, select **Elastic Stack**.
-4. Select **Upload Elastic Stack pack**.
-5. Select a .zip file that contains an Elastic Stack pack and upload it.
+3. From the **Platform** menu, select **{{stack}}**.
+4. Select **Upload {{stack}} pack**.
+5. Select a .zip file that contains an {{stack}} pack and upload it.
- After the stack pack has been uploaded successfully, the new version appears in the list of Elastic Stack versions and can be used when you create or change a deployment.
+ After the stack pack has been uploaded successfully, the new version appears in the list of {{stack}} versions and can be used when you create or change a deployment.
-To add a new Elastic Stack pack through the Elastic Cloud Enterprise installation script from the command line:
+To add a new {{stack}} pack through the {{ece}} installation script from the command line:
-1. Log into a host running Elastic Cloud Enterprise.
-2. Add the Elastic Stack pack with the `add-stack-version` action:
+1. Log into a host running {{ece}}.
+2. Add the {{stack}} pack with the `add-stack-version` action:
```sh
./elastic-cloud-enterprise.sh add-stack-version \
@@ -377,7 +377,7 @@ To add a new Elastic Stack pack through the Elastic Cloud Enterprise installatio
--version X.Y.Z <1>
```
- 1. A supported Elastic Stack version, such as `8.12.2`
+ 1. A supported {{stack}} version, such as `8.12.2`
For example:
@@ -389,10 +389,10 @@ To add a new Elastic Stack pack through the Elastic Cloud Enterprise installatio
```
-To add a new Elastic Stack pack through the RESTful API from the command line:
+To add a new {{stack}} pack through the RESTful API from the command line:
1. Download the pack on an internet-connected host from Elastic and make it available locally.
-2. Add the Elastic Stack pack with the following API call:
+2. Add the {{stack}} pack with the following API call:
```sh
curl -X POST -u USER:PASSWORD https://COORDINATOR_HOST:12443/api/v1/stack/versions \
@@ -400,7 +400,7 @@ To add a new Elastic Stack pack through the RESTful API from the command line:
--data-binary "@PATH/STACK_PACK_FILE" <1>
```
- 1. The local path and the new Elastic Stack pack .zip file
+ 1. The local path and the new {{stack}} pack .zip file
For example:
@@ -413,28 +413,28 @@ To add a new Elastic Stack pack through the RESTful API from the command line:
-## Update Elastic Stack packs [ece_update_elastic_stack_packs]
+## Update {{stack}} packs [ece_update_elastic_stack_packs]
-Updating an Elastic Stack pack might become necessary if an Elastic Stack version has been updated with security fixes, for example. You can update an existing Elastic Stack version through the Cloud UI or through the command line.
+Updating an {{stack}} pack might become necessary if an {{stack}} version has been updated with security fixes, for example. You can update an existing {{stack}} version through the Cloud UI or through the command line.
-Updated versions of Elasticsearch and Kibana are used when you create new Elasticsearch clusters, but they are not automatically applied to already running clusters. To update existing Elasticsearch clusters and Kibana after an updated Elastic Stack pack has been added, you need to [change the deployment configuration](working-with-deployments.md).
+Updated versions of {{es}} and {{kib}} are used when you create new {{es}} clusters, but they are not automatically applied to already running clusters. To update existing {{es}} clusters and {{kib}} after an updated {{stack}} pack has been added, you need to [change the deployment configuration](working-with-deployments.md).
-To update Elastic Stack packs through the Cloud UI:
+To update {{stack}} packs through the Cloud UI:
-1. Download the [Elastic Stack version](#ece-elastic-stack-stackpacks) that you want.
+1. Download the [{{stack}} version](#ece-elastic-stack-stackpacks) that you want.
2. [Log into the Cloud UI](log-into-cloud-ui.md).
-3. From the **Platform** menu, select **Elastic Stack**.
+3. From the **Platform** menu, select **{{stack}}**.
4. Delete the old pack you want to replace.
-5. Select **Upload Elastic Stack pack**.
-6. Select a ZIP file that contains an Elastic Stack pack and upload it.
+5. Select **Upload {{stack}} pack**.
+6. Select a ZIP file that contains an {{stack}} pack and upload it.
- After the stack pack has been uploaded successfully, the updated Elastic Stack version replaces the existing one.
+ After the stack pack has been uploaded successfully, the updated {{stack}} version replaces the existing one.
-To update Elastic Stack packs through the RESTful API from the command line:
+To update {{stack}} packs through the RESTful API from the command line:
1. Download an updated pack on an internet-connected host from Elastic and make it available locally.
-2. Update the Elastic Stack pack with the following API call:
+2. Update the {{stack}} pack with the following API call:
```sh
curl -X PUT -u USER:PASSWORD https://COORDINATOR_HOST:12443/api/v1/stack/versions/VERSION \ <1>
@@ -443,7 +443,7 @@ To update Elastic Stack packs through the RESTful API from the command line:
```
1. The version being updated
- 2. The local path and the updated Elastic Stack pack .zip file
+ 2. The local path and the updated {{stack}} pack .zip file
For example:
diff --git a/deploy-manage/deploy/cloud-enterprise/manage-integrations-server.md b/deploy-manage/deploy/cloud-enterprise/manage-integrations-server.md
index 206b8c2c0f..abc3727366 100644
--- a/deploy-manage/deploy/cloud-enterprise/manage-integrations-server.md
+++ b/deploy-manage/deploy/cloud-enterprise/manage-integrations-server.md
@@ -8,22 +8,22 @@ mapped_pages:
# Manage your Integrations Server [ece-manage-integrations-server]
-For deployments that are version 8.0 and later, you have the option to add a combined [Application Performance Monitoring (APM) Server](/solutions/observability/apps/application-performance-monitoring-apm.md) and [Fleet Server](/reference/fleet/index.md) to your deployment. APM allows you to monitor software services and applications in real time, turning that data into documents stored in the Elasticsearch cluster. Fleet allows you to centrally manage Elastic Agents on many hosts.
+For deployments that are version 8.0 and later, you have the option to add a combined [Application Performance Monitoring (APM) Server](/solutions/observability/apps/application-performance-monitoring-apm.md) and [Fleet Server](/reference/fleet/index.md) to your deployment. APM allows you to monitor software services and applications in real time, turning that data into documents stored in the {{es}} cluster. Fleet allows you to centrally manage Elastic Agents on many hosts.
-As part of provisioning, the APM Server and Fleet Server are already configured to work with Elasticsearch and Kibana. At the end of provisioning, you are shown the secret token to configure communication between the APM Server and the backend [APM Agents](https://www.elastic.co/guide/en/apm/agent/index.html). The APM Agents get deployed within your services and applications.
+As part of provisioning, the APM Server and Fleet Server are already configured to work with {{es}} and {{kib}}. At the end of provisioning, you are shown the secret token to configure communication between the APM Server and the backend [APM Agents](https://www.elastic.co/guide/en/apm/agent/index.html). The APM Agents get deployed within your services and applications.
From the deployment **Integrations Server** page you can also:
* Get the URL to complete the APM agent configuration.
-* Use the `elastic` credentials to go to the APM area of Kibana. Step by step instructions to configure a variety of agents are available right in Kibana. After that, you can use the pre-built, dedicated dashboards and the APM tab to visualize the data that is sent back from the APM Agents.
-* Use the `elastic` credentials to go to the Fleet area of Kibana. Step by step instructions to download and install Elastic Agent on your hosts are available right in Kibana. After that, you can manage enrolled Elastic Agents on the **Agents** tab, and the data shipped back from those Elastic Agents on the **Data streams** tab.
+* Use the `elastic` credentials to go to the APM area of {{kib}}. Step by step instructions to configure a variety of agents are available right in {{kib}}. After that, you can use the pre-built, dedicated dashboards and the APM tab to visualize the data that is sent back from the APM Agents.
+* Use the `elastic` credentials to go to the Fleet area of {{kib}}. Step by step instructions to download and install Elastic Agent on your hosts are available right in {{kib}}. After that, you can manage enrolled Elastic Agents on the **Agents** tab, and the data shipped back from those Elastic Agents on the **Data streams** tab.
* Access the Integrations Server logs and metrics.
* Stop and restart your Integrations Server.
-* Upgrade your Integrations Server version if it is out of sync with your Elasticsearch cluster.
+* Upgrade your Integrations Server version if it is out of sync with your {{es}} cluster.
* Fully remove the Integrations Server, delete it from the disk, and stop the charges.
::::{important}
-The APM secret token can no longer be reset from the Elastic Cloud Enterprise UI. Check [Secret token](/solutions/observability/apps/secret-token.md) for instructions on managing a secret token. Note that resetting the token disrupts your APM service and restarts the server. When the server restarts, you’ll need to update all of your agents with the new token.
+The APM secret token can no longer be reset from the {{ece}} UI. Check [Secret token](/solutions/observability/apps/secret-token.md) for instructions on managing a secret token. Note that resetting the token disrupts your APM service and restarts the server. When the server restarts, you’ll need to update all of your agents with the new token.
::::
## Routing to Fleet Server [ece-integrations-server-fleet-routing]
diff --git a/deploy-manage/deploy/cloud-enterprise/migrate-ece-to-podman-hosts.md b/deploy-manage/deploy/cloud-enterprise/migrate-ece-to-podman-hosts.md
index 90f4374a14..9331012b63 100644
--- a/deploy-manage/deploy/cloud-enterprise/migrate-ece-to-podman-hosts.md
+++ b/deploy-manage/deploy/cloud-enterprise/migrate-ece-to-podman-hosts.md
@@ -298,7 +298,7 @@ Using Docker or Podman as container runtime is a configuration local to the host
sudo install -o elastic -g elastic -d -m 700 /mnt/data
```
-21. As a sudoers user, modify the entry for the XFS volume in the `/etc/fstab` file to add `pquota,prjquota`. The default filesystem path used by Elastic Cloud Enterprise is `/mnt/data`.
+21. As a sudoers user, modify the entry for the XFS volume in the `/etc/fstab` file to add `pquota,prjquota`. The default filesystem path used by {{ece}} is `/mnt/data`.
::::{note}
Replace `/dev/nvme1n1` in the following example with the corresponding device on your host, and add this example configuration as a single line to `/etc/fstab`.
@@ -352,7 +352,7 @@ Using Docker or Podman as container runtime is a configuration local to the host
vm.max_map_count=262144
# enable forwarding so the Docker networking works as expected
net.ipv4.ip_forward=1
- # Decrease the maximum number of TCP retransmissions to 5 as recommended for Elasticsearch TCP retransmission timeout.
+ # Decrease the maximum number of TCP retransmissions to 5 as recommended for {{es}} TCP retransmission timeout.
# See /deploy-manage/deploy/self-managed/system-config-tcpretries.md
net.ipv4.tcp_retries2=5
# Make sure the host doesn't swap too early
@@ -416,7 +416,7 @@ Using Docker or Podman as container runtime is a configuration local to the host
1. Use the ECE installer script together with the `--podman` flag to add the additional host as a podman-based host.
- Refer to the official [Install Elastic Cloud Enterprise on an additional host](install-ece-on-additional-hosts.md) and [Install ECE online](./install.md) documentation to adapt the command line parameters to your environment including fetching the role token. + Refer to the official [Install {{ece}} on an additional host](install-ece-on-additional-hosts.md) and [Install ECE online](./install.md) documentation to adapt the command line parameters to your environment including fetching the role token. [JVM heap sizes](ece-jvm.md) describes recommended JVM options. diff --git a/deploy-manage/deploy/cloud-enterprise/post-installation-steps.md b/deploy-manage/deploy/cloud-enterprise/post-installation-steps.md index e1ece13499..0809cec1ae 100644 --- a/deploy-manage/deploy/cloud-enterprise/post-installation-steps.md +++ b/deploy-manage/deploy/cloud-enterprise/post-installation-steps.md @@ -8,7 +8,7 @@ mapped_pages: # Post-installation steps [ece-getting-started-post-installation] -After your Elastic Cloud Enterprise installation is up, some additional steps might be required. For a complete list of configurable features in ECE, refer to [](./configure.md). +After your {{ece}} installation is up, some additional steps might be required. For a complete list of configurable features in ECE, refer to [](./configure.md). ::::{tip} To start creating {{es}} deployments directly, refer to [](./working-with-deployments.md). @@ -30,11 +30,11 @@ To start creating {{es}} deployments directly, refer to [](./working-with-deploy For example, if your proxy certificate is signed for `*.elastic-cloud-enterprise.example.com` and you have a wildcard DNS register pointing `*.elastic-cloud-enterprise.example.com` to your load balancer, you should configure `elastic-cloud-enterprise.example.com` as the **deployment domain name** in Platform → Settings. Refer to [](./change-endpoint-urls.md) for more details. 
:::: -* If you received a license from Elastic, [manage the licenses](../../license/manage-your-license-in-ece.md) for your Elastic Cloud Enterprise installation. +* If you received a license from Elastic, [manage the licenses](../../license/manage-your-license-in-ece.md) for your {{ece}} installation. * [Add more platform users](../../users-roles/cloud-enterprise-orchestrator/manage-users-roles.md) with role-based access control. -* [Add a snapshot repository](../../tools/snapshot-and-restore/cloud-enterprise.md) to enable regular backups of your Elasticsearch clusters. +* [Add a snapshot repository](../../tools/snapshot-and-restore/cloud-enterprise.md) to enable regular backups of your {{es}} clusters. * Consider enabling encryption-at-rest (EAR) on your hosts. diff --git a/deploy-manage/deploy/cloud-enterprise/prepare-environment.md b/deploy-manage/deploy/cloud-enterprise/prepare-environment.md index 594a2730d3..4bb2c067f5 100644 --- a/deploy-manage/deploy/cloud-enterprise/prepare-environment.md +++ b/deploy-manage/deploy/cloud-enterprise/prepare-environment.md @@ -12,7 +12,7 @@ In this section you'll find all the prerequisites and environment preparations r ## Requirements [ece-prepare-requirements] -In {{ece}}, an ECE host is the server, virtual machine, or cloud instance where the ECE software is installed. An ECE installation consists of a cluster of multiple hosts, forming the platform where Elastic Stack applications are orchestrated. +In {{ece}}, an ECE host is the server, virtual machine, or cloud instance where the ECE software is installed. An ECE installation consists of a cluster of multiple hosts, forming the platform where {{stack}} applications are orchestrated. 
To prepare your hosts for installation, the following prerequisites **must** be met: diff --git a/deploy-manage/deploy/cloud-enterprise/resize-deployment.md b/deploy-manage/deploy/cloud-enterprise/resize-deployment.md index c7002ba696..52e6cd8d31 100644 --- a/deploy-manage/deploy/cloud-enterprise/resize-deployment.md +++ b/deploy-manage/deploy/cloud-enterprise/resize-deployment.md @@ -8,7 +8,7 @@ mapped_pages: # Resize deployment [ece-resize-deployment] -Elasticsearch scales to whatever capacity you need and with as many nodes as the available resources can support. If you don’t have enough available resources, [add some capacity first](../../maintenance/ece/scale-out-installation.md). +{{es}} scales to whatever capacity you need and with as many nodes as the available resources can support. If you don’t have enough available resources, [add some capacity first](../../maintenance/ece/scale-out-installation.md). ::::{tip} You can also enable autoscaling on a deployment to have the available resources for components, such as [data tiers](/manage-data/lifecycle/data-tiers.md) and [machine learning](/explore-analyze/machine-learning.md) nodes, adjust automatically as the demands on the deployment change over time. Check [Deployment autoscaling](/deploy-manage/autoscaling/autoscaling-in-ece-and-ech.md) to learn more. @@ -32,7 +32,7 @@ To resize a deployment: :::: RAM per instance - : Node and instance capacity should be sufficient to sustain your search workload, even if you lose an availability zone. Currently, half of the memory is assigned to the JVM heap. For example, on an Elasticsearch cluster node with 32 GB RAM, 16 GB would be allotted to heap. Up to 64 GB RAM and 1 TB storage per node are supported. + : Node and instance capacity should be sufficient to sustain your search workload, even if you lose an availability zone. Currently, half of the memory is assigned to the JVM heap. 
For example, on an {{es}} cluster node with 32 GB RAM, 16 GB would be allotted to heap. Up to 64 GB RAM and 1 TB storage per node are supported. Before finalizing your changes, you can review the **Architecture** summary, which shows the total number of instances per zone, with each circle color representing a different type of instance. diff --git a/deploy-manage/deploy/cloud-enterprise/search-filter-deployments.md b/deploy-manage/deploy/cloud-enterprise/search-filter-deployments.md index 8c64c747ae..8e54a65eea 100644 --- a/deploy-manage/deploy/cloud-enterprise/search-filter-deployments.md +++ b/deploy-manage/deploy/cloud-enterprise/search-filter-deployments.md @@ -8,7 +8,7 @@ mapped_pages: # Search and filter deployments [ece-find] -When you installed Elastic Cloud Enterprise and [logged into the Cloud UI](log-into-cloud-ui.md) for the first time, you were greeted by two deployments. We’ve also shown you how to [create your own first deployment](create-deployment.md), but that still only makes a few deployments. What if you had hundreds of deployments to look after or maybe even a thousand? How would you find the ones that need your attention? +When you installed {{ece}} and [logged into the Cloud UI](log-into-cloud-ui.md) for the first time, you were greeted by two deployments. We’ve also shown you how to [create your own first deployment](create-deployment.md), but that still only makes a few deployments. What if you had hundreds of deployments to look after or maybe even a thousand? How would you find the ones that need your attention? 
The **Deployments** page in the Cloud UI provides several ways to find deployments that might need your attention, whether that’s deployments that have a problem or deployments that are at a specific version level or really almost anything you might want to find on a complex production system: diff --git a/deploy-manage/deploy/cloud-enterprise/statistics-collected-by-cloud-enterprise.md b/deploy-manage/deploy/cloud-enterprise/statistics-collected-by-cloud-enterprise.md index a14f02ae68..c24ae957f7 100644 --- a/deploy-manage/deploy/cloud-enterprise/statistics-collected-by-cloud-enterprise.md +++ b/deploy-manage/deploy/cloud-enterprise/statistics-collected-by-cloud-enterprise.md @@ -6,11 +6,11 @@ mapped_pages: - https://www.elastic.co/guide/en/cloud-enterprise/current/ece-phone-home.html --- -# Statistics collected by Elastic Cloud Enterprise [ece-phone-home] +# Statistics collected by {{ece}} [ece-phone-home] -When you [log into the Cloud UI](log-into-cloud-ui.md) for the first time, you are asked to agree to the software license agreement and can opt out of sharing some basic usage statistics about Elastic Cloud Enterprise with Elastic. These statistics are never shared with anyone else. If you are unsure about opting out, the following information describes what we collect. +When you [log into the Cloud UI](log-into-cloud-ui.md) for the first time, you are asked to agree to the software license agreement and can opt out of sharing some basic usage statistics about {{ece}} with Elastic. These statistics are never shared with anyone else. If you are unsure about opting out, the following information describes what we collect. 
-For each Elastic Cloud Enterprise installation, we collect: +For each {{ece}} installation, we collect: * Installation ID * License information @@ -20,11 +20,11 @@ For each Elastic Cloud Enterprise installation, we collect: * The total RAM used by allocators * The availability zone each allocator belongs to * The total RAM available to and the total RAM used by each availability zone -* The number of Elasticsearch clusters +* The number of {{es}} clusters -For each Elasticsearch cluster, we collect: +For each {{es}} cluster, we collect: -* Whether a cluster has a Kibana instance associated with it +* Whether a cluster has a {{kib}} instance associated with it * Whether monitoring is configured Sharing these statistics with us can help us understand how you use our product better and can help us improve the product. diff --git a/deploy-manage/deploy/cloud-enterprise/switch-from-apm-to-integrations-server-payload.md b/deploy-manage/deploy/cloud-enterprise/switch-from-apm-to-integrations-server-payload.md index 06b361e5d5..1160f6be25 100644 --- a/deploy-manage/deploy/cloud-enterprise/switch-from-apm-to-integrations-server-payload.md +++ b/deploy-manage/deploy/cloud-enterprise/switch-from-apm-to-integrations-server-payload.md @@ -8,12 +8,12 @@ mapped_pages: # Switch from APM to Integrations Server payload [ece-integrations-server-apm-switch] -This example shows how to use the Elastic Cloud Enterprise RESTful API to switch from using [APM & Fleet Server](https://www.elastic.co/guide/en/cloud-enterprise/current/ece-manage-apm-and-fleet.html) to [Integrations Server](manage-integrations-server.md). +This example shows how to use the {{ece}} RESTful API to switch from using [APM & Fleet Server](https://www.elastic.co/guide/en/cloud-enterprise/current/ece-manage-apm-and-fleet.html) to [Integrations Server](manage-integrations-server.md). 
## Requirements [ece_requirements_5]

-Given a deployment that is using an APM & Fleet Server with Elastic Stack version 8.0 or later, it is possible to start using Integrations Server instead by updating the deployment with an Integrations Server payload. Switching from APM & Fleet Server to Integrations Server in this way ensures that the endpoints and credentials currently used by APM Server and Fleet Server remain the same after the switch.
+Given a deployment that uses APM & Fleet Server with {{stack}} version 8.0 or later, you can switch to Integrations Server by updating the deployment with an Integrations Server payload. Switching from APM & Fleet Server to Integrations Server in this way ensures that the endpoints and credentials currently used by APM Server and Fleet Server remain the same after the switch.

In order to start using the Integrations Server payload, you first need to enable the APM integration for Elastic Agent by following the steps in [Switch to the Elastic APM integration](/solutions/observability/apps/switch-an-elastic-cloud-cluster-to-apm-integration.md).

diff --git a/deploy-manage/deploy/cloud-enterprise/system-deployments-configuration.md b/deploy-manage/deploy/cloud-enterprise/system-deployments-configuration.md
index 8f89099625..6361efc7bc 100644
--- a/deploy-manage/deploy/cloud-enterprise/system-deployments-configuration.md
+++ b/deploy-manage/deploy/cloud-enterprise/system-deployments-configuration.md
@@ -8,7 +8,7 @@ mapped_pages:

# System deployments configuration [ece-system-clusters-configuration]

-When installing ECE, you will notice that several Elasticsearch clusters get created as part of the installation process. Those are the *system deployments* which are part of the ECE control plane. You must make sure that they are configured and sized correctly to ensure you have a production-ready installation.
+When installing ECE, you will notice that several {{es}} clusters get created as part of the installation process. Those are the *system deployments* which are part of the ECE control plane. You must make sure that they are configured and sized correctly to ensure you have a production-ready installation. We will review each cluster and provide recommendations to make sure that you are following best practices when starting your ECE journey. @@ -27,7 +27,7 @@ Logging and metrics - `logging-and-metrics` : As part of an ECE environment, a Beats sidecar with Filebeat and Metricbeat is installed on each ECE host. The logs and metrics collected by those beats are indexed in the `logging-and-metrics` cluster. This includes ECE service logs, such as proxy logs, director logs, and more. It also includes hosted deployments logs, security cluster audit logs, and metrics, such as CPU and disk usage. Data is collected from all hosts. This information is critical in order to be able to monitor ECE and troubleshoot issues. You can also use this data to configure watches to alert you in case of an issue, or machine learning jobs that can provide alerts based on anomalies or forecasting. Security - `security` -: When you enable the user management feature, you trigger the creation of a third system deployment named `security`. This cluster stores all security-related configurations, such as native users and the related native realm, integration with SAML or LDAP as external authentication providers and their role mapping, and the realm ordering. The health of this cluster is critical to provide access to the ECE Cloud UI and REST API. To learn more, check [Configure role-based access control](../../users-roles/cloud-enterprise-orchestrator/manage-users-roles.md). Beginning with Elastic Cloud Enterprise 2.5.0 the `security` cluster is created automatically for you. 
It is recommended to use the [dedicated API](https://www.elastic.co/docs/api/doc/cloud-enterprise/operation/operation-update-security-deployment) to manage the cluster. +: When you enable the user management feature, you trigger the creation of a third system deployment named `security`. This cluster stores all security-related configurations, such as native users and the related native realm, integration with SAML or LDAP as external authentication providers and their role mapping, and the realm ordering. The health of this cluster is critical to provide access to the ECE Cloud UI and REST API. To learn more, check [Configure role-based access control](../../users-roles/cloud-enterprise-orchestrator/manage-users-roles.md). Beginning with {{ece}} 2.5.0 the `security` cluster is created automatically for you. It is recommended to use the [dedicated API](https://www.elastic.co/docs/api/doc/cloud-enterprise/operation/operation-update-security-deployment) to manage the cluster. ## High availability [ece_high_availability] @@ -36,7 +36,7 @@ ECE supports the concept of [availability zones](ece-ha.md) and requires three a The system deployments are created when you install ECE or enable the user management feature, at which point they are not yet configured for high availability. As soon as you finish the installation process, you should change the configuration to ensure your system deployments are highly available and deployed across two or three availability zones. To configure your system deployments to be highly available, navigate to the **Edit** page for the cluster and change the number of availability zones under **Fault tolerance**. -For the `logging-and-metrics` cluster, you might want to also make sure that your Kibana instance and other components are deployed across multiple availability zones, since you will often access that cluster using Kibana. You can change the availability zones for Kibana on the same **Edit** page. 
+For the `logging-and-metrics` cluster, you might also want to make sure that your {{kib}} instance and other components are deployed across multiple availability zones, since you will often access that cluster using {{kib}}. You can change the availability zones for {{kib}} on the same **Edit** page.

::::{note}
For the `security` cluster, the number of zones must be set to 3 for high availability, otherwise you may encounter errors when trying to upgrade ECE versions.
::::

@@ -66,7 +66,7 @@ When sizing your `logging-and-metrics` cluster, consider:

In the case of the `admin-console-elasticsearch` and `security` system deployments, the team managing ECE and assigned to the platform admin role should have permission to change each system deployment configuration and also to access each cluster itself.

-The `logging-and-metrics` cluster is different since, as an ECE admin, you likely want to provide users with access to the cluster in order to troubleshoot issues without your assistance, for example. 
In order to manage access to that cluster, you can configure roles that will provide access to the relevant indices, map those to users, and manage access to {{kib}} by leveraging the Elastic security integration with your authentication provider, such as LDAP, SAML, or AD. To configure one of those security realms, check [LDAP](../../users-roles/cluster-or-deployment-auth/ldap.md), [Active Directory](../../users-roles/cluster-or-deployment-auth/active-directory.md) or [SAML](../../users-roles/cluster-or-deployment-auth/saml.md). ::::{note} The `logging-and-metrics` cluster is only intended for troubleshooting ECE deployment issues. If your use case involves modifying or normalizing logs from {{es}} or {{kib}}, use a separate [dedicated monitoring deployment](../../monitor/stack-monitoring/ece-ech-stack-monitoring.md) instead. diff --git a/deploy-manage/deploy/cloud-enterprise/tools-apis.md b/deploy-manage/deploy/cloud-enterprise/tools-apis.md index e6512264e6..b218a80554 100644 --- a/deploy-manage/deploy/cloud-enterprise/tools-apis.md +++ b/deploy-manage/deploy/cloud-enterprise/tools-apis.md @@ -44,7 +44,7 @@ Refer to [{{es}} API conventions](elasticsearch://reference/elasticsearch/rest-a :::{include} /deploy-manage/deploy/_snippets/cloud-api-console.md ::: -## Elastic Cloud Control: command-line interface for {{ecloud}} +## {{ecloud}} Control: command-line interface for {{ecloud}} :::{include} /deploy-manage/deploy/_snippets/ecctl.md ::: diff --git a/deploy-manage/deploy/cloud-enterprise/working-with-deployments.md b/deploy-manage/deploy/cloud-enterprise/working-with-deployments.md index 3dbf2892f7..6dbcd91717 100644 --- a/deploy-manage/deploy/cloud-enterprise/working-with-deployments.md +++ b/deploy-manage/deploy/cloud-enterprise/working-with-deployments.md @@ -24,7 +24,7 @@ All of these profiles are based on [deployment templates](./deployment-templates ## Creating deployments -Refer to [Create a deployment](./create-deployment.md) to launch and configure an 
Elastic Stack environment. +Refer to [Create a deployment](./create-deployment.md) to launch and configure an {{stack}} environment. ## Administering deployments [ece-administering-deployments] diff --git a/deploy-manage/deploy/cloud-on-k8s.md b/deploy-manage/deploy/cloud-on-k8s.md index 6f27963dbf..f14a5f6113 100644 --- a/deploy-manage/deploy/cloud-on-k8s.md +++ b/deploy-manage/deploy/cloud-on-k8s.md @@ -10,13 +10,13 @@ mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s_learn_more_about_eck.html --- -# Elastic Cloud on Kubernetes [k8s-overview] +# {{eck}} [k8s-overview] -Built on the Kubernetes Operator pattern, {{eck}} (ECK) extends the basic Kubernetes orchestration capabilities to support the setup and management of Elasticsearch, Kibana, APM Server, Beats, Elastic Agent, Elastic Maps Server, and Logstash on Kubernetes. +Built on the Kubernetes Operator pattern, {{eck}} (ECK) extends the basic Kubernetes orchestration capabilities to support the setup and management of {{es}}, {{kib}}, APM Server, Beats, Elastic Agent, Elastic Maps Server, and Logstash on Kubernetes. ## ECK overview -With Elastic Cloud on Kubernetes, you can streamline critical operations, such as: +With {{eck}}, you can streamline critical operations, such as: 1. Managing and monitoring multiple clusters 2. Scaling cluster capacity and storage @@ -24,7 +24,7 @@ With Elastic Cloud on Kubernetes, you can streamline critical operations, such a 4. Securing clusters with TLS certificates 5. Setting up hot-warm-cold architectures with availability zone awareness -This section provides everything you need to install, configure, and manage Elastic Stack applications with ECK, including: +This section provides everything you need to install, configure, and manage {{stack}} applications with ECK, including: - [](./cloud-on-k8s/deploy-an-orchestrator.md): ECK installation methods and configuration options. 
Deploy ECK on managed Kubernetes platforms like GKE, AKS, and EKS, on self-managed Kubernetes clusters, on OpenShift, and even in air-gapped environments. - [](./cloud-on-k8s/manage-deployments.md): Handle {{es}} clusters and {{kib}} instances through ECK. @@ -61,7 +61,7 @@ Afterwards, you can: ## Supported versions [k8s-supported] -This section outlines the supported Kubernetes and Elastic Stack versions for ECK. Check the full [Elastic support matrix](https://www.elastic.co/support/matrix#matrix_kubernetes) for more information. +This section outlines the supported Kubernetes and {{stack}} versions for ECK. Check the full [Elastic support matrix](https://www.elastic.co/support/matrix#matrix_kubernetes) for more information. ### Kubernetes compatibility @@ -76,22 +76,22 @@ ECK should work with all conformant **installers** listed in these [FAQs](https: Alpha, beta, and stable API versions follow the same [conventions used by Kubernetes](https://kubernetes.io/docs/concepts/overview/kubernetes-api/#api-versioning). -### Elastic Stack compatibility +### {{stack}} compatibility -ECK is compatible with the following Elastic Stack applications: +ECK is compatible with the following {{stack}} applications: -* Elasticsearch, Kibana, APM Server: 7.17+, 8+, 9+ +* {{es}}, {{kib}}, APM Server: 7.17+, 8+, 9+ * Enterprise Search: 7.17+, 8+ * Beats: 7.17+, 8+, 9+ * Elastic Agent: 7.10+ (standalone), 7.17+ (Fleet), 8+, 9+ * Elastic Maps Server: 7.17+, 8+, 9+ * Logstash: 8.7+, 9+ -Elastic Stack application images for the OpenShift-certified Elasticsearch (ECK) Operator are only available from version 7.10 and later. +{{stack}} application images for the OpenShift-certified {{es}} (ECK) Operator are only available from version 7.10 and later. 
## Learn more about ECK [k8s_learn_more_about_eck] -* [Orchestrate Elasticsearch on Kubernetes](https://www.elastic.co/elasticsearch-kubernetes) +* [Orchestrate {{es}} on Kubernetes](https://www.elastic.co/elasticsearch-kubernetes) * [ECK post on the Elastic Blog](https://www.elastic.co/blog/introducing-elastic-cloud-on-kubernetes-the-elasticsearch-operator-and-beyond?elektra=products&storm=sub1) -* [Getting Started With Elastic Cloud on Kubernetes (ECK)](https://www.youtube.com/watch?v=PIJmlYBIFXM) -* [Running the Elastic Stack on Kubernetes with ECK](https://www.youtube.com/watch?v=Wf6E3vkvEFM) +* [Getting Started With {{eck}} (ECK)](https://www.youtube.com/watch?v=PIJmlYBIFXM) +* [Running the {{stack}} on Kubernetes with ECK](https://www.youtube.com/watch?v=Wf6E3vkvEFM) diff --git a/deploy-manage/deploy/cloud-on-k8s/accessing-services.md b/deploy-manage/deploy/cloud-on-k8s/accessing-services.md index e6a051c3fc..b27ecc12f7 100644 --- a/deploy-manage/deploy/cloud-on-k8s/accessing-services.md +++ b/deploy-manage/deploy/cloud-on-k8s/accessing-services.md @@ -12,7 +12,7 @@ mapped_pages: To provide access to {{es}}, {{kib}}, and other {{stack}} applications when applicable, ECK relies on [Kubernetes services](https://kubernetes.io/docs/concepts/services-networking/service/). -All Elastic Stack resources deployed by the ECK operator are secured by default. The operator sets up basic authentication and TLS to encrypt network traffic to, from, and within your Elasticsearch cluster. +All {{stack}} resources deployed by the ECK operator are secured by default. The operator sets up basic authentication and TLS to encrypt network traffic to, from, and within your {{es}} cluster. 
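For orientation, the resources that these services and secrets are created for are declared through manifests. The following is a minimal sketch only; the resource name `hulk` and version `8.16.1` are illustrative, chosen to match the `hulk-kb-http` service name that appears in the examples below:

```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: hulk
spec:
  version: 8.16.1
  nodeSets:
  - name: default
    count: 3
    config:
      # disable mmap so no vm.max_map_count host tuning is needed in this sketch
      node.store.allow_mmap: false
---
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: hulk
spec:
  version: 8.16.1
  count: 1
  elasticsearchRef:
    name: hulk
```

Applying manifests like these results in ECK creating, among other resources, `<name>-es-http` and `<name>-kb-http` services, secured with TLS and basic authentication by default.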
This section explains how to access and customize the Kubernetes services and secrets created by ECK, covering topics such as: @@ -63,7 +63,7 @@ hulk-kb-http ClusterIP 10.19.247.151 5601/TCP 1 You can expose services in [different ways](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types) by specifying an `http.service.spec.type` in the `spec` of the resource manifest. On cloud providers which support external load balancers, you can set the `type` field to `LoadBalancer` to provision a load balancer for the `Service`, and populate the column `EXTERNAL-IP` after a short delay. Depending on the cloud provider, it may incur costs. -By default, the Elasticsearch service created by ECK is configured to route traffic to all Elasticsearch nodes in the cluster. Depending on your cluster configuration, you may want more control over the set of nodes that handle different types of traffic (query, ingest, and so on). Refer to [](./requests-routing-to-elasticsearch-nodes.md) for more information. +By default, the {{es}} service created by ECK is configured to route traffic to all {{es}} nodes in the cluster. Depending on your cluster configuration, you may want more control over the set of nodes that handle different types of traffic (query, ingest, and so on). Refer to [](./requests-routing-to-elasticsearch-nodes.md) for more information. ::::{warning} When you change the `clusterIP` setting of the service, ECK will delete and re-create the service as `clusterIP` is an immutable field. Depending on your client implementation, this might result in a short disruption until the service DNS entries refresh to point to the new endpoints. @@ -93,7 +93,7 @@ hulk-kb-http LoadBalancer 10.19.247.151 35.242.197.228 5601:31380/T ## Access the endpoint [k8s-request-elasticsearch-endpoint] -You can access the Elasticsearch endpoint within or outside the Kubernetes cluster. 
+You can access the {{es}} endpoint within or outside the Kubernetes cluster. **Within the Kubernetes cluster** diff --git a/deploy-manage/deploy/cloud-on-k8s/advanced-configuration-maps-server.md b/deploy-manage/deploy/cloud-on-k8s/advanced-configuration-maps-server.md index 58a7248765..ee8c01fcfc 100644 --- a/deploy-manage/deploy/cloud-on-k8s/advanced-configuration-maps-server.md +++ b/deploy-manage/deploy/cloud-on-k8s/advanced-configuration-maps-server.md @@ -13,7 +13,7 @@ This functionality is in technical preview and may be changed or removed in a fu :::: -If you already looked at the [Elasticsearch on ECK](elasticsearch-configuration.md) documentation, some of these concepts might sound familiar to you. The resource definitions in ECK share the same philosophy when you want to: +If you already looked at the [{{es}} on ECK](elasticsearch-configuration.md) documentation, some of these concepts might sound familiar to you. The resource definitions in ECK share the same philosophy when you want to: * Customize the Pod configuration * Customize the product configuration @@ -58,7 +58,7 @@ stringData: logging.level: debug ``` -Refer to [Set compute resources for Kibana, Elastic Maps Server, APM Server and Logstash](manage-compute-resources.md#k8s-compute-resources-kibana-and-apm) for adjusting compute resources for Elastic Maps Server. +Refer to [Set compute resources for {{kib}}, Elastic Maps Server, APM Server and Logstash](manage-compute-resources.md#k8s-compute-resources-kibana-and-apm) for adjusting compute resources for Elastic Maps Server. 
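Tying this back to service exposure: the `http.service.spec.type` setting described earlier applies to {{kib}} and other {{stack}} resources as well. A sketch of exposing {{kib}} through a cloud load balancer, with an illustrative name and version:

```yaml
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: hulk
spec:
  version: 8.16.1
  count: 1
  elasticsearchRef:
    name: hulk
  http:
    service:
      spec:
        # provisions an external load balancer on providers that support it
        type: LoadBalancer
```

On supported cloud providers, the `EXTERNAL-IP` column of the resulting `hulk-kb-http` service is populated once the load balancer has been provisioned.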
## Scale out an Elastic Maps Server deployment [k8s-maps-scaling] diff --git a/deploy-manage/deploy/cloud-on-k8s/advanced-configuration.md b/deploy-manage/deploy/cloud-on-k8s/advanced-configuration.md index 67a0ea21e1..64c00a2bfc 100644 --- a/deploy-manage/deploy/cloud-on-k8s/advanced-configuration.md +++ b/deploy-manage/deploy/cloud-on-k8s/advanced-configuration.md @@ -13,11 +13,11 @@ This section covers the following topics: * [Use APM Agent central configuration](#k8s-apm-agent-central-configuration) * [Customize the APM Server configuration](#k8s-apm-customize-configuration) * [Specify secure settings for your APM Server](#k8s-apm-secure-settings) -* [Reference an existing Elasticsearch cluster](#k8s-apm-existing-es) +* [Reference an existing {{es}} cluster](#k8s-apm-existing-es) ## Use APM Agent central configuration [k8s-apm-agent-central-configuration] -[APM Agent configuration management](/solutions/observability/apps/apm-agent-central-configuration.md) [7.5.1] allows you to configure your APM Agents centrally from the Kibana APM app. To use this feature, the APM Server needs to be configured with connection details of the Kibana instance. If Kibana is managed by ECK, you can simply add a `kibanaRef` attribute to the APM Server specification: +[APM Agent configuration management](/solutions/observability/apps/apm-agent-central-configuration.md) [7.5.1] allows you to configure your APM Agents centrally from the {{kib}} APM app. To use this feature, the APM Server needs to be configured with connection details of the {{kib}} instance. If {{kib}} is managed by ECK, you can simply add a `kibanaRef` attribute to the APM Server specification: ```yaml cat <`. To find out which support diagnostics container image matches your version of eck-diagnostics, run the tool once without arguments and it will print the default image in use. 
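The `kibanaRef` wiring described above takes roughly the following shape; this is a hedged sketch with illustrative names and version, not a drop-in manifest:

```yaml
apiVersion: apm.k8s.elastic.co/v1
kind: ApmServer
metadata:
  name: apm-sample
spec:
  version: 8.16.1
  count: 1
  elasticsearchRef:
    name: hulk    # ECK-managed Elasticsearch cluster receiving APM data
  kibanaRef:
    name: hulk    # enables APM Agent central configuration through Kibana
```

With both references set, ECK configures the APM Server's Elasticsearch output and Kibana connection details, including certificates and credentials, automatically.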
diff --git a/deploy-manage/deploy/cloud-on-k8s/apm-server.md b/deploy-manage/deploy/cloud-on-k8s/apm-server.md index 1f07e49371..1fc8758ba4 100644 --- a/deploy-manage/deploy/cloud-on-k8s/apm-server.md +++ b/deploy-manage/deploy/cloud-on-k8s/apm-server.md @@ -10,13 +10,13 @@ mapped_pages: This section describes how to deploy, configure and access an APM Server with ECK. -* [Use an Elasticsearch cluster managed by ECK](use-an-elasticsearch-cluster-managed-by-eck.md) +* [Use an {{es}} cluster managed by ECK](use-an-elasticsearch-cluster-managed-by-eck.md) * [Advanced configuration](advanced-configuration.md) * [Use APM Agent central configuration](advanced-configuration.md#k8s-apm-agent-central-configuration) * [Customize the APM Server configuration](advanced-configuration.md#k8s-apm-customize-configuration) * [APM Secrets keystore for secure settings](advanced-configuration.md#k8s-apm-secure-settings) - * [Reference an existing Elasticsearch cluster](advanced-configuration.md#k8s-apm-existing-es) + * [Reference an existing {{es}} cluster](advanced-configuration.md#k8s-apm-existing-es) * [TLS Certificates](advanced-configuration.md#k8s-apm-tls) * [Connect to the APM Server](connect-to-apm-server.md) diff --git a/deploy-manage/deploy/cloud-on-k8s/configuration-beats.md b/deploy-manage/deploy/cloud-on-k8s/configuration-beats.md index c628c1ee9d..9aa04b81e4 100644 --- a/deploy-manage/deploy/cloud-on-k8s/configuration-beats.md +++ b/deploy-manage/deploy/cloud-on-k8s/configuration-beats.md @@ -10,7 +10,7 @@ mapped_pages: ## Upgrade the Beat specification [k8s-beat-upgrade-specification] -You can upgrade the Beat version or change settings by editing the YAML specification. ECK applies the changes by performing a rolling restart of the Beat Pods. 
Depending on the specification settings that you used, ECK will set the [output](configuration-beats.md#k8s-beat-set-beat-output) part of the config, perform Kibana dashboard [setup](configuration-beats.md#k8s-beat-set-up-kibana-dashboards), restart Beats on certificates rollover and set up the Beats [keystore](configuration-beats.md#k8s-beat-secrets-keystore-for-secure-settings). +You can upgrade the Beat version or change settings by editing the YAML specification. ECK applies the changes by performing a rolling restart of the Beat Pods. Depending on the specification settings that you used, ECK will set the [output](configuration-beats.md#k8s-beat-set-beat-output) part of the config, perform {{kib}} dashboard [setup](configuration-beats.md#k8s-beat-set-up-kibana-dashboards), restart Beats on certificates rollover and set up the Beats [keystore](configuration-beats.md#k8s-beat-secrets-keystore-for-secure-settings). ## Customize Beat configuration [k8s-beat-custom-configuration] @@ -76,9 +76,9 @@ stringData: For more details, check the [Beats configuration](beats://reference/libbeat/config-file-format.md) section. -## Customize the connection to an Elasticsearch cluster [k8s-beat-connect-es] +## Customize the connection to an {{es}} cluster [k8s-beat-connect-es] -The `elasticsearchRef` element allows ECK to automatically configure Beats to establish a secured connection to a managed Elasticsearch cluster. By default it targets all nodes in your cluster. If you want to direct traffic to specific nodes of your Elasticsearch cluster, refer to [*Traffic Splitting*](requests-routing-to-elasticsearch-nodes.md) for more information and examples. +The `elasticsearchRef` element allows ECK to automatically configure Beats to establish a secured connection to a managed {{es}} cluster. By default it targets all nodes in your cluster. 
If you want to direct traffic to specific nodes of your {{es}} cluster, refer to [*Traffic Splitting*](requests-routing-to-elasticsearch-nodes.md) for more information and examples. ## Deploy a Beat [k8s-beat-deploy-elastic-beat] @@ -92,7 +92,7 @@ ECK supports the deployment of the following Beats: * [Packetbeat](https://www.elastic.co/beats/packetbeat) * [Journalbeat](https://www.elastic.co/guide/en/beats/journalbeat/current/index.html) -For each Beat you want to deploy, you can specify the `type` and `version` elements. ECK creates a new user in Elasticsearch with a minimal set of appropriate roles and permissions to enable the use of all Beats features. +For each Beat you want to deploy, you can specify the `type` and `version` elements. ECK creates a new user in {{es}} with a minimal set of appropriate roles and permissions to enable the use of all Beats features. ## Deploy a Community Beat [k8s-beat-deploy-community-beat] @@ -101,18 +101,18 @@ ECK supports the deployment of any Community Beat. 1. Specify the `type` and `version` elements. 2. Set the `image` element to point to the image to be deployed. -3. Make sure the following roles exist in Elasticsearch: +3. Make sure the following roles exist in {{es}}: - * If `elasticsearchRef` is provided, create the role `eck_beat_es_$type_role`, where `$type` is the Beat type. For example, when deploying `kafkabeat`, the role name is `eck_beat_es_kafkabeat_role`. This role must have the permissions required by the Beat. Check the [Elasticsearch documentation](https://www.elastic.co/guide/en/elasticsearch/reference/current/defining-roles.html) for more details. - * If `kibanaRef` is provided, create the role `eck_beat_kibana_$type_role` with the permissions required to setup Kibana dashboards. + * If `elasticsearchRef` is provided, create the role `eck_beat_es_$type_role`, where `$type` is the Beat type. For example, when deploying `kafkabeat`, the role name is `eck_beat_es_kafkabeat_role`. 
This role must have the permissions required by the Beat. Check the [{{es}} documentation](https://www.elastic.co/guide/en/elasticsearch/reference/current/defining-roles.html) for more details. + * If `kibanaRef` is provided, create the role `eck_beat_kibana_$type_role` with the permissions required to set up {{kib}} dashboards. -Alternatively, create a user in Elasticsearch and include the credentials in the Beats `config` for Elasticsearch output, Kibana setup or both. If `elasticsearchRef` and `kibanaRef` are also defined, ECK will use the provided user credentials when setting up the connections. +Alternatively, create a user in {{es}} and include the credentials in the Beats `config` for {{es}} output, {{kib}} setup, or both. If `elasticsearchRef` and `kibanaRef` are also defined, ECK will use the provided user credentials when setting up the connections. -## Set up Kibana dashboards [k8s-beat-set-up-kibana-dashboards] +## Set up {{kib}} dashboards [k8s-beat-set-up-kibana-dashboards] -ECK can instruct Beats to set up example dashboards packaged with the Beat. To enable this, set the `kibanaRef` element in the specification to point to ECK-managed Kibana deployment: +ECK can instruct Beats to set up example dashboards packaged with the Beat. To enable this, set the `kibanaRef` element in the specification to point to the ECK-managed {{kib}} deployment: ```yaml apiVersion: beat.k8s.elastic.co/v1beta1 @@ -125,7 +125,7 @@ spec: ... ``` -ECK will create a new user in Elasticsearch with a minimal set of appropriate roles and permissions that is needed for dashboard setup. +ECK will create a new user in {{es}} with a minimal set of appropriate roles and permissions that is needed for dashboard setup.
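A Community Beat deployment following the steps described above might look like this sketch; the image location, version, and Kafka host are illustrative, and the role `eck_beat_es_kafkabeat_role` is assumed to already exist in the cluster:

```yaml
apiVersion: beat.k8s.elastic.co/v1beta1
kind: Beat
metadata:
  name: kafkabeat
spec:
  type: kafkabeat                             # role eck_beat_es_kafkabeat_role must exist
  version: 8.16.0                             # illustrative version
  image: docker.example.com/kafkabeat:8.16.0  # hypothetical community image
  elasticsearchRef:
    name: quickstart                          # assumed ECK-managed cluster name
  config:
    kafkabeat.hosts: ["kafka:9092"]           # placeholder Beat-specific settings
  deployment: {}
```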
## Secrets keystore for secure settings [k8s-beat-secrets-keystore-for-secure-settings] @@ -159,7 +159,7 @@ Check [Beats documentation](beats://reference/filebeat/keystore.md) for more det ## Set Beat output [k8s-beat-set-beat-output] -If the `elasticsearchRef` element is specified, ECK populates the output section of the Beat config. ECK creates a user with appropriate roles and permissions and uses its credentials. If required, it also mounts the CA certificate in all Beat Pods, and recreates Pods when this certificate changes. Moreover, `elasticsearchRef` element can refer to an ECK-managed Elasticsearch cluster by filling the `name`, `namespace`, `serviceName` fields accordingly, as well as to a Kubernetes secret that contains the connection information to an Elasticsearch cluster not managed by it. In the latter case, for authenticating against the Elasticsearch cluster the secret must contain the fields of `url` and either the `username` with `password` or the `api-key`. +If the `elasticsearchRef` element is specified, ECK populates the output section of the Beat config. ECK creates a user with appropriate roles and permissions and uses its credentials. If required, it also mounts the CA certificate in all Beat Pods, and recreates Pods when this certificate changes. Moreover, the `elasticsearchRef` element can refer to an ECK-managed {{es}} cluster by filling the `name`, `namespace`, `serviceName` fields accordingly, as well as to a Kubernetes secret that contains the connection information to an {{es}} cluster not managed by it. In the latter case, to authenticate against the {{es}} cluster, the secret must contain a `url` field and either `username` and `password` fields or an `api-key` field. Output can be set to any value that is supported by a given Beat. To use it, remove the `elasticsearchRef` element from the specification and include an appropriate output configuration in the `config` or `configRef` elements.
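For instance, a minimal sketch of a Filebeat resource with a manually configured Logstash output, assuming a reachable Logstash service named `logstash-ls-api` (no `elasticsearchRef` is set):

```yaml
apiVersion: beat.k8s.elastic.co/v1beta1
kind: Beat
metadata:
  name: filebeat
spec:
  type: filebeat
  version: 8.16.0                                  # illustrative version
  config:
    filebeat.inputs:
      - type: filestream
        paths: ["/var/log/containers/*.log"]
    output.logstash:                               # manual output instead of elasticsearchRef
      hosts: ["logstash-ls-api.default.svc:5044"]  # assumed Logstash service endpoint
  daemonSet: {}
```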
diff --git a/deploy-manage/deploy/cloud-on-k8s/configuration-examples-beats.md index d5277b2ec0..366afb8e23 100644 --- a/deploy-manage/deploy/cloud-on-k8s/configuration-examples-beats.md +++ b/deploy-manage/deploy/cloud-on-k8s/configuration-examples-beats.md @@ -8,7 +8,7 @@ mapped_pages: # Configuration Examples [k8s-beat-configuration-examples] -In this section you can find manifests that address a number of common use cases and can be your starting point in exploring Beats deployed with ECK. These manifests are self-contained and work out-of-the-box on any non-secured Kubernetes cluster. They all contain three-node Elasticsearch cluster and single Kibana instance. All Beat configurations set up Kibana dashboards if they are available for a given Beat and all required RBAC resources. +In this section you can find manifests that address a number of common use cases and can be your starting point in exploring Beats deployed with ECK. These manifests are self-contained and work out-of-the-box on any non-secured Kubernetes cluster. They all contain a three-node {{es}} cluster and a single {{kib}} instance. All Beat configurations set up {{kib}} dashboards, if they are available for a given Beat, as well as all required RBAC resources. ::::{warning} The examples in this section are purely descriptive and should not be considered to be production-ready. Some of these examples use the `node.store.allow_mmap: false` setting which has performance implications and should be tuned for production workloads, as described in [Virtual memory](virtual-memory.md). @@ -33,7 +33,7 @@ Deploys Metricbeat as a DaemonSet that monitors the usage of the following resou kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/2.16/config/recipes/beats/filebeat_autodiscover.yaml ``` -Deploys Filebeat as a DaemonSet with the autodiscover feature enabled.
It collects logs from Pods in every namespace and loads them to the connected Elasticsearch cluster. +Deploys Filebeat as a DaemonSet with the autodiscover feature enabled. It collects logs from Pods in every namespace and loads them to the connected {{es}} cluster. ## Filebeat with autodiscover for metadata [k8s_filebeat_with_autodiscover_for_metadata] @@ -42,7 +42,7 @@ Deploys Filebeat as a DaemonSet with the autodiscover feature enabled. It collec kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/2.16/config/recipes/beats/filebeat_autodiscover_by_metadata.yaml ``` -Deploys Filebeat as a DaemonSet with the autodiscover feature enabled. Logs from Pods that match the following criteria are shipped to the connected Elasticsearch cluster: +Deploys Filebeat as a DaemonSet with the autodiscover feature enabled. Logs from Pods that match the following criteria are shipped to the connected {{es}} cluster: * Pod is in `log-namespace` namespace * Pod has `log-label: "true"` label @@ -57,13 +57,13 @@ kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/2.16/con Deploys Filebeat as a DaemonSet with the autodiscover feature disabled. Uses the entire logs directory on the host as the input source. This configuration does not require any RBAC resources as no Kubernetes APIs are used. -## Elasticsearch and Kibana Stack Monitoring [k8s_elasticsearch_and_kibana_stack_monitoring] +## {{es}} and {{kib}} Stack Monitoring [k8s_elasticsearch_and_kibana_stack_monitoring] ```sh kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/2.16/config/recipes/beats/stack_monitoring.yaml ``` -Deploys Metricbeat configured for Elasticsearch and Kibana [Stack Monitoring](/deploy-manage/monitor/monitoring-data/visualizing-monitoring-data.md) and Filebeat using autodiscover. Deploys one monitored Elasticsearch cluster and one monitoring Elasticsearch cluster. You can access the Stack Monitoring app in the monitoring cluster’s Kibana. 
+Deploys Metricbeat configured for {{es}} and {{kib}} [Stack Monitoring](/deploy-manage/monitor/monitoring-data/visualizing-monitoring-data.md) and Filebeat using autodiscover. Deploys one monitored {{es}} cluster and one monitoring {{es}} cluster. You can access the Stack Monitoring app in the monitoring cluster’s {{kib}}. ::::{note} In this example, TLS verification is disabled when Metricbeat communicates with the monitored cluster, which is not secure and should not be used in production. To solve this, use custom certificates and configure Metricbeat to verify them. @@ -71,13 +71,13 @@ In this example, TLS verification is disabled when Metricbeat communicates with -## Heartbeat monitoring Elasticsearch and Kibana health [k8s_heartbeat_monitoring_elasticsearch_and_kibana_health] +## Heartbeat monitoring {{es}} and {{kib}} health [k8s_heartbeat_monitoring_elasticsearch_and_kibana_health] ```sh kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/2.16/config/recipes/beats/heartbeat_es_kb_health.yaml ``` -Deploys Heartbeat as a single Pod deployment that monitors the health of Elasticsearch and Kibana by TCP probing their Service endpoints. Heartbeat expects that Elasticsearch and Kibana are deployed in the `default` namespace. +Deploys Heartbeat as a single Pod deployment that monitors the health of {{es}} and {{kib}} by TCP probing their Service endpoints. Heartbeat expects that {{es}} and {{kib}} are deployed in the `default` namespace. ## Auditbeat [k8s_auditbeat] @@ -104,4 +104,4 @@ Deploys Packetbeat as a DaemonSet that monitors DNS on port `53` and HTTP(S) tra kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/2.16/config/recipes/beats/openshift_monitoring.yaml ``` -Deploys Metricbeat as a DaemonSet that monitors the host resource usage (CPU, memory, network, filesystem), OpenShift resources (Nodes, Pods, Containers, Volumes), API Server and Filebeat using autodiscover. 
Deploys an Elasticsearch cluster and Kibana to centralize data collection. +Deploys Metricbeat as a DaemonSet that monitors the host resource usage (CPU, memory, network, filesystem), OpenShift resources (Nodes, Pods, Containers, Volumes), API Server and Filebeat using autodiscover. Deploys an {{es}} cluster and {{kib}} to centralize data collection. diff --git a/deploy-manage/deploy/cloud-on-k8s/configuration-examples-logstash.md b/deploy-manage/deploy/cloud-on-k8s/configuration-examples-logstash.md index f9536eac5b..87a4f9635c 100644 --- a/deploy-manage/deploy/cloud-on-k8s/configuration-examples-logstash.md +++ b/deploy-manage/deploy/cloud-on-k8s/configuration-examples-logstash.md @@ -8,7 +8,7 @@ mapped_pages: # Configuration examples [k8s-logstash-configuration-examples] -This section contains manifests that illustrate common use cases, and can be your starting point in exploring Logstash deployed with ECK. These manifests are self-contained and work out-of-the-box on any non-secured Kubernetes cluster. They all contain a three-node Elasticsearch cluster and a single Kibana instance. +This section contains manifests that illustrate common use cases, and can be your starting point in exploring Logstash deployed with ECK. These manifests are self-contained and work out-of-the-box on any non-secured Kubernetes cluster. They all contain a three-node {{es}} cluster and a single {{kib}} instance. ::::{warning} The examples in this section are for illustration purposes only. They should not be considered production-ready. Some of these examples use the `node.store.allow_mmap: false` setting on {{es}} which has performance implications and should be tuned for production workloads, as described in [Virtual memory](virtual-memory.md). 
@@ -42,13 +42,13 @@ kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/2.16/con Deploys Logstash with a single pipeline defined in a secret, mounted as a volume, and referenced by `path.config` -## Writing to a custom Elasticsearch index [k8s-logstash-configuration-custom-index] +## Writing to a custom {{es}} index [k8s-logstash-configuration-custom-index] ```sh kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/2.16/config/recipes/logstash/logstash-es-role.yaml ``` -Deploys Logstash and Elasticsearch, and creates an updated version of the `eck_logstash_user_role` to write to a user specified index. +Deploys Logstash and {{es}}, and creates an updated version of the `eck_logstash_user_role` to write to a user-specified index. ## Creating persistent volumes for PQ and DLQ [k8s-logstash-configuration-pq-dlq] ```sh kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/2.16/config/recipes/logstash/logstash-volumes.yaml ``` -Deploys Logstash, Beats and Elasticsearch. Logstash is configured with two pipelines: +Deploys Logstash, Beats and {{es}}. Logstash is configured with two pipelines: * a main pipeline for reading from the {{beats}} instance, which will send to the DLQ if it is unable to write to Elasticsearch * a second pipeline, that will read from the DLQ. In addition, persistent queues are set up. This example shows how to configure persistent volumes outside of the default `logstash-data` persistent volume.
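A hedged sketch of such a Logstash resource, with additional persistent volumes for the PQ and DLQ; the claim names, mount paths, and sizes are assumptions, and matching `volumeMounts` in the Pod template may also be required:

```yaml
apiVersion: logstash.k8s.elastic.co/v1alpha1
kind: Logstash
metadata:
  name: logstash
spec:
  version: 8.16.0                   # illustrative version
  count: 1
  config:
    queue.type: persisted           # enable the persistent queue (PQ)
    dead_letter_queue.enable: true  # enable the dead letter queue (DLQ)
    path.queue: /usr/share/logstash/pq
    path.dead_letter_queue: /usr/share/logstash/dlq
  volumeClaimTemplates:
    - metadata:
        name: pq                    # assumed claim name, in addition to logstash-data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 2Gi
    - metadata:
        name: dlq
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 2Gi
```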
-## Elasticsearch and Kibana Stack Monitoring [k8s-logstash-configuration-stack-monitoring] +## {{es}} and {{kib}} Stack Monitoring [k8s-logstash-configuration-stack-monitoring] ```sh kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/2.16/config/recipes/logstash/logstash-monitored.yaml ``` -Deploys an Elasticsearch and Kibana monitoring cluster, and a Logstash that will send its monitoring information to this cluster. You can view the stack monitoring information in the monitoring cluster’s Kibana +Deploys an {{es}} and {{kib}} monitoring cluster, and a Logstash that will send its monitoring information to this cluster. You can view the stack monitoring information in the monitoring cluster’s {{kib}}. -## Multiple pipelines/multiple Elasticsearch clusters [k8s-logstash-configuration-multiple-pipelines] +## Multiple pipelines/multiple {{es}} clusters [k8s-logstash-configuration-multiple-pipelines] ```sh kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/2.16/config/recipes/logstash/logstash-multi.yaml ``` -Deploys Elasticsearch in prod and qa configurations, running in separate namespaces. Logstash is configured with a multiple pipeline→pipeline configuration, with a source pipeline routing to `prod` and `qa` pipelines. +Deploys {{es}} in prod and qa configurations, running in separate namespaces. Logstash is configured with multiple pipelines in a pipeline→pipeline configuration, with a source pipeline routing to the `prod` and `qa` pipelines.
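The pipeline→pipeline routing in this recipe can be sketched with the `pipelines` element; the pipeline IDs, routing condition, and port below are illustrative:

```yaml
apiVersion: logstash.k8s.elastic.co/v1alpha1
kind: Logstash
metadata:
  name: logstash-multi
spec:
  version: 8.16.0    # illustrative version
  count: 1
  pipelines:
    - pipeline.id: source
      config.string: |
        input { beats { port => 5044 } }
        output {
          if [env] == "prod" { pipeline { send_to => ["prod"] } }
          else { pipeline { send_to => ["qa"] } }
        }
    - pipeline.id: prod
      config.string: |
        input { pipeline { address => "prod" } }
        # an elasticsearch output for the prod cluster would go here
    - pipeline.id: qa
      config.string: |
        input { pipeline { address => "qa" } }
        # an elasticsearch output for the qa cluster would go here
```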
diff --git a/deploy-manage/deploy/cloud-on-k8s/configuration-examples-standalone.md b/deploy-manage/deploy/cloud-on-k8s/configuration-examples-standalone.md index 89f2a372b1..8d7a866161 100644 --- a/deploy-manage/deploy/cloud-on-k8s/configuration-examples-standalone.md +++ b/deploy-manage/deploy/cloud-on-k8s/configuration-examples-standalone.md @@ -8,7 +8,7 @@ mapped_pages: # Configuration examples [k8s-elastic-agent-configuration-examples] -This section contains manifests that illustrate common use cases, and can be your starting point in exploring Elastic Agent deployed with ECK. These manifests are self-contained and work out-of-the-box on any non-secured Kubernetes cluster. They all contain a three-node Elasticsearch cluster and a single Kibana instance. Add the corresponding integration package to Kibana to install the dashboards, visualizations and other assets for each of these examples as described in [the Elastic Agent documentation](/reference/fleet/install-elastic-agents.md). +This section contains manifests that illustrate common use cases, and can be your starting point in exploring Elastic Agent deployed with ECK. These manifests are self-contained and work out-of-the-box on any non-secured Kubernetes cluster. They all contain a three-node {{es}} cluster and a single {{kib}} instance. Add the corresponding integration package to {{kib}} to install the dashboards, visualizations and other assets for each of these examples as described in [the Elastic Agent documentation](/reference/fleet/install-elastic-agents.md). ::::{warning} The examples in this section are for illustration purposes only and should not be considered to be production-ready. Some of these examples use the `node.store.allow_mmap: false` setting which has performance implications and should be tuned for production workloads, as described in [Virtual memory](virtual-memory.md). 
@@ -33,13 +33,13 @@ kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/2.16/con Deploys Elastic Agent as a DaemonSet in standalone mode with Kubernetes integration enabled. Collects API server, Container, Event, Node, Pod, Volume and system metrics. -## Multiple Elasticsearch clusters output [k8s_multiple_elasticsearch_clusters_output] +## Multiple {{es}} clusters output [k8s_multiple_elasticsearch_clusters_output] ```sh kubectl apply -f https://raw.githubusercontent.com/elastic/cloud-on-k8s/2.16/config/recipes/elastic-agent/multi-output.yaml ``` -Deploys two Elasticsearch clusters and two Kibana instances together with single Elastic Agent DaemonSet in standalone mode with System integration enabled. System metrics are sent to the `elasticsearch` cluster. Elastic Agent monitoring data is sent to `elasticsearch-mon` cluster. +Deploys two {{es}} clusters and two {{kib}} instances together with a single Elastic Agent DaemonSet in standalone mode with System integration enabled. System metrics are sent to the `elasticsearch` cluster. Elastic Agent monitoring data is sent to the `elasticsearch-mon` cluster. ## Storing local state in host path volume [k8s_storing_local_state_in_host_path_volume] diff --git a/deploy-manage/deploy/cloud-on-k8s/configuration-logstash.md index e6461f4b18..de336e6829 100644 --- a/deploy-manage/deploy/cloud-on-k8s/configuration-logstash.md +++ b/deploy-manage/deploy/cloud-on-k8s/configuration-logstash.md @@ -393,13 +393,13 @@ spec: ``` -## Using Elasticsearch in Logstash pipelines [k8s-logstash-pipelines-es] +## Using {{es}} in Logstash pipelines [k8s-logstash-pipelines-es] ### `elasticsearchRefs` for establishing a secured connection [k8s-logstash-esref] -The `spec.elasticsearchRefs` section provides a mechanism to help configure Logstash to establish a secured connection to one or more ECK managed Elasticsearch clusters.
By default, each `elasticsearchRef` will target all nodes in its referenced Elasticsearch cluster. If you want to direct traffic to specific nodes of your Elasticsearch cluster, refer to [*Traffic Splitting*](requests-routing-to-elasticsearch-nodes.md) for more information and examples. +The `spec.elasticsearchRefs` section provides a mechanism to help configure Logstash to establish a secured connection to one or more ECK managed {{es}} clusters. By default, each `elasticsearchRef` will target all nodes in its referenced {{es}} cluster. If you want to direct traffic to specific nodes of your {{es}} cluster, refer to [*Traffic Splitting*](requests-routing-to-elasticsearch-nodes.md) for more information and examples. -When you use `elasticsearchRefs` in a Logstash pipeline, the Logstash operator creates the necessary resources from the associated Elasticsearch cluster, and provides environment variables to allow these resources to be accessed from the pipeline configuration. Environment variables are replaced at runtime with the appropriate values. The environment variables have a fixed naming convention: +When you use `elasticsearchRefs` in a Logstash pipeline, the Logstash operator creates the necessary resources from the associated {{es}} cluster, and provides environment variables to allow these resources to be accessed from the pipeline configuration. Environment variables are replaced at runtime with the appropriate values. The environment variables have a fixed naming convention: * `NORMALIZED_CLUSTERNAME_ES_HOSTS` * `NORMALIZED_CLUSTERNAME_ES_USER` @@ -422,13 +422,13 @@ where NORMALIZED_CLUSTERNAME is the value taken from the `clusterName` field of ] ``` - You can [update user permissions](../../users-roles/cluster-or-deployment-auth/native.md) to include more indices if the Elasticsearch plugin is expected to use indices other than the default. 
Check out [Logstash configuration with a custom index](configuration-examples-logstash.md#k8s-logstash-configuration-custom-index) sample configuration that creates a user that writes to a custom index. + You can [update user permissions](../../users-roles/cluster-or-deployment-auth/native.md) to include more indices if the {{es}} plugin is expected to use indices other than the default. Check out [Logstash configuration with a custom index](configuration-examples-logstash.md#k8s-logstash-configuration-custom-index) sample configuration that creates a user that writes to a custom index. :::: -This example demonstrates how to create a Logstash deployment that connects to different Elasticsearch instances, one of which is in a separate namespace: +This example demonstrates how to create a Logstash deployment that connects to different {{es}} instances, one of which is in a separate namespace: ```yaml apiVersion: logstash.k8s.elastic.co/v1alpha1 @@ -468,16 +468,16 @@ spec: } ``` -1. Define Elasticsearch references in the CRD. This will create the appropriate Secrets to store certificate details and the rest of the connection information, and create environment variables to allow them to be referred to in Logstash pipeline configurations. -2. This refers to an Elasticsearch cluster residing in the same namespace as the Logstash instances. -3. This refers to an Elasticsearch cluster residing in a different namespace to the Logstash instances. -4. Elasticsearch output definitions - use the environment variables created by the Logstash operator when specifying an `ElasticsearchRef`. Note the use of "normalized" versions of the `clusterName` in the environment variables used to populate the relevant fields. +1. Define {{es}} references in the CRD. This will create the appropriate Secrets to store certificate details and the rest of the connection information, and create environment variables to allow them to be referred to in Logstash pipeline configurations. +2. 
This refers to an {{es}} cluster residing in the same namespace as the Logstash instances. +3. This refers to an {{es}} cluster residing in a different namespace to the Logstash instances. +4. {{es}} output definitions - use the environment variables created by the Logstash operator when specifying an `ElasticsearchRef`. Note the use of "normalized" versions of the `clusterName` in the environment variables used to populate the relevant fields. -### Connect to an external Elasticsearch cluster [k8s-logstash-external-es] +### Connect to an external {{es}} cluster [k8s-logstash-external-es] -Logstash can connect to external Elasticsearch cluster that is not managed by ECK. You can reference a Secret instead of an Elasticsearch cluster in the `elasticsearchRefs` section through the `secretName` attribute: +Logstash can connect to an external {{es}} cluster that is not managed by ECK. You can reference a Secret instead of an {{es}} cluster in the `elasticsearchRefs` section through the `secretName` attribute: ```yaml apiVersion: v1 diff --git a/deploy-manage/deploy/cloud-on-k8s/configuration-standalone.md index 9376978361..d7e63d824e 100644 --- a/deploy-manage/deploy/cloud-on-k8s/configuration-standalone.md +++ b/deploy-manage/deploy/cloud-on-k8s/configuration-standalone.md @@ -108,14 +108,14 @@ stringData: period: 10s ``` -You can use the Fleet application in Kibana to generate the configuration for Elastic Agent, even when running in standalone mode. Check the [Elastic Agent standalone](/reference/fleet/install-standalone-elastic-agent.md) documentation. Adding the corresponding integration package to Kibana also adds the related dashboards and visualizations. +You can use the Fleet application in {{kib}} to generate the configuration for Elastic Agent, even when running in standalone mode. Check the [Elastic Agent standalone](/reference/fleet/install-standalone-elastic-agent.md) documentation.
Adding the corresponding integration package to {{kib}} also adds the related dashboards and visualizations. ## Use multiple Elastic Agent outputs [k8s-elastic-agent-multi-output] -Elastic Agent supports the use of multiple outputs. Therefore, the `elasticsearchRefs` element accepts multiple references to Elasticsearch clusters. ECK populates the outputs section of the Elastic Agent configuration based on those references. If you configure more than one output, you also have to specify a unique `outputName` attribute. +Elastic Agent supports the use of multiple outputs. Therefore, the `elasticsearchRefs` element accepts multiple references to {{es}} clusters. ECK populates the outputs section of the Elastic Agent configuration based on those references. If you configure more than one output, you also have to specify a unique `outputName` attribute. -To send Elastic Agent’s internal monitoring and log data to a different Elasticsearch cluster called `agent-monitoring` in the `elastic-monitoring` namespace, and the harvested metrics to our `quickstart` cluster, you have to define two `elasticsearchRefs` as shown in the following example: +To send Elastic Agent’s internal monitoring and log data to a different {{es}} cluster called `agent-monitoring` in the `elastic-monitoring` namespace, and the harvested metrics to our `quickstart` cluster, you have to define two `elasticsearchRefs` as shown in the following example: ```yaml apiVersion: agent.k8s.elastic.co/v1alpha1 @@ -151,14 +151,14 @@ spec: ``` -## Customize the connection to an Elasticsearch cluster [k8s-elastic-agent-connect-es] +## Customize the connection to an {{es}} cluster [k8s-elastic-agent-connect-es] -The `elasticsearchRefs` element allows ECK to automatically configure Elastic Agent to establish a secured connection to one or more managed Elasticsearch clusters. By default, it targets all nodes in your cluster. 
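Based on the description above, the two `elasticsearchRefs` might be declared as in this sketch; only the cluster names and namespace come from the text, and the rest is illustrative:

```yaml
apiVersion: agent.k8s.elastic.co/v1alpha1
kind: Agent
metadata:
  name: elastic-agent
spec:
  version: 8.16.0            # illustrative version
  elasticsearchRefs:
    - name: quickstart       # harvested metrics go to this cluster
      outputName: default
    - name: agent-monitoring # internal monitoring and log data
      namespace: elastic-monitoring
      outputName: monitoring
  daemonSet: {}
```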
If you want to direct traffic to specific nodes of your Elasticsearch cluster, refer to [*Traffic Splitting*](requests-routing-to-elasticsearch-nodes.md) for more information and examples. +The `elasticsearchRefs` element allows ECK to automatically configure Elastic Agent to establish a secured connection to one or more managed {{es}} clusters. By default, it targets all nodes in your cluster. If you want to direct traffic to specific nodes of your {{es}} cluster, refer to [*Traffic Splitting*](requests-routing-to-elasticsearch-nodes.md) for more information and examples. ## Set manually Elastic Agent outputs [k8s-elastic-agent-set-output] -If the `elasticsearchRefs` element is specified, ECK populates the outputs section of the Elastic Agent configuration. ECK creates a user with appropriate roles and permissions and uses its credentials. If required, it also mounts the CA certificate in all Agent Pods, and recreates Pods when this certificate changes. Moreover, `elasticsearchRef` element can refer to an ECK-managed Elasticsearch cluster by filling the `name`, `namespace`, `serviceName` fields accordingly, as well as to a Kubernetes secret that contains the connection information to an Elasticsearch cluster not managed by it. In the latter case, for authenticating against the Elasticsearch cluster the secret must contain the fields of `url` and either the `username` with `password` or the `api-key`. Refer to [*Connect to external Elastic resources*](connect-to-external-elastic-resources.md) for additional details. +If the `elasticsearchRefs` element is specified, ECK populates the outputs section of the Elastic Agent configuration. ECK creates a user with appropriate roles and permissions and uses its credentials. If required, it also mounts the CA certificate in all Agent Pods, and recreates Pods when this certificate changes. 
Moreover, the `elasticsearchRef` element can refer to an ECK-managed {{es}} cluster by filling the `name`, `namespace`, `serviceName` fields accordingly, as well as to a Kubernetes secret that contains the connection information to an {{es}} cluster not managed by it. In the latter case, to authenticate against the {{es}} cluster, the secret must contain a `url` field and either `username` and `password` fields or an `api-key` field. Refer to [*Connect to external Elastic resources*](connect-to-external-elastic-resources.md) for additional details. The outputs can also be set manually. To do that, remove the `elasticsearchRefs` element from the specification and include an appropriate output configuration in the `config`, or indirectly through the `configRef` mechanism. diff --git a/deploy-manage/deploy/cloud-on-k8s/configure-deployments.md index be4f0f82f7..49da27ea1b 100644 --- a/deploy-manage/deploy/cloud-on-k8s/configure-deployments.md +++ b/deploy-manage/deploy/cloud-on-k8s/configure-deployments.md @@ -31,7 +31,7 @@ ECK also facilitates configuration and operation activities with advanced featur * [**Secure settings**](/deploy-manage/security/k8s-secure-settings.md): Configure {{es}} and {{kib}} keystore settings through Kubernetes secrets. -* [**Elastic Stack configuration policies**](elastic-stack-configuration-policies.md): Organize your {{es}} and {{kib}} configuration settings through `StackConfigPolicy` resources that can be referenced within your deployments. This helps to keep your manifests simplified. +* [**{{stack}} configuration policies**](elastic-stack-configuration-policies.md): Organize your {{es}} and {{kib}} configuration settings through `StackConfigPolicy` resources that can be referenced within your deployments. This helps to keep your manifests simplified.
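A `StackConfigPolicy` resource might be sketched as follows, assuming the `stackconfigpolicy.k8s.elastic.co/v1alpha1` API; the label selector and setting are illustrative:

```yaml
apiVersion: stackconfigpolicy.k8s.elastic.co/v1alpha1
kind: StackConfigPolicy
metadata:
  name: cluster-defaults
spec:
  resourceSelector:
    matchLabels:
      env: prod                                   # applies to resources carrying this label
  elasticsearch:
    config:
      indices.recovery.max_bytes_per_sec: 100mb   # example shared Elasticsearch setting
```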
::::{important}
Explore the [Security](/deploy-manage/security.md) and [Users and roles](/deploy-manage/users-roles.md) sections to learn more about how to secure and control access to your deployments.

diff --git a/deploy-manage/deploy/cloud-on-k8s/configure-validating-webhook.md b/deploy-manage/deploy/cloud-on-k8s/configure-validating-webhook.md
index 009b3b5053..66c7861503 100644
--- a/deploy-manage/deploy/cloud-on-k8s/configure-validating-webhook.md
+++ b/deploy-manage/deploy/cloud-on-k8s/configure-validating-webhook.md
@@ -8,11 +8,11 @@ mapped_pages:

# Configure the validating webhook [k8s-webhook]

-ECK can be configured to provide a [validating webhook](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/) that validates Elastic custom resources (Elasticsearch, Kibana, APM Server, Beats, Elastic Agent, Elastic Maps Server, and Logstash) before they are created or updated. Validating webhooks provide immediate feedback if a submitted manifest contains invalid or illegal configuration — which can help you catch errors early and save time that would otherwise be spent on troubleshooting.
+ECK can be configured to provide a [validating webhook](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/) that validates Elastic custom resources ({{es}}, {{kib}}, APM Server, Beats, Elastic Agent, Elastic Maps Server, and Logstash) before they are created or updated. Validating webhooks provide immediate feedback if a submitted manifest contains invalid or illegal configuration — which can help you catch errors early and save time that would otherwise be spent on troubleshooting.
Validating webhooks are defined using a `ValidatingWebhookConfiguration` object that defines the following: -* Type of resource to validate (Elasticsearch, Kibana and so on) +* Type of resource to validate ({{es}}, {{kib}} and so on) * Type of actions to validate (create, update, delete) * Connection details to the webhook @@ -27,7 +27,7 @@ Validating webhooks are defined using a `ValidatingWebhookConfiguration` object When using the default `operator.yaml` manifest, ECK is installed with a `ValidatingWebhookConfiguration` configured as follows: -* Validate all known Elastic custom resources (Elasticsearch, Kibana, APM Server, Beats, Elastic Agent, Elastic Maps Server, and Logstash) on create and update. +* Validate all known Elastic custom resources ({{es}}, {{kib}}, APM Server, Beats, Elastic Agent, Elastic Maps Server, and Logstash) on create and update. * The operator itself is the webhook server — which is exposed through a service named `elastic-webhook-server` in the `elastic-system` namespace. * The operator generates a certificate for the webhook and stores it in a secret named `elastic-webhook-server-cert` in the `elastic-system` namespace. This certificate is automatically rotated by the operator when it is due to expire. @@ -168,7 +168,7 @@ You might get errors in your Kubernetes API server logs indicating that it canno ### Resource creation taking too long or timing out [k8s-webhook-troubleshooting-timeouts] -Webhooks require network connectivity between the Kubernetes API server and the operator. If the creation of an Elasticsearch resource times out with an error message similar to the following, then the Kubernetes API server might be unable to connect to the webhook to validate the manifest. +Webhooks require network connectivity between the Kubernetes API server and the operator. 
If the creation of an {{es}} resource times out with an error message similar to the following, then the Kubernetes API server might be unable to connect to the webhook to validate the manifest.

```
Error from server (Timeout): error when creating "elasticsearch.yaml": Timeout: request did not complete within requested timeout 30s
```

@@ -233,5 +233,5 @@ spec:

If your attempts to update a resource fail with an error message similar to the following, you can force the webhook to ignore it by removing the `kubectl.kubernetes.io/last-applied-configuration` annotation from your resource.

```
-admission webhook "elastic-es-validation-v1.k8s.elastic.co" denied the request: Elasticsearch.elasticsearch.k8s.elastic.co "quickstart" is invalid: some-misspelled-field: Invalid value: "some-misspelled-field": some-misspelled-field field found in the kubectl.kubernetes.io/last-applied-configuration annotation is unknown
+admission webhook "elastic-es-validation-v1.k8s.elastic.co" denied the request: Elasticsearch.elasticsearch.k8s.elastic.co "quickstart" is invalid: some-misspelled-field: Invalid value: "some-misspelled-field": some-misspelled-field field found in the kubectl.kubernetes.io/last-applied-configuration annotation is unknown
```

diff --git a/deploy-manage/deploy/cloud-on-k8s/configure.md b/deploy-manage/deploy/cloud-on-k8s/configure.md
index 2ef767cc3d..a437766ade 100644
--- a/deploy-manage/deploy/cloud-on-k8s/configure.md
+++ b/deploy-manage/deploy/cloud-on-k8s/configure.md
@@ -25,7 +25,7 @@ The following guides cover common ECK configuration tasks:

* [Create custom images](./create-custom-images.md): Use your own images with {{es}} plugins already installed rather than installing them through init containers.

-* [Service meshes](./service-meshes.md): Connect ECK and managed Elastic Stack applications to some of the most popular [service mesh](https://www.cncf.io/blog/2017/04/26/service-mesh-critical-component-cloud-native-stack/) implementations in the Kubernetes ecosystem.
+* [Service meshes](./service-meshes.md): Connect ECK and managed {{stack}} applications to some of the most popular [service mesh](https://www.cncf.io/blog/2017/04/26/service-mesh-critical-component-cloud-native-stack/) implementations in the Kubernetes ecosystem. * [Network policies](./../../security/k8s-network-policies.md): Use [Kubernetes network policies](https://kubernetes.io/docs/concepts/services-networking/network-policies/) to isolate pods by restricting incoming and outgoing network connections to a trusted set of sources and destinations. diff --git a/deploy-manage/deploy/cloud-on-k8s/connect-to-apm-server.md b/deploy-manage/deploy/cloud-on-k8s/connect-to-apm-server.md index 98e878fe8c..ccb0e07bd9 100644 --- a/deploy-manage/deploy/cloud-on-k8s/connect-to-apm-server.md +++ b/deploy-manage/deploy/cloud-on-k8s/connect-to-apm-server.md @@ -16,7 +16,7 @@ This section covers the following topics: ## APM Server service [k8s-apm-service] -The APM Server is exposed with a Service. For information on accessing it, check [How to access Elastic Stack services](accessing-services.md). +The APM Server is exposed with a Service. For information on accessing it, check [How to access {{stack}} services](accessing-services.md). To retrieve the list of all the APM Services, use the following command: @@ -45,6 +45,6 @@ For more information, check [APM Server Reference](https://www.elastic.co/guide/ ## APM Server API keys [k8s-apm-api-keys] -If you want to configure API keys to authorize requests to the APM Server, instead of using the APM Server CLI, you have to create API keys using the Elasticsearch [create API key API](https://www.elastic.co/guide/en/elasticsearch/reference/7.14/security-api-create-api-key.html), check the [APM Server documentation](/solutions/observability/apps/api-keys.md). 
+If you want to configure API keys to authorize requests to the APM Server, instead of using the APM Server CLI, you have to create API keys using the {{es}} [create API key API](https://www.elastic.co/guide/en/elasticsearch/reference/7.14/security-api-create-api-key.html). For details, check the [APM Server documentation](/solutions/observability/apps/api-keys.md).

diff --git a/deploy-manage/deploy/cloud-on-k8s/connect-to-external-elastic-resources.md b/deploy-manage/deploy/cloud-on-k8s/connect-to-external-elastic-resources.md
index b2b36f1d54..8b2427e50c 100644
--- a/deploy-manage/deploy/cloud-on-k8s/connect-to-external-elastic-resources.md
+++ b/deploy-manage/deploy/cloud-on-k8s/connect-to-external-elastic-resources.md
@@ -8,7 +8,7 @@ mapped_pages:

# Connect to external Elastic resources [k8s-connect-to-unmanaged-resources]

-Fields like `elasticsearchRef` or `kibanaRef` are useful to automatically establish connections between applications managed by the same ECK operator instance. It is however also possible to connect to applications managed by a different ECK operator instance, or to applications not managed by ECK, for example an Elastic Cloud deployment. This can be done by providing connection details and credentials in a `Secret` through the `secretName` attribute:
+Fields like `elasticsearchRef` or `kibanaRef` are useful to automatically establish connections between applications managed by the same ECK operator instance. It is however also possible to connect to applications managed by a different ECK operator instance, or to applications not managed by ECK, for example an {{ecloud}} deployment. This can be done by providing connection details and credentials in a `Secret` through the `secretName` attribute:

```yaml
apiVersion: v1
@@ -65,7 +65,7 @@ The following fields are expected to be set in the referenced `Secret`:

* `password` (required): The password for the provided user.
* `ca.crt` (optional): The certificate authority to be used to connect to the external resource. -In the case of Agent and Beats resources the following field can also be used to connect to Elasticsearch: +In the case of Agent and Beats resources the following field can also be used to connect to {{es}}: * `api-key`: An API key to authenticate against the Elastic resource. diff --git a/deploy-manage/deploy/cloud-on-k8s/create-custom-images.md b/deploy-manage/deploy/cloud-on-k8s/create-custom-images.md index 9bd5250931..63712a10c6 100644 --- a/deploy-manage/deploy/cloud-on-k8s/create-custom-images.md +++ b/deploy-manage/deploy/cloud-on-k8s/create-custom-images.md @@ -8,7 +8,7 @@ mapped_pages: # Create custom images [k8s-custom-images] -You can create your own custom application images (Elasticsearch, Kibana, APM Server, Beats, Elastic Agent, Elastic Maps Server, and Logstash) instead of using the base images provided by Elastic. You might want to do this to have a canonical image with all the necessary plugins pre-loaded rather than [installing them through an init container](init-containers-for-plugin-downloads.md) each time a Pod starts. You must use the official image as the base for custom images. For example, if you want to create an Elasticsearch 8.16.1 image with the [ICU Analysis Plugin](elasticsearch://reference/elasticsearch-plugins/analysis-icu.md), you can do the following: +You can create your own custom application images ({{es}}, {{kib}}, APM Server, Beats, Elastic Agent, Elastic Maps Server, and Logstash) instead of using the base images provided by Elastic. You might want to do this to have a canonical image with all the necessary plugins pre-loaded rather than [installing them through an init container](init-containers-for-plugin-downloads.md) each time a Pod starts. You must use the official image as the base for custom images. 
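As a sketch, a custom image that pre-installs a plugin on top of the official base image can look like this (the ICU plugin and the 8.16.1 tag are illustrative values taken from the example that follows):

```dockerfile
FROM docker.elastic.co/elasticsearch/elasticsearch:8.16.1
RUN bin/elasticsearch-plugin install --batch analysis-icu
```

Build the image from this `Dockerfile`, then tag and push it to your registry before referencing it in the {{es}} specification.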
For example, if you want to create an {{es}} 8.16.1 image with the [ICU Analysis Plugin](elasticsearch://reference/elasticsearch-plugins/analysis-icu.md), you can do the following: 1. Create a `Dockerfile` containing: @@ -31,7 +31,7 @@ docker tag elasticsearch-icu:8.16.1 gcr.io/$PROJECT-ID/elasticsearch-icu:8.16.1 docker push gcr.io/$PROJECT-ID/elasticsearch-icu:8.16.1 ``` -Configure your Elasticsearch specification to use the newly pushed image, for example: +Configure your {{es}} specification to use the newly pushed image, for example: ```yaml spec: @@ -50,7 +50,7 @@ If your custom images follow the naming convention adopted by the official image For more information, check the following references: -* [Elasticsearch documentation on Using custom Docker images](/deploy-manage/deploy/self-managed/install-elasticsearch-docker-configure.md#_c_customized_image) +* [{{es}} documentation on Using custom Docker images](/deploy-manage/deploy/self-managed/install-elasticsearch-docker-configure.md#_c_customized_image) * [Google Container Registry](https://cloud.google.com/container-registry/docs/how-to) * [Azure Container Registry](https://docs.microsoft.com/en-us/azure/container-registry/) * [Amazon Elastic Container Registry](https://docs.aws.amazon.com/AmazonECR/latest/userguide/what-is-ecr.html) diff --git a/deploy-manage/deploy/cloud-on-k8s/custom-configuration-files-plugins.md b/deploy-manage/deploy/cloud-on-k8s/custom-configuration-files-plugins.md index 90a283c828..e756e49d0b 100644 --- a/deploy-manage/deploy/cloud-on-k8s/custom-configuration-files-plugins.md +++ b/deploy-manage/deploy/cloud-on-k8s/custom-configuration-files-plugins.md @@ -8,7 +8,7 @@ mapped_pages: # Custom configuration files and plugins [k8s-bundles-plugins] -To run Elasticsearch with specific plugins or configuration files installed on ECK, you have multiple options. Each option has its own pros and cons. 
+To run {{es}} with specific plugins or configuration files installed on ECK, you have multiple options. Each option has its own pros and cons. 1. Create a custom container image with the required plugins and configuration files. @@ -32,8 +32,8 @@ To run Elasticsearch with specific plugins or configuration files installed on E * **Cons** * Requires pods to have internet access. **Check [the note about using Istio](#istio-note)**. - * Adding new Elasticsearch nodes could randomly fail due to network issues or bad configuration. - * Each Elasticsearch node needs to repeat the download, wasting bandwidth and slowing startup. + * Adding new {{es}} nodes could randomly fail due to network issues or bad configuration. + * Each {{es}} node needs to repeat the download, wasting bandwidth and slowing startup. * Deployment manifests are more complicated. 3. Use ConfigMaps or Secrets together with volumes and volume mounts for configuration files. @@ -56,7 +56,7 @@ Refer to [Creating custom images](create-custom-images.md) for instructions on h ## Use init containers for plugins installation -The following example describes option 2, using a repository plugin. To install the plugin before the Elasticsearch nodes start, use an init container to run the [plugin installation tool](elasticsearch://reference/elasticsearch-plugins/installation.md). +The following example describes option 2, using a repository plugin. To install the plugin before the {{es}} nodes start, use an init container to run the [plugin installation tool](elasticsearch://reference/elasticsearch-plugins/installation.md). ```yaml spec: @@ -107,7 +107,7 @@ To install custom configuration files you can: 1. Add the configuration data into a ConfigMap or Secret. 2. Use volumes and volume mounts in your manifest to mount the contents of the ConfigMap or Secret as files in your {{es}} nodes. 
-The next example shows how to add a synonyms file for the [synonym token filter](elasticsearch://reference/text-analysis/analysis-synonym-tokenfilter.md) in Elasticsearch. But you can **use the same approach for any kind of file you want to mount into the configuration directory of Elasticsearch**, like adding CA certificates of external systems.
+The next example shows how to add a synonyms file for the [synonym token filter](elasticsearch://reference/text-analysis/analysis-synonym-tokenfilter.md) in {{es}}. But you can **use the same approach for any kind of file you want to mount into the configuration directory of {{es}}**, like adding CA certificates of external systems.

1. Create the ConfigMap or Secret with the data:

@@ -123,7 +123,7 @@ kubectl create configmap synonyms -n --from-file=my-synonyms.txt=syn

Create the ConfigMap or Secret in the same namespace where your {{es}} cluster runs.
::::

-2. Declare the ConfigMap as a volume and mount it in the Elasticsearch containers.
+2. Declare the ConfigMap as a volume and mount it in the {{es}} containers.

   In this example, modify your {{es}} manifest to mount the contents of the `synonyms` ConfigMap into `/usr/share/elasticsearch/config/dictionaries` on the {{es}} nodes.

@@ -145,7 +145,7 @@ spec:
        name: synonyms <4>
```

-1. Elasticsearch runs by convention in a container called `elasticsearch`. Do not change that value.
+1. {{es}} runs by convention in a container called `elasticsearch`. Do not change that value.
2. Use always a path under `/usr/share/elasticsearch/config`.
3. Use `secret` instead of `configMap` if you used a secret to store the data.
4. The ConfigMap name must be the same as the ConfigMap created in the previous step.
diff --git a/deploy-manage/deploy/cloud-on-k8s/customize-pods.md b/deploy-manage/deploy/cloud-on-k8s/customize-pods.md
index facb6be642..bccc90d6f8 100644
--- a/deploy-manage/deploy/cloud-on-k8s/customize-pods.md
+++ b/deploy-manage/deploy/cloud-on-k8s/customize-pods.md
@@ -8,7 +8,7 @@ mapped_pages:

# Customize pods [k8s-customize-pods]

-You can customize the Pods created for each Elastic stack application by modifying the respective `podTemplate` field in the manifest. Pod templates allow you to define labels, annotations, environment variables, volume mounts, and other custom configuration settings that are then merged with the default Pod configuration generated by ECK to produce the final Pod definition that gets deployed to the Kubernetes cluster.
+You can customize the Pods created for each {{stack}} application by modifying the respective `podTemplate` field in the manifest. Pod templates allow you to define labels, annotations, environment variables, volume mounts, and other custom configuration settings that are then merged with the default Pod configuration generated by ECK to produce the final Pod definition that gets deployed to the Kubernetes cluster.

The following example illustrates how to add a custom label, annotation, and an environment variable using the `podTemplate` field.

@@ -59,7 +59,7 @@ spec:
```

::::{note}
-Configuration for other Elastic stack applications, like APM Server, or Beats, is identical to the Kibana configuration except for the `apiVersion` and `kind` fields.
+Configuration for other {{stack}} applications, like APM Server or Beats, is identical to the {{kib}} configuration except for the `apiVersion` and `kind` fields.
::::

diff --git a/deploy-manage/deploy/cloud-on-k8s/deploy-an-orchestrator.md b/deploy-manage/deploy/cloud-on-k8s/deploy-an-orchestrator.md
index 3ac01e3b4b..f91d8a9e65 100644
--- a/deploy-manage/deploy/cloud-on-k8s/deploy-an-orchestrator.md
+++ b/deploy-manage/deploy/cloud-on-k8s/deploy-an-orchestrator.md
@@ -8,7 +8,7 @@ mapped_pages:

# Deploy an orchestrator [k8s-quickstart]

-With Elastic Cloud on Kubernetes (ECK), you can extend the basic Kubernetes orchestration capabilities to easily deploy, secure, upgrade your {{es}} cluster, along with other Elastic applications.
+With {{eck}} (ECK), you can extend the basic Kubernetes orchestration capabilities to easily deploy, secure, and upgrade your {{es}} cluster, along with other Elastic applications.

In this section, you'll learn how to do the following:

diff --git a/deploy-manage/deploy/cloud-on-k8s/deploy-eck-on-gke-autopilot.md b/deploy-manage/deploy/cloud-on-k8s/deploy-eck-on-gke-autopilot.md
index a5eef9d165..3380e1dd3a 100644
--- a/deploy-manage/deploy/cloud-on-k8s/deploy-eck-on-gke-autopilot.md
+++ b/deploy-manage/deploy/cloud-on-k8s/deploy-eck-on-gke-autopilot.md
@@ -15,7 +15,7 @@ mapped_pages:

This page shows how to run ECK on GKE Autopilot.

1. It is recommended that each Kubernetes host’s virtual memory kernel settings be modified. Refer to [Virtual memory](virtual-memory.md).
-2. It is recommended that Elasticsearch Pods have an `initContainer` that waits for virtual memory settings to be in place.
+2. It is recommended that {{es}} Pods have an `initContainer` that waits for virtual memory settings to be in place.
3. For Elastic Agent/Beats there are storage limitations to be considered.
4. Ensure you are using a node class that is applicable for your workload by adding a `cloud.google.com/compute-class` label in a `nodeSelector`. Refer to the [GKE Autopilot documentation](https://cloud.google.com/kubernetes-engine/docs/concepts/autopilot-compute-classes).
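The init container recommended in step 2 above can be sketched as follows (a hypothetical check that waits until the `vm.max_map_count` value applied by the [Virtual memory](virtual-memory.md) `Daemonset` is visible to the Pod; the node set name and threshold are illustrative):

```yaml
spec:
  nodeSets:
  - name: default
    count: 1
    podTemplate:
      spec:
        initContainers:
        - name: max-map-count-check
          command:
          - sh
          - -c
          # block Pod startup until the kernel setting reaches the expected value
          - while [ "$(cat /proc/sys/vm/max_map_count)" -lt 262144 ]; do sleep 1; done
```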
@@ -31,9 +31,9 @@ Only use the provided `Daemonset` exactly as specified or it could be rejected b

Refer to [*Install ECK*](install.md) for more information on installation options.

-## Deploy an Elasticsearch cluster [k8s-autopilot-deploy-elasticsearch]
+## Deploy an {{es}} cluster [k8s-autopilot-deploy-elasticsearch]

-Create an Elasticsearch cluster. If you are using the `Daemonset` described in the [Virtual memory](virtual-memory.md) section to set `max_map_count` you can add the `initContainer` below is also used to ensure the setting is set prior to starting Elasticsearch.
+Create an {{es}} cluster. If you are using the `Daemonset` described in the [Virtual memory](virtual-memory.md) section to set `max_map_count`, you can add the `initContainer` shown below to ensure the setting is in place before {{es}} starts.

```shell
cat <-` when it is not provided, for Azure, GCS and S3 repositories
* appends `-` to `location` for a FS repository
@@ -313,7 +313,7 @@ In order to avoid a conflict between multiple Elasticsearch clusters writing the

## Specifics for secret mounts [k8s-stack-config-policy-specifics-secret-mounts]

-ECK `2.11.0` introduces `spec.elasticsearch.secretMounts` as a new field. This field allows users to specify a user created secret and a mountPath to indicate where this secret should be mounted in the Elasticsearch Pods that are managed by the Elastic Stack configuration policy. This field can be used to add additional secrets to the Elasticsearch Pods that may be needed for example for sensitive files required to configure Elasticsearch security realms. The secret should be created by the user in the same namespace as the Elastic Stack configuration policy. The operator reads this secret and copies it over to the namespace of Elasticsearch so that it can be mounted by the Elasticsearch Pods.
Example of configuring secret mounts in the Elastic Stack configuration policy:
+ECK `2.11.0` introduces `spec.elasticsearch.secretMounts` as a new field. This field allows users to specify a user-created secret and a mountPath to indicate where this secret should be mounted in the {{es}} Pods that are managed by the {{stack}} configuration policy. This field can be used to add additional secrets to the {{es}} Pods that may be needed, for example, for sensitive files required to configure {{es}} security realms. The secret should be created by the user in the same namespace as the {{stack}} configuration policy. The operator reads this secret and copies it over to the namespace of {{es}} so that it can be mounted by the {{es}} Pods. Example of configuring secret mounts in the {{stack}} configuration policy:

```yaml
secretMounts:
@@ -321,11 +321,11 @@
   mountPath: "/usr/share/elasticsearch/config/jwks" <2>
```

-1. name of the secret created by the user in the Elastic Stack configuration policy namespace.
-2. mount path where the secret must be mounted to inside the Elasticsearch Pod.
+1. name of the secret created by the user in the {{stack}} configuration policy namespace.
+2. mount path where the secret must be mounted to inside the {{es}} Pod.

-## Configuring authentication policies using Elastic Stack configuration policy [k8s-stack-config-policy-configuring-authentication-policies]
+## Configuring authentication policies using {{stack}} configuration policy [k8s-stack-config-policy-configuring-authentication-policies]

-Elastic Stack configuration policy can be used to configure authentication for Elasticsearch clusters. Check [Managing authentication for multiple stacks using Elastic Stack configuration policy](../../users-roles/cluster-or-deployment-auth/manage-authentication-for-multiple-clusters.md) for some examples of the various authentication configurations that can be used.
+{{stack}} configuration policy can be used to configure authentication for {{es}} clusters. Check [Managing authentication for multiple stacks using {{stack}} configuration policy](../../users-roles/cluster-or-deployment-auth/manage-authentication-for-multiple-clusters.md) for some examples of the various authentication configurations that can be used. diff --git a/deploy-manage/deploy/cloud-on-k8s/elasticsearch-configuration.md b/deploy-manage/deploy/cloud-on-k8s/elasticsearch-configuration.md index 48b61e4f24..9d2652cd32 100644 --- a/deploy-manage/deploy/cloud-on-k8s/elasticsearch-configuration.md +++ b/deploy-manage/deploy/cloud-on-k8s/elasticsearch-configuration.md @@ -6,11 +6,11 @@ mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-elasticsearch-specification.html --- -# Elasticsearch configuration [k8s-elasticsearch-specification] +# {{es}} configuration [k8s-elasticsearch-specification] -This section covers various Elasticsearch cluster configuration scenarios when using ECK. For configuration topics relevant to both {{es}} and {{kib}}, see the [](./configure-deployments.md). +This section covers various {{es}} cluster configuration scenarios when using ECK. For configuration topics relevant to both {{es}} and {{kib}}, see the [](./configure-deployments.md). -Before deploying and running ECK in production, review the basic and advanced settings available on this page. These configurations integrate Elasticsearch, Kubernetes, and ECK operator functionality to help you fine-tune your deployment. +Before deploying and running ECK in production, review the basic and advanced settings available on this page. These configurations integrate {{es}}, Kubernetes, and ECK operator functionality to help you fine-tune your deployment. 
## Key concepts @@ -29,12 +29,12 @@ Before deploying and running ECK in production, review the basic and advanced se ## Scheduling and lifecycle management -* [Advanced Elasticsearch node scheduling](advanced-elasticsearch-node-scheduling.md): Integrate standard Kubernetes scheduling options with your {{es}} nodes. +* [Advanced {{es}} node scheduling](advanced-elasticsearch-node-scheduling.md): Integrate standard Kubernetes scheduling options with your {{es}} nodes. * [Update strategy](update-strategy.md): Control how the changes are applied to the cluster. * [Pod disruption budget](pod-disruption-budget.md): Integrate Kubernetes Pod disruption budgets in your cluster. * [Security Context](security-context.md): Kubernetes security context and kernel capabilities. * [Readiness probe](readiness-probe.md): Customize `readinessProbe` in certain use cases. -* [Pod PreStop hook](pod-prestop-hook.md): Prevent disruptions when terminating Elasticsearch Pods. +* [Pod PreStop hook](pod-prestop-hook.md): Prevent disruptions when terminating {{es}} Pods. 
## TLS/SSL Certificates diff --git a/deploy-manage/deploy/cloud-on-k8s/elasticsearch-deployment-quickstart.md b/deploy-manage/deploy/cloud-on-k8s/elasticsearch-deployment-quickstart.md index 860a645222..23ec2ed9ae 100644 --- a/deploy-manage/deploy/cloud-on-k8s/elasticsearch-deployment-quickstart.md +++ b/deploy-manage/deploy/cloud-on-k8s/elasticsearch-deployment-quickstart.md @@ -1,5 +1,5 @@ --- -navigation_title: Deploy an Elasticsearch cluster +navigation_title: Deploy an {{es}} cluster applies_to: deployment: eck: all @@ -7,7 +7,7 @@ mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-deploy-elasticsearch.html --- -# Deploy an Elasticsearch cluster [k8s-deploy-elasticsearch] +# Deploy an {{es}} cluster [k8s-deploy-elasticsearch] To deploy a simple [{{es}}](/solutions/search/get-started.md) cluster specification, with one {{es}} node: diff --git a/deploy-manage/deploy/cloud-on-k8s/http-configuration.md b/deploy-manage/deploy/cloud-on-k8s/http-configuration.md index 7eb60a8c6d..b9186de363 100644 --- a/deploy-manage/deploy/cloud-on-k8s/http-configuration.md +++ b/deploy-manage/deploy/cloud-on-k8s/http-configuration.md @@ -22,16 +22,16 @@ Refer to [Reserve static IP and custom domain](/deploy-manage/security/k8s-https ## Provide your own certificate [k8s-maps-http-custom-tls] -If you want to use your own certificate, the required configuration is identical to Elasticsearch. Check [Custom HTTP certificate](/deploy-manage/security/k8s-https-settings.md#k8s-setting-up-your-own-certificate). +If you want to use your own certificate, the required configuration is identical to {{es}}. Check [Custom HTTP certificate](/deploy-manage/security/k8s-https-settings.md#k8s-setting-up-your-own-certificate). ## Disable TLS [k8s-maps-http-disable-tls] You can disable the generation of the self-signed certificate and hence disable TLS. Check [Disable TLS](/deploy-manage/security/k8s-https-settings.md#k8s-disable-tls). 
-### Ingress and Kibana configuration [k8s-maps-ingress] +### Ingress and {{kib}} configuration [k8s-maps-ingress] -To use Elastic Maps Server from your Kibana instances, you need to configure Kibana to fetch maps from your Elastic Maps Server instance by using the [`map.emsUrl`](/explore-analyze/visualize/maps/maps-connect-to-ems.md#elastic-maps-server-kibana) configuration key. The value of this setting needs to be the URL where the Elastic Maps Server instance is reachable from your browser. The certificates presented by Elastic Maps Server need to be trusted by the browser, and the URL must have the same origin as the URL where your Kibana is hosted to avoid cross origin resource issues. Check the [recipe section](https://github.com/elastic/cloud-on-k8s/tree/2.16/config/recipes/) for an example on how to set this up using an Ingress resource. +To use Elastic Maps Server from your {{kib}} instances, you need to configure {{kib}} to fetch maps from your Elastic Maps Server instance by using the [`map.emsUrl`](/explore-analyze/visualize/maps/maps-connect-to-ems.md#elastic-maps-server-kibana) configuration key. The value of this setting needs to be the URL where the Elastic Maps Server instance is reachable from your browser. The certificates presented by Elastic Maps Server need to be trusted by the browser, and the URL must have the same origin as the URL where your {{kib}} is hosted to avoid cross origin resource issues. Check the [recipe section](https://github.com/elastic/cloud-on-k8s/tree/2.16/config/recipes/) for an example on how to set this up using an Ingress resource. :::{admonition} Support scope for Ingress Controllers [Ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/) is a standard Kubernetes concept. 
While ECK-managed workloads can be publicly exposed using ingress resources, and we provide [example configurations](/deploy-manage/deploy/cloud-on-k8s/recipes.md), setting up an Ingress controller requires in-house Kubernetes expertise. diff --git a/deploy-manage/deploy/cloud-on-k8s/init-containers-for-plugin-downloads.md b/deploy-manage/deploy/cloud-on-k8s/init-containers-for-plugin-downloads.md index 1473370986..6a215db128 100644 --- a/deploy-manage/deploy/cloud-on-k8s/init-containers-for-plugin-downloads.md +++ b/deploy-manage/deploy/cloud-on-k8s/init-containers-for-plugin-downloads.md @@ -8,7 +8,7 @@ mapped_pages: # Init containers for plugin downloads [k8s-init-containers-plugin-downloads] -You can install custom plugins before the Elasticsearch container starts with an `initContainer`. For example: +You can install custom plugins before the {{es}} container starts with an `initContainer`. For example: ```yaml spec: @@ -27,7 +27,7 @@ spec: bin/elasticsearch-plugin install --batch analysis-icu ``` -You can also override the Elasticsearch container image to use your own image with the plugins already installed, as described in [custom images](create-custom-images.md). For more information on both these options, you can check the [Create automated snapshots](../../tools/snapshot-and-restore/cloud-on-k8s.md) section and the Kubernetes documentation on [init containers](https://kubernetes.io/docs/concepts/workloads/pods/init-containers/). +You can also override the {{es}} container image to use your own image with the plugins already installed, as described in [custom images](create-custom-images.md). For more information on both these options, you can check the [Create automated snapshots](../../tools/snapshot-and-restore/cloud-on-k8s.md) section and the Kubernetes documentation on [init containers](https://kubernetes.io/docs/concepts/workloads/pods/init-containers/). 
The init container inherits: diff --git a/deploy-manage/deploy/cloud-on-k8s/install-using-yaml-manifest-quickstart.md b/deploy-manage/deploy/cloud-on-k8s/install-using-yaml-manifest-quickstart.md index b0d139a49f..8ae01e10ac 100644 --- a/deploy-manage/deploy/cloud-on-k8s/install-using-yaml-manifest-quickstart.md +++ b/deploy-manage/deploy/cloud-on-k8s/install-using-yaml-manifest-quickstart.md @@ -16,7 +16,7 @@ To learn about other installation methods, refer to [](/deploy-manage/deploy/clo During the installation, the following components are installed or updated: -* `CustomResourceDefinition` objects for all supported resource types (Elasticsearch, Kibana, APM Server, Beats, Elastic Agent, Elastic Maps Server, and Logstash). +* `CustomResourceDefinition` objects for all supported resource types ({{es}}, {{kib}}, APM Server, Beats, Elastic Agent, Elastic Maps Server, and Logstash). * `Namespace` named `elastic-system` to hold all operator resources. * `ServiceAccount`, `ClusterRole` and `ClusterRoleBinding` to allow the operator to manage resources throughout the cluster. * `ValidatingWebhookConfiguration` to validate Elastic custom resources on admission. diff --git a/deploy-manage/deploy/cloud-on-k8s/install.md b/deploy-manage/deploy/cloud-on-k8s/install.md index ede107654e..44af17e059 100644 --- a/deploy-manage/deploy/cloud-on-k8s/install.md +++ b/deploy-manage/deploy/cloud-on-k8s/install.md @@ -9,7 +9,7 @@ mapped_pages: # Install ECK [k8s-installing-eck] -Elastic Cloud on Kubernetes (ECK) is a [Kubernetes operator](https://kubernetes.io/docs/concepts/extend-kubernetes/operator/) that helps you deploy and manage Elastic applications on Kubernetes, including Elasticsearch, Kibana, APM Server, Beats, Elastic Agent, Elastic Maps Server, and Logstash. 
+{{eck}} (ECK) is a [Kubernetes operator](https://kubernetes.io/docs/concepts/extend-kubernetes/operator/) that helps you deploy and manage Elastic applications on Kubernetes, including {{es}}, {{kib}}, APM Server, Beats, Elastic Agent, Elastic Maps Server, and Logstash. ECK relies on a set of [Custom Resource Definitions (CRDs)](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#customresourcedefinitions) to define how applications are deployed. CRDs are global resources, shared across the entire Kubernetes cluster, so installing them requires [specific permissions](../../../deploy-manage/deploy/cloud-on-k8s/required-rbac-permissions.md#k8s-eck-permissions-installing-crds). @@ -22,7 +22,7 @@ You can install multiple instances of ECK in the same Kubernetes cluster, but on :::: ::::{warning} -Deleting CRDs will trigger deletion of all custom resources (Elasticsearch, Kibana, APM Server, Beats, Elastic Agent, Elastic Maps Server, and Logstash) in all namespaces of the cluster, regardless of whether they are managed by a single operator or multiple operators. +Deleting CRDs will trigger deletion of all custom resources ({{es}}, {{kib}}, APM Server, Beats, Elastic Agent, Elastic Maps Server, and Logstash) in all namespaces of the cluster, regardless of whether they are managed by a single operator or multiple operators. 
:::: For a list of supported Kubernetes versions refer to [](../cloud-on-k8s.md#k8s-supported) diff --git a/deploy-manage/deploy/cloud-on-k8s/k8s-kibana-advanced-configuration.md b/deploy-manage/deploy/cloud-on-k8s/k8s-kibana-advanced-configuration.md index 969ff439a0..1c2aaebb80 100644 --- a/deploy-manage/deploy/cloud-on-k8s/k8s-kibana-advanced-configuration.md +++ b/deploy-manage/deploy/cloud-on-k8s/k8s-kibana-advanced-configuration.md @@ -8,7 +8,7 @@ mapped_pages: # Advanced configuration [k8s-kibana-advanced-configuration] -If you already looked at the [Elasticsearch on ECK](elasticsearch-configuration.md) documentation, some of these concepts might sound familiar to you. The resource definitions in ECK share the same philosophy when you want to: +If you already looked at the [{{es}} on ECK](elasticsearch-configuration.md) documentation, some of these concepts might sound familiar to you. The resource definitions in ECK share the same philosophy when you want to: * [Customize the Pod configuration](#k8s-kibana-pod-configuration) * [Customize the product configuration](#k8s-kibana-configuration) @@ -52,7 +52,7 @@ spec: The name of the container in the [Pod Template](https://kubernetes.io/docs/concepts/workloads/pods/#pod-templates) must be `kibana`. -Check [Set compute resources for Kibana, Elastic Maps Server, APM Server and Logstash](manage-compute-resources.md#k8s-compute-resources-kibana-and-apm) for more information. +Check [Set compute resources for {{kib}}, Elastic Maps Server, APM Server and Logstash](manage-compute-resources.md#k8s-compute-resources-kibana-and-apm) for more information. ## {{kib}} configuration [k8s-kibana-configuration] @@ -88,7 +88,7 @@ To deploy more than one instance of {{kib}}, the instances must share a matching ::::{tip} If you need to access these encryption keys, you can find them using the `kubectl get secrets` command. -The secret is named after the corresponding Kibana instance. 
For example, for a Kibana named `my-kibana`, you can run the following command to retrieve the current encryption keys: +The secret is named after the corresponding {{kib}} instance. For example, for a {{kib}} named `my-kibana`, you can run the following command to retrieve the current encryption keys: ```shell kubectl get secret my-kibana-kb-config -o jsonpath='{ .data.kibana\.yml }' | base64 --decode | grep -A1 encryptedSavedObjects diff --git a/deploy-manage/deploy/cloud-on-k8s/k8s-kibana-es.md b/deploy-manage/deploy/cloud-on-k8s/k8s-kibana-es.md index 71e9d0f10f..05053e6834 100644 --- a/deploy-manage/deploy/cloud-on-k8s/k8s-kibana-es.md +++ b/deploy-manage/deploy/cloud-on-k8s/k8s-kibana-es.md @@ -6,7 +6,7 @@ mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-kibana-es.html --- -# Connect to an Elasticsearch cluster [k8s-kibana-es] +# Connect to an {{es}} cluster [k8s-kibana-es] You can connect an {{es}} cluster that is either managed by ECK or not managed by ECK. diff --git a/deploy-manage/deploy/cloud-on-k8s/k8s-kibana-plugins.md b/deploy-manage/deploy/cloud-on-k8s/k8s-kibana-plugins.md index 87c9d926a9..e8cfb77630 100644 --- a/deploy-manage/deploy/cloud-on-k8s/k8s-kibana-plugins.md +++ b/deploy-manage/deploy/cloud-on-k8s/k8s-kibana-plugins.md @@ -6,7 +6,7 @@ mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-kibana-plugins.html --- -# Install Kibana plugins [k8s-kibana-plugins] +# Install {{kib}} plugins [k8s-kibana-plugins] You can override the {{kib}} container image to use your own image with the plugins already installed, as described in the [Create custom images](create-custom-images.md). You should run an `optimize` step as part of the build, otherwise it needs to run at startup which requires additional time and resources. 
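A minimal sketch of that custom-image approach (the plugin URL, image tag, and registry name below are hypothetical placeholders, not values from these docs): write a Dockerfile that installs the plugin on top of the official {{kib}} image at build time, then build and push the image somewhere your cluster can pull it.

```shell
# Sketch only: the plugin URL, version tag, and registry are placeholders.
cat > Dockerfile <<'EOF'
FROM docker.elastic.co/kibana/kibana:8.16.1
# Installing at build time avoids the optimize step running on every Pod start.
RUN bin/kibana-plugin install https://example.com/my-plugin-8.16.1.zip
EOF
# Build and push where Docker and registry access are available:
# docker build -t my-registry.example.com/kibana-with-plugins:8.16.1 .
# docker push my-registry.example.com/kibana-with-plugins:8.16.1
```

The resulting image name can then be referenced from the `spec.image` field of the {{kib}} resource, as described in the custom images documentation.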
diff --git a/deploy-manage/deploy/cloud-on-k8s/k8s-openshift-deploy-elasticsearch.md b/deploy-manage/deploy/cloud-on-k8s/k8s-openshift-deploy-elasticsearch.md
index 853be06ca6..830d404cc2 100644
--- a/deploy-manage/deploy/cloud-on-k8s/k8s-openshift-deploy-elasticsearch.md
+++ b/deploy-manage/deploy/cloud-on-k8s/k8s-openshift-deploy-elasticsearch.md
@@ -6,9 +6,9 @@ mapped_pages:
  - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-openshift-deploy-elasticsearch.html
---

-# Deploy an Elasticsearch instance with a route [k8s-openshift-deploy-elasticsearch]
+# Deploy an {{es}} instance with a route [k8s-openshift-deploy-elasticsearch]

-Use the following code to create an Elasticsearch cluster `elasticsearch-sample` and a "passthrough" route to access it:
+Use the following code to create an {{es}} cluster `elasticsearch-sample` and a "passthrough" route to access it:

::::{note}
A namespace other than the default namespaces (default, kube-system, kube-**, openshift-**, etc) is required such that default [Security Context Constraint](https://docs.openshift.com/container-platform/4.12/authentication/managing-security-context-constraints.html) (SCC) permissions are applied automatically. Elastic resources will not work properly in any of the default namespaces.
@@ -17,7 +17,7 @@ A namespace other than the default namespaces (default, kube-system, kube-**, op

```shell
cat <[-].)
  tls:
    termination: passthrough # Elasticsearch is the TLS endpoint
  insecureEdgeTerminationPolicy: Redirect
  to:
    kind: Service
@@ -45,8 +45,8 @@ spec:
EOF
```

-## Elasticsearch plugins [k8s-openshift-es-plugins]
+## {{es}} plugins [k8s-openshift-es-plugins]

-Elasticsearch plugins cannot be installed at runtime in most OpenShift environments. This is because the plugin installer must run as root, but Elasticsearch is restricted from running as root. To add plugins to Elasticsearch, you can use custom images as described in [*Create custom images*](create-custom-images.md).
+{{es}} plugins cannot be installed at runtime in most OpenShift environments. This is because the plugin installer must run as root, but {{es}} is restricted from running as root. To add plugins to {{es}}, you can use custom images as described in [*Create custom images*](create-custom-images.md).
diff --git a/deploy-manage/deploy/cloud-on-k8s/k8s-openshift-deploy-kibana.md b/deploy-manage/deploy/cloud-on-k8s/k8s-openshift-deploy-kibana.md
index afe5fb2937..1683687834 100644
--- a/deploy-manage/deploy/cloud-on-k8s/k8s-openshift-deploy-kibana.md
+++ b/deploy-manage/deploy/cloud-on-k8s/k8s-openshift-deploy-kibana.md
@@ -6,9 +6,9 @@ mapped_pages:
  - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-openshift-deploy-kibana.html
---

-# Deploy a Kibana instance with a route [k8s-openshift-deploy-kibana]
+# Deploy a {{kib}} instance with a route [k8s-openshift-deploy-kibana]

-Use the following code to create a Kibana instance and a "passthrough" route to access it:
+Use the following code to create a {{kib}} instance and a "passthrough" route to access it:

```shell
cat <[-].)
  tls:
    termination: passthrough # Kibana is the TLS endpoint
  insecureEdgeTerminationPolicy: Redirect
  to:
    kind: Service
diff --git a/deploy-manage/deploy/cloud-on-k8s/k8s-openshift-deploy-operator.md b/deploy-manage/deploy/cloud-on-k8s/k8s-openshift-deploy-operator.md
index dde2d316e5..b3afaf9594 100644
--- a/deploy-manage/deploy/cloud-on-k8s/k8s-openshift-deploy-operator.md
+++ b/deploy-manage/deploy/cloud-on-k8s/k8s-openshift-deploy-operator.md
@@ -23,7 +23,7 @@ This page shows the installation steps to deploy ECK in Openshift:

   oc adm pod-network make-projects-global elastic-system
   ```

-3. Create a namespace to hold the Elastic resources (Elasticsearch, Kibana, APM Server, Beats, Elastic Agent, Elastic Maps Server, and Logstash):
+3. Create a namespace to hold the Elastic resources ({{es}}, {{kib}}, APM Server, Beats, Elastic Agent, Elastic Maps Server, and Logstash):

   ::::{note}
   A namespace other than the default namespaces (default, kube-\*, openshift-\*, etc) is required such that default [Security Context Constraint](https://docs.openshift.com/container-platform/4.12/authentication/managing-security-context-constraints.html) (SCC) permissions are applied automatically. Elastic resources will not work properly in any of the default namespaces.
   ::::
diff --git a/deploy-manage/deploy/cloud-on-k8s/k8s-service-mesh-istio.md b/deploy-manage/deploy/cloud-on-k8s/k8s-service-mesh-istio.md
index beafec1b75..c39cc4b128 100644
--- a/deploy-manage/deploy/cloud-on-k8s/k8s-service-mesh-istio.md
+++ b/deploy-manage/deploy/cloud-on-k8s/k8s-service-mesh-istio.md
@@ -13,7 +13,7 @@ The instructions in this section describe how to connect the operator and manage

These instructions have been tested with Istio 1.24.3. Older or newer versions of Istio might require additional configuration steps not documented here.

::::{warning}
-Some Elastic Stack features such as [Kibana alerting and actions](/explore-analyze/alerts-cases.md) rely on the Elasticsearch API keys feature which requires TLS to be enabled at the application level. If you want to use these features, you should not disable the self-signed certificate on the Elasticsearch resource and enable `PERMISSIVE` mode for the Elasticsearch service through a `DestinationRule` or `PeerAuthentication` resource. Strict mTLS mode is currently not compatible with Elastic Stack features requiring TLS to be enabled for the Elasticsearch HTTP layer.
+Some {{stack}} features such as [{{kib}} alerting and actions](/explore-analyze/alerts-cases.md) rely on the {{es}} API keys feature, which requires TLS to be enabled at the application level. If you want to use these features, keep the self-signed certificate enabled on the {{es}} resource, and enable `PERMISSIVE` mode for the {{es}} service through a `DestinationRule` or `PeerAuthentication` resource. Strict mTLS mode is currently not compatible with {{stack}} features requiring TLS to be enabled for the {{es}} HTTP layer.
::::


@@ -24,7 +24,7 @@ If you use a Kubernetes distribution like Minikube, which does not have support

## Connect the operator to the Istio service mesh [k8s-service-mesh-istio-operator-connection]

-The operator itself must be connected to the service mesh to deploy and manage Elastic Stack resources that you wish to connect to the service mesh. This is achieved by injecting an Istio sidecar to the ECK operator Pods. The following instructions assume that [automatic sidecar injection](https://istio.io/docs/setup/additional-setup/sidecar-injection/#automatic-sidecar-injection) is enabled on your cluster through a mutating admissions webhook. Refer to [Istio injection documentation](https://istio.io/docs/setup/additional-setup/sidecar-injection/#injection) if you prefer a different method of injection.
+The operator itself must be connected to the service mesh to deploy and manage {{stack}} resources that you wish to connect to the service mesh. This is achieved by injecting an Istio sidecar into the ECK operator Pods. The following instructions assume that [automatic sidecar injection](https://istio.io/docs/setup/additional-setup/sidecar-injection/#automatic-sidecar-injection) is enabled on your cluster through a mutating admission webhook. Refer to the [Istio injection documentation](https://istio.io/docs/setup/additional-setup/sidecar-injection/#injection) if you prefer a different method of injection.

1. Create the `elastic-system` namespace and enable sidecar injection:
@@ -64,16 +64,16 @@ spec:

As the default `failurePolicy` of the webhook is `Ignore`, the operator continues to function even if the above annotations are not present. The downside is that you are still able to submit an invalid manifest using `kubectl` without receiving any immediate feedback.

-ECK has a fallback validation mechanism that reports validation failures as events associated with the relevant resource (Elasticsearch, Kibana, APM Server, Beats, Elastic Agent, Elastic Maps Server, and Logstash) that must be manually discovered by running `kubectl describe`. For example, to find the validation errors in an Elasticsearch resource named `quickstart`, you can run `kubectl describe elasticsearch quickstart`.
+ECK has a fallback validation mechanism that reports validation failures as events associated with the relevant resource ({{es}}, {{kib}}, APM Server, Beats, Elastic Agent, Elastic Maps Server, and Logstash) that must be manually discovered by running `kubectl describe`. For example, to find the validation errors in an {{es}} resource named `quickstart`, you can run `kubectl describe elasticsearch quickstart`.

-## Connect Elastic Stack applications to the Istio service mesh [k8s-service-mesh-istio-stack-connection]
+## Connect {{stack}} applications to the Istio service mesh [k8s-service-mesh-istio-stack-connection]

This section assumes that you are deploying ECK custom resources to a namespace that has [automatic sidecar injection](https://istio.io/docs/setup/additional-setup/sidecar-injection/#automatic-sidecar-injection) enabled.

If you have configured Istio in [permissive mode](https://istio.io/docs/concepts/security/#permissive-mode), examples defined elsewhere in the ECK documentation will continue to work without requiring any modifications.
However, if you have enabled strict mutual TLS authentication between services either through global (`MeshPolicy`) or namespace-level (`Policy`) configuration, the following modifications to the resource manifests are necessary for correct operation. -### Elasticsearch [k8s-service-mesh-istio-elasticsearch] +### {{es}} [k8s-service-mesh-istio-elasticsearch] ```yaml apiVersion: elasticsearch.k8s.elastic.co/v1 @@ -99,18 +99,18 @@ spec: automountServiceAccountToken: true <3> ``` -1. Disable the default self-signed certificate generated by the operator and allow TLS to be managed by Istio. Disabling the self-signed certificate might interfere with some features such as Kibana Alerting and Actions. -2. Exclude the transport port (port 9300) from being proxied. Currently ECK does not support switching off X-Pack security and TLS for the Elasticsearch transport port. If Istio is allowed to proxy the transport port, the traffic is encrypted twice and communication between Elasticsearch nodes is disrupted. +1. Disable the default self-signed certificate generated by the operator and allow TLS to be managed by Istio. Disabling the self-signed certificate might interfere with some features such as {{kib}} Alerting and Actions. +2. Exclude the transport port (port 9300) from being proxied. Currently ECK does not support switching off X-Pack security and TLS for the {{es}} transport port. If Istio is allowed to proxy the transport port, the traffic is encrypted twice and communication between {{es}} nodes is disrupted. 3. Optional. Only set `automountServiceAccountToken` to `true` if your Kubernetes cluster does not have support for issuing third-party security tokens. -If you do not have [automatic mutual TLS](https://istio.io/latest/docs/tasks/security/authentication/mtls-migration/) enabled, you may need to create a [Destination Rule](https://istio.io/docs/reference/config/networking/destination-rule/) to allow the operator to communicate with the Elasticsearch cluster. 
A communication issue between the operator and the managed Elasticsearch cluster can be detected by looking at the operator logs to check if there are any errors reported with the text `503 Service Unavailable`. +If you do not have [automatic mutual TLS](https://istio.io/latest/docs/tasks/security/authentication/mtls-migration/) enabled, you may need to create a [Destination Rule](https://istio.io/docs/reference/config/networking/destination-rule/) to allow the operator to communicate with the {{es}} cluster. A communication issue between the operator and the managed {{es}} cluster can be detected by looking at the operator logs to check if there are any errors reported with the text `503 Service Unavailable`. ```sh kubectl logs -f -n elastic-system -c manager statefulset.apps/elastic-operator ``` -If the operator logs indicate a communications problem, create a `DestinationRule` to enable mutual TLS between the operator and the affected Elasticsearch cluster. For example, the following rule enables mutual TLS for a specific Elasticsearch cluster named `elastic-istio` deployed to the `default` namespace. +If the operator logs indicate a communications problem, create a `DestinationRule` to enable mutual TLS between the operator and the affected {{es}} cluster. For example, the following rule enables mutual TLS for a specific {{es}} cluster named `elastic-istio` deployed to the `default` namespace. ```yaml apiVersion: networking.istio.io/v1alpha3 @@ -128,7 +128,7 @@ Refer to the [Istio documentation](https://istio.io/docs/tasks/security/authenti #### Using init containers with Istio CNI [k8s-service-mesh-istio-cni] -There are [known issues with init containers](https://istio.io/docs/setup/additional-setup/cni/#compatibility-with-application-init-containers) when Istio CNI is configured. 
If you use init containers to [install Elasticsearch plugins](init-containers-for-plugin-downloads.md) or perform other initialization tasks that require network access, they may fail due to outbound traffic being blocked by the CNI plugin. To work around this issue, explicitly allow the external ports used by the init containers. +There are [known issues with init containers](https://istio.io/docs/setup/additional-setup/cni/#compatibility-with-application-init-containers) when Istio CNI is configured. If you use init containers to [install {{es}} plugins](init-containers-for-plugin-downloads.md) or perform other initialization tasks that require network access, they may fail due to outbound traffic being blocked by the CNI plugin. To work around this issue, explicitly allow the external ports used by the init containers. To install plugins using an init container, use a manifest similar to the following: @@ -169,7 +169,7 @@ spec: -### Kibana [k8s-service-mesh-istio-kibana] +### {{kib}} [k8s-service-mesh-istio-kibana] ```yaml apiVersion: kibana.k8s.elastic.co/v1 diff --git a/deploy-manage/deploy/cloud-on-k8s/k8s-service-mesh-linkerd.md b/deploy-manage/deploy/cloud-on-k8s/k8s-service-mesh-linkerd.md index 820daad1c0..e285e277d0 100644 --- a/deploy-manage/deploy/cloud-on-k8s/k8s-service-mesh-linkerd.md +++ b/deploy-manage/deploy/cloud-on-k8s/k8s-service-mesh-linkerd.md @@ -33,15 +33,15 @@ linkerd stat sts/elastic-operator -n elastic-system If the installation was successful, the output of the above command should show `1/1` under the `MESHED` column. -## Connect Elastic Stack applications to the Linkerd service mesh [k8s-service-mesh-linkerd-stack-connection] +## Connect {{stack}} applications to the Linkerd service mesh [k8s-service-mesh-linkerd-stack-connection] -The easiest way to connect applications to the service mesh is by adding the `linkerd.io/inject: enabled` annotation to the deployment namespace. 
For example, if you are planning to deploy Elastic Stack applications to a namespace named `elastic-stack`, annotate it as follows to enable [automatic Linkerd sidecar injection](https://linkerd.io/2/features/proxy-injection/). +The easiest way to connect applications to the service mesh is by adding the `linkerd.io/inject: enabled` annotation to the deployment namespace. For example, if you are planning to deploy {{stack}} applications to a namespace named `elastic-stack`, annotate it as follows to enable [automatic Linkerd sidecar injection](https://linkerd.io/2/features/proxy-injection/). ```sh kubectl annotate namespace elastic-stack linkerd.io/inject=enabled ``` -Any Elasticsearch, Kibana, or APM Server resources deployed to a namespace with the above annotation will automatically join the mesh. +Any {{es}}, {{kib}}, or APM Server resources deployed to a namespace with the above annotation will automatically join the mesh. Alternatively, if you only want specific resources to join the mesh, add the `linkerd.io/inject: enabled` annotation to the `podTemplate` (check [API documentation](cloud-on-k8s://reference/api-docs.md)) of the resource as follows: @@ -52,9 +52,9 @@ podTemplate: linkerd.io/inject: enabled ``` -If automatic sidecar injection is enabled and [auto mounting of service account tokens](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#use-the-default-service-account-to-access-the-api-server) is not disabled on your Kubernetes cluster, examples defined elsewhere in the ECK documentation will continue to work under Linkerd without requiring any modifications. However, as the default behaviour of ECK is to enable TLS for Elasticsearch, Kibana and APM Server resources, you will not be able to view detailed traffic information from Linkerd dashboards and command-line utilities. The following sections illustrate the optional configuration necessary to enhance the integration of Elastic Stack applications with Linkerd. 
+If automatic sidecar injection is enabled and [auto mounting of service account tokens](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#use-the-default-service-account-to-access-the-api-server) is not disabled on your Kubernetes cluster, examples defined elsewhere in the ECK documentation will continue to work under Linkerd without requiring any modifications. However, as the default behavior of ECK is to enable TLS for {{es}}, {{kib}}, and APM Server resources, you will not be able to view detailed traffic information from Linkerd dashboards and command-line utilities. The following sections illustrate the optional configuration necessary to enhance the integration of {{stack}} applications with Linkerd.

-### Elasticsearch [k8s-service-mesh-linkerd-elasticsearch]
+### {{es}} [k8s-service-mesh-linkerd-elasticsearch]

```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
@@ -86,9 +86,9 @@ spec:



-### Kibana and APM Server [k8s-service-mesh-linkerd-kibana-apm]
+### {{kib}} and APM Server [k8s-service-mesh-linkerd-kibana-apm]

-The configuration is almost identical for Kibana and APM Server resources.
+The configuration is almost identical for {{kib}} and APM Server resources.

```yaml
apiVersion: ...
diff --git a/deploy-manage/deploy/cloud-on-k8s/kibana-configuration.md b/deploy-manage/deploy/cloud-on-k8s/kibana-configuration.md
index 95ba94665d..aaf4acae9a 100644
--- a/deploy-manage/deploy/cloud-on-k8s/kibana-configuration.md
+++ b/deploy-manage/deploy/cloud-on-k8s/kibana-configuration.md
@@ -6,7 +6,7 @@ mapped_pages:
  - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-kibana.html
---

-# Kibana configuration [k8s-kibana]
+# {{kib}} configuration [k8s-kibana]

The following sections describe how to customize a {{kib}} deployment to suit your requirements.
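As a concrete illustration of the per-resource Linkerd opt-in described above, a {{kib}} manifest carrying the injection annotation in its `podTemplate` might look like the following sketch. The resource names and version are hypothetical placeholders; only the `linkerd.io/inject: enabled` annotation is the point of the example.

```yaml
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: my-kibana            # placeholder name
spec:
  version: 8.16.1            # placeholder version
  count: 1
  elasticsearchRef:
    name: my-elasticsearch   # placeholder; an ECK-managed Elasticsearch resource
  podTemplate:
    metadata:
      annotations:
        linkerd.io/inject: enabled
```

With this manifest, only this {{kib}} resource joins the mesh, without requiring the namespace-wide annotation.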
diff --git a/deploy-manage/deploy/cloud-on-k8s/kibana-instance-quickstart.md b/deploy-manage/deploy/cloud-on-k8s/kibana-instance-quickstart.md index 2ac067dd71..87f42eb3ea 100644 --- a/deploy-manage/deploy/cloud-on-k8s/kibana-instance-quickstart.md +++ b/deploy-manage/deploy/cloud-on-k8s/kibana-instance-quickstart.md @@ -1,5 +1,5 @@ --- -navigation_title: Deploy a Kibana instance +navigation_title: Deploy a {{kib}} instance applies_to: deployment: eck: all @@ -7,7 +7,7 @@ mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-deploy-kibana.html --- -# Deploy a Kibana instance [k8s-deploy-kibana] +# Deploy a {{kib}} instance [k8s-deploy-kibana] To deploy a simple [{{kib}}](/get-started/the-stack.md#stack-components-kibana) specification, with one {{kib}} instance: diff --git a/deploy-manage/deploy/cloud-on-k8s/logstash-plugins.md b/deploy-manage/deploy/cloud-on-k8s/logstash-plugins.md index 2f6c60edd9..7f2d8d8973 100644 --- a/deploy-manage/deploy/cloud-on-k8s/logstash-plugins.md +++ b/deploy-manage/deploy/cloud-on-k8s/logstash-plugins.md @@ -379,7 +379,7 @@ Examples of these plugins include [`logstash-input-kafka`](logstash-docs-md://ls Some plugins have additional requirements and guidelines for optimal performance in a {{ls}} ECK environment. 
* [{{ls}} integration plugin](#k8s-logstash-plugin-considerations-ls-integration) -* [Elasticsearch output plugin](#k8s-logstash-plugin-considerations-es-output) +* [{{es}} output plugin](#k8s-logstash-plugin-considerations-es-output) * [Elastic_integration filter plugin](#k8s-logstash-plugin-considerations-integration-filter) * [Elastic Agent input and Beats input plugins](#k8s-logstash-plugin-considerations-agent-beats) @@ -393,7 +393,7 @@ Use these guidelines *in addition* to the general guidelines provided in [Scalin When your pipeline uses the [`Logstash integration`](logstash-docs-md://lsr/plugins-integrations-logstash.md) plugin, add `keepalive=>false` to the [logstash-output](logstash-docs-md://lsr/plugins-outputs-logstash.md) definition to ensure that load balancing works correctly rather than keeping affinity to the same pod. -### Elasticsearch output plugin [k8s-logstash-plugin-considerations-es-output] +### {{es}} output plugin [k8s-logstash-plugin-considerations-es-output] The [`elasticsearch output`](logstash-docs-md://lsr/plugins-outputs-elasticsearch.md) plugin requires certain roles to be configured in order to enable {{ls}} to communicate with {{es}}. diff --git a/deploy-manage/deploy/cloud-on-k8s/logstash.md b/deploy-manage/deploy/cloud-on-k8s/logstash.md index c3edd5be1e..bfd6d052cc 100644 --- a/deploy-manage/deploy/cloud-on-k8s/logstash.md +++ b/deploy-manage/deploy/cloud-on-k8s/logstash.md @@ -16,7 +16,7 @@ This section describes how to configure and deploy {{ls}} with ECK. 
* [Logstash configuration](configuration-logstash.md#k8s-logstash-configuring-logstash) * [Configuring Logstash pipelines](configuration-logstash.md#k8s-logstash-pipelines) * [Defining data volumes for Logstash](configuration-logstash.md#k8s-logstash-volumes) - * [Using Elasticsearch in Logstash pipelines](configuration-logstash.md#k8s-logstash-pipelines-es) + * [Using {{es}} in Logstash pipelines](configuration-logstash.md#k8s-logstash-pipelines-es) * [Expose services](configuration-logstash.md#k8s-logstash-expose-services) * [Securing Logstash API](securing-logstash-api.md) diff --git a/deploy-manage/deploy/cloud-on-k8s/manage-compute-resources.md b/deploy-manage/deploy/cloud-on-k8s/manage-compute-resources.md index aaa97f0a75..12b6ee1970 100644 --- a/deploy-manage/deploy/cloud-on-k8s/manage-compute-resources.md +++ b/deploy-manage/deploy/cloud-on-k8s/manage-compute-resources.md @@ -8,10 +8,10 @@ mapped_pages: # Manage compute resources [k8s-managing-compute-resources] -To help the Kubernetes scheduler correctly place Pods in available Kubernetes nodes and ensure quality of service (QoS), it is recommended to specify the CPU and memory requirements for objects managed by the operator (Elasticsearch, Kibana, APM Server, Beats, Elastic Agent, Elastic Maps Server, and Logstash). In Kubernetes, `requests` defines the minimum amount of resources that must be available for a Pod to be scheduled; `limits` defines the maximum amount of resources that a Pod is allowed to consume. For more information about how Kubernetes uses these concepts, check [Managing Compute Resources for Containers](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/). 
+To help the Kubernetes scheduler correctly place Pods on available Kubernetes nodes and ensure quality of service (QoS), it is recommended to specify the CPU and memory requirements for objects managed by the operator ({{es}}, {{kib}}, APM Server, Beats, Elastic Agent, Elastic Maps Server, and Logstash). In Kubernetes, `requests` defines the minimum amount of resources that must be available for a Pod to be scheduled; `limits` defines the maximum amount of resources that a Pod is allowed to consume. For more information about how Kubernetes uses these concepts, check [Managing Compute Resources for Containers](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/).

::::{note}
-The operator applies default requests and limits for memory and CPU. They may be suitable for experimenting with the Elastic Stack, however it is recommended to reevaluate these values for production use cases.
+The operator applies default requests and limits for memory and CPU. They may be suitable for experimenting with the {{stack}}; however, it is recommended to reevaluate these values for production use cases.
::::


@@ -25,7 +25,7 @@ Also, to minimize disruption caused by Pod evictions due to resource contention,

You can set compute resource constraints in the `podTemplate` of objects managed by the operator.

-### Set compute resources for Elasticsearch [k8s-compute-resources-elasticsearch]
+### Set compute resources for {{es}} [k8s-compute-resources-elasticsearch]

```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
@@ -52,9 +52,9 @@ spec:

#### Memory limit and JVM Heap settings [k8s-elasticsearch-memory]

-Starting with Elasticsearch 7.11, the heap size of the JVM is automatically calculated based on the node roles and the available memory. The available memory is defined by the value of `resources.limits.memory` set on the `elasticsearch` container in the Pod template, or the available memory on the Kubernetes node if no limit is set.
+Starting with {{es}} 7.11, the heap size of the JVM is automatically calculated based on the node roles and the available memory. The available memory is defined by the value of `resources.limits.memory` set on the `elasticsearch` container in the Pod template, or the available memory on the Kubernetes node if no limit is set. -For Elasticsearch before 7.11, or if you want to override the default calculated heap size on newer versions, set the `ES_JAVA_OPTS` environment variable in the `podTemplate` to an appropriate value: +For {{es}} before 7.11, or if you want to override the default calculated heap size on newer versions, set the `ES_JAVA_OPTS` environment variable in the `podTemplate` to an appropriate value: ```yaml apiVersion: elasticsearch.k8s.elastic.co/v1 @@ -84,7 +84,7 @@ spec: #### CPU resources [k8s-elasticsearch-cpu] -The value set for CPU limits or requests directly impacts the Elasticsearch `node.processors` setting. By default Elasticsearch automatically detects the number of processors and sets the thread pool settings based on it. The following table gives the default value for `node.processors` given the CPU limits and requests set on the `elasticsearch` container: +The value set for CPU limits or requests directly impacts the {{es}} `node.processors` setting. By default {{es}} automatically detects the number of processors and sets the thread pool settings based on it. The following table gives the default value for `node.processors` given the CPU limits and requests set on the `elasticsearch` container: | | No CPU limit | With CPU limit | | --- | --- | --- | @@ -92,12 +92,12 @@ The value set for CPU limits or requests directly impacts the Elasticsearch `nod | CPU request set to 1 | `All the available cores on the K8S node` | `Value of the CPU limit` | | Other CPU requests | `Value of the CPU request` | `Value of the CPU limit` | -You can also set your own value for `node.processors` in the Elasticsearch config. 
+You can also set your own value for `node.processors` in the {{es}} config. ::::{note} A [known Kubernetes issue](https://github.com/kubernetes/kubernetes/issues/51135) may lead to over-aggressive CPU limits throttling. If the host Linux Kernel does not include [this CFS quota fix](https://github.com/kubernetes/kubernetes/issues/67577), you may want to: -* not set any CPU limit in the Elasticsearch resource (Burstable QoS) +* not set any CPU limit in the {{es}} resource (Burstable QoS) * [reduce the CFS quota period](https://github.com/kubernetes/kubernetes/pull/63437) in kubelet configuration * [disable CFS quotas](https://github.com/kubernetes/kubernetes/issues/51135#issuecomment-386319185) in kubelet configuration @@ -105,7 +105,7 @@ A [known Kubernetes issue](https://github.com/kubernetes/kubernetes/issues/51135 -### Set compute resources for Kibana, Elastic Maps Server, APM Server and Logstash [k8s-compute-resources-kibana-and-apm] +### Set compute resources for {{kib}}, Elastic Maps Server, APM Server and Logstash [k8s-compute-resources-kibana-and-apm] ```yaml apiVersion: kibana.k8s.elastic.co/v1 @@ -283,8 +283,8 @@ If `resources` is not defined in the specification of an object, then the operat | Type | Requests | Limits | | --- | --- | --- | | APM Server | `512Mi` | `512Mi` | -| Elasticsearch | `2Gi` | `2Gi` | -| Kibana | `1Gi` | `1Gi` | +| {{es}} | `2Gi` | `2Gi` | +| {{kib}} | `1Gi` | `1Gi` | | Beat | `300Mi` | `300Mi` | | Elastic Agent | `400Mi` | `400Mi` | | Elastic Maps Server | `200Mi` | `200Mi` | @@ -308,7 +308,7 @@ spec: type: Container ``` -With this limit range in place, if you create an Elasticsearch object without defining the `resources` section, you will get the following error: +With this limit range in place, if you create an {{es}} object without defining the `resources` section, you will get the following error: ``` Cannot create pod elasticsearch-sample-es-ldbgj48c7r: pods "elasticsearch-sample-es-ldbgj48c7r" is forbidden: minimum memory 
usage per Container is 3Gi, but request is 2Gi @@ -329,14 +329,14 @@ To avoid this, explicitly define the requests and limits mandated by your enviro ::: -#### Monitoring Elasticsearch CPU using Stack Monitoring [k8s-monitor-compute-resources-stack-monitoring] +#### Monitoring {{es}} CPU using Stack Monitoring [k8s-monitor-compute-resources-stack-monitoring] -If [Stack Monitoring](../../monitor/stack-monitoring/eck-stack-monitoring.md) is enabled, the pressure applied by the CPU cgroup controller to an Elasticsearch node can be evaluated from the **Stack Monitoring** page in Kibana. +If [Stack Monitoring](../../monitor/stack-monitoring/eck-stack-monitoring.md) is enabled, the pressure applied by the CPU cgroup controller to an {{es}} node can be evaluated from the **Stack Monitoring** page in {{kib}}. -1. On the **Stack Monitoring** page select the Elasticsearch node you want to monitor. +1. On the **Stack Monitoring** page select the {{es}} node you want to monitor. 2. Select the **Advanced** tab. -In the following example, an Elasticsearch container is limited to 2 cores. +In the following example, an {{es}} container is limited to 2 cores. ```yaml nodeSets: @@ -351,7 +351,7 @@ nodeSets: cpu: 2 ``` -The **Cgroup usage** curve shows that the CPU usage of this container has been steadily increasing up to 2 cores. Then, while the container was still requesting more CPU, the **Cgroup Throttling** curve shows how much the Elasticsearch container has been throttled: +The **Cgroup usage** curve shows that the CPU usage of this container has been steadily increasing up to 2 cores. 
Then, while the container was still requesting more CPU, the **Cgroup Throttling** curve shows how much the {{es}} container has been throttled: :::{image} /deploy-manage/images/cloud-on-k8s-cgroups-cfs-stats.png :alt: cgroup CPU performance chart diff --git a/deploy-manage/deploy/cloud-on-k8s/manage-deployments.md b/deploy-manage/deploy/cloud-on-k8s/manage-deployments.md index f261c4924a..c6423a2425 100644 --- a/deploy-manage/deploy/cloud-on-k8s/manage-deployments.md +++ b/deploy-manage/deploy/cloud-on-k8s/manage-deployments.md @@ -5,22 +5,22 @@ applies_to: --- # Manage deployments -This section provides detailed guidance on deploying, configuring, and managing Elasticsearch and Kibana within ECK. A **deployment** refers to an {{es}} cluster, optionally with one or more {{kib}} instances connected to it. +This section provides detailed guidance on deploying, configuring, and managing {{es}} and {{kib}} within ECK. A **deployment** refers to an {{es}} cluster, optionally with one or more {{kib}} instances connected to it. ::::{tip} -This content focuses on Elasticsearch and Kibana deployments. To orchestrate other Elastic Stack applications such as APM Server, Beats, Elastic Agent, Elastic Maps Server, and Logstash, refer to the [Orchestrating other Elastic Stack applications](./orchestrate-other-elastic-applications.md). +This content focuses on {{es}} and {{kib}} deployments. To orchestrate other {{stack}} applications such as APM Server, Beats, Elastic Agent, Elastic Maps Server, and Logstash, refer to [Orchestrating other {{stack}} applications](./orchestrate-other-elastic-applications.md). :::: ## What You'll Learn In this section, you'll learn how to perform the following tasks in ECK: -- [**Deploy an Elasticsearch cluster**](./elasticsearch-deployment-quickstart.md): Orchestrate an {{es}} cluster in Kubernetes. -- [**Deploy Kibana instances**](./kibana-instance-quickstart.md): Set up and connect Kibana to an existing Elasticsearch cluster.
-- [**Manage deployments using Elastic Stack Helm chart**](./managing-deployments-using-helm-chart.md): Use Helm to deploy clusters and other stack applications. +- [**Deploy an {{es}} cluster**](./elasticsearch-deployment-quickstart.md): Orchestrate an {{es}} cluster in Kubernetes. +- [**Deploy {{kib}} instances**](./kibana-instance-quickstart.md): Set up and connect {{kib}} to an existing {{es}} cluster. +- [**Manage deployments using {{stack}} Helm chart**](./managing-deployments-using-helm-chart.md): Use Helm to deploy clusters and other stack applications. - [**Apply updates to your deployments**](./update-deployments.md): Modify existing deployments, scale clusters, and update configurations, while ensuring minimal disruption. - [**Configure access to your deployments**](./accessing-services.md): Use and adapt Kubernetes services to your needs. -- [**Advanced configuration**](./configure-deployments.md): Explore available settings for Elasticsearch and Kibana, including storage, networking, security, and scaling options. +- [**Advanced configuration**](./configure-deployments.md): Explore available settings for {{es}} and {{kib}}, including storage, networking, security, and scaling options. For a complete reference on configuration possibilities for {{es}} and {{kib}}, see: @@ -29,4 +29,4 @@ For a complete reference on configuration possibilities for {{es}} and {{kib}}, Other references for managing deployments: -* [**Upgrade the Elastic Stack version**](../../upgrade/deployment-or-cluster.md): Upgrade orchestrated applications on ECK. +* [**Upgrade the {{stack}} version**](../../upgrade/deployment-or-cluster.md): Upgrade orchestrated applications on ECK. 
diff --git a/deploy-manage/deploy/cloud-on-k8s/managing-deployments-using-helm-chart.md b/deploy-manage/deploy/cloud-on-k8s/managing-deployments-using-helm-chart.md index 0235de0a6f..c7a02e0364 100644 --- a/deploy-manage/deploy/cloud-on-k8s/managing-deployments-using-helm-chart.md +++ b/deploy-manage/deploy/cloud-on-k8s/managing-deployments-using-helm-chart.md @@ -7,9 +7,9 @@ mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-stack-helm-chart.html --- -# Elastic Stack Helm chart [k8s-stack-helm-chart] +# {{stack}} Helm chart [k8s-stack-helm-chart] -Starting from ECK 2.4.0, a Helm chart is available for managing Elastic Stack resources using the ECK Operator. It is available from the Elastic Helm repository and can be added to your Helm repository list by running the following command: +Starting from ECK 2.4.0, a Helm chart is available for managing {{stack}} resources using the ECK Operator. It is available from the Elastic Helm repository and can be added to your Helm repository list by running the following command: ```sh helm repo add elastic https://helm.elastic.co @@ -20,7 +20,7 @@ helm repo update The minimum supported version of Helm is 3.2.0. :::: -The Elastic Stack (`eck-stack`) Helm chart is built on top of individual charts such as `eck-elasticsearch` and `eck-kibana`. For more details on its structure and dependencies, refer to the [chart repository](https://github.com/elastic/cloud-on-k8s/tree/main/deploy/eck-stack/). +The {{stack}} (`eck-stack`) Helm chart is built on top of individual charts such as `eck-elasticsearch` and `eck-kibana`. For more details on its structure and dependencies, refer to the [chart repository](https://github.com/elastic/cloud-on-k8s/tree/main/deploy/eck-stack/). The chart enables you to deploy the core components ({{es}} and {{kib}}) together, along with other {{stack}} applications if needed, under the same chart release. 
The following sections guide you through the installation process for multiple use cases. Choose the command that best fits your setup. @@ -28,59 +28,59 @@ The chart enables you to deploy the core components ({{es}} and {{kib}}) togethe All the provided examples deploy the applications in a namespace named `elastic-stack`. Consider adapting the commands to your use case. :::: -## Elasticsearch and Kibana [k8s-install-elasticsearch-kibana-helm] +## {{es}} and {{kib}} [k8s-install-elasticsearch-kibana-helm] Similar to the quickstart examples for {{es}} and {{kib}}, this section describes how to set up an {{es}} cluster with a simple {{kib}} instance managed by ECK, and how to customize a deployment using the eck-stack Helm chart’s values. ```sh -# Install an eck-managed Elasticsearch and Kibana using the default values, which deploys the quickstart examples. +# Install an eck-managed {{es}} and {{kib}} using the default values, which deploys the quickstart examples. helm install es-kb-quickstart elastic/eck-stack -n elastic-stack --create-namespace ``` -### Customize Elasticsearch and Kibana installation with example values [k8s-eck-stack-helm-customize] +### Customize {{es}} and {{kib}} installation with example values [k8s-eck-stack-helm-customize] -You can find example Helm values files for deploying and managing more advanced Elasticsearch and Kibana setups [in the project repository](https://github.com/elastic/cloud-on-k8s/tree/2.16/deploy/eck-stack/examples). +You can find example Helm values files for deploying and managing more advanced {{es}} and {{kib}} setups [in the project repository](https://github.com/elastic/cloud-on-k8s/tree/2.16/deploy/eck-stack/examples). To use one or more of these example configurations, use the `--values` Helm option, as seen in the following section.
```sh -# Install an eck-managed Elasticsearch and Kibana using the Elasticsearch node roles example with hot, warm, and cold data tiers, and the Kibana example customizing the http service. +# Install an eck-managed {{es}} and {{kib}} using the {{es}} node roles example with hot, warm, and cold data tiers, and the {{kib}} example customizing the http service. helm install es-quickstart elastic/eck-stack -n elastic-stack --create-namespace \ --values https://raw.githubusercontent.com/elastic/cloud-on-k8s/2.16/deploy/eck-stack/examples/elasticsearch/hot-warm-cold.yaml \ --values https://raw.githubusercontent.com/elastic/cloud-on-k8s/2.16/deploy/eck-stack/examples/kibana/http-configuration.yaml ``` -## Fleet Server with Elastic Agents along with Elasticsearch and Kibana [k8s-install-fleet-agent-elasticsearch-kibana-helm] +## Fleet Server with Elastic Agents along with {{es}} and {{kib}} [k8s-install-fleet-agent-elasticsearch-kibana-helm] -The following section builds upon the previous section, and allows installing Fleet Server, and Fleet-managed Elastic Agents along with Elasticsearch and Kibana. +The following section builds upon the previous section and allows installing Fleet Server and Fleet-managed Elastic Agents along with {{es}} and {{kib}}. ```sh -# Install an eck-managed Elasticsearch, Kibana, Fleet Server, and managed Elastic Agents using custom values. +# Install an eck-managed {{es}}, {{kib}}, Fleet Server, and managed Elastic Agents using custom values.
helm install eck-stack-with-fleet elastic/eck-stack \ --values https://raw.githubusercontent.com/elastic/cloud-on-k8s/2.16/deploy/eck-stack/examples/agent/fleet-agents.yaml -n elastic-stack ``` -## Logstash along with Elasticsearch, Kibana and Beats [k8s-install-logstash-elasticsearch-kibana-helm] +## Logstash along with {{es}}, {{kib}}, and Beats [k8s-install-logstash-elasticsearch-kibana-helm] -The following section builds upon the previous sections, and allows installing Logstash along with Elasticsearch, Kibana and Beats. +The following section builds upon the previous sections and allows installing Logstash along with {{es}}, {{kib}}, and Beats. ```sh -# Install an eck-managed Elasticsearch, Kibana, Beats and Logstash using custom values. +# Install an eck-managed {{es}}, {{kib}}, Beats, and Logstash using custom values. helm install eck-stack-with-logstash elastic/eck-stack \ --values https://raw.githubusercontent.com/elastic/cloud-on-k8s/2.16/deploy/eck-stack/examples/logstash/basic-eck.yaml -n elastic-stack ``` -## Standalone Elastic APM Server along with Elasticsearch and Kibana [k8s-install-apm-server-elasticsearch-kibana-helm] +## Standalone Elastic APM Server along with {{es}} and {{kib}} [k8s-install-apm-server-elasticsearch-kibana-helm] -The following section builds upon the previous sections, and allows installing a standalone Elastic APM Server along with Elasticsearch and Kibana. +The following section builds upon the previous sections and allows installing a standalone Elastic APM Server along with {{es}} and {{kib}}. ```sh -# Install an eck-managed Elasticsearch, Kibana, and standalone APM Server using custom values. +# Install an eck-managed {{es}}, {{kib}}, and standalone APM Server using custom values.
helm install eck-stack-with-apm-server elastic/eck-stack \ --values https://raw.githubusercontent.com/elastic/cloud-on-k8s/2.16/deploy/eck-stack/examples/apm-server/basic.yaml -n elastic-stack ``` -## Install individual components of the Elastic Stack [k8s-eck-stack-individual-components] +## Install individual components of the {{stack}} [k8s-eck-stack-individual-components] You can install individual components in one of two ways using the provided Helm Charts. @@ -99,7 +99,7 @@ helm install es-quickstart elastic/eck-stack -n elastic-stack --create-namespace helm install es-quickstart elastic/eck-elasticsearch -n elastic-stack --create-namespace ``` -## Adding Ingress to the Elastic stack [k8s-eck-stack-ingress] +## Adding Ingress to the {{stack}} [k8s-eck-stack-ingress] :::{admonition} Support scope for Ingress Controllers [Ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/) is a standard Kubernetes concept. While ECK-managed workloads can be publicly exposed using ingress resources, and we provide [example configurations](/deploy-manage/deploy/cloud-on-k8s/recipes.md), setting up an Ingress controller requires in-house Kubernetes expertise. 
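To make the support-scope note above concrete, the following is a rough sketch of the kind of Ingress resource involved. The host, annotation, and service name are illustrative, not prescribed by ECK's documentation (the `quickstart-es-http` name follows the operator's `<cluster-name>-es-http` service convention for a cluster named `quickstart`); refer to the linked example configurations for supported, complete recipes.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: elasticsearch-ingress
  annotations:
    # Illustrative: most controllers must be told the backend speaks HTTPS.
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  rules:
  - host: elasticsearch.example.com   # illustrative host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: quickstart-es-http  # assumed ECK-created service name
            port:
              number: 9200
```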
@@ -181,7 +181,7 @@ ## View available configuration options [k8s-install-helm-show-values-stack] -You can view all configurable values of the Elastic Stack helm chart of the individual charts by running the following: +You can view all configurable values of the {{stack}} Helm chart or of the individual charts by running the following: ```sh helm show values elastic/eck-stack diff --git a/deploy-manage/deploy/cloud-on-k8s/node-configuration.md b/deploy-manage/deploy/cloud-on-k8s/node-configuration.md index 11262c3881..bc45f32765 100644 --- a/deploy-manage/deploy/cloud-on-k8s/node-configuration.md +++ b/deploy-manage/deploy/cloud-on-k8s/node-configuration.md @@ -8,7 +8,7 @@ # Node configuration [k8s-node-configuration] -Any setting defined in the `elasticsearch.yml` configuration file can also be defined for a set of Elasticsearch nodes in the `spec.nodeSets[?].config` section. +Any setting defined in the `elasticsearch.yml` configuration file can also be defined for a set of {{es}} nodes in the `spec.nodeSets[?].config` section. Some settings are managed by ECK; it is not recommended to change them. Refer to [Settings managed by ECK](settings-managed-by-eck.md) for more details. @@ -18,14 +18,14 @@ spec: - name: masters count: 3 config: - # On Elasticsearch versions before 7.9.0, replace the node.roles configuration with the following: + # On {{es}} versions before 7.9.0, replace the node.roles configuration with the following: # node.master: true node.roles: ["master"] xpack.ml.enabled: true - name: data count: 10 config: - # On Elasticsearch versions before 7.9.0, replace the node.roles configuration with the following: + # On {{es}} versions before 7.9.0, replace the node.roles configuration with the following: # node.master: false # node.data: true # node.ingest: true @@ -35,7 +35,7 @@ ``` ::::{warning} -ECK parses Elasticsearch configuration and normalizes it to YAML.
Consequently, some Elasticsearch configuration schema are impossible to express with ECK and, therefore, must be set using [dynamic cluster settings](/deploy-manage/deploy/self-managed/configure-elasticsearch.md#cluster-setting-types). For example: +ECK parses {{es}} configuration and normalizes it to YAML. Consequently, some {{es}} configuration schemas are impossible to express with ECK and, therefore, must be set using [dynamic cluster settings](/deploy-manage/deploy/self-managed/configure-elasticsearch.md#cluster-setting-types). For example: ```yaml spec: nodeSets: @@ -48,5 +48,5 @@ spec: ``` :::: -For more information on Elasticsearch settings, check [Configuring Elasticsearch](/deploy-manage/deploy/self-managed/configure-elasticsearch.md). +For more information on {{es}} settings, check [Configuring Elasticsearch](/deploy-manage/deploy/self-managed/configure-elasticsearch.md). diff --git a/deploy-manage/deploy/cloud-on-k8s/nodes-orchestration.md b/deploy-manage/deploy/cloud-on-k8s/nodes-orchestration.md index 8e24817807..1f0e3c7cd4 100644 --- a/deploy-manage/deploy/cloud-on-k8s/nodes-orchestration.md +++ b/deploy-manage/deploy/cloud-on-k8s/nodes-orchestration.md @@ -19,7 +19,7 @@ This section covers the following topics: ## NodeSets overview [k8s-nodesets] -NodeSets are used to specify the topology of the Elasticsearch cluster. Each NodeSet represents a group of Elasticsearch nodes that share the same Elasticsearch configuration and Kubernetes Pod configuration. +NodeSets are used to specify the topology of the {{es}} cluster. Each NodeSet represents a group of {{es}} nodes that share the same {{es}} configuration and Kubernetes Pod configuration. ::::{tip} You can use [YAML anchors](https://yaml.org/spec/1.2/spec.html#id2765878) to declare the configuration change once and reuse it across all the node sets.
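As a minimal sketch of the YAML anchors tip (the node set names, counts, and settings here are illustrative): declare the shared `config` once with an anchor, then reuse it through an alias.

```yaml
spec:
  nodeSets:
  - name: data-a
    count: 2
    config: &shared-config    # declared once with an anchor
      node.roles: ["data"]
      xpack.ml.enabled: false
  - name: data-b
    count: 2
    config: *shared-config    # reused through the alias
```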
@@ -64,27 +64,27 @@ spec: storageClassName: standard ``` -In this example, the Elasticsearch resource defines two NodeSets: +In this example, the {{es}} resource defines two NodeSets: * `master-nodes` with 10Gi volumes * `data-nodes` with 1000Gi volumes -The Elasticsearch cluster is composed of 13 nodes: 3 master nodes and 10 data nodes. +The {{es}} cluster is composed of 13 nodes: 3 master nodes and 10 data nodes. ## Upgrading the cluster [k8s-upgrading] -ECK handles smooth upgrades from one cluster specification to another. You can apply a new Elasticsearch specification at any time. For example, based on the Elasticsearch specification described in the [NodeSets overview](#k8s-nodesets), you can: +ECK handles smooth upgrades from one cluster specification to another. You can apply a new {{es}} specification at any time. For example, based on the {{es}} specification described in the [NodeSets overview](#k8s-nodesets), you can: -* Add five additional Elasticsearch data nodes: In `data-nodes` change the value in the `count` field from 10 to 15. +* Add five additional {{es}} data nodes: In `data-nodes` change the value in the `count` field from 10 to 15. * Increase the memory limit of data nodes to 32Gi: Set a [different resource limit](manage-compute-resources.md) in the existing `data-nodes` NodeSet. -* Replace dedicated master and dedicated data nodes with nodes having both master and data roles: Replace the two existing NodeSets by a single one with a different name and the appropriate Elasticsearch configuration settings. -* Upgrade Elasticsearch from version 7.2.0 to 7.3.0: Change the value in the `version` field. +* Replace dedicated master and dedicated data nodes with nodes having both master and data roles: Replace the two existing NodeSets by a single one with a different name and the appropriate {{es}} configuration settings. +* Upgrade {{es}} from version 7.2.0 to 7.3.0: Change the value in the `version` field. 
ECK orchestrates NodeSet changes with no downtime and makes sure that: * Before a node is removed, the relevant data is migrated to other nodes (with some [limitations](#k8s-orchestration-limitations)). -* When a cluster topology changes, these Elasticsearch settings are adjusted accordingly: +* When a cluster topology changes, these {{es}} settings are adjusted accordingly: * `discovery.seed_hosts` * `cluster.initial_master_nodes` @@ -96,11 +96,11 @@ ECK orchestrates NodeSet changes with no downtime and makes sure that: ## StatefulSets orchestration [k8s-statefulsets] -Behind the scenes, ECK translates each NodeSet specified in the Elasticsearch resource into a [StatefulSet in Kubernetes](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/). The NodeSet specification is based on the StatefulSet specification: +Behind the scenes, ECK translates each NodeSet specified in the {{es}} resource into a [StatefulSet in Kubernetes](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/). The NodeSet specification is based on the StatefulSet specification: -* `count` corresponds to the number of replicas in the StatefulSet. A StatefulSet replica is a Pod — which corresponds to an Elasticsearch node. -* `podTemplate` can be used to [customize some aspects of the Elasticsearch Pods](customize-pods.md) created by the underlying StatefulSet. -* The StatefulSet name is derived from the Elasticsearch resource name and the NodeSet name. Each Pod in the StatefulSet gets a name generated by suffixing the pod ordinal to the StatefulSet name. Elasticsearch nodes have the same name as the Pod they are running on. +* `count` corresponds to the number of replicas in the StatefulSet. A StatefulSet replica is a Pod — which corresponds to an {{es}} node. +* `podTemplate` can be used to [customize some aspects of the {{es}} Pods](customize-pods.md) created by the underlying StatefulSet. 
+* The StatefulSet name is derived from the {{es}} resource name and the NodeSet name. Each Pod in the StatefulSet gets a name generated by suffixing the pod ordinal to the StatefulSet name. {{es}} nodes have the same name as the Pod they are running on. The actual Pod creation is handled by the StatefulSet controller in Kubernetes. ECK relies on the [OnDelete StatefulSet update strategy](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#update-strategies) since it needs full control over when and how Pods get upgraded to a new revision. @@ -111,9 +111,9 @@ When a Pod is removed and recreated (maybe with a newer revision), the StatefulS Depending on how the NodeSets are updated, ECK handles the Kubernetes resource reconciliation in various ways. -* A new NodeSet is added to the Elasticsearch resource. +* A new NodeSet is added to the {{es}} resource. - ECK creates the corresponding StatefulSet. It also sets up [Secrets](https://kubernetes.io/docs/concepts/configuration/secret/) and [ConfigMaps](https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/) to hold the TLS certificates and Elasticsearch configuration files. + ECK creates the corresponding StatefulSet. It also sets up [Secrets](https://kubernetes.io/docs/concepts/configuration/secret/) and [ConfigMaps](https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/) to hold the TLS certificates and {{es}} configuration files. * The node count of an existing NodeSet is increased. @@ -121,22 +121,22 @@ Depending on how the NodeSets are updated, ECK handles the Kubernetes resource r * The node count of an existing NodeSet is decreased. - ECK migrates data away from the Elasticsearch nodes due to be removed and then decreases the replicas of the corresponding StatefulSet. [PersistentVolumeClaims](volume-claim-templates.md) belonging to the removed nodes are automatically removed as well. 
+ ECK migrates data away from the {{es}} nodes due to be removed and then decreases the replicas of the corresponding StatefulSet. [PersistentVolumeClaims](volume-claim-templates.md) belonging to the removed nodes are automatically removed as well. * An existing NodeSet is removed. - ECK migrates data away from the Elasticsearch nodes in the NodeSet and removes the underlying StatefulSet. + ECK migrates data away from the {{es}} nodes in the NodeSet and removes the underlying StatefulSet. -* The specification of an existing NodeSet is updated. For example, the Elasticsearch configuration, or the PodTemplate resources requirements. +* The specification of an existing NodeSet is updated. For example, the {{es}} configuration, or the PodTemplate resources requirements. - ECK performs a rolling upgrade of the corresponding Elasticsearch nodes. It follows the [Elasticsearch rolling upgrade best practices](/deploy-manage/upgrade/deployment-or-cluster.md) to update the underlying Pods while maintaining the availability of the Elasticsearch cluster where possible. In most cases, the process simply involves restarting Elasticsearch nodes one-by-one. Note that some cluster topologies may be impossible to deploy without making the cluster unavailable (check [Limitations](#k8s-orchestration-limitations) ). + ECK performs a rolling upgrade of the corresponding {{es}} nodes. It follows the [{{es}} rolling upgrade best practices](/deploy-manage/upgrade/deployment-or-cluster.md) to update the underlying Pods while maintaining the availability of the {{es}} cluster where possible. In most cases, the process simply involves restarting {{es}} nodes one-by-one. Note that some cluster topologies may be impossible to deploy without making the cluster unavailable (check [Limitations](#k8s-orchestration-limitations) ). * An existing NodeSet is renamed. - ECK creates a new NodeSet with the new name, migrates data away from the old NodeSet, and then removes it. 
During this process the Elasticsearch cluster could temporarily have more nodes than normal. The Elasticsearch [update strategy](update-strategy.md) controls how many nodes can exist above or below the target node count during the upgrade. + ECK creates a new NodeSet with the new name, migrates data away from the old NodeSet, and then removes it. During this process the {{es}} cluster could temporarily have more nodes than normal. The {{es}} [update strategy](update-strategy.md) controls how many nodes can exist above or below the target node count during the upgrade. -In all these cases, ECK handles StatefulSet operations according to the Elasticsearch orchestration best practices by adjusting the following orchestration settings: +In all these cases, ECK handles StatefulSet operations according to the {{es}} orchestration best practices by adjusting the following orchestration settings: * `discovery.seed_hosts` * `cluster.initial_master_nodes` @@ -156,8 +156,8 @@ Due to relying on Kubernetes primitives such as StatefulSets, the ECK orchestrat If an {{es}} node holds the only copy of a shard, this shard becomes unavailable while the node is upgraded. To ensure [high availability](/deploy-manage/production-guidance/availability-and-resilience.md) it is recommended to configure clusters with three master nodes, more than one node per [data tier](/manage-data/lifecycle/data-tiers.md) and at least one replica per index. -* Elasticsearch Pods may stay `Pending` during a rolling upgrade if the Kubernetes scheduler cannot re-schedule them back. This is especially important when using local PersistentVolumes. If the Kubernetes node bound to a local PersistentVolume does not have enough capacity to host an upgraded Pod which was temporarily removed, that Pod will stay `Pending`. -* Rolling upgrades can only make progress if the Elasticsearch cluster health is green. 
There are exceptions to this rule if the cluster health is yellow and if the following conditions are satisfied: +* {{es}} Pods may stay `Pending` during a rolling upgrade if the Kubernetes scheduler cannot re-schedule them back. This is especially important when using local PersistentVolumes. If the Kubernetes node bound to a local PersistentVolume does not have enough capacity to host an upgraded Pod which was temporarily removed, that Pod will stay `Pending`. +* Rolling upgrades can only make progress if the {{es}} cluster health is green. There are exceptions to this rule if the cluster health is yellow and if the following conditions are satisfied: * A cluster version upgrade is in progress and some Pods are not up to date. * There are no initializing or relocating shards. @@ -167,29 +167,29 @@ If these conditions are met, then ECK can delete a Pod for upgrade even if the c The health of the cluster is deliberately ignored in the following cases: -* If all the Elasticsearch nodes of a NodeSet are unavailable, probably caused by a misconfiguration, the operator ignores the cluster health and upgrades nodes of the NodeSet. -* If an Elasticsearch node to upgrade is not healthy, and not part of the Elasticsearch cluster, the operator ignores the cluster health and upgrades the Elasticsearch node. +* If all the {{es}} nodes of a NodeSet are unavailable, probably caused by a misconfiguration, the operator ignores the cluster health and upgrades nodes of the NodeSet. +* If an {{es}} node to upgrade is not healthy, and not part of the {{es}} cluster, the operator ignores the cluster health and upgrades the {{es}} node. - * Elasticsearch versions cannot be downgraded. For example, it is impossible to downgrade an existing cluster from version 7.3.0 to 7.2.0. This is not supported by Elasticsearch. + * {{es}} versions cannot be downgraded. For example, it is impossible to downgrade an existing cluster from version 7.3.0 to 7.2.0. This is not supported by {{es}}. 
Advanced users may force an upgrade by manually deleting Pods themselves. The deleted Pods are automatically recreated at the latest revision.

-Operations that reduce the number of nodes in the cluster cannot make progress without user intervention, if the Elasticsearch index replica settings are incompatible with the intended downscale. Specifically, if the Elasticsearch index settings demand a higher number of shard copies than data nodes in the cluster after the downscale operation, ECK cannot migrate the data away from the node about to be removed. You can address this in the following ways:
+Operations that reduce the number of nodes in the cluster cannot make progress without user intervention if the {{es}} index replica settings are incompatible with the intended downscale. Specifically, if the {{es}} index settings demand a higher number of shard copies than there are data nodes in the cluster after the downscale operation, ECK cannot migrate the data away from the node about to be removed. You can address this in the following ways:

-* Adjust the Elasticsearch [index settings](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-settings) to a number of replicas that allow the desired node removal.
+* Adjust the {{es}} [index settings](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-settings) to a number of replicas that allows the desired node removal.
* Use [`auto_expand_replicas`](elasticsearch://reference/elasticsearch/index-settings/index-modules.md) to automatically adjust the replicas to the number of data nodes in the cluster.


## Advanced control during rolling upgrades [k8s-advanced-upgrade-control]

During {{es}} rolling upgrades, ECK follows a set of rules (also known as predicates) to ensure the upgrade process is safe and does not put the cluster at risk.
For example, one of these predicates ensures that only a single master node is upgraded at a time, while another prevents nodes from being restarted if the cluster is in a red state.

-These predicates can be selectively disabled for certain scenarios where the ECK operator will not proceed with an Elasticsearch cluster upgrade because it deems it to be "unsafe".
+These predicates can be selectively disabled for certain scenarios where the ECK operator will not proceed with an {{es}} cluster upgrade because it deems it to be "unsafe".

For a complete list of available predicates, their meaning, and example usage, refer to [ECK upgrade predicates](cloud-on-k8s://reference/upgrade-predicates.md).

::::{warning}
-* Selectively disabling the predicates is extremely risky, and carry a high chance of either data loss, or causing a cluster to become completely unavailable. Use them only if you are sure that you are not causing permanent damage to an Elasticsearch cluster.
+* Selectively disabling the predicates is extremely risky and carries a high chance of either data loss or causing a cluster to become completely unavailable. Use them only if you are sure that you are not causing permanent damage to an {{es}} cluster.
* These predicates might change in the future. We will be adding, removing, and renaming these over time, so be careful in adding these to any automation.
* Also, make sure you remove them after use by running `kubectl annotate elasticsearch.elasticsearch.k8s.elastic.co/elasticsearch-sample eck.k8s.elastic.co/disable-upgrade-predicates-`

::::


diff --git a/deploy-manage/deploy/cloud-on-k8s/orchestrate-other-elastic-applications.md b/deploy-manage/deploy/cloud-on-k8s/orchestrate-other-elastic-applications.md
index 4ac0a66688..28c4ef4523 100644
--- a/deploy-manage/deploy/cloud-on-k8s/orchestrate-other-elastic-applications.md
+++ b/deploy-manage/deploy/cloud-on-k8s/orchestrate-other-elastic-applications.md
@@ -1,6 +1,6 @@
 # Orchestrate other Elastic applications

-This section explains how to deploy and configure various Elastic Stack applications within Elastic Cloud on Kubernetes (ECK).
+This section explains how to deploy and configure various {{stack}} applications within {{eck}} (ECK).

 ::::{tip}
 This content applies to APM Server, Beats, Elastic Agent, Elastic Maps Server, and Logstash applications. To orchestrate an {{es}} cluster or {{kib}}, refer to [](./manage-deployments.md).
@@ -16,13 +16,13 @@ The following guides provide specific instructions for deploying and configuring When orchestrating any of these applications, also consider the following topics: -* [Elastic Stack Helm Chart](managing-deployments-using-helm-chart.md) +* [{{stack}} Helm Chart](managing-deployments-using-helm-chart.md) * [Recipes](recipes.md) -* [Secure the Elastic Stack](../../security.md) -* [Access Elastic Stack services](accessing-services.md) +* [Secure the {{stack}}](../../security.md) +* [Access {{stack}} services](accessing-services.md) * [Customize Pods](customize-pods.md) * [Manage compute resources](manage-compute-resources.md) * [Autoscaling stateless applications](../../autoscaling/autoscaling-in-eck.md#k8s-stateless-autoscaling) -* [Elastic Stack configuration policies](elastic-stack-configuration-policies.md) -* [Upgrade the Elastic Stack version](../../upgrade/deployment-or-cluster.md) +* [{{stack}} configuration policies](elastic-stack-configuration-policies.md) +* [Upgrade the {{stack}} version](../../upgrade/deployment-or-cluster.md) * [Connect to external Elastic resources](connect-to-external-elastic-resources.md) \ No newline at end of file diff --git a/deploy-manage/deploy/cloud-on-k8s/pod-prestop-hook.md b/deploy-manage/deploy/cloud-on-k8s/pod-prestop-hook.md index 63e9058088..dca565211e 100644 --- a/deploy-manage/deploy/cloud-on-k8s/pod-prestop-hook.md +++ b/deploy-manage/deploy/cloud-on-k8s/pod-prestop-hook.md @@ -8,7 +8,7 @@ mapped_pages: # Pod PreStop hook [k8s-prestop] -When an Elasticsearch `Pod` is terminated, its `Endpoint` is removed from the `Service` and the Elasticsearch process is terminated. As these two operations happen in parallel, a race condition exists. If the Elasticsearch process is already shut down, but the `Endpoint` is still a part of the `Service`, any new connection might fail. For more information, check [Termination of pods](https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods). 
+When an {{es}} `Pod` is terminated, its `Endpoint` is removed from the `Service` and the {{es}} process is terminated. As these two operations happen in parallel, a race condition exists. If the {{es}} process is already shut down, but the `Endpoint` is still a part of the `Service`, any new connection might fail. For more information, check [Termination of pods](https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods). Moreover, kube-proxy resynchronizes its rules [every 30 seconds by default](https://kubernetes.io/docs/reference/command-line-tools-reference/kube-proxy/#options). During that time window of 30 seconds, the terminating Pod IP may still be used when targeting the service. Please note the resync operation itself may take some time, especially if kube-proxy is configured to use iptables with a lot of services and rules to apply. @@ -35,9 +35,9 @@ spec: value: "5" ``` -The pre-stop lifecycle hook also tries to gracefully shut down the Elasticsearch node in case of a termination that is not caused by the ECK operator. Examples of such terminations could be Kubernetes node maintenance or a Kubernetes upgrade. In these cases the script will try to interact with the Elasticsearch API to notify Elasticsearch of the impending termination of the node. The intent is to avoid relocation and recovery of shards while the Elasticsearch node is only temporarily unavailable. +The pre-stop lifecycle hook also tries to gracefully shut down the {{es}} node in case of a termination that is not caused by the ECK operator. Examples of such terminations could be Kubernetes node maintenance or a Kubernetes upgrade. In these cases the script will try to interact with the {{es}} API to notify {{es}} of the impending termination of the node. The intent is to avoid relocation and recovery of shards while the {{es}} node is only temporarily unavailable. -This is done on a best effort basis. 
In particular requests to an Elasticsearch cluster already in the process of shutting down might fail if the Kubernetes service has already been removed. The script allows for `PRE_STOP_MAX_DNS_ERRORS` which default to 2 before giving up.
+This is done on a best effort basis. In particular, requests to an {{es}} cluster already in the process of shutting down might fail if the Kubernetes service has already been removed. The script tolerates up to `PRE_STOP_MAX_DNS_ERRORS` DNS errors (default 2) before giving up.

-When using local persistent volumes a different behaviour might be desirable because the Elasticsearch node’s associated storage will not be available anymore on the new Kubernetes node. `PRE_STOP_SHUTDOWN_TYPE` allows to override the default shutdown type to one of the [possible values](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-shutdown-put-node). Please be aware that setting it to anything other than `restart` might mean that the pre-stop hook will run longer than `terminationGracePeriodSeconds` of the Pod while moving data out of the terminating Pod and will not be able to complete unless you also adjust that value in the `podTemplate`.
+When using local persistent volumes, a different behaviour might be desirable because the {{es}} node’s associated storage will no longer be available on the new Kubernetes node. `PRE_STOP_SHUTDOWN_TYPE` allows you to override the default shutdown type with one of the [possible values](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-shutdown-put-node). Be aware that setting it to anything other than `restart` might mean that the pre-stop hook runs longer than the Pod’s `terminationGracePeriodSeconds` while moving data out of the terminating Pod, and it will not be able to complete unless you also adjust that value in the `podTemplate`.
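The `podTemplate` adjustments described above can be sketched as follows. This is only an illustration: the cluster name, version, and the chosen values are assumptions, not recommendations, and `"remove"` is just one of the shutdown types accepted by the node shutdown API.

```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart                    # assumed cluster name
spec:
  version: 8.16.1                     # assumed version
  nodeSets:
  - name: default
    count: 3
    podTemplate:
      spec:
        # Give the pre-stop hook enough time to move data off the node.
        terminationGracePeriodSeconds: 1800
        containers:
        - name: elasticsearch
          env:
          - name: PRE_STOP_MAX_DNS_ERRORS   # tolerated DNS failures before the script gives up
            value: "2"
          - name: PRE_STOP_SHUTDOWN_TYPE    # e.g. "remove" when local volumes cannot follow the Pod
            value: "remove"
```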
diff --git a/deploy-manage/deploy/cloud-on-k8s/quickstart-beats.md b/deploy-manage/deploy/cloud-on-k8s/quickstart-beats.md index 73d895c4ef..402a218401 100644 --- a/deploy-manage/deploy/cloud-on-k8s/quickstart-beats.md +++ b/deploy-manage/deploy/cloud-on-k8s/quickstart-beats.md @@ -8,7 +8,7 @@ mapped_pages: # Quickstart [k8s-beat-quickstart] -1. Apply the following specification to deploy Filebeat and collect the logs of all containers running in the Kubernetes cluster. ECK automatically configures the secured connection to an Elasticsearch cluster named `quickstart`, created in the [Elasticsearch quickstart](deploy-an-orchestrator.md). +1. Apply the following specification to deploy Filebeat and collect the logs of all containers running in the Kubernetes cluster. ECK automatically configures the secured connection to an {{es}} cluster named `quickstart`, created in the [{{es}} quickstart](deploy-an-orchestrator.md). ```yaml cat < **Discover**. + * Follow the {{kib}} deployment [guide](kibana-instance-quickstart.md), log in and go to **Kibana** > **Discover**. diff --git a/deploy-manage/deploy/cloud-on-k8s/quickstart-logstash.md b/deploy-manage/deploy/cloud-on-k8s/quickstart-logstash.md index 2573b41038..9158076d88 100644 --- a/deploy-manage/deploy/cloud-on-k8s/quickstart-logstash.md +++ b/deploy-manage/deploy/cloud-on-k8s/quickstart-logstash.md @@ -8,7 +8,7 @@ mapped_pages: # Quickstart [k8s-logstash-quickstart] -Add the following specification to create a minimal {{ls}} deployment that will listen to a Beats agent or Elastic Agent configured to send to Logstash on port 5044, create the service and write the output to an Elasticsearch cluster named `quickstart`, created in the [Elasticsearch quickstart](deploy-an-orchestrator.md). 
+Add the following specification to create a minimal {{ls}} deployment that will listen to a Beats agent or Elastic Agent configured to send to Logstash on port 5044, create the service and write the output to an {{es}} cluster named `quickstart`, created in the [{{es}} quickstart](deploy-an-orchestrator.md). ```yaml cat <<'EOF' | kubectl apply -f - diff --git a/deploy-manage/deploy/cloud-on-k8s/quickstart-standalone.md b/deploy-manage/deploy/cloud-on-k8s/quickstart-standalone.md index ce4dafe4d0..7c86a83df4 100644 --- a/deploy-manage/deploy/cloud-on-k8s/quickstart-standalone.md +++ b/deploy-manage/deploy/cloud-on-k8s/quickstart-standalone.md @@ -8,7 +8,7 @@ mapped_pages: # Quickstart [k8s-elastic-agent-quickstart] -1. Apply the following specification to deploy Elastic Agent with the System metrics integration to harvest CPU metrics from the Agent Pods. ECK automatically configures the secured connection to an Elasticsearch cluster named `quickstart`, created in the [Elasticsearch quickstart](deploy-an-orchestrator.md). +1. Apply the following specification to deploy Elastic Agent with the System metrics integration to harvest CPU metrics from the Agent Pods. ECK automatically configures the secured connection to an {{es}} cluster named `quickstart`, created in the [{{es}} quickstart](deploy-an-orchestrator.md). ```yaml cat < **Discover**. + * Follow the {{kib}} deployment [guide](kibana-instance-quickstart.md), log in and go to **Kibana** > **Discover**. 
diff --git a/deploy-manage/deploy/cloud-on-k8s/readiness-probe.md b/deploy-manage/deploy/cloud-on-k8s/readiness-probe.md index 58a25f00cb..f044e0ccf5 100644 --- a/deploy-manage/deploy/cloud-on-k8s/readiness-probe.md +++ b/deploy-manage/deploy/cloud-on-k8s/readiness-probe.md @@ -8,9 +8,9 @@ mapped_pages: # Readiness probe [k8s-readiness] -## Elasticsearch versions before 8.2.0 [k8s_elasticsearch_versions_before_8_2_0] +## {{es}} versions before 8.2.0 [k8s_elasticsearch_versions_before_8_2_0] -By default, the readiness probe checks that the Pod responds to HTTP requests within a timeout of three seconds. This is acceptable in most cases. However, when the cluster is under heavy load, you might need to increase the timeout. This allows the Pod to stay in a `Ready` state and be part of the Elasticsearch service even if it is responding slowly. To adjust the timeout, set the `READINESS_PROBE_TIMEOUT` environment variable in the Pod template and update the readiness probe configuration with the new timeout. +By default, the readiness probe checks that the Pod responds to HTTP requests within a timeout of three seconds. This is acceptable in most cases. However, when the cluster is under heavy load, you might need to increase the timeout. This allows the Pod to stay in a `Ready` state and be part of the {{es}} service even if it is responding slowly. To adjust the timeout, set the `READINESS_PROBE_TIMEOUT` environment variable in the Pod template and update the readiness probe configuration with the new timeout. This example describes how to increase the API call timeout to ten seconds and the overall check time to twelve seconds: @@ -42,10 +42,10 @@ spec: Note that this requires restarting the Pods. 
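The concrete manifest for this override is elided from the diff; a sketch for a pre-8.2.0 cluster might look like the following. The cluster name, version, and the internal probe script path are assumptions based on typical ECK-generated manifests.

```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart          # assumed cluster name
spec:
  version: 8.1.3            # a pre-8.2.0 version
  nodeSets:
  - name: default
    count: 1
    podTemplate:
      spec:
        containers:
        - name: elasticsearch
          env:
          - name: READINESS_PROBE_TIMEOUT   # API call timeout used by the probe script
            value: "10"
          readinessProbe:
            exec:
              command:
              - bash
              - -c
              - /mnt/elastic-internal/scripts/readiness-probe-script.sh  # assumed path
            failureThreshold: 3
            initialDelaySeconds: 10
            periodSeconds: 12
            successThreshold: 1
            timeoutSeconds: 12              # overall check time
```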
-## Elasticsearch versions 8.2.0 and later [k8s_elasticsearch_versions_8_2_0_and_later] +## {{es}} versions 8.2.0 and later [k8s_elasticsearch_versions_8_2_0_and_later] % this feature might have disappeared, we will need to investigate this a bit more, as the link below doesn't work anymore but it does for 8.15 for example. -We do not recommend overriding the default readiness probe on Elasticsearch 8.2.0 and later. ECK configures a socket based readiness probe using the Elasticsearch [readiness port feature](elasticsearch://reference/elasticsearch/jvm-settings.md#readiness-tcp-port) which is not influenced by the load on the Elasticsearch cluster. +We do not recommend overriding the default readiness probe on {{es}} 8.2.0 and later. ECK configures a socket based readiness probe using the {{es}} [readiness port feature](elasticsearch://reference/elasticsearch/jvm-settings.md#readiness-tcp-port) which is not influenced by the load on the {{es}} cluster. diff --git a/deploy-manage/deploy/cloud-on-k8s/recipes.md b/deploy-manage/deploy/cloud-on-k8s/recipes.md index 14bbdcedbc..2b9225bc4e 100644 --- a/deploy-manage/deploy/cloud-on-k8s/recipes.md +++ b/deploy-manage/deploy/cloud-on-k8s/recipes.md @@ -10,13 +10,13 @@ mapped_pages: This section includes recipes that provide configuration examples for some common use cases. 
-* [Expose Elasticsearch and Kibana using a Google Cloud Load Balancer (GCLB)](https://github.com/elastic/cloud-on-k8s/tree/main/config/recipes/gclb) -* [Expose Elasticsearch and Kibana using Istio ingress gateway](https://github.com/elastic/cloud-on-k8s/tree/main/config/recipes/istio-gateway) +* [Expose {{es}} and {{kib}} using a Google Cloud Load Balancer (GCLB)](https://github.com/elastic/cloud-on-k8s/tree/main/config/recipes/gclb) +* [Expose {{es}} and {{kib}} using Istio ingress gateway](https://github.com/elastic/cloud-on-k8s/tree/main/config/recipes/istio-gateway) * [Using Logstash with ECK](https://github.com/elastic/cloud-on-k8s/tree/main/config/recipes/logstash) -* [Expose Elastic Maps Server and Kibana using a Kubernetes Ingress](https://github.com/elastic/cloud-on-k8s/tree/main/config/recipes/maps) +* [Expose Elastic Maps Server and {{kib}} using a Kubernetes Ingress](https://github.com/elastic/cloud-on-k8s/tree/main/config/recipes/maps) * [Secure your cluster with Pod Security Policies](https://github.com/elastic/cloud-on-k8s/tree/main/config/recipes/psp) -* [Use Traefik to expose Elastic Stack applications](https://github.com/elastic/cloud-on-k8s/tree/main/config/recipes/traefik) -* [Deploy Elasticsearch, Kibana, Elastic Fleet Server and Elastic Agent within GKE Autopilot](https://github.com/elastic/cloud-on-k8s/tree/main/config/recipes/autopilot) +* [Use Traefik to expose {{stack}} applications](https://github.com/elastic/cloud-on-k8s/tree/main/config/recipes/traefik) +* [Deploy {{es}}, {{kib}}, Elastic Fleet Server and Elastic Agent within GKE Autopilot](https://github.com/elastic/cloud-on-k8s/tree/main/config/recipes/autopilot) ::::{warning} Compared to other configuration examples that are consistently tested, like [fleet-managed Elastic Agent on ECK](configuration-examples-fleet.md), [standalone Elastic Agent on ECK](configuration-examples-standalone.md), or [Beats on ECK](/deploy-manage/deploy/cloud-on-k8s/configuration-examples-beats.md), the 
recipes in this section are not regularly tested by our automation system, and therefore should not be considered to be production-ready. diff --git a/deploy-manage/deploy/cloud-on-k8s/requests-routing-to-elasticsearch-nodes.md b/deploy-manage/deploy/cloud-on-k8s/requests-routing-to-elasticsearch-nodes.md index 01864075db..f0468306af 100644 --- a/deploy-manage/deploy/cloud-on-k8s/requests-routing-to-elasticsearch-nodes.md +++ b/deploy-manage/deploy/cloud-on-k8s/requests-routing-to-elasticsearch-nodes.md @@ -6,9 +6,9 @@ mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-traffic-splitting.html --- -# Requests routing to Elasticsearch nodes [k8s-traffic-splitting] +# Requests routing to {{es}} nodes [k8s-traffic-splitting] -The default Kubernetes service created by ECK, named `-es-http`, is configured to include all the Elasticsearch nodes in that cluster. This configuration is good to get started and is adequate for most use cases. However, if you are operating an Elasticsearch cluster with [different node types](elasticsearch://reference/elasticsearch/configuration-reference/node-settings.md) and want control over which nodes handle which types of traffic, you should create additional Kubernetes services yourself. +The default Kubernetes service created by ECK, named `-es-http`, is configured to include all the {{es}} nodes in that cluster. This configuration is good to get started and is adequate for most use cases. However, if you are operating an {{es}} cluster with [different node types](elasticsearch://reference/elasticsearch/configuration-reference/node-settings.md) and want control over which nodes handle which types of traffic, you should create additional Kubernetes services yourself. As an alternative, you can use features provided by third-party software such as service meshes and ingress controllers to achieve more advanced traffic management configurations. 
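Creating such an additional service can be sketched with the node-type labels that ECK sets on {{es}} Pods. The cluster name `hulk` and the goal of selecting coordinating-only nodes (neither master nor data role) are assumptions for illustration:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hulk-es-coordinating-nodes    # assumed service and cluster name
spec:
  ports:
  - name: https
    port: 9200
    targetPort: 9200
  selector:
    # Match Pods of the "hulk" cluster that are neither master- nor data-eligible.
    elasticsearch.k8s.elastic.co/cluster-name: "hulk"
    elasticsearch.k8s.elastic.co/node-master: "false"
    elasticsearch.k8s.elastic.co/node-data: "false"
```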
Check the [recipes directory](https://github.com/elastic/cloud-on-k8s/tree/2.16/config/recipes) in the ECK source repository for a few examples. @@ -19,7 +19,7 @@ If ingress configuration is challenging or unsupported in your environment, cons ::: -The service configurations shown in these sections are based on the following Elasticsearch cluster definition: +The service configurations shown in these sections are based on the following {{es}} cluster definition: ```yaml apiVersion: elasticsearch.k8s.elastic.co/v1 @@ -64,7 +64,7 @@ spec: ## Create services for exposing different node types [k8s-traffic-splitting-by-node-type] -The following examples illustrate how to create services for accessing different types of Elasticsearch nodes. The procedure for exposing services publicly is the same as described in [Allow public access](accessing-services.md#k8s-allow-public-access). +The following examples illustrate how to create services for accessing different types of {{es}} nodes. The procedure for exposing services publicly is the same as described in [Allow public access](accessing-services.md#k8s-allow-public-access). $$$k8s-traffic-splitting-coordinating-nodes$$$ @@ -124,7 +124,7 @@ spec: ## Specify a custom service in elasticsearchRef [k8s-traffic-splitting-with-service-name] -You can then use your custom service in the `elasticsearchRef` element when specifying connections between Elasticsearch and other stack applications. This is an example on how to target only coordinating node from Kibana: +You can then use your custom service in the `elasticsearchRef` element when specifying connections between {{es}} and other stack applications. 
This is an example of how to target only coordinating nodes from {{kib}}:

```yaml
apiVersion: kibana.k8s.elastic.co/v1
diff --git a/deploy-manage/deploy/cloud-on-k8s/required-rbac-permissions.md b/deploy-manage/deploy/cloud-on-k8s/required-rbac-permissions.md
index e5ec89ca32..4177e711ed 100644
--- a/deploy-manage/deploy/cloud-on-k8s/required-rbac-permissions.md
+++ b/deploy-manage/deploy/cloud-on-k8s/required-rbac-permissions.md
@@ -22,7 +22,7 @@ This permission is required to install CRDs. CRDs ([CustomResourceDefinitions](h
| +| `Service` | | no | Creating Services fronting {{stack}} applications. | | `ConfigMap` | | no | Reading/writing configuration. | -| `StatefulSet` | `apps` | no | Deploying Elasticsearch | -| `Deployment` | `apps` | no | Deploying Kibana, APM Server, EnterpriseSearch, Maps, Beats or Elastic Agent. | +| `StatefulSet` | `apps` | no | Deploying {{es}} | +| `Deployment` | `apps` | no | Deploying {{kib}}, APM Server, EnterpriseSearch, Maps, Beats or Elastic Agent. | | `DaemonSet` | `apps` | no | Deploying Beats or Elastic Agent. | -| `PodDisruptionBudget` | `policy` | no | Ensuring update safety for Elasticsearch. Check [docs](/deploy-manage/deploy/cloud-on-k8s/pod-disruption-budget.md) to learn more. | +| `PodDisruptionBudget` | `policy` | no | Ensuring update safety for {{es}}. Check [docs](/deploy-manage/deploy/cloud-on-k8s/pod-disruption-budget.md) to learn more. | | `StorageClass` | `storage.k8s.io` | yes | Validating storage expansion support. Check [docs](volume-claim-templates.md#k8s-volume-claim-templates-update) to learn more. | | `coreauthorization.k8s.io` | `SubjectAccessReview` | yes | Controlling access between referenced resources. Check [docs](/deploy-manage/deploy/cloud-on-k8s/restrict-cross-namespace-resource-associations.md) to learn more. | @@ -69,7 +69,7 @@ And all permissions that the [Using ECK-managed resources](#k8s-eck-permissions- ## Using ECK-managed resources [k8s-eck-permissions-using] -These permissions are needed to manage each Elastic Stack application. For example, to create, update and delete Elasticsearch clusters the permissions for the respective verbs must be held by the user that performs the operation. +These permissions are needed to manage each {{stack}} application. For example, to create, update and delete {{es}} clusters the permissions for the respective verbs must be held by the user that performs the operation. | Name | API group | Optional? 
| | --- | --- | --- | diff --git a/deploy-manage/deploy/cloud-on-k8s/restrict-cross-namespace-resource-associations.md b/deploy-manage/deploy/cloud-on-k8s/restrict-cross-namespace-resource-associations.md index c9238c673e..3e5b1476c9 100644 --- a/deploy-manage/deploy/cloud-on-k8s/restrict-cross-namespace-resource-associations.md +++ b/deploy-manage/deploy/cloud-on-k8s/restrict-cross-namespace-resource-associations.md @@ -10,11 +10,11 @@ mapped_pages: This section describes how to restrict associations that can be created between resources managed by ECK. -When using the `elasticsearchRef` field to establish a connection to Elasticsearch from Kibana, APM Server, or Beats resources, by default the association is allowed as long as both resources are deployed to namespaces managed by that particular ECK instance. The association will succeed even if the user creating the association does not have access to one of the namespaces or the Elasticsearch resource. +When using the `elasticsearchRef` field to establish a connection to {{es}} from {{kib}}, APM Server, or Beats resources, by default the association is allowed as long as both resources are deployed to namespaces managed by that particular ECK instance. The association will succeed even if the user creating the association does not have access to one of the namespaces or the {{es}} resource. The enforcement of access control rules for cross-namespace associations is disabled by default. Once enabled, it only enforces access control for resources deployed across two different namespaces. Associations between resources deployed in the same namespace are not affected. -Associations are allowed as long as the `ServiceAccount` used by the associated resource can execute HTTP `GET` requests against the referenced Elasticsearch object. +Associations are allowed as long as the `ServiceAccount` used by the associated resource can execute HTTP `GET` requests against the referenced {{es}} object. 
::::{important} ECK automatically removes any associations that do not have the correct access rights. If you have existing associations, do not enable this feature without creating the required `Roles` and `RoleBindings` as described in the following sections. @@ -23,7 +23,7 @@ ECK automatically removes any associations that do not have the correct access r To enable the restriction of cross-namespace associations, start the operator with the `--enforce-rbac-on-refs` flag. -1. Create a `ClusterRole` to allow HTTP `GET` requests to be run against Elasticsearch objects: +1. Create a `ClusterRole` to allow HTTP `GET` requests to be run against {{es}} objects: ```yaml apiVersion: rbac.authorization.k8s.io/v1 @@ -39,7 +39,7 @@ To enable the restriction of cross-namespace associations, start the operator wi - get ``` -2. Create a `ServiceAccount` and a `RoleBinding` in the Elasticsearch namespace to allow any resource using the `ServiceAccount` to associate with the Elasticsearch cluster: +2. Create a `ServiceAccount` and a `RoleBinding` in the {{es}} namespace to allow any resource using the `ServiceAccount` to associate with the {{es}} cluster: ```sh > kubectl create serviceaccount associated-resource-sa @@ -74,7 +74,7 @@ To enable the restriction of cross-namespace associations, start the operator wi elasticsearchRef: name: "elasticsearch-sample" namespace: "elasticsearch-ns" - # Service account used by this resource to get access to an Elasticsearch cluster + # Service account used by this resource to get access to an {{es}} cluster serviceAccountName: associated-resource-sa ``` @@ -86,5 +86,5 @@ If the `serviceAccountName` is not set, ECK uses the default service account ass :::: -The associated resource `associated-resource` is now allowed to create an association with any Elasticsearch cluster in the namespace `elasticsearch-ns`. 
+The associated resource `associated-resource` is now allowed to create an association with any {{es}} cluster in the namespace `elasticsearch-ns`. diff --git a/deploy-manage/deploy/cloud-on-k8s/securing-logstash-api.md b/deploy-manage/deploy/cloud-on-k8s/securing-logstash-api.md index 4182812001..94ac7d75a6 100644 --- a/deploy-manage/deploy/cloud-on-k8s/securing-logstash-api.md +++ b/deploy-manage/deploy/cloud-on-k8s/securing-logstash-api.md @@ -75,7 +75,7 @@ spec: ## Provide your own certificate [k8s-logstash-http-custom-tls] -If you want to use your own certificate, the required configuration is similar to Elasticsearch. Configure the certificate in `api` Service. Check [Custom HTTP certificate](../../security/secure-cluster-communications.md). +If you want to use your own certificate, the required configuration is similar to {{es}}. Configure the certificate in `api` Service. Check [Custom HTTP certificate](../../security/secure-cluster-communications.md). ```yaml apiVersion: logstash.k8s.elastic.co/v1alpha1 diff --git a/deploy-manage/deploy/cloud-on-k8s/security-context.md b/deploy-manage/deploy/cloud-on-k8s/security-context.md index d0a06b3f0a..311d7ee886 100644 --- a/deploy-manage/deploy/cloud-on-k8s/security-context.md +++ b/deploy-manage/deploy/cloud-on-k8s/security-context.md @@ -10,9 +10,9 @@ mapped_pages: In Kubernetes, a [`securityContext`](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/) defines privilege and access control settings for a Pod or Container. You can set up it through the `podTemplate` section of an Elastic resource specification. 
-## Default Elasticsearch security context [k8s_default_elasticsearch_security_context] +## Default {{es}} security context [k8s_default_elasticsearch_security_context] -As of version 8.8.0, the Elasticsearch container and ECK managed sidecars and init containers are running with the following security context: +As of version 8.8.0, the {{es}} container and ECK managed sidecars and init containers are running with the following security context: ```yaml securityContext: @@ -28,16 +28,16 @@ securityContext: -## Running older versions of Elasticsearch as non-root [k8s_running_older_versions_of_elasticsearch_as_non_root] +## Running older versions of {{es}} as non-root [k8s_running_older_versions_of_elasticsearch_as_non_root] ::::{note} when running on Red Hat OpenShift a random user ID is [automatically assigned](https://cloud.redhat.com/blog/a-guide-to-openshift-and-uids) and the following instructions do not apply. :::: -In versions of Elasticsearch before 8.0.0, the Elastisearch container is run as root and its entrypoint is responsible to run the Elasticsearch process with the `elasticsearch` user (defined with ID 1000). In the background, ECK uses an `initContainer` to make sure that the data volume is writable for the `elasticsearch` user. +In versions of {{es}} before 8.0.0, the Elastisearch container is run as root and its entrypoint is responsible to run the {{es}} process with the `elasticsearch` user (defined with ID 1000). In the background, ECK uses an `initContainer` to make sure that the data volume is writable for the `elasticsearch` user. -To run the Elastisearch container as a non-root user, you need to configure the Elasticsearch manifest with an appropriate security context to make the data volume writable to the `elasticsearch` user by specifying the right group ID through the `fsGroup`. 
+To run the {{es}} container as a non-root user, you need to configure the {{es}} manifest with an appropriate security context to make the data volume writable to the `elasticsearch` user by specifying the right group ID through the `fsGroup`. Kubernetes recursively changes ownership and permissions for the contents of each volume to match the `fsGroup` specified in a Pod’s securityContext when that volume is mounted and makes all processes of the containers part of the supplementary group ID. diff --git a/deploy-manage/deploy/cloud-on-k8s/service-meshes.md b/deploy-manage/deploy/cloud-on-k8s/service-meshes.md index 12918acef3..2f1ed9c830 100644 --- a/deploy-manage/deploy/cloud-on-k8s/service-meshes.md +++ b/deploy-manage/deploy/cloud-on-k8s/service-meshes.md @@ -8,7 +8,7 @@ mapped_pages: # Service meshes [k8s-service-meshes] -You can connect ECK and managed Elastic Stack applications to some of the most popular [service mesh](https://www.cncf.io/blog/2017/04/26/service-mesh-critical-component-cloud-native-stack/) implementations in the Kubernetes ecosystem: +You can connect ECK and managed {{stack}} applications to some of the most popular [service mesh](https://www.cncf.io/blog/2017/04/26/service-mesh-critical-component-cloud-native-stack/) implementations in the Kubernetes ecosystem: * [Istio](k8s-service-mesh-istio.md) * [Linkerd](k8s-service-mesh-linkerd.md) diff --git a/deploy-manage/deploy/cloud-on-k8s/settings-managed-by-eck.md b/deploy-manage/deploy/cloud-on-k8s/settings-managed-by-eck.md index 208e5abcb4..b018e255fc 100644 --- a/deploy-manage/deploy/cloud-on-k8s/settings-managed-by-eck.md +++ b/deploy-manage/deploy/cloud-on-k8s/settings-managed-by-eck.md @@ -8,7 +8,7 @@ mapped_pages: # Settings managed by ECK [k8s-reserved-settings] -The following Elasticsearch settings are managed by ECK: +The following {{es}} settings are managed by ECK: * `cluster.name` * `discovery.seed_hosts` @@ -27,10 +27,10 @@ The following Elasticsearch settings are managed
by ECK: * `xpack.security.transport.ssl.enabled` * `xpack.security.transport.ssl.verification_mode` -The following Elasticsearch settings are not supported by ECK: +The following {{es}} settings are not supported by ECK: * `xpack.security.http.ssl.client_authentication`: `required` ::::{warning} -It is not recommended to change these ECK settings. We don’t support user-provided Elasticsearch configurations that use any of these settings. +It is not recommended to change these ECK settings. We don’t support user-provided {{es}} configurations that use any of these settings. :::: diff --git a/deploy-manage/deploy/cloud-on-k8s/storage-recommendations.md b/deploy-manage/deploy/cloud-on-k8s/storage-recommendations.md index 8d26f6783f..d0f35a5c9b 100644 --- a/deploy-manage/deploy/cloud-on-k8s/storage-recommendations.md +++ b/deploy-manage/deploy/cloud-on-k8s/storage-recommendations.md @@ -8,9 +8,9 @@ mapped_pages: # Storage recommendations [k8s-storage-recommendations] -ECK does not come with its own storage mechanism for Elasticsearch data. It is compatible with any Kubernetes storage option. It is recommended to use PersistentVolumes, by configuring the [VolumeClaimTemplates](volume-claim-templates.md) section of the Elasticsearch resource. +ECK does not come with its own storage mechanism for {{es}} data. It is compatible with any Kubernetes storage option. It is recommended to use PersistentVolumes, by configuring the [VolumeClaimTemplates](volume-claim-templates.md) section of the {{es}} resource. -Multiple PersistentVolume storage classes are available, depending on your Kubernetes setup. Their specifications impact Elasticsearch performance and operations. Evaluate the trade-offs among the various options and choose the solution that best fits your needs. +Multiple PersistentVolume storage classes are available, depending on your Kubernetes setup. Their specifications impact {{es}} performance and operations. 
Evaluate the trade-offs among the various options and choose the solution that best fits your needs. ## Network-attached or Local PersistentVolumes [k8s_network_attached_or_local_persistentvolumes] @@ -20,7 +20,7 @@ PersistentVolumes can be of two types: **Network-attached** or **Local**. ECK ha * **Network-attached PersistentVolumes** can generally be attached to a Pod regardless of the host they are scheduled on. They provide a major operational benefit: if the host goes down, or needs to be replaced, the Pod can simply be deleted. Kubernetes reschedules it automatically on a different host, generally in the same region, and reattaches the same volume. This can take only a few seconds, and does not require any human intervention. * **Local PersistentVolumes** are bound to a particular host, and map a directory on the filesystem. They provide a major operational overhead: once bound to a Local PersistentVolume, a Pod can only be scheduled on the same host. If that host goes down, or needs to be replaced, the Pod cannot be scheduled on a different host. It remains in a `Pending` state until the host is available, or until the PersistentVolumeClaim is manually deleted. For that reason, Local PersistentVolumes bring more operational overhead. -In both cases, the performance depends on the underlying hardware and implementation. In general, local SSDs give the best performance. The fastest network-attached volumes from major Cloud providers can also provide acceptable performance, depending on your Elasticsearch use cases. To better evaluate your performance requirements, you can [benchmark](https://github.com/elastic/rally) your storage options against the expected Elasticsearch usage. +In both cases, the performance depends on the underlying hardware and implementation. In general, local SSDs give the best performance. The fastest network-attached volumes from major Cloud providers can also provide acceptable performance, depending on your {{es}} use cases. 
To better evaluate your performance requirements, you can [benchmark](https://github.com/elastic/rally) your storage options against the expected {{es}} usage. ## Local PersistentVolumes operations [k8s_local_persistentvolumes_operations] @@ -28,14 +28,14 @@ In both cases, the performance depends on the underlying hardware and implementa ### Host maintenance [k8s_host_maintenance] -To take a host out of the Kubernetes cluster temporarily, it is common to cordon, then drain it. Kubernetes deletes Elasticsearch Pods scheduled on that host automatically, as long as the [PodDisruptionBudget](pod-disruption-budget.md) allows it. By default, ECK manages a PodDisruptionBudget that allows one Pod to be taken down, as long as the cluster has a green health. Once deleted, that Pod cannot be scheduled again on the cordoned host: the Pod stays `Pending`, waiting for that host to come back online. The next Pod can be automatically deleted when the Elasticsearch cluster health becomes green again. +To take a host out of the Kubernetes cluster temporarily, it is common to cordon, then drain it. Kubernetes deletes {{es}} Pods scheduled on that host automatically, as long as the [PodDisruptionBudget](pod-disruption-budget.md) allows it. By default, ECK manages a PodDisruptionBudget that allows one Pod to be taken down, as long as the cluster health is green. Once deleted, that Pod cannot be scheduled again on the cordoned host: the Pod stays `Pending`, waiting for that host to come back online. The next Pod can be automatically deleted when the {{es}} cluster health becomes green again. Some hosted Kubernetes offerings only respect the PodDisruptionBudget for a certain amount of time, before killing all Pods on the node. For example, [GKE automated version upgrade](https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-upgrades) rotates all nodes without preserving local volumes, and respects the PodDisruptionBudget for a maximum of one hour.
In such cases it is preferable to [manually handle the cluster version upgrade](https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-upgrades#upgrading_manually). ### Host removal [k8s_host_removal] -If a host has a failure, or is permanently removed, its local data is likely lost. The corresponding Pod stays `Pending` because it can no longer attach the PersistentVolume. To schedule the Pod on a different host with a new empty volume, you have to manually remove both the PersistenteVolumeClaim and the Pod. A new Pod is automatically created with a new PersistentVolumeClaim, which is then matched with a PersistentVolume. Then, Elasticsearch shard replication makes sure that data is recovered on the new instance. +If a host has a failure, or is permanently removed, its local data is likely lost. The corresponding Pod stays `Pending` because it can no longer attach the PersistentVolume. To schedule the Pod on a different host with a new empty volume, you have to manually remove both the PersistentVolumeClaim and the Pod. A new Pod is automatically created with a new PersistentVolumeClaim, which is then matched with a PersistentVolume. Then, {{es}} shard replication makes sure that data is recovered on the new instance. ## Local PersistentVolume provisioners [k8s_local_persistentvolume_provisioners] @@ -59,4 +59,4 @@ In the PersistentVolume StorageClass, it is important to set [`volumeBindingMode The reclaim policy of a StorageClass specifies whether a PersistentVolume should be automatically deleted once its corresponding PersistentVolumeClaim is deleted. It can be set to `Delete` or `Retain`. -ECK automatically deletes PersistentVolumeClaims when they are no longer needed, following a cluster downscale or deletion. However, ECK does not delete PersistentVolumes. The system cannot reuse a PersistentVolume with existing data from a different cluster. In this case Elasticsearch does not start, as it detects data that belongs to a different cluster.
For this reason, it is recommended to use the `Delete` reclaim policy. +ECK automatically deletes PersistentVolumeClaims when they are no longer needed, following a cluster downscale or deletion. However, ECK does not delete PersistentVolumes. The system cannot reuse a PersistentVolume with existing data from a different cluster. In this case {{es}} does not start, as it detects data that belongs to a different cluster. For this reason, it is recommended to use the `Delete` reclaim policy. diff --git a/deploy-manage/deploy/cloud-on-k8s/troubleshooting-beats.md b/deploy-manage/deploy/cloud-on-k8s/troubleshooting-beats.md index f1d022d740..11f23167c2 100644 --- a/deploy-manage/deploy/cloud-on-k8s/troubleshooting-beats.md +++ b/deploy-manage/deploy/cloud-on-k8s/troubleshooting-beats.md @@ -10,7 +10,7 @@ mapped_pages: ## Beat Pods are crashing when kibanaRef is specified [k8s-beat-beat-pods-are-crashing-when-kibanaref-is-specified] -When `kibanaRef` is specified, Beat tries to connect to the Kibana instance. If it’s unable to do so, the Beat process exits and the Pod restarts. This may happen when Kibana is not yet up or when a Beat user is not yet created in Elasticsearch. The Pod may restart a few times when it is first deployed. Afterwards, the Beat should run successfully. +When `kibanaRef` is specified, Beat tries to connect to the {{kib}} instance. If it’s unable to do so, the Beat process exits and the Pod restarts. This may happen when {{kib}} is not yet up or when a Beat user is not yet created in {{es}}. The Pod may restart a few times when it is first deployed. Afterwards, the Beat should run successfully. 
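For reference, a Beat that sets `kibanaRef` alongside `elasticsearchRef` can be sketched as follows (a minimal illustration only: the resource names, version, and Filebeat input are placeholders, and a production manifest would typically add a `podTemplate` that mounts the host log paths):

```yaml
apiVersion: beat.k8s.elastic.co/v1beta1
kind: Beat
metadata:
  name: quickstart          # placeholder name
spec:
  type: filebeat
  version: 8.16.1           # illustrative version
  elasticsearchRef:
    name: quickstart        # ECK-managed Elasticsearch cluster to ship data to
  kibanaRef:
    name: quickstart        # Beat connects to this Kibana; Pods may restart until it is ready
  config:
    filebeat.inputs:
    - type: container
      paths:
      - /var/log/containers/*.log
  daemonSet: {}             # run one Beat Pod per node, with default Pod settings
```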
## Configuration containing key: null is malformed [k8s-beat-configuration-containing-key-null-is-malformed] diff --git a/deploy-manage/deploy/cloud-on-k8s/update-strategy.md b/deploy-manage/deploy/cloud-on-k8s/update-strategy.md index 3b375f0582..8beff65af5 100644 --- a/deploy-manage/deploy/cloud-on-k8s/update-strategy.md +++ b/deploy-manage/deploy/cloud-on-k8s/update-strategy.md @@ -21,7 +21,7 @@ spec: maxUnavailable: 1 ``` -`maxSurge`: Refers to the number of extra Pods that can be temporarily scheduled exceeding the number of Pods defined in the specification. This setting is useful for controlling the resource usage of the Kubernetes cluster when nodeSet configuration changes and new Pods need to be spun up to replace existing Pods. `MaxSurge` restricts the number of extra pods that can be running at any given point in time. If you have a large Elasticsearch cluster or a Kubernetes cluster running near capacity, not setting `maxSurge` could cause the newly created pods to temporarily use up all available spare resource capacity in the Kubernetes cluster and starve other workloads running there. +`maxSurge`: Refers to the number of extra Pods that can be temporarily scheduled, exceeding the number of Pods defined in the specification. This setting is useful for controlling the resource usage of the Kubernetes cluster when nodeSet configuration changes and new Pods need to be spun up to replace existing Pods. `maxSurge` restricts the number of extra Pods that can be running at any given point in time. If you have a large {{es}} cluster or a Kubernetes cluster running near capacity, not setting `maxSurge` could cause the newly created Pods to temporarily use up all available spare resource capacity in the Kubernetes cluster and starve other workloads running there. `maxUnavailable`: Refers to the number of Pods that can be unavailable out of the total number of Pods in the currently applied specification.
A Pod is defined unavailable when it is not ready from a Kubernetes perspective. @@ -29,7 +29,7 @@ The operator only tries to apply these constraints when a new specification is b For example, if a new specification defines a larger cluster with `maxUnavailable: 0`, the operator creates the missing Pods according to the best practices. Similarly, if a new specification defines a smaller cluster with `maxSurge: 0`, the operator safely removes the unnecessary Pods. -The operator will not enforce the change budget on version upgrades for clusters that have a non-HA setup, that is, less than three nodes. In these setups, removing a single node makes the whole cluster unavailable, and the operator will instead opt to upgrade all nodes at once. This is to avoid a situation where no progress can be made in a rolling upgrade process because the Elasticsearch cluster cannot form a quorum until all nodes have been upgraded. +The operator will not enforce the change budget on version upgrades for clusters that have a non-HA setup, that is, fewer than three nodes. In these setups, removing a single node makes the whole cluster unavailable, and the operator will instead opt to upgrade all nodes at once. This is to avoid a situation where no progress can be made in a rolling upgrade process because the {{es}} cluster cannot form a quorum until all nodes have been upgraded.
## Specify changeBudget [k8s_specify_changebudget] diff --git a/deploy-manage/deploy/cloud-on-k8s/use-an-elasticsearch-cluster-managed-by-eck.md b/deploy-manage/deploy/cloud-on-k8s/use-an-elasticsearch-cluster-managed-by-eck.md index c831df1fef..e11af509fc 100644 --- a/deploy-manage/deploy/cloud-on-k8s/use-an-elasticsearch-cluster-managed-by-eck.md +++ b/deploy-manage/deploy/cloud-on-k8s/use-an-elasticsearch-cluster-managed-by-eck.md @@ -6,11 +6,11 @@ mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-apm-eck-managed-es.html --- -# Use an Elasticsearch cluster managed by ECK [k8s-apm-eck-managed-es] +# Use an {{es}} cluster managed by ECK [k8s-apm-eck-managed-es] -Managing APM Server, Kibana and Elasticsearch with ECK allows a smooth and secured integration between the stack components. The output configuration of the APM Server is setup automatically to establish a trust relationship with Elasticsearch. Specifying the Kibana reference allows ECK to automatically configure the [Kibana endpoint](/solutions/observability/apps/configure-kibana-endpoint.md). +Managing APM Server, {{kib}}, and {{es}} with ECK allows a smooth and secure integration between the stack components. The output configuration of the APM Server is set up automatically to establish a trust relationship with {{es}}. Specifying the {{kib}} reference allows ECK to automatically configure the [{{kib}} endpoint](/solutions/observability/apps/configure-kibana-endpoint.md). -1. To deploy an APM Server and connect it to the Elasticsearch cluster and Kibana instance you created in the [quickstart](deploy-an-orchestrator.md), apply the following specification: +1. To deploy an APM Server and connect it to the {{es}} cluster and {{kib}} instance you created in the [quickstart](deploy-an-orchestrator.md), apply the following specification: ```yaml cat < **Usage**. * To end your trial or unsubscribe from the service, delete your deployment(s).
* Elastic provides different [subscription levels](https://www.elastic.co/subscriptions/cloud). During your 7-day trial you will automatically have an Enterprise level subscription. After the trial you can choose the subscription level. @@ -34,7 +34,7 @@ Some differences exist when you subscribe to {{ecloud}} through the AWS Marketpl Note the following items before you subscribe: -* You cannot use an email address that already has an Elastic Cloud account. If you want to use the same account email address with AWS Marketplace billing, you must first change the email address on your existing account before setting up your new AWS Marketplace subscription. For instructions on how to change your email address in Elastic Cloud, check [update your email address](../../../cloud-account/update-your-email-address.md). +* You cannot use an email address that already has an {{ecloud}} account. If you want to use the same account email address with AWS Marketplace billing, you must first change the email address on your existing account before setting up your new AWS Marketplace subscription. For instructions on how to change your email address in {{ecloud}}, check [update your email address](../../../cloud-account/update-your-email-address.md). * If you want to manage deployments on the existing {{ecloud}} account with your AWS MP billing account, you must migrate your deployments over to the new MP billing account. To migrate, use a [custom repository](../../tools/snapshot-and-restore/elastic-cloud-hosted.md) to take a snapshot and then restore that snapshot to a new deployment under your AWS Marketplace account. @@ -48,7 +48,7 @@ To subscribe to {{ecloud}} through the AWS Marketplace: 1. Ensure that you have the necessary AWS permissions required to complete a marketplace transaction. 2. Create a new {{ecloud}} account. This account is linked to your AWS Marketplace subscription. - 3. 
(Optional) Use the {{ecloud}} CloudFormation template to quickly get started with Elastic. The template deploys the Elastic Stack in your {{ecloud}} account, and also provisions the {{agent}} on a new EC2 instance in your AWS environment. + 3. (Optional) Use the {{ecloud}} CloudFormation template to quickly get started with Elastic. The template deploys the {{stack}} in your {{ecloud}} account, and also provisions the {{agent}} on a new EC2 instance in your AWS environment. 4. Navigate to {{ecloud}} to continue. ::::{note} @@ -61,28 +61,28 @@ To subscribe to {{ecloud}} through the AWS Marketplace: This section describes some scenarios that you may experience onboarding onto the marketplace offer. If you’re running into issues with your marketplace subscription or are encountering technical issues, create a support case or contact `support@elastic.co`. -* [I receive an error message telling me that I’m already signed up using an Elastic Cloud email address.](#ec-awsmp-account-collision01) -* [When I try to configure a new account from the AWS console, I get the Elastic Cloud login page, not the sign-up page. If I sign up to a new account it is not connected to the marketplace.](#ec-awsmp-account-collision02) +* [I receive an error message telling me that I’m already signed up using an {{ecloud}} email address.](#ec-awsmp-account-collision01) +* [When I try to configure a new account from the AWS console, I get the {{ecloud}} login page, not the sign-up page. If I sign up to a new account it is not connected to the marketplace.](#ec-awsmp-account-collision02) * [When I try to configure an account from the AWS console I get an error that An active AWS subscription already exists.](#ec-awsmp-account-collision03) -### I receive an error message telling me that I’m already signed up using an Elastic Cloud email address. [ec-awsmp-account-collision01] +### I receive an error message telling me that I’m already signed up using an {{ecloud}} email address. 
[ec-awsmp-account-collision01] -This occurs when you attempt to sign up to the marketplace offer using an email address that already exists in Elastic Cloud, such as part of a trial account. You have a few options: +This occurs when you attempt to sign up to the marketplace offer using an email address that already exists in {{ecloud}}, such as part of a trial account. You have a few options: -* **Change the email address of your previous Elastic Cloud account** - Log in to your existing Elastic Cloud account and change the email address. Once changed, navigate back to the AWS console to finish setting up your marketplace subscription. -* **Sign up using a different email address** - Sign up to Elastic Cloud using a different email address. +* **Change the email address of your previous {{ecloud}} account** - Log in to your existing {{ecloud}} account and change the email address. Once changed, navigate back to the AWS console to finish setting up your marketplace subscription. +* **Sign up using a different email address** - Sign up to {{ecloud}} using a different email address. -### When I try to configure a new account from the AWS console, I get the Elastic Cloud login page, not the sign-up page. If I sign up to a new account it is not connected to the marketplace. [ec-awsmp-account-collision02] +### When I try to configure a new account from the AWS console, I get the {{ecloud}} login page, not the sign-up page. If I sign up to a new account it is not connected to the marketplace. [ec-awsmp-account-collision02] -If the Elastic Cloud login page displays when coming from the AWS console, then an Elastic Cloud account is already connected to your marketplace subscription. Log into Elastic Cloud with that account to continue. If you can’t remember your password, use the **Forgot password?** link to reset your password. 
+If the {{ecloud}} login page displays when coming from the AWS console, then an {{ecloud}} account is already connected to your marketplace subscription. Log into {{ecloud}} with that account to continue. If you can’t remember your password, use the **Forgot password?** link to reset your password. -If you can’t remember which email address you used to sign up to Elastic Cloud, or you need more help, contact `support@elastic.co`. +If you can’t remember which email address you used to sign up to {{ecloud}}, or you need more help, contact `support@elastic.co`. ### When I try to configure an account from the AWS console I get an error that an active AWS subscription already exists. [ec-awsmp-account-collision03] -This error occurs when you have already provisioned a marketplace subscription under your AWS user account. Each AWS user account can only subscribe to Elastic Cloud once. +This error occurs when you have already provisioned a marketplace subscription under your AWS user account. Each AWS user account can only subscribe to {{ecloud}} once. -If you wish to configure multiple marketplace subscriptions, you need to use a different AWS user account to create the marketplace subscription from the AWS console. Once the marketplace subscription is created in AWS, you can continue to configure the subscription in Elastic Cloud. +If you wish to configure multiple marketplace subscriptions, you need to use a different AWS user account to create the marketplace subscription from the AWS console. Once the marketplace subscription is created in AWS, you can continue to configure the subscription in {{ecloud}}. 
diff --git a/deploy-manage/deploy/elastic-cloud/azure-native-isv-service.md b/deploy-manage/deploy/elastic-cloud/azure-native-isv-service.md index c5a8e9dae1..7bde604703 100644 --- a/deploy-manage/deploy/elastic-cloud/azure-native-isv-service.md +++ b/deploy-manage/deploy/elastic-cloud/azure-native-isv-service.md @@ -25,7 +25,7 @@ The {{ecloud}} Azure Native ISV Service allows you to deploy managed instances o ::::{tip} -The full product name in the Azure integrated marketplace is `Elastic Cloud (Elasticsearch) - An Azure Native ISV Service`. +The full product name in the Azure integrated marketplace is `{{ecloud}} (Elasticsearch) - An Azure Native ISV Service`. :::: diff --git a/deploy-manage/deploy/elastic-cloud/change-hardware.md b/deploy-manage/deploy/elastic-cloud/change-hardware.md index c07fe34b47..0185ab235e 100644 --- a/deploy-manage/deploy/elastic-cloud/change-hardware.md +++ b/deploy-manage/deploy/elastic-cloud/change-hardware.md @@ -8,31 +8,31 @@ mapped_pages: # Change hardware [ec-change-hardware-for-a-specific-resource] -The virtual hardware on which Elastic stack deployments run is defined by instance configurations. To learn more about what an instance configuration is, refer to [Instance configurations](cloud://reference/cloud-hosted/hardware.md#ec-getting-started-configurations). +The virtual hardware on which {{stack}} deployments run is defined by instance configurations. To learn more about what an instance configuration is, refer to [Instance configurations](cloud://reference/cloud-hosted/hardware.md#ec-getting-started-configurations). -When a deployment is created, each Elasticsearch tier and stateless resource (e.g., Kibana) gets an instance configuration assigned to it, based on the hardware profile used. The combination of instance configurations defined within each hardware profile is designed to provide the best possible outcome for each use case. 
Therefore, it is not advisable to use instance configurations that are not specified on the hardware profile, except in specific situations in which we may need to migrate an Elasticsearch tier or stateless resource to a different hardware type. An example of such a scenario is when a cloud provider stops supporting a hardware type in a specific region. +When a deployment is created, each {{es}} tier and stateless resource (e.g., {{kib}}) gets an instance configuration assigned to it, based on the hardware profile used. The combination of instance configurations defined within each hardware profile is designed to provide the best possible outcome for each use case. Therefore, it is not advisable to use instance configurations that are not specified on the hardware profile, except in specific situations in which we may need to migrate an {{es}} tier or stateless resource to a different hardware type. An example of such a scenario is when a cloud provider stops supporting a hardware type in a specific region. ## Migrate to a different instance configuration using the API [ec_migrate_to_a_different_instance_configuration_using_the_api] -Hardware profile migrations are possible to perform through the Elastic Cloud console, however, migrating a specific tier or resource to a different instance configuration can only be achieved through the API. +Hardware profile migrations can be performed through the {{ecloud}} console; however, migrating a specific tier or resource to a different instance configuration can only be achieved through the API. Prerequisites: -* A valid Elastic Cloud [API key](../../api-keys/elastic-cloud-api-keys.md) (`$EC_API_KEY`) +* A valid {{ecloud}} [API key](../../api-keys/elastic-cloud-api-keys.md) (`$EC_API_KEY`) Follow these steps to migrate to a different instance configuration, replacing the default `$EC_API_KEY` value with your actual API key: 1.
From the [list of instance configurations available for each region](cloud://reference/cloud-hosted/ec-regions-templates-instances.md), select the target instance configuration you want to migrate to. ::::{note} - The target instance configuration must be compatible with the Elasticsearch tier or stateless resource you are updating. - For example, if you are migrating the hot Elasticsearch tier, the target instance configuration must also be of the `es.datahot` family. + The target instance configuration must be compatible with the {{es}} tier or stateless resource you are updating. + For example, if you are migrating the hot {{es}} tier, the target instance configuration must also be of the `es.datahot` family. :::: 2. Get the deployment update payload from the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body) **Edit** page, by selecting **Equivalent API request**, and store it in a file called `migrate_instance_configuration.json`. - Example payload containing relevant data for migrating the hot Elasticsearch tier: + Example payload containing relevant data for migrating the hot {{es}} tier: ```json { @@ -47,8 +47,8 @@ Follow these steps to migrate to a different instance configuration, replacing t "instance_configuration_version": 1, ``` -3. Set the `instance_configuration_id` field of the Elasticsearch tier or stateless resource you want to migrate to the **Instance ID** of the instance configuration selected in step 1. -4. If the `instance_configuration_version` field is defined for that Elasticsearch tier or stateless resource, remove it from the payload. +3. Set the `instance_configuration_id` field of the {{es}} tier or stateless resource you want to migrate to the **Instance ID** of the instance configuration selected in step 1. +4. If the `instance_configuration_version` field is defined for that {{es}} tier or stateless resource, remove it from the payload. 
Following is the update that would be required to migrate the example above to the `gcp.es.datahot.n2.68x10x95` instance configuration: @@ -80,7 +80,7 @@ You can perform multiple instance configuration migrations in the same request. ::::{warning} -Having an instance configuration mismatch between the deployment and the hardware profile will cause the Elastic Cloud console to announce that there is a **Newer version available** for the hardware profile. Any hardware profile migration performed through the Elastic Cloud console will cause the instance configurations to be reset to the values in the hardware profile. +Having an instance configuration mismatch between the deployment and the hardware profile will cause the {{ecloud}} console to announce that there is a **Newer version available** for the hardware profile. Any hardware profile migration performed through the {{ecloud}} console will cause the instance configurations to be reset to the values in the hardware profile. :::: diff --git a/deploy-manage/deploy/elastic-cloud/cloud-hosted.md b/deploy-manage/deploy/elastic-cloud/cloud-hosted.md index 31f8bb7ec9..b1270862a2 100644 --- a/deploy-manage/deploy/elastic-cloud/cloud-hosted.md +++ b/deploy-manage/deploy/elastic-cloud/cloud-hosted.md @@ -9,7 +9,7 @@ mapped_pages: - https://www.elastic.co/guide/en/cloud/current/ec-about.html --- -# Elastic Cloud Hosted +# {{ech}} % What needs to be done: Refine @@ -52,16 +52,16 @@ $$$faq-where$$$ $$$faq-x-pack$$$ -**{{ech}} is the Elastic Stack, managed through {{ecloud}} deployments.** +**{{ech}} is the {{stack}}, managed through {{ecloud}} deployments.** -It is also formerly known as Elasticsearch Service. +It was formerly known as {{es}} Service. -{{ech}} allows you to manage one or more instances of the Elastic Stack through **deployments**. These deployments are hosted on {{ecloud}}, through the cloud provider and regions of your choice, and are tied to your organization account.
+{{ech}} allows you to manage one or more instances of the {{stack}} through **deployments**. These deployments are hosted on {{ecloud}}, through the cloud provider and regions of your choice, and are tied to your organization account. -A **hosted deployment** helps you manage an Elasticsearch cluster and instances of other Elastic products, like Kibana or APM instances, in one place. Spin up, scale, upgrade, and delete your Elastic Stack products without having to manage each one separately. In a deployment, everything works together. +A **hosted deployment** helps you manage an {{es}} cluster and instances of other Elastic products, like {{kib}} or APM instances, in one place. Spin up, scale, upgrade, and delete your {{stack}} products without having to manage each one separately. In a deployment, everything works together. ::::{note} -{{ech}} is one of the two deployment options available on {{ecloud}}. [Depending on your needs](../elastic-cloud.md), you can also run [Elastic Cloud Serverless projects](/deploy-manage/deploy/elastic-cloud/serverless.md). +{{ech}} is one of the two deployment options available on {{ecloud}}. [Depending on your needs](../elastic-cloud.md), you can also run [{{serverless-full}} projects](/deploy-manage/deploy/elastic-cloud/serverless.md). :::: @@ -78,7 +78,7 @@ You can use these presets, or start from them to get the unique configuration yo Building a rich search experience, gaining actionable insight into your environment, or protecting your systems and endpoints? You can implement each of these major use cases, and more, with the solutions that are pre-built in each Elastic deployment. 
:::{image} /deploy-manage/images/cloud-ec-stack-components.png -:alt: Elastic Stack components and solutions with Enterprise Search +:alt: {{stack}} components and solutions with Enterprise Search :width: 75% ::: @@ -86,7 +86,7 @@ Building a rich search experience, gaining actionable insight into your environm Enterprise Search is not available in {{stack}} 9.0+. ::: -These solutions help you accomplish your use cases: Ingest data into the deployment and set up specific capabilities of the Elastic Stack. +These solutions help you accomplish your use cases: Ingest data into the deployment and set up specific capabilities of the {{stack}}. Of course, you can choose to follow your own path and use Elastic components available in your deployment to ingest, visualize, and analyze your data independently from solutions. @@ -101,7 +101,7 @@ Of course, you can choose to follow your own path and use Elastic components ava * [Sign Up for a Trial](/deploy-manage/deploy/elastic-cloud/create-an-organization.md) - Sign up, check what your free trial includes and when we require a credit card. * [Sign Up from Marketplace](/deploy-manage/deploy/elastic-cloud/subscribe-from-marketplace.md) - Consolidate billing portals by signing up through one of the available marketplaces. -* [Create a deployment](/deploy-manage/deploy/elastic-cloud/create-an-elastic-cloud-hosted-deployment.md) - Get up and running very quickly. Select your desired configuration and let Elastic deploy Elasticsearch, Kibana, and the Elastic products that you need for you. In a deployment, everything works together, everything runs on hardware that is optimized for your use case. +* [Create a deployment](/deploy-manage/deploy/elastic-cloud/create-an-elastic-cloud-hosted-deployment.md) - Get up and running very quickly. Select your desired configuration and let Elastic deploy {{es}}, {{kib}}, and the Elastic products that you need for you. 
In a deployment, everything works together, everything runs on hardware that is optimized for your use case. * [Connect your data to your deployment](/manage-data/ingest.md) - Ingest and index the data you want, from a variety of sources, and take action on it. **Adjust the capacity and capabilities of your deployments for production** @@ -135,8 +135,8 @@ $$$ec-faq-getting-started$$$ This frequently-asked-questions list helps you with common questions while you get {{ech}} up and running for the first time. For questions about {{ech}} configuration options or billing, check the [Technical FAQ](/deploy-manage/index.md) and the [Billing FAQ](/deploy-manage/cloud-organization/billing/billing-faq.md). * [What is {{ech}}?](/deploy-manage/deploy/elastic-cloud/cloud-hosted.md#faq-what) -* [Is {{ech}}, formerly known as Elasticsearch Service, the same as Amazon’s {{es}} Service?](/deploy-manage/deploy/elastic-cloud/cloud-hosted.md#faq-aws-difference) -* [Can I run the full Elastic Stack in {{ech}}?](/deploy-manage/deploy/elastic-cloud/cloud-hosted.md#faq-full-stack) +* [Is {{ech}}, formerly known as {{es}} Service, the same as Amazon’s {{es}} Service?](/deploy-manage/deploy/elastic-cloud/cloud-hosted.md#faq-aws-difference) +* [Can I run the full {{stack}} in {{ech}}?](/deploy-manage/deploy/elastic-cloud/cloud-hosted.md#faq-full-stack) * [Can I try {{ech}} for free?](/deploy-manage/deploy/elastic-cloud/cloud-hosted.md#faq-trial) * [What if I need to change the size of my {{es}} cluster at a later time?](/deploy-manage/deploy/elastic-cloud/cloud-hosted.md#faq-config) * [Do you offer support subscriptions?](/deploy-manage/deploy/elastic-cloud/cloud-hosted.md#faq-subscriptions) @@ -148,16 +148,16 @@ This frequently-asked-questions list helps you with common questions while you g * [Is there a limit on the number of documents or indexes I can have in my cluster?](/deploy-manage/deploy/elastic-cloud/cloud-hosted.md#faq-limit) $$$faq-what$$$**What is {{ech}}?** -: {{ech}} is 
hosted and managed {{es}} and {{kib}} brought to you by the creators of {{es}}. {{ech}} is part of Elastic Cloud and ships with features that you can only get from the company behind {{es}}, {{kib}}, {{beats}}, and {{ls}}. {{es}} is a full text search engine that suits a range of uses, from search on websites to big data analytics and more.
+: {{ech}} is hosted and managed {{es}} and {{kib}} brought to you by the creators of {{es}}. {{ech}} is part of {{ecloud}} and ships with features that you can only get from the company behind {{es}}, {{kib}}, {{beats}}, and {{ls}}. {{es}} is a full text search engine that suits a range of uses, from search on websites to big data analytics and more.
-$$$faq-aws-difference$$$**Is {{ech}}, formerly known as Elasticsearch Service, the same as Amazon’s {{es}} Service?**
+$$$faq-aws-difference$$$**Is {{ech}}, formerly known as {{es}} Service, the same as Amazon’s {{es}} Service?**
: {{ech}} is not the same as the Amazon {{es}} service. To learn more about the differences, check our [AWS {{es}} Service](https://www.elastic.co/aws-elasticsearch-service) comparison.
-$$$faq-full-stack$$$**Can I run the full Elastic Stack in {{ech}}?**
-: Many of the products that are part of the Elastic Stack are readily available in {{ech}}, including {{es}}, {{kib}}, plugins, and features such as monitoring and security. Use other Elastic Stack products directly with {{ech}}. For example, both Logstash and Beats can send their data to {{ech}}. What is run is determined by the [subscription level](https://www.elastic.co/cloud/as-a-service/subscriptions).
+$$$faq-full-stack$$$**Can I run the full {{stack}} in {{ech}}?**
+: Many of the products that are part of the {{stack}} are readily available in {{ech}}, including {{es}}, {{kib}}, plugins, and features such as monitoring and security. Use other {{stack}} products directly with {{ech}}. For example, both {{ls}} and {{beats}} can send their data to {{ech}}. 
What you can run is determined by the [subscription level](https://www.elastic.co/cloud/as-a-service/subscriptions).
$$$faq-trial$$$**Can I try {{ech}} for free?**
-: Yes, sign up for a 14-day free trial. The trial starts the moment a cluster is created. During the free trial period get access to a deployment to explore Elastic solutions for Search, Observability, Security, or the latest version of the Elastic Stack.
+: Yes, sign up for a 14-day free trial. The trial starts the moment a cluster is created. During the free trial period, you get access to a deployment to explore Elastic solutions for Search, Observability, Security, or the latest version of the {{stack}}.
$$$faq-config$$$**What if I need to change the size of my {{es}} cluster at a later time?**
@@ -192,12 +192,12 @@ $$$faq-elastic$$$**Do you offer Elastic’s commercial products?**
[Contact us](https://www.elastic.co/cloud/contact) to learn more.
-$$$faq-x-pack$$$**Is my Elasticsearch cluster protected by X-Pack?**
+$$$faq-x-pack$$$**Is my {{es}} cluster protected by X-Pack?**
: Yes, X-Pack security features offer the full power to protect your {{ech}} deployment with basic authentication and role-based access control.
$$$faq-limit$$$**Is there a limit on the number of documents or indexes I can have in my cluster?**
: No. We do not enforce any artificial limit on the number of indexes or documents you can store in your cluster.
- That said, there is a limit to how many indexes Elasticsearch can cope with. Every shard of every index is a separate Lucene index, which in turn comprises several files. A process cannot have an unlimited number of open files. Also, every shard has its associated control structures in memory. So, while we will let you make as many indexes as you want, there are limiting factors. Our larger plans provide your processes with more dedicated memory and CPU-shares, so they are capable of handling more indexes. 
The number of indexes or documents you can fit in a given plan therefore depends on their structure and use. + That said, there is a limit to how many indexes {{es}} can cope with. Every shard of every index is a separate Lucene index, which in turn comprises several files. A process cannot have an unlimited number of open files. Also, every shard has its associated control structures in memory. So, while we will let you make as many indexes as you want, there are limiting factors. Our larger plans provide your processes with more dedicated memory and CPU-shares, so they are capable of handling more indexes. The number of indexes or documents you can fit in a given plan therefore depends on their structure and use. ::: \ No newline at end of file diff --git a/deploy-manage/deploy/elastic-cloud/complete-registration-of-an-annual-subscription-on-aws-marketplace.md b/deploy-manage/deploy/elastic-cloud/complete-registration-of-an-annual-subscription-on-aws-marketplace.md index 12cf558402..a954513eda 100644 --- a/deploy-manage/deploy/elastic-cloud/complete-registration-of-an-annual-subscription-on-aws-marketplace.md +++ b/deploy-manage/deploy/elastic-cloud/complete-registration-of-an-annual-subscription-on-aws-marketplace.md @@ -9,7 +9,7 @@ mapped_pages: # Complete the registration of an annual subscription on AWS Marketplace [ec-marketplace-aws-prepaid-signup] -Follow the instructions on this page to complete the registration process for an Elastic Cloud annual subscription on AWS Marketplace. +Follow the instructions on this page to complete the registration process for an {{ecloud}} annual subscription on AWS Marketplace. 
## Accept your offer and complete registration
diff --git a/deploy-manage/deploy/elastic-cloud/configure.md b/deploy-manage/deploy/elastic-cloud/configure.md
index 89e74dbfca..828062888b 100644
--- a/deploy-manage/deploy/elastic-cloud/configure.md
+++ b/deploy-manage/deploy/elastic-cloud/configure.md
@@ -24,7 +24,7 @@ You might want to change the configuration of your deployment to:
* Upgrade to new versions of {{es}}. You can upgrade from one major version to another, such as from 6.8.23 to 7.17.27, or from one minor version to another, such as 6.1 to 6.2. You can’t downgrade versions.
* Change what plugins are available on your {{es}} cluster.
-With the exception of major version upgrades for Elastic Stack products, {{ech}} can perform configuration changes without having to interrupt your deployment. You can continue searching and indexing. The changes can also be done in bulk. For example: in one action, you can add more memory, upgrade, adjust the number of {{es}} plugins and adjust the number of availability zones.
+With the exception of major version upgrades for {{stack}} products, {{ech}} can perform configuration changes without having to interrupt your deployment. You can continue searching and indexing. The changes can also be done in bulk. For example: in one action, you can add more memory, upgrade, adjust the number of {{es}} plugins, and adjust the number of availability zones.
We perform all of these changes by creating instances with the new configurations that join your existing deployment before removing the old ones. For example: if you are changing your {{es}} cluster configuration, we create new {{es}} nodes, recover your indexes, and start routing requests to the new nodes. Only when all new {{es}} nodes are ready, do we bring down the old ones. 
diff --git a/deploy-manage/deploy/elastic-cloud/create-an-elastic-cloud-hosted-deployment.md b/deploy-manage/deploy/elastic-cloud/create-an-elastic-cloud-hosted-deployment.md
index 001075722d..405af2a7da 100644
--- a/deploy-manage/deploy/elastic-cloud/create-an-elastic-cloud-hosted-deployment.md
+++ b/deploy-manage/deploy/elastic-cloud/create-an-elastic-cloud-hosted-deployment.md
@@ -9,12 +9,12 @@ mapped_pages:
- https://www.elastic.co/guide/en/cloud-heroku/current/ech-configure-deployment-settings.html
---
-# Create an Elastic Cloud Hosted deployment [ec-create-deployment]
+# Create an {{ech}} deployment [ec-create-deployment]
-An Elastic Cloud deployment includes Elastic Stack components such as Elasticsearch, Kibana, and other features, allowing you to store, search, and analyze your data. You can spin up a proof-of-concept deployment to learn more about what Elastic can do for you.
+An {{ecloud}} deployment includes {{stack}} components such as {{es}}, {{kib}}, and other features, allowing you to store, search, and analyze your data. You can spin up a proof-of-concept deployment to learn more about what Elastic can do for you.
:::{note}
-You can also create a deployment using the [Elastic Cloud API](https://www.elastic.co/docs/api/doc/cloud/group/endpoint-deployments). This can be an interesting alternative for more advanced needs, such as for [creating a deployment encrypted with your own key](../../security/encrypt-deployment-with-customer-managed-encryption-key.md).
+You can also create a deployment using the [{{ecloud}} API](https://www.elastic.co/docs/api/doc/cloud/group/endpoint-deployments). This can be a useful alternative for more advanced needs, such as [creating a deployment encrypted with your own key](../../security/encrypt-deployment-with-customer-managed-encryption-key.md).
:::
1. 
Log in to your [cloud.elastic.co](https://cloud.elastic.co/login) account and select **Create deployment** from the {{ecloud}} main page:
@@ -38,9 +38,9 @@ You can also create a deployment using the [Elastic Cloud API](https://www.elast
**Region**: The cloud platform’s region your deployment will live. If you have compliance or latency requirements, you can create your deployment in any of our [supported regions](cloud://reference/cloud-hosted/regions.md). The region should be as close as possible to the location of your data.
- **Hardware profile**: This allows you to configure the underlying virtual hardware that you’ll deploy your Elastic Stack on. Each hardware profile provides a unique blend of storage, RAM and vCPU sizes. You can select a hardware profile that’s best suited for your use case. For example CPU Optimized if you have a search-heavy use case that’s bound by compute resources. For more details, check the [hardware profiles](ec-change-hardware-profile.md) section. You can also view the [virtual hardware details](cloud://reference/cloud-hosted/hardware.md) which powers hardware profiles. With the **Advanced settings** option, you can configure the underlying virtual hardware associated with each profile.
+ **Hardware profile**: This allows you to configure the underlying virtual hardware that you’ll deploy your {{stack}} on. Each hardware profile provides a unique blend of storage, RAM and vCPU sizes. You can select a hardware profile that’s best suited for your use case. For example, CPU Optimized if you have a search-heavy use case that’s bound by compute resources. For more details, check the [hardware profiles](ec-change-hardware-profile.md) section. You can also view the [virtual hardware details](cloud://reference/cloud-hosted/hardware.md) which powers hardware profiles. With the **Advanced settings** option, you can configure the underlying virtual hardware associated with each profile. 
- **Version**: The Elastic Stack version that will get deployed. Defaults to the latest version. Our [version policy](available-stack-versions.md) describes which versions are available to deploy.
+ **Version**: The {{stack}} version that will get deployed. Defaults to the latest version. Our [version policy](available-stack-versions.md) describes which versions are available to deploy.
**Snapshot source**: To create a deployment from a snapshot, select a snapshot source. You need to [configure snapshots](../../tools/snapshot-and-restore.md) and establish a snapshot lifecycle management policy and repository before you can restore from a snapshot. The snapshot options depend on the stack version the deployment is running.
@@ -52,7 +52,7 @@ You can also create a deployment using the [Elastic Cloud API](https://www.elast
Trial users won’t find the Advanced settings when they create their first deployment. This option is available on the deployment’s edit page once the deployment is created.
::::
-3. Select **Create deployment**. It takes a few minutes before your deployment gets created. While waiting, you are prompted to save the admin credentials for your deployment which provides you with superuser access to Elasticsearch. Keep these credentials safe as they are shown only once. These credentials also help you [add data using Kibana](../../../manage-data/ingest.md). If you need to refresh these credentials, you can [reset the password](../../users-roles/cluster-or-deployment-auth/built-in-users.md).
+3. Select **Create deployment**. It takes a few minutes before your deployment gets created. While waiting, you are prompted to save the admin credentials for your deployment which provides you with superuser access to {{es}}. Keep these credentials safe as they are shown only once. These credentials also help you [add data using {{kib}}](../../../manage-data/ingest.md). 
If you need to refresh these credentials, you can [reset the password](../../users-roles/cluster-or-deployment-auth/built-in-users.md). 4. Once the deployment is ready, select **Continue** to open the deployment’s main page. From here, you can start [ingesting data](../../../manage-data/ingest.md) or simply [try a sample data](../../../explore-analyze/index.md#gs-get-data-into-kibana) set to get started. At any time, you can manage and [adjust the configuration](configure.md) of your deployment to your needs, add extra layers of [security](../../users-roles/cluster-or-deployment-auth.md), or (highly recommended) set up [health monitoring](../../monitor/stack-monitoring.md). @@ -67,7 +67,7 @@ To make sure you’re all set for production, consider the following actions: * [Plan for your expected workloads](/deploy-manage/production-guidance.md) and consider how many availability zones you’ll need. * [Create a deployment](/deploy-manage/deploy/elastic-cloud/create-an-elastic-cloud-hosted-deployment.md) on the region you need and with a hardware profile that matches your use case. -* [Change your configuration](/deploy-manage/deploy/elastic-cloud/ec-customize-deployment-components.md) by turning on autoscaling, adding high availability, or adjusting components of the Elastic Stack. +* [Change your configuration](/deploy-manage/deploy/elastic-cloud/ec-customize-deployment-components.md) by turning on autoscaling, adding high availability, or adjusting components of the {{stack}}. * [Add extensions and plugins](/deploy-manage/deploy/elastic-cloud/add-plugins-extensions.md) to use Elastic supported extensions or add your own custom dictionaries and scripts. * [Edit settings and defaults](/deploy-manage/deploy/elastic-cloud/edit-stack-settings.md) to fine tune the performance of specific features. * [Manage your deployment](/deploy-manage/deploy/elastic-cloud/manage-deployments.md) as a whole to restart, upgrade, stop routing, or delete. 
diff --git a/deploy-manage/deploy/elastic-cloud/create-an-organization.md b/deploy-manage/deploy/elastic-cloud/create-an-organization.md
index e9b6826432..96de797180 100644
--- a/deploy-manage/deploy/elastic-cloud/create-an-organization.md
+++ b/deploy-manage/deploy/elastic-cloud/create-an-organization.md
@@ -30,7 +30,7 @@ $$$general-sign-up-trial-what-is-included-in-my-trial$$$
To sign up:
-1. Go to the [Elastic Cloud Sign Up](https://cloud.elastic.co/registration?page=docs&placement=docs-body) page.
+1. Go to the [{{ecloud}} Sign Up](https://cloud.elastic.co/registration?page=docs&placement=docs-body) page.
2. Choose one of the available sign up methods. You can register with your email address and a password, use a Google or Microsoft account, or [subscribe from a Marketplace](../../../deploy-manage/deploy/elastic-cloud/subscribe-from-marketplace.md).
:::{note}
@@ -39,7 +39,7 @@ You can only belong to one {{ecloud}} organization at a time. If you want to cre
-When your first sign up, you create an organization and start with a trial license.
+When you first sign up, you create an organization and start with a trial license.
-This organization is the umbrella for all of your Elastic Cloud resources, users, and account settings. Every organization has a unique identifier. Bills are invoiced according to the billing contact and details that you set for your organization. For more details on how to manage your organization, refer to [](/deploy-manage/cloud-organization.md).
+This organization is the umbrella for all of your {{ecloud}} resources, users, and account settings. Every organization has a unique identifier. Bills are invoiced according to the billing contact and details that you set for your organization. For more details on how to manage your organization, refer to [](/deploy-manage/cloud-organization.md). 
## Trial information [general-sign-up-trial-what-is-included-in-my-trial]
@@ -48,13 +48,13 @@ Your free 14-day trial includes:
**One hosted deployment**
-A deployment lets you explore Elastic solutions for Search, Observability, and Security. Trial deployments run on the latest version of the Elastic Stack. They includes 8 GB of RAM spread out over two availability zones, and enough storage space to get you started. If you’re looking to evaluate a smaller workload, you can scale down your trial deployment. Each deployment includes Elastic features such as Maps, SIEM, machine learning, advanced security, and much more. You have some sample data sets to play with and tutorials that describe how to add your own data.
+A deployment lets you explore Elastic solutions for Search, Observability, and Security. Trial deployments run on the latest version of the {{stack}}. They include 8 GB of RAM spread out over two availability zones, and enough storage space to get you started. If you’re looking to evaluate a smaller workload, you can scale down your trial deployment. Each deployment includes Elastic features such as Maps, SIEM, machine learning, advanced security, and much more. You have some sample data sets to play with and tutorials that describe how to add your own data.
For more information, check the [{{ech}} documentation](cloud-hosted.md).
**One serverless project**
-Serverless projects package Elastic Stack features by type of solution:
+Serverless projects package {{stack}} features by type of solution:
* [{{es}}](../../../solutions/search.md)
* [Observability](../../../solutions/observability.md)
* [Security](../../../solutions/security.md)
@@ -87,26 +87,26 @@ For more information, check the [{{ech}} documentation](cloud-hosted.md). 
**Remove limitations** -Subscribe to [Elastic Cloud](/deploy-manage/cloud-organization/billing/add-billing-details.md) for the following benefits: +Subscribe to [{{ecloud}}](/deploy-manage/cloud-organization/billing/add-billing-details.md) for the following benefits: * Increased memory or storage for deployment components, such as {{es}} clusters, machine learning nodes, and APM server. * As many deployments and projects as you need. * Third availability zone for your deployments. * Access to additional features, such as cross-cluster search and cross-cluster replication. -You can subscribe to Elastic Cloud at any time during your trial. [Billing](../../../deploy-manage/cloud-organization/billing/serverless-project-billing-dimensions.md) starts when you subscribe. To maximize the benefits of your trial, subscribe at the end of the free period. To monitor charges, anticipate future costs, and adjust your usage, check your [account usage](/deploy-manage/cloud-organization/billing/monitor-analyze-usage.md) and [billing history](/deploy-manage/cloud-organization/billing/view-billing-history.md). +You can subscribe to {{ecloud}} at any time during your trial. [Billing](../../../deploy-manage/cloud-organization/billing/serverless-project-billing-dimensions.md) starts when you subscribe. To maximize the benefits of your trial, subscribe at the end of the free period. To monitor charges, anticipate future costs, and adjust your usage, check your [account usage](/deploy-manage/cloud-organization/billing/monitor-analyze-usage.md) and [billing history](/deploy-manage/cloud-organization/billing/view-billing-history.md). ### Get started with your trial [general-sign-up-trial-how-do-i-get-started-with-my-trial] -Start by checking out some common approaches for [moving data into Elastic Cloud](https://www.elastic.co/guide/en/cloud/current/ec-cloud-ingest-data.html). 
+Start by checking out some common approaches for [moving data into {{ecloud}}](https://www.elastic.co/guide/en/cloud/current/ec-cloud-ingest-data.html). ### Maintain access to your trial projects and data [general-sign-up-trial-what-happens-at-the-end-of-the-trial] -When your trial expires, the deployment and project that you created during the trial period are suspended until you subscribe to [Elastic Cloud](/deploy-manage/cloud-organization/billing/add-billing-details.md). When you subscribe, you are able to resume your deployment and serverless project, and regain access to the ingested data. After your trial expires, you have 30 days to subscribe. After 30 days, your deployment, serverless project, and ingested data are permanently deleted. +When your trial expires, the deployment and project that you created during the trial period are suspended until you subscribe to [{{ecloud}}](/deploy-manage/cloud-organization/billing/add-billing-details.md). When you subscribe, you are able to resume your deployment and serverless project, and regain access to the ingested data. After your trial expires, you have 30 days to subscribe. After 30 days, your deployment, serverless project, and ingested data are permanently deleted. -If you’re interested in learning more ways to subscribe to Elastic Cloud, don’t hesitate to [contact us](https://www.elastic.co/contact). +If you’re interested in learning more ways to subscribe to {{ecloud}}, don’t hesitate to [contact us](https://www.elastic.co/contact). ## How do I get help? 
[ec_how_do_i_get_help]
diff --git a/deploy-manage/deploy/elastic-cloud/create-monthly-pay-as-you-go-subscription-on-aws-marketplace.md b/deploy-manage/deploy/elastic-cloud/create-monthly-pay-as-you-go-subscription-on-aws-marketplace.md
index f8cb7ab87f..bc500ff88f 100644
--- a/deploy-manage/deploy/elastic-cloud/create-monthly-pay-as-you-go-subscription-on-aws-marketplace.md
+++ b/deploy-manage/deploy/elastic-cloud/create-monthly-pay-as-you-go-subscription-on-aws-marketplace.md
@@ -9,14 +9,14 @@ mapped_pages:
# Create a monthly pay-as-you-go subscription on AWS Marketplace [ec-aws-marketplace-conversion]
-When subscribing to an annual prepaid subscription to Elastic Cloud on AWS Marketplace, please follow these instructions to obtain a separate pay-as-you-go subscription. This subscription will allow us to continue your Elastic Cloud service through the Marketplace once the contract is expired. You will not get charged twice for the usage under the annual contract.
+When subscribing to an annual prepaid subscription to {{ecloud}} on AWS Marketplace, please follow these instructions to obtain a separate pay-as-you-go subscription. This subscription will allow us to continue your {{ecloud}} service through the Marketplace once the contract expires. You will not get charged twice for the usage under the annual contract.
1. Log in to AWS under the same Account ID that you will use to accept the Annual Private Offer.
-2. Go to the [AWS Marketplace subscription page for Elastic Cloud pay-as-you-go](https://aws.amazon.com/marketplace/saas/ordering?productId=bb253a6c-e775-4634-bdf0-17bd56a69c36&offerId=b2uzdkwqj7177fqhm39o4snxy).
+2. Go to the [AWS Marketplace subscription page for {{ecloud}} pay-as-you-go](https://aws.amazon.com/marketplace/saas/ordering?productId=bb253a6c-e775-4634-bdf0-17bd56a69c36&offerId=b2uzdkwqj7177fqhm39o4snxy).
3. Click **Subscribe** to create an AWS Marketplace subscription under the selected AWS Account. 
:::{image} /deploy-manage/images/cloud-aws-subscribe-button.png -:alt: Subscribe to Elastic Cloud on AWS Marketplace +:alt: Subscribe to {{ecloud}} on AWS Marketplace ::: No further steps required in AWS. Ignore the steps 1 to 3 that appear on the right side of the AWS page. @@ -25,9 +25,9 @@ No further steps required in AWS. Ignore the steps 1 to 3 that appear on the rig :alt: AWS panel with steps to skip ::: -You should now see the monthly *Pay as you go* subscription for Elastic Cloud in your AWS **Manage subscriptions** page. +You should now see the monthly *Pay as you go* subscription for {{ecloud}} in your AWS **Manage subscriptions** page. -From the top-right corner, you can check that the Account ID is the same account that has your Elastic Cloud annual subscription. +From the top-right corner, you can check that the Account ID is the same account that has your {{ecloud}} annual subscription. :::{image} /deploy-manage/images/cloud-aws-mp-manage-subscriptions.png :alt: Account ID on top-right menu diff --git a/deploy-manage/deploy/elastic-cloud/create-monthly-pay-as-you-go-subscription-on-gcp-marketplace.md b/deploy-manage/deploy/elastic-cloud/create-monthly-pay-as-you-go-subscription-on-gcp-marketplace.md index d04022d9bd..4c963cf9ef 100644 --- a/deploy-manage/deploy/elastic-cloud/create-monthly-pay-as-you-go-subscription-on-gcp-marketplace.md +++ b/deploy-manage/deploy/elastic-cloud/create-monthly-pay-as-you-go-subscription-on-gcp-marketplace.md @@ -9,11 +9,11 @@ mapped_pages: # Create a monthly pay-as-you-go subscription on GCP Marketplace [ec-gcp-marketplace-conversion] -When subscribing to an annual prepaid subscription to Elastic Cloud on GCP Marketplace, please follow these instructions to obtain a separate pay-as-you-go subscription. This subscription will allow us to continue your Elastic Cloud service through the Marketplace once the contract is expired. You will not get charged twice for the usage under the annual contract. 
+When subscribing to an annual prepaid subscription to {{ecloud}} on GCP Marketplace, please follow these instructions to obtain a separate pay-as-you-go subscription. This subscription will allow us to continue your {{ecloud}} service through the Marketplace once the contract expires. You will not get charged twice for the usage under the annual contract.
-1. Go to the [GCP Marketplace listing page for Elastic Cloud pay-as-you-go](https://console.cloud.google.com/marketplace/product/elastic-prod/elastic-cloud).
+1. Go to the [GCP Marketplace listing page for {{ecloud}} pay-as-you-go](https://console.cloud.google.com/marketplace/product/elastic-prod/elastic-cloud).
2. Click **Subscribe** to create a GCP Marketplace subscription under the selected GCP Billing Account.
-No further steps required in GCP. You do not need to register for a new Elastic Cloud organization if you are already using Elastic Cloud.
+No further steps are required in GCP. You do not need to register for a new {{ecloud}} organization if you are already using {{ecloud}}.
-You should now see the monthly *Pay as you go* subscription for Elastic Cloud in your [GCP Marketplace Orders page](https://console.cloud.google.com/marketplace/orders).
+You should now see the monthly *Pay as you go* subscription for {{ecloud}} in your [GCP Marketplace Orders page](https://console.cloud.google.com/marketplace/orders).
diff --git a/deploy-manage/deploy/elastic-cloud/create-serverless-project.md b/deploy-manage/deploy/elastic-cloud/create-serverless-project.md
index 09b65b3f63..dd49a5dfd4 100644
--- a/deploy-manage/deploy/elastic-cloud/create-serverless-project.md
+++ b/deploy-manage/deploy/elastic-cloud/create-serverless-project.md
@@ -9,15 +9,15 @@ applies_to:
There are two options to create serverless projects:
-* If you are a new user, [sign up for a free 14-day trial](https://cloud.elastic.co/serverless-registration) to create a serverless project. 
For more information about the Elastic Cloud trials, check [Trial features](create-an-organization.md#general-sign-up-trial-what-is-included-in-my-trial). -* If you are an existing customer, [log in to Elastic Cloud](https://cloud.elastic.co/login). On the home page, you will see a new option to create serverless projects. +* If you are a new user, [sign up for a free 14-day trial](https://cloud.elastic.co/serverless-registration) to create a serverless project. For more information about the {{ecloud}} trials, check [Trial features](create-an-organization.md#general-sign-up-trial-what-is-included-in-my-trial). +* If you are an existing customer, [log in to {{ecloud}}](https://cloud.elastic.co/login). On the home page, you will see a new option to create serverless projects. Choose the type of project that matches your needs and we’ll help you get started with our solution guides. | | | | --- | --- | | | | -| ![elasticsearch](https://www.elastic.co/docs/assets/images/elasticsearch.png "elasticsearch =50%") | **Elasticsearch**
Build custom search applications with Elasticsearch.

[**View guide →**](/solutions/search/serverless-elasticsearch-get-started.md)
| +| ![elasticsearch](https://www.elastic.co/docs/assets/images/elasticsearch.png "elasticsearch =50%") | **Elasticsearch**
Build custom search applications with {{es}}.

[**View guide →**](/solutions/search/serverless-elasticsearch-get-started.md)
| | ![observability](https://www.elastic.co/docs/assets/images/observability.png "observability =50%") | **Observability**
Monitor applications and systems with Elastic Observability.

[**View guide →**](/solutions/observability/get-started/create-an-observability-project.md)
| | ![security](https://www.elastic.co/docs/assets/images/security.png "security =50%") | **Security**
Detect, investigate, and respond to threats with Elastic Security.

[**View guide →**](/solutions/security/get-started/create-security-project.md)
| | | | \ No newline at end of file diff --git a/deploy-manage/deploy/elastic-cloud/custom-endpoint-aliases.md b/deploy-manage/deploy/elastic-cloud/custom-endpoint-aliases.md index c72e59fd80..6d0e5b979f 100644 --- a/deploy-manage/deploy/elastic-cloud/custom-endpoint-aliases.md +++ b/deploy-manage/deploy/elastic-cloud/custom-endpoint-aliases.md @@ -57,7 +57,7 @@ After removing an alias, your organisation’s account will hold a claim on it f ## Using the custom endpoint URL [ec-using-regional-deployment-alias] -To use your new custom endpoint URL to access your Elastic products, note that each has its own alias to use in place of the default application UUID. For example, if you configured the custom endpoint alias for your deployment to be `test-alias`, the corresponding alias for the Elasticsearch cluster in that deployment is `test-alias.es`. +To use your new custom endpoint URL to access your Elastic products, note that each has its own alias to use in place of the default application UUID. For example, if you configured the custom endpoint alias for your deployment to be `test-alias`, the corresponding alias for the {{es}} cluster in that deployment is `test-alias.es`. ::::{note} You can get the application-specific custom endpoint alias by selecting **Copy endpoint** for that product. It should contain a subdomain for each application type, for example `es`, `kb`, `apm`, or `ent`. @@ -83,7 +83,7 @@ While the `TransportClient` is deprecated, your custom endpoint aliases still wo * As part of the host name: - Similar to HTTP, you can find the fully qualified host on the deployment overview page by selecting **Copy endpoint** next to Elasticsearch. Make sure to remove the unnecessary `https://` prefix as well as the trailing HTTP port. + Similar to HTTP, you can find the fully qualified host on the deployment overview page by selecting **Copy endpoint** next to {{es}}. Make sure to remove the unnecessary `https://` prefix as well as the trailing HTTP port. 
* As part of the **Settings**: @@ -120,7 +120,7 @@ For more information on configuring the `TransportClient`, see ## Create a custom domain with NGINX [ec-custom-domains-with-nginx] -If you don’t get the level of domain customization you’re looking for by using the [custom endpoint aliases](../../../deploy-manage/deploy/elastic-cloud/custom-endpoint-aliases.md), you might consider creating a CNAME record that points to your Elastic Cloud endpoints. However, that can lead to some issues. Instead, setting up your own proxy could provide the desired level of customization. +If you don’t get the level of domain customization you’re looking for by using the [custom endpoint aliases](../../../deploy-manage/deploy/elastic-cloud/custom-endpoint-aliases.md), you might consider creating a CNAME record that points to your {{ecloud}} endpoints. However, that can lead to some issues. Instead, setting up your own proxy could provide the desired level of customization. ::::{important} The setup described in the following sections is not supported by Elastic, and if your proxy cannot connect to the endpoint, but curl can, we may not be able to help. @@ -130,7 +130,7 @@ The setup described in the following sections is not supported by Elastic, and i ### Avoid creating CNAMEs [ec_avoid_creating_cnames] -To achieve a fully custom domain, you can add a CNAME that points to your Elastic Cloud endpoint. However, this will lead to invalid certificate errors, and moreover, may simply not work. Your Elastic Cloud endpoints already point to a proxy internal to Elastic Cloud, which may not resolve your configured CNAME in the desired way. +To achieve a fully custom domain, you can add a CNAME that points to your {{ecloud}} endpoint. However, this will lead to invalid certificate errors, and moreover, may simply not work. Your {{ecloud}} endpoints already point to a proxy internal to {{ecloud}}, which may not resolve your configured CNAME in the desired way. So what to do, instead? 
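As a sketch of the NGINX proxy approach described above (not an officially supported setup, per the note in that section), a minimal server block might look like the following. The domain `search.example.com`, the certificate paths, and the upstream endpoint `my-deployment.es.us-central1.gcp.cloud.es.io` are placeholders; substitute your own custom domain and the deployment endpoint copied from the {{ecloud}} console:

```nginx
# Hypothetical reverse proxy for a fully custom domain.
# Replace server_name, certificate paths, and the upstream
# endpoint with your own values.
server {
    listen 443 ssl;
    server_name search.example.com;

    ssl_certificate     /etc/nginx/certs/search.example.com.crt;
    ssl_certificate_key /etc/nginx/certs/search.example.com.key;

    location / {
        # Forward requests to the deployment endpoint over HTTPS.
        proxy_pass https://my-deployment.es.us-central1.gcp.cloud.es.io:9243;
        # The Elastic Cloud proxy routes on the Host header and TLS SNI,
        # so present the upstream hostname rather than the custom domain.
        proxy_set_header Host my-deployment.es.us-central1.gcp.cloud.es.io;
        proxy_ssl_server_name on;
    }
}
```

Unlike a bare CNAME, this keeps certificate validation intact on both legs: clients see a certificate issued for your domain, while NGINX connects to the {{ecloud}} endpoint under its own name.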
diff --git a/deploy-manage/deploy/elastic-cloud/differences-from-other-elasticsearch-offerings.md b/deploy-manage/deploy/elastic-cloud/differences-from-other-elasticsearch-offerings.md index f4a74c97f6..d9f7c47dad 100644 --- a/deploy-manage/deploy/elastic-cloud/differences-from-other-elasticsearch-offerings.md +++ b/deploy-manage/deploy/elastic-cloud/differences-from-other-elasticsearch-offerings.md @@ -7,9 +7,9 @@ applies_to: ess: --- -# Compare Elastic Cloud Hosted and Serverless [elasticsearch-differences] +# Compare {{ech}} and Serverless [elasticsearch-differences] -This guide compares Elastic Cloud Hosted deployments with Elastic Cloud Serverless projects, highlighting key features and capabilities across different project types. Use this information to understand what's available in each deployment option or to plan migrations between platforms. +This guide compares {{ech}} deployments with {{serverless-full}} projects, highlighting key features and capabilities across different project types. Use this information to understand what's available in each deployment option or to plan migrations between platforms. :::{note} The information below reflects our strategic goals, plans and objectives and includes estimated release dates, anticipated features and functions, and proposed descriptions for commercial features. All details are for information only and are subject to change in our discretion. Information may be updated, added, or removed from this document as features or products become available, canceled, or postponed. 
@@ -17,9 +17,9 @@ The information below reflects our strategic goals, plans and objectives and inc ## Architectural differences -Elastic Cloud Serverless takes a fundamentally different approach to running the Elastic Stack compared to Elastic Cloud Hosted: +{{serverless-full}} takes a fundamentally different approach to running the {{stack}} compared to {{ech}}: -| **Functionality** | Elastic Cloud Hosted | Elastic Cloud Serverless | +| **Functionality** | {{ech}} | {{serverless-full}} | |--------|----------------------|--------------------------| | **Management model** | Self-service infrastructure | Fully managed service | | **Project organization** | Single deployments with multiple capabilities | Separate projects for Elasticsearch, Observability, and Security | @@ -30,7 +30,7 @@ Elastic Cloud Serverless takes a fundamentally different approach to running the | **Upgrades** | User-controlled timing | Automatically performed by Elastic | | **User management** | Elastic Cloud-managed and deployment-local users | Elastic Cloud-managed users only. Serverless users are managed at the organization level with SAML authentication support. 
| | **Backups** | User-managed with Snapshot & Restore | Automatically backed up by Elastic | -| **Solutions** | Full Elastic Stack per deployment | Single solution per project | +| **Solutions** | Full {{stack}} per deployment | Single solution per project | In Serverless, Elastic automatically manages: * Cluster scaling and optimization @@ -47,14 +47,14 @@ $$$elasticsearch-differences-serverless-feature-planned$$$ ### Core platform capabilities -This table compares the core platform capabilities between Elastic Cloud Hosted deployments and Serverless projects: +This table compares the core platform capabilities between {{ech}} deployments and Serverless projects: -| **Feature** | Elastic Cloud Hosted | Serverless projects| Notes | +| **Feature** | {{ech}} | Serverless projects| Notes | |----------|----------------------|--------------------|--------| | **Audit logging** | ✅ | **Planned** | Anticipated in a future release | | **Authentication realms** | ✅ | ✅ | Managed at organization level in Serverless; deployment level in Hosted | | **BYO-Key for Encryption at Rest** | ✅ | **Planned** | Anticipated in a future release; data in Serverless is stored on cloud-provider encrypted object storage | -| **Cloud provider support** | - AWS
- GCP
- Azure | - AWS
- Azure (in preview)
- GCP (in preview) | - [Elastic Cloud Hosted regions](cloud://reference/cloud-hosted/regions.md)
- [Serverless regions](/deploy-manage/deploy/elastic-cloud/regions.md) | +| **Cloud provider support** | - AWS
- GCP
- Azure | - AWS
- Azure (in preview)
- GCP (in preview) | - [{{ech}} regions](cloud://reference/cloud-hosted/regions.md)
- [Serverless regions](/deploy-manage/deploy/elastic-cloud/regions.md) | | **Cluster scaling** | Manual with autoscaling option | Managed | Automatic scaling eliminates capacity planning - [Learn more](https://www.elastic.co/blog/elastic-serverless-architecture) | | **Custom plugins and bundles** | ✅ | ❌ | Not available in Serverless | | **Custom roles** | ✅ | ✅ | Not available in Serverless Observability projects. | @@ -68,14 +68,14 @@ This table compares the core platform capabilities between Elastic Cloud Hosted | **Snapshot/restore** | ✅ | **Planned** | User-initiated snapshots are anticipated in a future release | :::{note} -The [Elastic Cloud Serverless roadmap](https://www.elastic.co/cloud/serverless/roadmap) primarily focuses on platform capabilities rather than project-specific features. Use the following project-specific tables for information about features for each project type. +The [{{serverless-full}} roadmap](https://www.elastic.co/cloud/serverless/roadmap) primarily focuses on platform capabilities rather than project-specific features. Use the following project-specific tables for information about features for each project type. 
::: ### Elasticsearch -This table compares Elasticsearch capabilities between Elastic Cloud Hosted deployments and Serverless projects: +This table compares Elasticsearch capabilities between {{ech}} deployments and Serverless projects: -| **Feature** | Elastic Cloud Hosted | Serverless Elasticsearch projects | Serverless notes | +| **Feature** | {{ech}} | Serverless Elasticsearch projects | Serverless notes | |---------|----------------------|-----------------------------------|------------------| | [**AI Assistant**](/solutions/observability/observability-ai-assistant.md) | ✅ | ✅ | | | **Behavioral analytics** | ❌ (deprecated in 9.0) | ❌ | Not available in Serverless | @@ -97,9 +97,9 @@ This table compares Elasticsearch capabilities between Elastic Cloud Hosted depl ### Observability -This table compares Observability capabilities between Elastic Cloud Hosted deployments and Serverless projects: +This table compares Observability capabilities between {{ech}} deployments and Serverless projects: -| **Feature** | Elastic Cloud Hosted | Serverless Observability projects | Serverless notes | +| **Feature** | {{ech}} | Serverless Observability projects | Serverless notes | |---------|----------------------|-----------------------------------|------------------| | [**AI Assistant**](/solutions/observability/observability-ai-assistant.md) | ✅ | ✅ | | | **APM integration** | ✅ | ✅ | Use **Managed Intake Service** (supports Elastic APM and OTLP protocols) | @@ -126,9 +126,9 @@ This table compares Observability capabilities between Elastic Cloud Hosted depl ### Security -This table compares Security capabilities between Elastic Cloud Hosted deployments and Serverless projects: +This table compares Security capabilities between {{ech}} deployments and Serverless projects: -| **Feature** | Elastic Cloud Hosted | Serverless Security projects | Serverless notes | +| **Feature** | {{ech}} | Serverless Security projects | Serverless notes | 
|---------|---------------------|------------------------------|------------------| | **[Advanced Entity Analytics](/solutions/security/advanced-entity-analytics.md)** | ✅ | ✅ | | | **[AI Assistant](/solutions/security/ai/ai-assistant.md)** | ✅ | ✅ | | @@ -158,7 +158,7 @@ These recommendations do not apply to indices using better binary quantization ( ## Available {{es}} APIs [elasticsearch-differences-serverless-apis-availability] -Because Elastic Cloud Serverless manages infrastructure automatically, certain Elasticsearch APIs are not available: +Because {{serverless-full}} manages infrastructure automatically, certain Elasticsearch APIs are not available: Infrastructure operations : * All `_nodes/*` operations @@ -197,7 +197,7 @@ Refer to the [{{es-serverless}} API reference](https://www.elastic.co/docs/api/d ## Available {{es}} settings [elasticsearch-differences-serverless-settings-availability] -In Elastic Cloud Serverless Elasticsearch projects, you can only configure [index-level settings](elasticsearch://reference/elasticsearch/index-settings/index.md). Cluster-level settings and node-level settings are fully managed by Elastic. +In {{serverless-full}} Elasticsearch projects, you can only configure [index-level settings](elasticsearch://reference/elasticsearch/index-settings/index.md). Cluster-level settings and node-level settings are fully managed by Elastic. Available settings : **Index-level settings**: Settings that control how documents are processed, stored, and searched are available to end users. 
These include: @@ -217,10 +217,10 @@ Managed settings ## Learn more -- [Elastic Cloud Serverless roadmap](https://www.elastic.co/cloud/serverless/roadmap): See upcoming features and development plans for the Serverless platform -- [Elasticsearch Serverless API reference](https://www.elastic.co/docs/api/doc/elasticsearch-serverless): Check out the complete list of available APIs in Elastic Cloud Serverless -- [Project settings](/deploy-manage/deploy/elastic-cloud/project-settings.md): Configure project settings in Elastic Cloud Serverless -- [Serverless regions](/deploy-manage/deploy/elastic-cloud/regions.md): Choose the right region for your Elastic Cloud Serverless project -- [Elastic Cloud pricing](https://www.elastic.co/pricing/): Understand pricing for Elastic Cloud Hosted and Serverless projects +- [{{serverless-full}} roadmap](https://www.elastic.co/cloud/serverless/roadmap): See upcoming features and development plans for the Serverless platform +- [Elasticsearch Serverless API reference](https://www.elastic.co/docs/api/doc/elasticsearch-serverless): Check out the complete list of available APIs in {{serverless-full}} +- [Project settings](/deploy-manage/deploy/elastic-cloud/project-settings.md): Configure project settings in {{serverless-full}} +- [Serverless regions](/deploy-manage/deploy/elastic-cloud/regions.md): Choose the right region for your {{serverless-full}} project +- [{{ecloud}} pricing](https://www.elastic.co/pricing/): Understand pricing for {{ech}} and Serverless projects - [Serverless project billing](/deploy-manage/cloud-organization/billing/serverless-project-billing-dimensions.md): Understand billing dimensions for Serverless projects - - [Elastic Cloud Hosted billing](/deploy-manage/cloud-organization/billing/cloud-hosted-deployment-billing-dimensions.md): Understand billing dimensions for Elastic Cloud Hosted deployments \ No newline at end of file + - [{{ech}} 
billing](/deploy-manage/cloud-organization/billing/cloud-hosted-deployment-billing-dimensions.md): Understand billing dimensions for {{ech}} deployments diff --git a/deploy-manage/deploy/elastic-cloud/ec-change-hardware-profile.md b/deploy-manage/deploy/elastic-cloud/ec-change-hardware-profile.md index 35ba6ff1a0..8e70d03309 100644 --- a/deploy-manage/deploy/elastic-cloud/ec-change-hardware-profile.md +++ b/deploy-manage/deploy/elastic-cloud/ec-change-hardware-profile.md @@ -8,19 +8,19 @@ mapped_pages: # Change hardware profiles [ec-change-hardware-profile] -Deployment hardware profiles deploy the Elastic Stack on virtual hardware. Each hardware profile has a different blend of storage, RAM, and vCPU. +Deployment hardware profiles deploy the {{stack}} on virtual hardware. Each hardware profile has a different blend of storage, RAM, and vCPU. -Elastic Cloud regularly introduces new hardware profiles to provide: +{{ecloud}} regularly introduces new hardware profiles to provide: -* More optimal hardware for applications in the Elastic Stack. +* More optimal hardware for applications in the {{stack}}. * Cost efficiencies when new hardware from Cloud providers becomes available. ::::{tip} -The Elastic Cloud console indicates when a new version of a hardware profile is available in the overview page for your deployment, under the Hardware profile section. +The {{ecloud}} console indicates when a new version of a hardware profile is available in the overview page for your deployment, under the Hardware profile section. 
:::: -## Change the hardware profile using the Elastic Cloud console [ec_change_the_hardware_profile_using_the_elastic_cloud_console] +## Change the hardware profile using the {{ecloud}} console [ec_change_the_hardware_profile_using_the_elastic_cloud_console] ### Upgrade to the newest version of your current hardware profile [ec_upgrade_to_the_newest_version_of_your_current_hardware_profile] @@ -72,7 +72,7 @@ If your deployment is configured for high availability, the hardware profile cha Prerequisites: -* A valid Elastic Cloud [API key](../../api-keys/elastic-cloud-api-keys.md) (`$EC_API_KEY`) +* A valid {{ecloud}} [API key](../../api-keys/elastic-cloud-api-keys.md) (`$EC_API_KEY`) * The deployment ID of the deployment you wish to modify (`{{deployment_id}}`) Replace those values with your actual API key and deployment ID in the following instructions. @@ -147,7 +147,7 @@ Replace those values with your actual API key and deployment ID in the following ### Storage optimized [ec-profiles-storage] -Your Elasticsearch data nodes are optimized for high I/O throughput. Use this profile if you are new to Elasticsearch or don’t need to run a more specialized workload. You can find the exact storage, memory, and vCPU allotment on the [hardware details page](cloud://reference/cloud-hosted/hardware.md#ec-getting-started-configurations) for each cloud provider. +Your {{es}} data nodes are optimized for high I/O throughput. Use this profile if you are new to {{es}} or don’t need to run a more specialized workload. You can find the exact storage, memory, and vCPU allotment on the [hardware details page](cloud://reference/cloud-hosted/hardware.md#ec-getting-started-configurations) for each cloud provider. **Ideal use case** @@ -156,7 +156,7 @@ Good for most ingestion use cases with 7-10 days of data available for fast acce ### Storage optimized (dense) [ec-profiles-storage-dense] -Your Elasticsearch data nodes are optimized for high I/O throughput. 
You can find the exact storage, memory, and vCPU allotment on the [hardware details page](cloud://reference/cloud-hosted/hardware.md#ec-getting-started-configurations) for each cloud provider. +Your {{es}} data nodes are optimized for high I/O throughput. You can find the exact storage, memory, and vCPU allotment on the [hardware details page](cloud://reference/cloud-hosted/hardware.md#ec-getting-started-configurations) for each cloud provider. **Ideal use case** diff --git a/deploy-manage/deploy/elastic-cloud/ec-customize-deployment-components.md b/deploy-manage/deploy/elastic-cloud/ec-customize-deployment-components.md index b315b58e20..efb99df9c8 100644 --- a/deploy-manage/deploy/elastic-cloud/ec-customize-deployment-components.md +++ b/deploy-manage/deploy/elastic-cloud/ec-customize-deployment-components.md @@ -121,7 +121,7 @@ In production systems, you might need to control what {{es}} data users can acce ## {{integrations-server}} [ec_integrations_server] -{{integrations-server}} connects observability and security data from Elastic Agents and APM to Elasticsearch. An {{integrations-server}} instance is created automatically as part of every deployment. +{{integrations-server}} connects observability and security data from Elastic Agents and APM to {{es}}. An {{integrations-server}} instance is created automatically as part of every deployment. Refer to [Manage your Integrations Server](manage-integrations-server.md) to learn more. diff --git a/deploy-manage/deploy/elastic-cloud/ec-vcpu-boost-instance.md b/deploy-manage/deploy/elastic-cloud/ec-vcpu-boost-instance.md index 80ef20a82d..49f8c13a19 100644 --- a/deploy-manage/deploy/elastic-cloud/ec-vcpu-boost-instance.md +++ b/deploy-manage/deploy/elastic-cloud/ec-vcpu-boost-instance.md @@ -9,15 +9,15 @@ applies_to: # vCPU boosting and credits [ec-vcpu-boost-instance] -Elastic Cloud allows smaller instance sizes to get temporarily boosted vCPU when under heavy load. 
vCPU boosting is governed by vCPU credits that instances can earn over time when vCPU usage is less than the assigned amount. +{{ecloud}} allows smaller instance sizes to get temporarily boosted vCPU when under heavy load. vCPU boosting is governed by vCPU credits that instances can earn over time when vCPU usage is less than the assigned amount. ## How does vCPU boosting work? [ec_how_does_vcpu_boosting_work] -Based on the instance size, the vCPU resources assigned to your instance can be boosted to improve performance temporarily, by using vCPU credits. If credits are available, Elastic Cloud will automatically boost your instance when under heavy load. Boosting is available depending on the instance size: +Based on the instance size, the vCPU resources assigned to your instance can be boosted to improve performance temporarily, by using vCPU credits. If credits are available, {{ecloud}} will automatically boost your instance when under heavy load. Boosting is available depending on the instance size: -* Instance sizes up to and including 12 GB of RAM get boosted. The boosted vCPU value is `16 * vCPU ratio`, the vCPU ratios are dependent on the [hardware profile](cloud://reference/cloud-hosted/hardware.md#ec-getting-started-configurations) selected. If an instance is eligible for boosting, the Elastic Cloud console will display **Up to 2.5 vCPU**, depending on the hardware profile selected. The baseline, or unboosted, vCPU value is calculated as: `RAM size * vCPU ratio`. -* Instance sizes bigger than 12 GB of RAM do not get boosted. The vCPU value is displayed in the Elastic Cloud console and calculated as follows: `RAM size * vCPU ratio`. +* Instance sizes up to and including 12 GB of RAM get boosted. The boosted vCPU value is `16 * vCPU ratio`; the vCPU ratios depend on the [hardware profile](cloud://reference/cloud-hosted/hardware.md#ec-getting-started-configurations) selected. 
If an instance is eligible for boosting, the {{ecloud}} console will display **Up to 2.5 vCPU**, depending on the hardware profile selected. The baseline, or unboosted, vCPU value is calculated as: `RAM size * vCPU ratio`. +* Instance sizes bigger than 12 GB of RAM do not get boosted. The vCPU value is displayed in the {{ecloud}} console and calculated as follows: `RAM size * vCPU ratio`. ## What are vCPU credits? [ec_what_are_vcpu_credits] diff --git a/deploy-manage/deploy/elastic-cloud/edit-stack-settings.md b/deploy-manage/deploy/elastic-cloud/edit-stack-settings.md index ff65d1c3dd..f27b4ab46b 100644 --- a/deploy-manage/deploy/elastic-cloud/edit-stack-settings.md +++ b/deploy-manage/deploy/elastic-cloud/edit-stack-settings.md @@ -79,18 +79,18 @@ To add or edit user settings: 6. Select **Save changes**. ::::{note} -In some cases, you may get a warning saying "User settings are different across Elasticsearch instances". To fix this issue, ensure that your user settings (including the comments sections and whitespaces) are identical across all Elasticsearch nodes (not only the data tiers, but also the Master, Machine Learning, and Coordinating nodes). +In some cases, you may get a warning saying "User settings are different across {{es}} instances". To fix this issue, ensure that your user settings (including the comments sections and whitespaces) are identical across all {{es}} nodes (not only the data tiers, but also the Master, Machine Learning, and Coordinating nodes). :::: -## Edit Kibana user settings [ec-manage-kibana-settings] +## Edit {{kib}} user settings [ec-manage-kibana-settings] -{{ech}} supports most of the standard Kibana and X-Pack settings. Through a YAML editor in the console, you can append Kibana properties to the `kibana.yml` file. Your changes to the configuration file are read on startup. +{{ech}} supports most of the standard {{kib}} and X-Pack settings. 
Through a YAML editor in the console, you can append {{kib}} properties to the `kibana.yml` file. Your changes to the configuration file are read on startup. Be aware that some settings could break your cluster if set incorrectly and that the syntax might change between major versions. -For a list of supported settings, check [Kibana settings](kibana://reference/cloud/elastic-cloud-kibana-settings.md). +For a list of supported settings, check [{{kib}} settings](kibana://reference/cloud/elastic-cloud-kibana-settings.md). -To change Kibana settings: +To change {{kib}} settings: 1. Log in to the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body). 2. Find your deployment on the home page or on the **Hosted deployments** page, then select **Manage** to access its settings menus. @@ -102,7 +102,7 @@ To change Kibana settings: 5. Update the user settings. 6. Select **Save changes**. -Saving your changes initiates a configuration plan change that restarts Kibana automatically for you. +Saving your changes initiates a configuration plan change that restarts {{kib}} automatically for you. ::::{note} If a setting is not supported by {{ech}}, you will get an error message when you try to save. diff --git a/deploy-manage/deploy/elastic-cloud/find-cloud-id.md b/deploy-manage/deploy/elastic-cloud/find-cloud-id.md index 888122007c..bdb74e0b68 100644 --- a/deploy-manage/deploy/elastic-cloud/find-cloud-id.md +++ b/deploy-manage/deploy/elastic-cloud/find-cloud-id.md @@ -8,13 +8,13 @@ mapped_pages: # Find your Cloud ID [ec-cloud-id] -The Cloud ID reduces the number of steps required to start sending data from Beats or Logstash to your hosted Elasticsearch cluster on {{ecloud}}. Because we made it easier to send data, you can start exploring visualizations in Kibana on {{ecloud}} that much more quickly. +The Cloud ID reduces the number of steps required to start sending data from Beats or Logstash to your hosted {{es}} cluster on {{ecloud}}. 
Because we made it easier to send data, you can start exploring visualizations in {{kib}} on {{ecloud}} that much more quickly. :::{image} /deploy-manage/images/cloud-ec-ce-cloud-id-beats-logstash.png -:alt: Exploring data from Beats or Logstash in Kibana after sending it to a hosted Elasticsearch cluster +:alt: Exploring data from Beats or Logstash in {{kib}} after sending it to a hosted {{es}} cluster ::: -The Cloud ID works by assigning a unique ID to your hosted Elasticsearch cluster on {{ecloud}}. All deployments automatically get a Cloud ID. +The Cloud ID works by assigning a unique ID to your hosted {{es}} cluster on {{ecloud}}. All deployments automatically get a Cloud ID. You include your Cloud ID along with your {{ecloud}} user credentials (defined in `cloud.auth`) when you run Beats or Logstash locally, and then let {{ecloud}} handle all of the remaining connection details to send the data to your hosted cluster on {{ecloud}} safely and securely. @@ -27,22 +27,22 @@ You include your Cloud ID along with your {{ecloud}} user credentials (defined i Not sure why you need Beats or Logstash? Here’s what they do: -* [Beats](https://www.elastic.co/products/beats) is our open source platform for single-purpose data shippers. The purpose of Beats is to help you gather data from different sources and to centralize the data by shipping it to Elasticsearch. Beats install as lightweight agents and ship data from hundreds or thousands of machines to your hosted Elasticsearch cluster on {{ecloud}}. If you want more processing muscle, Beats can also ship to Logstash for transformation and parsing before the data gets stored in Elasticsearch. -* [Logstash](https://www.elastic.co/products/logstash) is an open source, server-side data processing pipeline that ingests data from a multitude of sources simultaneously, transforms it, and then sends it to your favorite place where you stash things, here your hosted Elasticsearch cluster on {{ecloud}}. 
Logstash supports a variety of inputs that pull in events from a multitude of common sources — logs, metrics, web applications, data stores, and various AWS services — all in continuous, streaming fashion. +* [Beats](https://www.elastic.co/products/beats) is our open source platform for single-purpose data shippers. The purpose of Beats is to help you gather data from different sources and to centralize the data by shipping it to {{es}}. Beats install as lightweight agents and ship data from hundreds or thousands of machines to your hosted {{es}} cluster on {{ecloud}}. If you want more processing muscle, Beats can also ship to Logstash for transformation and parsing before the data gets stored in {{es}}. +* [Logstash](https://www.elastic.co/products/logstash) is an open source, server-side data processing pipeline that ingests data from a multitude of sources simultaneously, transforms it, and then sends it to your favorite place where you stash things, here your hosted {{es}} cluster on {{ecloud}}. Logstash supports a variety of inputs that pull in events from a multitude of common sources — logs, metrics, web applications, data stores, and various AWS services — all in continuous, streaming fashion. ## Before you begin [ec_before_you_begin_3] To use the Cloud ID, you need: -* A deployment with an Elasticsearch cluster to send data to. +* A deployment with an {{es}} cluster to send data to. * Beats or Logstash, installed locally wherever you want to send data from. * To configure Beats or Logstash, you need: * The unique Cloud ID for your deployment, available from the deployment overview page. * A user ID and password that has permission to send data to your cluster. - In our examples, we use the `elastic` superuser that every Elasticsearch cluster comes with. The password for the `elastic` user is provided when you create a deployment (and can also be [reset](../../users-roles/cluster-or-deployment-auth/built-in-users.md) if you forget it). 
On a production system, you should adapt these examples by creating a user that can write to and access only the minimally required indices. For each Beat, review the specific feature and role table, similar to the one in [Metricbeat](beats://reference/metricbeat/feature-roles.md) documentation. + In our examples, we use the `elastic` superuser that every {{es}} cluster comes with. The password for the `elastic` user is provided when you create a deployment (and can also be [reset](../../users-roles/cluster-or-deployment-auth/built-in-users.md) if you forget it). On a production system, you should adapt these examples by creating a user that can write to and access only the minimally required indices. For each Beat, review the specific feature and role table, similar to the one in [Metricbeat](beats://reference/metricbeat/feature-roles.md) documentation. @@ -61,13 +61,13 @@ To get started with Metricbeat and {{ecloud}}: 2. [Create a new deployment](create-an-elastic-cloud-hosted-deployment.md) and copy down the password for the `elastic` user. 3. On the deployment overview page, copy down the Cloud ID. 4. Set up the Beat of your choice, such as [Metricbeat version 7.17](beats://reference/metricbeat/metricbeat-installation-configuration.md). -5. [Configure the Beat output to send to Elastic Cloud](beats://reference/metricbeat/configure-cloud-id.md). +5. [Configure the Beat output to send to {{ecloud}}](beats://reference/metricbeat/configure-cloud-id.md). ::::{note} Make sure you replace the values for `cloud.id` and `cloud.auth` with your own information. :::: -6. Open Kibana and explore! +6. Open {{kib}} and explore! -Metricbeat creates a data view (formerly *index pattern*) with defined fields, searches, visualizations, and dashboards that you can start exploring in Kibana. Look for information related to system metrics, such as CPU usage, utilization rates for memory and disk, and details for processes. 
+Metricbeat creates a data view (formerly *index pattern*) with defined fields, searches, visualizations, and dashboards that you can start exploring in {{kib}}. Look for information related to system metrics, such as CPU usage, utilization rates for memory and disk, and details for processes. diff --git a/deploy-manage/deploy/elastic-cloud/google-cloud-platform-marketplace.md b/deploy-manage/deploy/elastic-cloud/google-cloud-platform-marketplace.md index 3401b3fd41..50ae29a4df 100644 --- a/deploy-manage/deploy/elastic-cloud/google-cloud-platform-marketplace.md +++ b/deploy-manage/deploy/elastic-cloud/google-cloud-platform-marketplace.md @@ -15,7 +15,7 @@ Some differences exist when you subscribe to {{ecloud}} through the GCP Marketpl * There is no trial period. Billing starts when you subscribe to {{ecloud}}. * Existing {{ecloud}} organizations cannot be converted to use the GCP Marketplace. -* Pricing for an {{ecloud}} subscription through the GCP Marketplace follows the pricing outlined on the [Elastic Cloud](https://console.cloud.google.com/marketplace/product/endpoints/elasticsearch-service.gcpmarketplace.elastic.co) page in the GCP Marketplace. Pricing is based the Elastic Cloud [Billing Dimensions](../../cloud-organization/billing/cloud-hosted-deployment-billing-dimensions.md). +* Pricing for an {{ecloud}} subscription through the GCP Marketplace follows the pricing outlined on the [{{ecloud}}](https://console.cloud.google.com/marketplace/product/endpoints/elasticsearch-service.gcpmarketplace.elastic.co) page in the GCP Marketplace. Pricing is based on the {{ecloud}} [Billing Dimensions](../../cloud-organization/billing/cloud-hosted-deployment-billing-dimensions.md). * To access your billing information at any time go to **Account & Billing**. You can also go to **Account & Billing** and then **Usage** to view your usage hours and units per hour.
::::{important} @@ -26,19 +26,19 @@ Only one {{ecloud}} organization can be subscribed through GCP Marketplace per G To subscribe to {{ecloud}} through the GCP Marketplace: 1. Log in to your Google Cloud Platform account. -2. Go to the [Elastic Cloud](https://console.cloud.google.com/marketplace/product/elastic-prod/elastic-cloud) page in the GCP Marketplace. -3. On the Elastic Cloud page select **Subscribe**, where you will be directed to another page. There is only one plan—the Elastic plan—and it’s pre-selected. The billing account you are logged into will be pre-selected for this purchase, though you can change it at this time. +2. Go to the [{{ecloud}}](https://console.cloud.google.com/marketplace/product/elastic-prod/elastic-cloud) page in the GCP Marketplace. +3. On the {{ecloud}} page select **Subscribe**, where you will be directed to another page. There is only one plan—the Elastic plan—and it’s pre-selected. The billing account you are logged into will be pre-selected for this purchase, though you can change it at this time. 4. Accept the terms of service (TOS) and select **Subscribe**. 5. When you are presented with a pop-up that specifies that "Your order request has been sent to Elastic" choose **Sign up with Elastic** to continue. 6. After choosing to sign up, a new window will appear. Do one of the following: - * Create a new, unique user account for an {{ecloud}} Elastic Cloud organization. - * Log in with an existing user account that’s associated with an Elastic Cloud trial. This links the billing account used for the purchase on GCP Marketplace to the existing Elastic organization. + * Create a new, unique user account for an {{ecloud}} organization. + * Log in with an existing user account that’s associated with an {{ecloud}} trial. This links the billing account used for the purchase on GCP Marketplace to the existing Elastic organization. 7. After signing up, check your inbox to verify the email address you signed up with.
Upon verification, you will be asked to create a password, and once created your organization will be set up and you will be logged into it. ::::{note} - Immediately after your first login to Elastic Cloud you may briefly see a banner on the Elastic Cloud user console saying that your account is disconnected. There is sometimes a short delay in activation, but refreshing the page is generally enough time to allow its completion. If this issue persists, please contact support. + Immediately after your first login to {{ecloud}} you may briefly see a banner on the {{ecloud}} user console saying that your account is disconnected. There is sometimes a short delay in activation, but refreshing the page is generally enough time to allow its completion. If this issue persists, please contact support. :::: @@ -63,7 +63,7 @@ To prevent downtime, do not remove the currently used billing account before the {{ecloud}} subscriptions through GCP Marketplace are associated with a GCP billing account. In order to change the billing account associated with an {{ecloud}} organization: * for customers under a Private Offer contract: please reach out to Elastic support and provide the GCP Billing Account, as well as the contact of any reseller information for approval. -* for pay-as-you-go customers: you need to have purchased and subscribed to Elastic Cloud on the new billing account using the details above—but do not create a new Elastic user or organization (that is, you can skip Steps 5 and 6 in the subscription instructions, above). Once you successfully subscribed with the new billing account, you can contact Elastic support and provide the new billing account ID you wish to move to, which you can find from [GCP’s billing page](https://console.cloud.google.com/billing). The ID is in the format `000000-000000-000000`. 
+* for pay-as-you-go customers: you need to have purchased and subscribed to {{ecloud}} on the new billing account using the details above—but do not create a new Elastic user or organization (that is, you can skip Steps 5 and 6 in the subscription instructions, above). Once you have successfully subscribed with the new billing account, you can contact Elastic support and provide the new billing account ID you wish to move to, which you can find from [GCP’s billing page](https://console.cloud.google.com/billing). The ID is in the format `000000-000000-000000`. If you cancel your {{ecloud}} order on GCP through the [marketplace orders page](https://console.cloud.google.com/marketplace/orders) before the switch to the new billing account has been done, any running deployments will immediately enter a degraded state known as maintenance mode and they will be scheduled for termination in five days. @@ -72,5 +72,5 @@ If you already unsubscribed before the new billing account has been set up, you ## Native GCP integrations [ec-gcp-marketplace-native] -You can ingest data from Google Pub/Sub to the Elastic Stack very easily from the Google Cloud Console. You can use the [Metricbeat Google Cloud Platform module](../../../solutions/observability/cloud/monitor-google-cloud-platform-gcp.md) or the [GCP Dataflow Templates](../../../solutions/observability/cloud/gcp-dataflow-templates.md). +You can ingest data from Google Pub/Sub to the {{stack}} very easily from the Google Cloud Console. You can use the [Metricbeat Google Cloud Platform module](../../../solutions/observability/cloud/monitor-google-cloud-platform-gcp.md) or the [GCP Dataflow Templates](../../../solutions/observability/cloud/gcp-dataflow-templates.md).
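Because support will bounce a malformed billing account ID, a quick sanity check against the `000000-000000-000000` shape described above can save a round trip. The helper below is our own sketch, not part of any Elastic or GCP tooling, and the example IDs are fake:

```shell
# Succeeds when the ID matches three dash-separated groups of six
# uppercase alphanumeric characters, per the format shown above
looks_like_billing_id() {
  printf '%s' "$1" | grep -Eq '^[0-9A-Z]{6}-[0-9A-Z]{6}-[0-9A-Z]{6}$'
}

looks_like_billing_id "01ABCD-EF2345-6789AB" && echo "format looks valid"
```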
diff --git a/deploy-manage/deploy/elastic-cloud/heroku-getting-started-accessing.md b/deploy-manage/deploy/elastic-cloud/heroku-getting-started-accessing.md index 286665c301..b1d11dc3d9 100644 --- a/deploy-manage/deploy/elastic-cloud/heroku-getting-started-accessing.md +++ b/deploy-manage/deploy/elastic-cloud/heroku-getting-started-accessing.md @@ -11,7 +11,7 @@ applies_to: You use the console to manage your cluster from a web browser. Tasks that are available from the console include upgrading versions, configuring security features, working with custom plugins, and more. :::{image} /deploy-manage/images/cloud-heroku-ech-console.png -:alt: [Elasticsearch Add-On for Heroku console](https://cloud.elastic.co?page=docs&placement=docs-body) +:alt: [{{es}} Add-On for Heroku console](https://cloud.elastic.co?page=docs&placement=docs-body) ::: To access the console in a browser from the Heroku CLI: diff --git a/deploy-manage/deploy/elastic-cloud/heroku-getting-started-installing.md b/deploy-manage/deploy/elastic-cloud/heroku-getting-started-installing.md index 42e47cdc3b..edfe295d84 100644 --- a/deploy-manage/deploy/elastic-cloud/heroku-getting-started-installing.md +++ b/deploy-manage/deploy/elastic-cloud/heroku-getting-started-installing.md @@ -36,7 +36,7 @@ FOUNDELASTICSEARCH_URL: https://74f176887fdef36bb51e6e37nnnnnnnn.us-east-1.aws.f ## Install a specific version and plugins [ech-getting-started-installing-version] -If you want your add-on to run a specific version of Elasticsearch, use the `--elasticsearch-version` parameter. We also provide many of the plugins that are available for Elasticsearch. You use the `--plugins` parameter to specify a comma-separated list of plugins that you want installed. +If you want your add-on to run a specific version of {{es}}, use the `--elasticsearch-version` parameter. We also provide many of the plugins that are available for {{es}}. 
You use the `--plugins` parameter to specify a comma-separated list of plugins that you want installed. To find which {{es}} versions and plugins are currently available, you can omit the version to default to the latest one and add plugins later on from the [{{heroku}} console](https://cloud.elastic.co?page=docs&placement=docs-body). To use your own custom plugins, you can upload and select these plugins in the console as well. diff --git a/deploy-manage/deploy/elastic-cloud/heroku.md b/deploy-manage/deploy/elastic-cloud/heroku.md index b82e1b8be7..45dbcdab61 100644 --- a/deploy-manage/deploy/elastic-cloud/heroku.md +++ b/deploy-manage/deploy/elastic-cloud/heroku.md @@ -14,7 +14,7 @@ navigation_title: Heroku This documentation applies to Heroku users who want to make use of the {{es}} Add-On for Heroku that is available from the [Heroku Dashboard](https://dashboard.heroku.com/), or that can be installed from the CLI. -The add-on runs on {{ecloud}} and provides access to [Elasticsearch](https://www.elastic.co/products/elasticsearch), the open source, distributed, RESTful search engine. Many other features of the Elastic Stack are also readily available to Heroku users through the [{{es}} Add-On for Heroku console](https://cloud.elastic.co?page=docs&placement=docs-body) after you install the add-on. For example, you can use {{kib}} to visualize your {{es}} data. +The add-on runs on {{ecloud}} and provides access to [Elasticsearch](https://www.elastic.co/products/elasticsearch), the open source, distributed, RESTful search engine. Many other features of the {{stack}} are also readily available to Heroku users through the [{{es}} Add-On for Heroku console](https://cloud.elastic.co?page=docs&placement=docs-body) after you install the add-on. For example, you can use {{kib}} to visualize your {{es}} data. 
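Putting the two parameters described above together, an add-on install command would look roughly like the following. This is a sketch: the version and plugin names are placeholders, and you should check the console for what is currently available before pinning anything.

```shell
# Assemble an install command using the --elasticsearch-version and
# --plugins parameters described above (values are examples only)
VERSION="7.17.0"
PLUGINS="analysis-icu,analysis-phonetic"
CMD="heroku addons:create foundelasticsearch --elasticsearch-version $VERSION --plugins $PLUGINS"
echo "$CMD"
```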
To learn more about what plans are available for Heroku users and their cost, refer to the [{{es}} add-on](https://elements.heroku.com/addons/foundelasticsearch) in the Elements Marketplace. @@ -79,7 +79,7 @@ There are several ways to ingest data into the deployment: You might want to add more layers of security to your deployment, such as: * Add more users to the deployment with third-party authentication providers and services like [SAML](../../users-roles/cluster-or-deployment-auth/saml.md), [OpenID Connect](../../users-roles/cluster-or-deployment-auth/openid-connect.md), or [Kerberos](../../users-roles/cluster-or-deployment-auth/kerberos.md). -* Do not use clients that only support HTTP to connect to Elastic Cloud. If you need to do so, you should use a reverse proxy setup. +* Do not use clients that only support HTTP to connect to {{ecloud}}. If you need to do so, you should use a reverse proxy setup. * Create [traffic filters](../../security/traffic-filtering.md) and apply them to your deployments. * If needed, you can [reset](../../users-roles/cluster-or-deployment-auth/built-in-users.md) the `elastic` password. diff --git a/deploy-manage/deploy/elastic-cloud/manage-deployments-using-elastic-cloud-api.md b/deploy-manage/deploy/elastic-cloud/manage-deployments-using-elastic-cloud-api.md index c74bdbd33f..3c46831af3 100644 --- a/deploy-manage/deploy/elastic-cloud/manage-deployments-using-elastic-cloud-api.md +++ b/deploy-manage/deploy/elastic-cloud/manage-deployments-using-elastic-cloud-api.md @@ -6,7 +6,7 @@ mapped_pages: - https://www.elastic.co/guide/en/cloud/current/ec-api-deployment-crud.html --- -# Manage deployments using the Elastic Cloud API [ec-api-deployment-crud] +# Manage deployments using the {{ecloud}} API [ec-api-deployment-crud] The following examples demonstrate Create, Read, Update and Delete operations on a `deployments` resource. 
If you haven’t created an API Key yet, you can follow the [Authentication documentation](../../api-keys/elastic-cloud-api-keys.md). @@ -38,7 +38,7 @@ curl \ When you create a new deployment through the API, you have two options: 1. **Use default values.** The simplest option is to create the deployment using a set of default values that are gathered automatically from a deployment template specified in the API request. -2. **Configure the deployment settings manually.** With this option, the API request to create a new deployment is very descriptive, with many settings to tweak. If you use this option we recommend that you configure your desired deployment in the Elastic Cloud UI and copy the JSON payload. +2. **Configure the deployment settings manually.** With this option, the API request to create a new deployment is very descriptive, with many settings to tweak. If you use this option we recommend that you configure your desired deployment in the {{ecloud}} UI and copy the JSON payload. ### Create a deployment using default values [ec-api-examples-deployment-simple] @@ -69,9 +69,9 @@ A `resource` field can be included in this request (check the following, manual This example creates a new deployment named "my-first-api-deployment" with the following characteristics: -* Version 8.17.1 of the Elastic Stack -* Elasticsearch cluster in two zones with 4 GB of memory for each node -* 1 GB single zone Kibana instance and 1 GB Integrations Server instance +* Version 8.17.1 of the {{stack}} +* {{es}} cluster in two zones with 4 GB of memory for each node +* 1 GB single zone {{kib}} instance and 1 GB Integrations Server instance ```sh curl -XPOST \ @@ -263,9 +263,9 @@ curl -XPOST \ ``` 1. [Available Regions](cloud://reference/cloud-hosted/ec-regions-templates-instances.md) -2. Availability zones for the Elasticsearch cluster +2. Availability zones for the {{es}} cluster 3. 
[Available instance configurations](cloud://reference/cloud-hosted/ec-regions-templates-instances.md) -4. Memory allocated for each Elasticsearch node +4. Memory allocated for each {{es}} node 5. [Available templates](cloud://reference/cloud-hosted/ec-regions-templates-instances.md) 6. Availability zones for Kibana 7. Memory allocated for Kibana @@ -286,7 +286,7 @@ You are able to create deployments with *non* [End-of-life (EOL) versions](avail ## Update a deployment [ec_update_a_deployment] -Modify the Elasticsearch resource by increasing the amount of memory to 8 GB. +Modify the {{es}} resource by increasing the amount of memory to 8 GB. ```sh curl -XPUT \ @@ -342,7 +342,7 @@ curl -XPUT \ ``` 1. Give the deployment a new name -2. Increase the amount of memory allocated for each Elasticsearch node to 8 GB +2. Increase the amount of memory allocated for each {{es}} node to 8 GB ::::{tip} diff --git a/deploy-manage/deploy/elastic-cloud/manage-integrations-server.md b/deploy-manage/deploy/elastic-cloud/manage-integrations-server.md index f739ef60b2..077ca10565 100644 --- a/deploy-manage/deploy/elastic-cloud/manage-integrations-server.md +++ b/deploy-manage/deploy/elastic-cloud/manage-integrations-server.md @@ -8,18 +8,18 @@ mapped_pages: # Manage your Integrations server [ec-manage-integrations-server] -For deployments that are version 8.0 and later, you have the option to add a combined [Application Performance Monitoring (APM) Server](/solutions/observability/apps/application-performance-monitoring-apm.md) and [Fleet Server](/reference/fleet/index.md) to your deployment. APM allows you to monitor software services and applications in real time, turning that data into documents stored in the Elasticsearch cluster. Fleet allows you to centrally manage Elastic Agents on many hosts. 
+For deployments that are version 8.0 and later, you have the option to add a combined [Application Performance Monitoring (APM) Server](/solutions/observability/apps/application-performance-monitoring-apm.md) and [Fleet Server](/reference/fleet/index.md) to your deployment. APM allows you to monitor software services and applications in real time, turning that data into documents stored in the {{es}} cluster. Fleet allows you to centrally manage Elastic Agents on many hosts. -As part of provisioning, the APM Server and Fleet Server are already configured to work with Elasticsearch and Kibana. At the end of provisioning, you are shown the secret token to configure communication between the APM Server and the backend [APM Agents](https://www.elastic.co/guide/en/apm/agent/index.html). The APM Agents get deployed within your services and applications. +As part of provisioning, the APM Server and Fleet Server are already configured to work with {{es}} and {{kib}}. At the end of provisioning, you are shown the secret token to configure communication between the APM Server and the backend [APM Agents](https://www.elastic.co/guide/en/apm/agent/index.html). The APM Agents get deployed within your services and applications. From the deployment **Integrations Server** page you can also: * Get the URL to complete the APM agent configuration. -* Use the `elastic` credentials to go to the APM area of Kibana. Step by step instructions to configure a variety of agents are available right in Kibana. After that, you can use the pre-built, dedicated dashboards and the APM tab to visualize the data that is sent back from the APM Agents. -* Use the `elastic` credentials to go to the Fleet area of Kibana. Step by step instructions to download and install Elastic Agent on your hosts are available right in Kibana. After that, you can manage enrolled Elastic Agents on the **Agents** tab, and the data shipped back from those Elastic Agents on the **Data streams** tab. 
+* Use the `elastic` credentials to go to the APM area of {{kib}}. Step by step instructions to configure a variety of agents are available right in {{kib}}. After that, you can use the pre-built, dedicated dashboards and the APM tab to visualize the data that is sent back from the APM Agents. +* Use the `elastic` credentials to go to the Fleet area of {{kib}}. Step by step instructions to download and install Elastic Agent on your hosts are available right in {{kib}}. After that, you can manage enrolled Elastic Agents on the **Agents** tab, and the data shipped back from those Elastic Agents on the **Data streams** tab. * Access the Integrations Server logs and metrics. * Stop and restart your Integrations Server. -* Upgrade your Integrations Server version if it is out of sync with your Elasticsearch cluster. +* Upgrade your Integrations Server version if it is out of sync with your {{es}} cluster. * Fully remove the Integrations Server, delete it from the disk, and stop the charges. ::::{important} @@ -34,7 +34,7 @@ This example demonstrates how to use the {{ecloud}} RESTful API to create a depl #### Requirements [ec_requirements_2] -Integrations Server can be enabled only on new deployments, starting with Elastic Stack version 8.0. +Integrations Server can be enabled only on new deployments, starting with {{stack}} version 8.0. It’s possible to enable Integrations Server on an existing deployment with version 8.0 only if [APM & Fleet Server](switch-from-apm-to-integrations-server-payload.md#ec-manage-apm-and-fleet) hasn’t been previously enabled on the deployment. 
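The "version 8.0 and later" requirement above is a simple major-version gate. A minimal shell sketch (the helper name is ours, not an Elastic API):

```shell
# True when a stack version string ("8.0.0", "7.17.2", ...) is 8.x or later,
# mirroring the "starting with version 8.0" requirement described above
supports_integrations_server() {
  [ "${1%%.*}" -ge 8 ]
}

supports_integrations_server "8.4.3" && echo "Integrations Server can be enabled"
```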
diff --git a/deploy-manage/deploy/elastic-cloud/manage-plugins-extensions-through-api.md b/deploy-manage/deploy/elastic-cloud/manage-plugins-extensions-through-api.md index 7d7539ac4e..afb4519ad8 100644 --- a/deploy-manage/deploy/elastic-cloud/manage-plugins-extensions-through-api.md +++ b/deploy-manage/deploy/elastic-cloud/manage-plugins-extensions-through-api.md @@ -36,9 +36,9 @@ For plugins larger than 200MB the download URL option **must** be used. Plugins These two examples are for the `plugin` extension type. For bundles, change `extension_type` to `bundle`. -For plugins, `version` must match (exactly) the `elasticsearch.version` field defined in the plugin’s `plugin-descriptor.properties` file. Check [Help for plugin authors](elasticsearch://extend/index.md) for details. For plugins larger than 5GB, the `plugin-descriptor.properties` file needs to be at the top of the archive. This ensures that the our verification process is able to detect that it is an Elasticsearch plugin; otherwise the plugin will be rejected by the API. This order can be achieved by specifying at time of creating the ZIP file: `zip -r name-of-plugin.zip plugin-descriptor.properties *`. +For plugins, `version` must match (exactly) the `elasticsearch.version` field defined in the plugin’s `plugin-descriptor.properties` file. Check [Help for plugin authors](elasticsearch://extend/index.md) for details. For plugins larger than 5GB, the `plugin-descriptor.properties` file needs to be at the top of the archive. This ensures that our verification process is able to detect that it is an {{es}} plugin; otherwise the plugin will be rejected by the API. This order can be achieved by specifying at time of creating the ZIP file: `zip -r name-of-plugin.zip plugin-descriptor.properties *`. -For bundles, we recommend setting `version` using wildcard notation that matches the major version of the Elasticsearch deployment.
For example, if Elasticsearch is on version 8.4.3, simply set `8.*` as the version. The value `8.*` means that the bundle is compatible with all 8.x versions of Elasticsearch. +For bundles, we recommend setting `version` using wildcard notation that matches the major version of the {{es}} deployment. For example, if {{es}} is on version 8.4.3, simply set `8.*` as the version. The value `8.*` means that the bundle is compatible with all 8.x versions of {{es}}. $$$ec-extension-guide-create-option1$$$ **Option 1: Stream the file from a publicly-accessible download URL** @@ -165,7 +165,7 @@ The following are examples of a GCP plan. Your specific deployment plan will be } ``` -You can use the [cat plugins API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cat-plugins) to confirm that the plugin has been deployed successfully to Elasticsearch. +You can use the [cat plugins API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cat-plugins) to confirm that the plugin has been deployed successfully to {{es}}. The previous examples are for plugins. For bundles, use the `user_bundles` construct instead. @@ -269,7 +269,7 @@ Updating `extension_type` has no effect. You cannot change the extension’s typ ## Update the version of an existing bundle [ec-extension-guide-update-version-bundle] -For bundles, we recommend setting `version` using wildcard notation that matches the major version of the Elasticsearch deployment. For example, if Elasticsearch is on version 8.4.3, simply set `8.*` as the version. The value `8.*` means that the bundle is compatible with all 7.x versions of Elasticsearch. +For bundles, we recommend setting `version` using wildcard notation that matches the major version of the {{es}} deployment. For example, if {{es}} is on version 8.4.3, simply set `8.*` as the version. The value `8.*` means that the bundle is compatible with all 8.x versions of {{es}}.
For example, if the bundle was previously uploaded with the version `8.4.2`, simply update the version field. You no longer have to specify the `download_url` when only making metadata changes to a bundle. @@ -311,7 +311,7 @@ For plugins, `version` must match (exactly) the `elasticsearch.version` field de ## Update the file associated to an existing extension [ec-extension-guide-update-file] -You may want to update an uploaded file for an existing extension without performing an Elasticsearch upgrade. If you are updating the extension to prepare for an Elasticsearch upgrade, check the [Upgrade Elasticsearch](#ec-extension-guide-upgrade-elasticsearch) scenario later on this page. +You may want to update an uploaded file for an existing extension without performing an {{es}} upgrade. If you are updating the extension to prepare for an {{es}} upgrade, check the [Upgrade Elasticsearch](#ec-extension-guide-upgrade-elasticsearch) scenario later on this page. This example is for the `plugin` extension type. For bundles, change `extension_type` to `bundle`. @@ -341,38 +341,38 @@ curl -v -X PUT "https://api.elastic-cloud.com/api/v1/deployments/extensions/EXTE ``` ::::{important} -If you are not making any other plan changes and simply updating an extension file, you need to issue a no-op plan so that Elasticsearch will make use of this new file. A *no-op* (no operation) plan triggers a rolling restart on the deployment, applying the same (unchanged) plan as the current plan. +If you are not making any other plan changes and simply updating an extension file, you need to issue a no-op plan so that {{es}} will make use of this new file. A *no-op* (no operation) plan triggers a rolling restart on the deployment, applying the same (unchanged) plan as the current plan. :::: Updating the file of an existing extension or bundle does not change its `EXTENSION_ID`. 
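The bundle-version wildcard rule used throughout this page maps directly onto shell glob matching, so it can be sketched in a few lines. The helper name below is ours, not part of the {{ecloud}} API:

```shell
# Does a bundle `version` (exact like "8.4.2", or a wildcard like "8.*")
# cover a given Elasticsearch version? Shell `case` patterns are globs,
# so the documented wildcard notation can be checked directly.
bundle_covers() {
  case "$2" in
    $1) return 0 ;;  # the pattern in $1 matches the version in $2
    *)  return 1 ;;
  esac
}

bundle_covers "8.*" "8.4.3" && echo "compatible"
```

This is why a bundle pinned to `8.*` survives patch and minor upgrades within 8.x, while one pinned to `8.4.2` must be updated before any other version can use it.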
-## Upgrade Elasticsearch [ec-extension-guide-upgrade-elasticsearch] +## Upgrade {{es}} [ec-extension-guide-upgrade-elasticsearch] -When you upgrade Elasticsearch in a deployment, you must ensure that: +When you upgrade {{es}} in a deployment, you must ensure that: -* Bundles are on versions that are compatible with the Elasticsearch version that you are upgrading to. -* Plugins match (exactly) the Elasticsearch upgrade version. +* Bundles are on versions that are compatible with the {{es}} version that you are upgrading to. +* Plugins match (exactly) the {{es}} upgrade version. **To prepare existing bundle and update the plan:** -1. **Update the bundle version to be compatible with the Elasticsearch upgrade version.** +1. **Update the bundle version to be compatible with the {{es}} upgrade version.** - Bundles using wildcard notation for versions (for example, `7.*`, `8.*`) in their extension metadata are compatible with all minor versions of the same Elasticsearch major version. In other words, if you are performing a patch (for example, from `8.4.2` to `8.4.3`) or a minor (for example `8.3.0` to `8.4.3`) version upgrade of Elasticsearch and you are already using `8.*` as the `version` for the extension, you are ready for the Elasticsearch upgrade and can proceed to Step 2. + Bundles using wildcard notation for versions (for example, `7.*`, `8.*`) in their extension metadata are compatible with all minor versions of the same {{es}} major version. In other words, if you are performing a patch (for example, from `8.4.2` to `8.4.3`) or a minor (for example `8.3.0` to `8.4.3`) version upgrade of {{es}} and you are already using `8.*` as the `version` for the extension, you are ready for the {{es}} upgrade and can proceed to Step 2. 
- However, if you are using a specific `version` for bundles, or upgrading to a major version, you must update the metadata of the extension to specify the matching Elasticsearch `version` that you are upgrading to, or use the wildcard syntax described in the previous paragraph. For example, if you are upgrading from version 7.x to 8.x, set `version` to `8.*` before the upgrade. Refer to [Update the version of an existing bundle](#ec-extension-guide-update-version-bundle). + However, if you are using a specific `version` for bundles, or upgrading to a major version, you must update the metadata of the extension to specify the matching {{es}} `version` that you are upgrading to, or use the wildcard syntax described in the previous paragraph. For example, if you are upgrading from version 7.x to 8.x, set `version` to `8.*` before the upgrade. Refer to [Update the version of an existing bundle](#ec-extension-guide-update-version-bundle). 2. **Update the bundle reference as part of an upgrade plan.** Submit a plan change that performs the following operations in a *single* [update deployment API](https://www.elastic.co/docs/api/doc/cloud/operation/operation-update-deployment) call: - * Upgrade the version of Elasticsearch to the upgrade version (for example, `8.4.3`). - * Update reference to the existing bundle to be compatible with Elasticsearch upgrade version (for example, `8.*`). + * Upgrade the version of {{es}} to the upgrade version (for example, `8.4.3`). + * Update reference to the existing bundle to be compatible with {{es}} upgrade version (for example, `8.*`). - This triggers a rolling upgrade plan change to the later Elasticsearch version and updates the reference to the bundle at the same time. + This triggers a rolling upgrade plan change to the later {{es}} version and updates the reference to the bundle at the same time. - The following example shows the upgrade of an Elasticsearch deployment and its bundle. 
You can also upgrade other deployment resources within the same plan change. + The following example shows the upgrade of an {{es}} deployment and its bundle. You can also upgrade other deployment resources within the same plan change. Update `resources.elasticsearch.plan.elasticsearch.version` and `resources.elasticsearch.plan.cluster_topology.elasticsearch.user_bundles.elasticsearch_version` accordingly. @@ -414,23 +414,23 @@ When you upgrade Elasticsearch in a deployment, you must ensure that: **To create a new plugin and update the plan:** -Unlike bundles, plugins *must* match the Elasticsearch version down to the patch level (for example, `8.4.3`). When upgrading Elasticsearch to a new patch, minor, or major version, update the version in the extension metadata and update the extension file. The following example updates an existing plugin and upgrades the Elasticsearch deployment from version 8.3.0 to 8.4.3. +Unlike bundles, plugins *must* match the {{es}} version down to the patch level (for example, `8.4.3`). When upgrading {{es}} to a new patch, minor, or major version, update the version in the extension metadata and update the extension file. The following example updates an existing plugin and upgrades the {{es}} deployment from version 8.3.0 to 8.4.3. -1. **Create a new plugin that matches the Elasticsearch upgrade version.** +1. **Create a new plugin that matches the {{es}} upgrade version.** - Follow the steps in [Get an extension](#ec-extension-guide-get-extension) to create a new extension with a `version` metadata field and the plugin’s `elasticsearch.version` field in `plugin-descriptor.properties` that matches the Elasticsearch upgrade version (for example, `8.4.3`). + Follow the steps in [Get an extension](#ec-extension-guide-get-extension) to create a new extension with a `version` metadata field and the plugin’s `elasticsearch.version` field in `plugin-descriptor.properties` that matches the {{es}} upgrade version (for example, `8.4.3`). 2. 
**Remove the old plugin and add the new plugin to the upgrade plan.** Submit a plan change that performs the following operations in a *single* [update deployment API](https://www.elastic.co/docs/api/doc/cloud/operation/operation-update-deployment) call: - * Upgrade the version of Elasticsearch to the upgrade version (for example, `8.4.3`). + * Upgrade the version of {{es}} to the upgrade version (for example, `8.4.3`). * Remove reference to the the plugin on the older version (for example, `8.3.0`) from the plan. * Add reference to the new plugin on the upgrade version (for example, `8.4.3`) to the plan. - This triggers a rolling upgrade plan change to the later Elasticsearch version, removes reference to the older plugin, and deploys your updated plugin at the same time. + This triggers a rolling upgrade plan change to the later {{es}} version, removes reference to the older plugin, and deploys your updated plugin at the same time. - The following example shows the upgrade of an Elasticsearch deployment and its plugin. You can also upgrade other deployment resources within the same plan change. + The following example shows the upgrade of an {{es}} deployment and its plugin. You can also upgrade other deployment resources within the same plan change. Update deployment plans, update `resources.elasticsearch.plan.elasticsearch.version` and `resources.elasticsearch.plan.cluster_topology.elasticsearch.user_plugins.elasticsearch_version` accordingly. @@ -469,7 +469,7 @@ Unlike bundles, plugins *must* match the Elasticsearch version down to the patch } ``` - You can use the [cat plugins API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cat-plugins) to confirm that the plugin has been upgraded successfully to Elasticsearch. + You can use the [cat plugins API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cat-plugins) to confirm that the plugin has been upgraded successfully to {{es}}. 
diff --git a/deploy-manage/deploy/elastic-cloud/project-settings.md b/deploy-manage/deploy/elastic-cloud/project-settings.md index fa3352782c..b658d5ff0a 100644 --- a/deploy-manage/deploy/elastic-cloud/project-settings.md +++ b/deploy-manage/deploy/elastic-cloud/project-settings.md @@ -16,7 +16,7 @@ $$$elasticsearch-manage-project-search-power-settings$$$ Your project’s performance and general data retention are controlled by the **Search AI Lake settings**. To manage these settings: 1. Navigate to [cloud.elastic.co](https://cloud.elastic.co/). -2. Log in to your Elastic Cloud account. +2. Log in to your {{ecloud}} account. 3. Select your project from the **Serverless projects** panel and click **Manage**. Additionally, there are [features and add-ons](#project-features-add-ons) available for security that you can configure. @@ -44,8 +44,8 @@ $$$elasticsearch-manage-project-search-power-settings$$$ | Setting | Description | Project Type | | :--- | :--- | :--- | -| **Search Power** | Search Power controls the speed of searches against your data. With Search Power, you can improve search performance by adding more resources for querying, or you can reduce provisioned resources to cut costs. Choose from three Search Power settings:

**On-demand:** Autoscales based on data and search load, with a lower minimum baseline for resource use. This flexibility results in more variable query latency and reduced maximum throughput.

**Performant:** Delivers consistently low latency and autoscales to accommodate moderately high query throughput.

**High-throughput:** Optimized for high-throughput scenarios, autoscaling to maintain query latency even at very high query volumes.
| Elasticsearch | -| **Search Boost Window** | Non-time series data is always considered search-ready. The **Search Boost Window** determines the volume of time series project data that will be considered search-ready.

Increasing the window results in a bigger portion of time series project data included in the total search-ready data volume.
| Elasticsearch | +| **Search Power** | Search Power controls the speed of searches against your data. With Search Power, you can improve search performance by adding more resources for querying, or you can reduce provisioned resources to cut costs. Choose from three Search Power settings:

**On-demand:** Autoscales based on data and search load, with a lower minimum baseline for resource use. This flexibility results in more variable query latency and reduced maximum throughput.

**Performant:** Delivers consistently low latency and autoscales to accommodate moderately high query throughput.

**High-throughput:** Optimized for high-throughput scenarios, autoscaling to maintain query latency even at very high query volumes.
| {{es}} | +| **Search Boost Window** | Non-time series data is always considered search-ready. The **Search Boost Window** determines the volume of time series project data that will be considered search-ready.

Increasing the window results in a bigger portion of time series project data included in the total search-ready data volume.
| {{es}} | | **Data Retention** | Data retention policies determine how long your project data is retained.
In {{serverless-full}}, data retention policies are configured through [data streams](../../../manage-data/lifecycle/data-stream.md), and you can specify different retention periods for specific data streams in your project.<br>

{{elastic-sec}} has two additional settings that you can configure to manage your data retention.<br>

**Maximum data retention period**

When enabled, this setting determines the maximum length of time that data can be retained in any data streams of this project.

Editing this setting replaces the data retention set for all data streams of the project that have a longer data retention defined. Data older than the new maximum retention period that you set is permanently deleted.

**Default data retention period**

When enabled, this setting determines the default retention period that is automatically applied to all data streams in your project that do not have a custom retention period already set.
|Elasticsearch
Observability
Security | | **Project features** | Controls [feature tiers and add-on options](../../../deploy-manage/deploy/elastic-cloud/project-settings.md#project-features-add-ons) for your {{elastic-sec}} project. | Security |

diff --git a/deploy-manage/deploy/elastic-cloud/regions.md
index 7be5a73b52..1d03842c45 100644
--- a/deploy-manage/deploy/elastic-cloud/regions.md
+++ b/deploy-manage/deploy/elastic-cloud/regions.md
@@ -7,9 +7,9 @@ applies_to:

 # Regions [regions]

-A region is the geographic area where the data center of the cloud provider that hosts your project is located. Review the available Elastic Cloud Serverless regions to decide which region to use. If you aren’t sure which region to pick, choose one that is geographically close to you to reduce latency.
+A region is the geographic area where the data center of the cloud provider that hosts your project is located. Review the available {{serverless-full}} regions to decide which region to use. If you aren’t sure which region to pick, choose one that is geographically close to you to reduce latency.

-Elastic Cloud Serverless handles all hosting details for you. You are unable to change the region after you create a project.
+{{serverless-full}} handles all hosting details for you. You are unable to change the region after you create a project.

::::{note}
Currently, a limited number of Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP) regions are available. More regions for AWS, Azure, and GCP will be added in the future. 
diff --git a/deploy-manage/deploy/elastic-cloud/restrictions-known-problems.md b/deploy-manage/deploy/elastic-cloud/restrictions-known-problems.md index ae5e9302c0..ca7c2e4f5c 100644 --- a/deploy-manage/deploy/elastic-cloud/restrictions-known-problems.md +++ b/deploy-manage/deploy/elastic-cloud/restrictions-known-problems.md @@ -14,9 +14,9 @@ When using {{ecloud}}, there are some limitations you should be aware of: * [Security](#ec-restrictions-security) * [APIs](#ec-restrictions-apis) * [Transport client](#ec-restrictions-transport-client) -* [Elasticsearch and Kibana plugins](#ec-restrictions-plugins) +* [{{es}} and {{kib}} plugins](#ec-restrictions-plugins) * [Watcher](#ec-restrictions-watcher) -* [Private Link and SSO to Kibana URLs](#ec-restrictions-traffic-filters-kibana-sso) +* [Private Link and SSO to {{kib}} URLs](#ec-restrictions-traffic-filters-kibana-sso) * [PDF report generation using Alerts or Watcher webhooks](#ec-restrictions-traffic-filters-watcher) * [Kibana](#ec-restrictions-kibana) % * [APM Agent central configuration with Private Link or traffic filters](#ec-restrictions-apm-traffic-filters) @@ -28,9 +28,9 @@ When using {{ecloud}}, there are some limitations you should be aware of: For limitations related to logging and monitoring, check the [Restrictions and limitations](../../monitor/stack-monitoring/ece-ech-stack-monitoring.md#restrictions-monitoring) section of the logging and monitoring page. -% Occasionally, we also publish information about [Known problems](#ec-known-problems) with our {{ecloud}} or the Elastic Stack. +% Occasionally, we also publish information about [Known problems](#ec-known-problems) with our {{ecloud}} or the {{stack}}. -To learn more about the features that are supported by {{ecloud}}, check [Elastic Cloud Subscriptions](https://www.elastic.co/cloud/elasticsearch-service/subscriptions?page=docs&placement=docs-body). 
+To learn more about the features that are supported by {{ecloud}}, check [{{ecloud}} Subscriptions](https://www.elastic.co/cloud/elasticsearch-service/subscriptions?page=docs&placement=docs-body).

## Security [ec-restrictions-security]

@@ -49,29 +49,29 @@ The following restrictions apply when using APIs in {{ecloud}}:

$$$ec-restrictions-apis-elasticsearch$$$

-Elasticsearch APIs
-: The Elasticsearch APIs do not natively enforce rate limiting. However, all requests to the Elasticsearch cluster are subject to Elasticsearch configuration settings, such as the [network HTTP setting](elasticsearch://reference/elasticsearch/configuration-reference/networking-settings.md#http-settings) `http:max_content_length` which restricts the maximum size of an HTTP request body. This setting has a default value of 100MB, hence restricting API request payloads to that size. This setting is not currently configurable in {{ecloud}}. For a list of which Elasticsearch settings are supported on Cloud, check [Add Elasticsearch user settings](edit-stack-settings.md). To learn about using the Elasticsearch APIs in {{ecloud}}, check [Access the Elasticsearch API console](cloud://reference/cloud-hosted/ec-api-console.md). And, for full details about the Elasticsearch APIs and their endpoints, check the [Elasticsearch API reference documentation](elasticsearch://reference/elasticsearch/rest-apis/index.md).
+{{es}} APIs
+: The {{es}} APIs do not natively enforce rate limiting. However, all requests to the {{es}} cluster are subject to {{es}} configuration settings, such as the [network HTTP setting](elasticsearch://reference/elasticsearch/configuration-reference/networking-settings.md#http-settings) `http.max_content_length`, which restricts the maximum size of an HTTP request body. This setting has a default value of 100MB, hence restricting API request payloads to that size. This setting is not currently configurable in {{ecloud}}. 
For a list of which {{es}} settings are supported on Cloud, check [Add {{es}} user settings](edit-stack-settings.md). To learn about using the {{es}} APIs in {{ecloud}}, check [Access the {{es}} API console](cloud://reference/cloud-hosted/ec-api-console.md). And, for full details about the {{es}} APIs and their endpoints, check the [{{es}} API reference documentation](elasticsearch://reference/elasticsearch/rest-apis/index.md). $$$ec-restrictions-apis-kibana$$$ -Kibana APIs -: There are no rate limits restricting your use of the Kibana APIs. However, Kibana features are affected by the [Kibana configuration settings](kibana://reference/configuration-reference.md), not all of which are supported in {{ecloud}}. For a list of what settings are currently supported, check [Add Kibana user settings](edit-stack-settings.md). For all details about using the Kibana APIs, check the [Kibana API reference documentation](https://www.elastic.co/docs/api/doc/kibana/). +{{kib}} APIs +: There are no rate limits restricting your use of the {{kib}} APIs. However, {{kib}} features are affected by the [{{kib}} configuration settings](kibana://reference/configuration-reference.md), not all of which are supported in {{ecloud}}. For a list of what settings are currently supported, check [Add {{kib}} user settings](edit-stack-settings.md). For all details about using the {{kib}} APIs, check the [{{kib}} API reference documentation](https://www.elastic.co/docs/api/doc/kibana/). ## Transport client [ec-restrictions-transport-client] -* The transport client is not considered thread safe in a cloud environment. We recommend that you use the Java REST client instead. This restriction relates to the fact that your deployments hosted on {{ecloud}} are behind proxies, which prevent the transport client from communicating directly with Elasticsearch clusters. +* The transport client is not considered thread safe in a cloud environment. We recommend that you use the Java REST client instead. 
This restriction relates to the fact that your deployments hosted on {{ecloud}} are behind proxies, which prevent the transport client from communicating directly with {{es}} clusters. * The transport client is not supported over [private link connections](../../security/aws-privatelink-traffic-filters.md). Use the Java REST client instead, or connect over the public internet. -% * The transport client does not work with Elasticsearch clusters at version 7.6 and later that are hosted on Cloud. Transport client continues to work with Elasticsearch clusters at version 7.5 and earlier. Note that the transport client was deprecated with version 7.0 and will be removed with 8.0. +% * The transport client does not work with {{es}} clusters at version 7.6 and later that are hosted on Cloud. Transport client continues to work with {{es}} clusters at version 7.5 and earlier. Note that the transport client was deprecated with version 7.0 and will be removed with 8.0. -## Elasticsearch and Kibana plugins [ec-restrictions-plugins] +## {{es}} and {{kib}} plugins [ec-restrictions-plugins] -* Kibana plugins are not supported. -* Elasticsearch plugins, are not enabled by default for security purposes. Please reach out to support if you would like to enable Elasticsearch plugins support on your account. -* Some Elasticsearch plugins do not apply to {{ecloud}}. For example, you won’t ever need to change discovery, as {{ecloud}} handles how nodes discover one another. -% * In Elasticsearch 5.0 and later, site plugins are no longer supported. This change does not affect the site plugins {{ecloud}} might provide out of the box, such as Kopf or Head, since these site plugins are serviced by our proxies and not Elasticsearch itself. -% * In Elasticsearch 5.0 and later, site plugins such as Kopf and Paramedic are no longer provided. 
We recommend that you use our [cluster performance metrics](../../monitor/stack-monitoring.md), [X-Pack monitoring features](../../monitor/stack-monitoring.md) and Kibana’s (6.3+) [Index Management UI](https://www.elastic.co/guide/en/elasticsearch/reference/current/index-mgmt.html) if you want more detailed information or perform index management actions.
+* {{kib}} plugins are not supported.
+* {{es}} plugins are not enabled by default for security purposes. Please reach out to support if you would like to enable {{es}} plugin support on your account.
+* Some {{es}} plugins do not apply to {{ecloud}}. For example, you won’t ever need to change discovery, as {{ecloud}} handles how nodes discover one another.
+% * In {{es}} 5.0 and later, site plugins are no longer supported. This change does not affect the site plugins {{ecloud}} might provide out of the box, such as Kopf or Head, since these site plugins are serviced by our proxies and not {{es}} itself.
+% * In {{es}} 5.0 and later, site plugins such as Kopf and Paramedic are no longer provided. We recommend that you use our [cluster performance metrics](../../monitor/stack-monitoring.md), [X-Pack monitoring features](../../monitor/stack-monitoring.md) and Kibana’s (6.3+) [Index Management UI](https://www.elastic.co/guide/en/elasticsearch/reference/current/index-mgmt.html) if you want more detailed information or perform index management actions. 
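A practical consequence of the `http.max_content_length` limit described in the APIs section above is that clients must keep each request body, such as an NDJSON `_bulk` payload, under 100MB. A minimal Python sketch of client-side batching; the helper name and the artificially small demo limit are illustrative:

```python
# Sketch: split an NDJSON _bulk body into chunks that stay under the
# 100MB http.max_content_length limit noted in the APIs section above.
# The helper name and the tiny demo limit are illustrative.

MAX_BYTES = 100 * 1024 * 1024

def chunk_bulk_body(lines, max_bytes=MAX_BYTES):
    """Yield lists of NDJSON lines whose encoded size stays under max_bytes."""
    chunk, size = [], 0
    for line in lines:
        line_size = len(line.encode("utf-8")) + 1  # +1 for the trailing newline
        if chunk and size + line_size > max_bytes:
            yield chunk
            chunk, size = [], 0
        chunk.append(line)
        size += line_size
    if chunk:
        yield chunk

# Tiny demonstration with an artificially small limit:
lines = ['{"index":{}}', '{"field":"a"}'] * 3
chunks = list(chunk_bulk_body(lines, max_bytes=40))
assert sum(len(c) for c in chunks) == len(lines)
assert all(sum(len(l) + 1 for l in c) <= 40 for c in chunks)
```

Each resulting chunk can then be sent as its own `_bulk` request, keeping every payload within the fixed limit.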
## Watcher [ec-restrictions-watcher]

@@ -85,21 +85,21 @@ Watcher comes preconfigured with a directly usable email account provided by Ela

Alternatively, a custom mail server can be configured as described in [Configuring a custom mail server](../../../explore-analyze/alerts-cases/watcher/enable-watcher.md#watcher-custom-mail-server)

-## Private Link and SSO to Kibana URLs [ec-restrictions-traffic-filters-kibana-sso]
+## Private Link and SSO to {{kib}} URLs [ec-restrictions-traffic-filters-kibana-sso]

-Currently you can’t use SSO to login directly from {{ecloud}} into Kibana endpoints that are protected by Private Link traffic filters. However, you can still SSO into Private Link protected Kibana endpoints individually using the [SAML](../../users-roles/cluster-or-deployment-auth/saml.md) or [OIDC](../../users-roles/cluster-or-deployment-auth/openid-connect.md) protocol from your own identity provider, just not through the {{ecloud}} console. Stack level authentication using the {{es}} username and password should also work with `{{kibana-id}}.{vpce|privatelink|psc}.domain` URLs.
+Currently, you can’t use SSO to log in directly from {{ecloud}} into {{kib}} endpoints that are protected by Private Link traffic filters. However, you can still use SSO with Private Link protected {{kib}} endpoints individually, using the [SAML](../../users-roles/cluster-or-deployment-auth/saml.md) or [OIDC](../../users-roles/cluster-or-deployment-auth/openid-connect.md) protocol from your own identity provider, just not through the {{ecloud}} console. Stack-level authentication using the {{es}} username and password should also work with `{{kibana-id}}.{vpce|privatelink|psc}.domain` URLs.

## PDF report generation using Alerts or Watcher webhooks [ec-restrictions-traffic-filters-watcher]

-* PDF report automatic generation via Alerts is not possible on Elastic Cloud. 
-* PDF report generation isn’t possible for deployments running on Elastic stack version 8.7.0 or before that are protected by traffic filters. This limitation doesn’t apply to public webhooks such as Slack, PagerDuty, and email. For deployments running on Elastic stack version 8.7.1 and beyond, [PDF report automatic generation via Watcher webhook](../../../explore-analyze/report-and-share/automating-report-generation.md#use-watcher) is possible using the `xpack.notification.webhook.additional_token_enabled` configuration setting to bypass traffic filters.
+* PDF report automatic generation via Alerts is not possible on {{ecloud}}.
+* PDF report generation isn’t possible for deployments running on {{stack}} version 8.7.0 or earlier that are protected by traffic filters. This limitation doesn’t apply to public webhooks such as Slack, PagerDuty, and email. For deployments running on {{stack}} version 8.7.1 and later, [PDF report automatic generation via Watcher webhook](../../../explore-analyze/report-and-share/automating-report-generation.md#use-watcher) is possible using the `xpack.notification.webhook.additional_token_enabled` configuration setting to bypass traffic filters.

-## Kibana [ec-restrictions-kibana]
+## {{kib}} [ec-restrictions-kibana]

* The maximum size of a single {{kib}} instance is 8GB. This means that {{kib}} instances can be scaled up to 8GB before they are scaled out. For example, when creating a deployment with a {{kib}} instance of size 16GB, then 2x8GB instances are created. If you face performance issues with {{kib}} PNG or PDF reports, the recommendations are to create multiple, smaller dashboards to export the data, or to use a third-party browser extension for exporting the dashboard in the format you need. 
-* Running an external Kibana in parallel to {{ecloud}}’s Kibana instances may cause errors, for example [`Unable to decrypt attribute`](../../../explore-analyze/alerts-cases/alerts/alerting-common-issues.md#rule-cannot-decrypt-api-key), due to a mismatched [`xpack.encryptedSavedObjects.encryptionKey`](kibana://reference/configuration-reference/security-settings.md#security-encrypted-saved-objects-settings) as {{ecloud}} does not [allow users to set](edit-stack-settings.md) nor expose this value. While workarounds are possible, this is not officially supported nor generally recommended.
+* Running an external {{kib}} in parallel to {{ecloud}}’s {{kib}} instances may cause errors, for example [`Unable to decrypt attribute`](../../../explore-analyze/alerts-cases/alerts/alerting-common-issues.md#rule-cannot-decrypt-api-key), due to a mismatched [`xpack.encryptedSavedObjects.encryptionKey`](kibana://reference/configuration-reference/security-settings.md#security-encrypted-saved-objects-settings), as {{ecloud}} does not [allow users to set](edit-stack-settings.md) or expose this value. While workarounds are possible, this is not officially supported or generally recommended.

% ## APM Agent central configuration with PrivateLink or traffic filters [ec-restrictions-apm-traffic-filters]

@@ -117,11 +117,11 @@ Currently you can’t use SSO to login directly from {{ecloud}} into Kibana endp

## Restoring a snapshot across deployments [ec-snapshot-restore-enterprise-search-kibana-across-deployments]

-Kibana does not currently support restoring a snapshot of their indices across Elastic Cloud deployments.
+{{kib}} does not currently support restoring a snapshot of its indices across {{ecloud}} deployments.

-* [Kibana uses encryption keys](/deploy-manage/security/secure-your-cluster-deployment.md) in various places, ranging from encrypting data in some areas of reporting, alerts, actions, connector tokens, ingest outputs used in Fleet and Synthetics monitoring to user sessions. 
-* Currently, there is not a way to retrieve the values of Kibana encryption keys, or set them in the target deployment before restoring a snapshot. As a result, once a snapshot is restored, Kibana will not be able to decrypt the data required for some features to function properly in the target deployment. -* If you have already restored a snapshot across deployments and now have broken Kibana saved objects in the target deployment, you will have to recreate all broken configurations and objects, or create a new setup in the target deployment instead of using snapshot restore. +* [{{kib}} uses encryption keys](/deploy-manage/security/secure-your-cluster-deployment.md) in various places, ranging from encrypting data in some areas of reporting, alerts, actions, connector tokens, ingest outputs used in Fleet and Synthetics monitoring to user sessions. +* Currently, there is not a way to retrieve the values of {{kib}} encryption keys, or set them in the target deployment before restoring a snapshot. As a result, once a snapshot is restored, {{kib}} will not be able to decrypt the data required for some features to function properly in the target deployment. +* If you have already restored a snapshot across deployments and now have broken {{kib}} saved objects in the target deployment, you will have to recreate all broken configurations and objects, or create a new setup in the target deployment instead of using snapshot restore. A snapshot taken using the default `found-snapshots` repository can only be restored to deployments in the same region. If you need to restore snapshots across regions, create the destination deployment, connect to the [custom repository](../../tools/snapshot-and-restore/elastic-cloud-hosted.md), and then [restore from a snapshot](../../tools/snapshot-and-restore/restore-snapshot.md). 
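The same-region constraint on the default `found-snapshots` repository lends itself to a simple pre-flight check before attempting a cross-deployment restore. A minimal Python sketch; the deployment dicts, region names, and helper are illustrative:

```python
# Sketch: pre-flight check before restoring from the default
# `found-snapshots` repository, which only works within one region.
# The deployment dicts and region names are illustrative.

def can_restore(source: dict, target: dict, repository: str) -> bool:
    """Default found-snapshots restores require matching regions;
    a custom repository lifts that constraint."""
    if repository == "found-snapshots":
        return source["region"] == target["region"]
    return True

src = {"id": "src-deployment", "region": "us-east-1"}
dst_same = {"id": "dst-deployment", "region": "us-east-1"}
dst_other = {"id": "dst-deployment", "region": "eu-west-1"}

assert can_restore(src, dst_same, "found-snapshots")
assert not can_restore(src, dst_other, "found-snapshots")
assert can_restore(src, dst_other, "my-custom-repo")
```

When the check fails for the default repository, the cross-region path described above applies: create the destination deployment, connect a custom repository, and restore from that.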
@@ -139,16 +139,16 @@ To make a seamless migration, after restoring from a snapshot there are some add

## Regions and Availability Zones [ec-regions-and-availability-zone]

-* The AWS `us-west-1` region is limited to two availability zones for ES data nodes and one (tiebreaker only) virtual zone (as depicted by the `-z` in the AZ (`us-west-1z`). Deployment creation with three availability zones for Elasticsearch data nodes for hot, warm, and cold tiers is not possible. This includes scaling an existing deployment with one or two AZs to three availability zones. The virtual zone `us-west-1z` can only hold an Elasticsearch tiebreaker node (no data nodes). The workaround is to use a different AWS US region that allows three availability zones, or to scale existing nodes up within the two availability zones.
-* The AWS `eu-central-2` region is limited to two availability zones for CPU Optimized (ARM) Hardware profile ES data node and warm/cold tier. Deployment creation with three availability zones for Elasticsearch data nodes for hot (for CPU Optimized (ARM) profile), warm and cold tiers is not possible. This includes scaling an existing deployment with one or two AZs to three availability zones. The workaround is to use a different AWS region that allows three availability zones, or to scale existing nodes up within the two availability zones.
+* The AWS `us-west-1` region is limited to two availability zones for {{es}} data nodes and one (tiebreaker only) virtual zone (as depicted by the `-z` in the AZ `us-west-1z`). Deployment creation with three availability zones for {{es}} data nodes for hot, warm, and cold tiers is not possible. This includes scaling an existing deployment with one or two AZs to three availability zones. The virtual zone `us-west-1z` can only hold an {{es}} tiebreaker node (no data nodes). The workaround is to use a different AWS US region that allows three availability zones, or to scale existing nodes up within the two availability zones. 
+* The AWS `eu-central-2` region is limited to two availability zones for the CPU Optimized (ARM) hardware profile {{es}} data nodes and the warm/cold tiers. Deployment creation with three availability zones for {{es}} data nodes for hot (for the CPU Optimized (ARM) profile), warm, and cold tiers is not possible. This includes scaling an existing deployment with one or two AZs to three availability zones. The workaround is to use a different AWS region that allows three availability zones, or to scale existing nodes up within the two availability zones.

% ## Known problems [ec-known-problems]

% * There is a known problem affecting clusters with versions 7.7.0 and 7.7.1 due to [a bug in Elasticsearch](https://github.com/elastic/elasticsearch/issues/56739). Although rare, this bug can prevent you from running plans. If this occurs we recommend that you retry the plan, and if that fails please contact support to get your plan through. Because of this bug we recommend you to upgrade to version 7.8 and higher, where the problem has already been addressed.

-% * A known issue can prevent direct rolling upgrades from Elasticsearch version 5.6.10 to version 6.3.0. As a workaround, we have removed version 6.3.0 from the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body) for new cluster deployments and for upgrading existing ones. If you are affected by this issue, check [Rolling upgrades from 5.6.x to 6.3.0 fails with "java.lang.IllegalStateException: commit doesn’t contain history uuid"](https://elastic.my.salesforce.com/articles/Support_Article/Rolling-upgrades-to-6-3-0-from-5-x-fails-with-java-lang-IllegalStateException-commit-doesn-t-contain-history-uuid?popup=false&id=kA0610000005JFG) in our Elastic Support Portal. If these steps do not work or you do not have access to the Support Portal, you can contact `support@elastic.co`.
+% * A known issue can prevent direct rolling upgrades from {{es}} version 5.6.10 to version 6.3.0. 
As a workaround, we have removed version 6.3.0 from the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body) for new cluster deployments and for upgrading existing ones. If you are affected by this issue, check [Rolling upgrades from 5.6.x to 6.3.0 fails with "java.lang.IllegalStateException: commit doesn’t contain history uuid"](https://elastic.my.salesforce.com/articles/Support_Article/Rolling-upgrades-to-6-3-0-from-5-x-fails-with-java-lang-IllegalStateException-commit-doesn-t-contain-history-uuid?popup=false&id=kA0610000005JFG) in our Elastic Support Portal. If these steps do not work or you do not have access to the Support Portal, you can contact `support@elastic.co`.

-## Repository Analysis API is unavailable in Elastic Cloud [ec-repository-analyis-unavailable]

-* The Elasticsearch [Repository analysis API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-snapshot-repository-analyze) is not available in {{ecloud}} due to deployments defaulting to having [operator privileges](../../users-roles/cluster-or-deployment-auth/operator-privileges.md) enabled that prevent non-operator privileged users from using it along with a number of other APIs.
+## Repository Analysis API is unavailable in {{ecloud}} [ec-repository-analyis-unavailable]

+* The {{es}} [Repository analysis API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-snapshot-repository-analyze) is not available in {{ecloud}} because deployments default to having [operator privileges](../../users-roles/cluster-or-deployment-auth/operator-privileges.md) enabled, which prevent non-operator users from using it, along with a number of other APIs. 
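A client calling an operator-only API such as repository analysis on {{ecloud}} receives a `403` security error rather than a result, so it can detect that case instead of retrying. A minimal Python sketch; the helper and the response body shown are illustrative, not an exact {{es}} error message:

```python
# Sketch: recognize the rejection returned when a non-operator user calls an
# operator-only API (such as _snapshot/<repo>/_analyze) on {{ecloud}}.
# The response body below is illustrative, not captured from a live cluster.

def is_operator_only_rejection(status: int, body: dict) -> bool:
    """True when a 403 response mentions operator privileges."""
    if status != 403:
        return False
    reason = body.get("error", {}).get("reason", "")
    return "operator privileges" in reason

resp_body = {
    "error": {
        "type": "security_exception",
        # Illustrative reason text:
        "reason": "operator privileges are required for this API",
    },
    "status": 403,
}

assert is_operator_only_rejection(403, resp_body)
assert not is_operator_only_rejection(200, {"error": {}})
```

Treating this response as a permanent condition of the deployment, rather than a transient failure, avoids pointless retries.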
diff --git a/deploy-manage/deploy/elastic-cloud/serverless.md
index 1b4fdda0de..127bb227da 100644
--- a/deploy-manage/deploy/elastic-cloud/serverless.md
+++ b/deploy-manage/deploy/elastic-cloud/serverless.md
@@ -31,7 +31,7 @@ Elastic provides three serverless solutions available on {{ecloud}}. Follow thes

 Afterwards, you can:

-* Learn about the [cloud organization](../../cloud-organization.md) that is the umbrella for all of your Elastic Cloud resources, users, and account settings.
+* Learn about the [cloud organization](../../cloud-organization.md) that is the umbrella for all of your {{ecloud}} resources, users, and account settings.
* Learn about how {{es-serverless}} is [billed](../../cloud-organization/billing/serverless-project-billing-dimensions.md).
* Learn how to [create an API key](../../api-keys/serverless-project-api-keys.md). This key provides access to the API that enables you to manage your deployments.
* Learn how to manage [users and roles](../../users-roles/cloud-organization.md) in your {{es-serverless}} deployment.

@@ -45,7 +45,7 @@ Afterwards, you can:

**Optimized data storage:** Your data is stored in cost-efficient, general storage. A cache layer is available on top of the general storage for recent and frequently queried data that provides faster search speed. The size of the cache layer and the volume of data it holds depend on [settings](../../../deploy-manage/deploy/elastic-cloud/project-settings.md) that you can configure for each project.

-**Dedicated experiences:** All serverless solutions are built on the Elastic Search Platform and include the core capabilities of the Elastic Stack. They also each offer a distinct experience and specific capabilities that help you focus on your data, goals, and use cases.
+**Dedicated experiences:** All serverless solutions are built on the Elastic Search Platform and include the core capabilities of the {{stack}}. 
They also each offer a distinct experience and specific capabilities that help you focus on your data, goals, and use cases. **Pay per usage:** Each serverless project type includes product-specific and usage-based pricing. @@ -67,7 +67,7 @@ Migration paths between hosted deployments and serverless projects are currently **How can I move data to or from serverless projects?** -We are working on data migration tools! In the interim, [use Logstash](https://www.elastic.co/guide/en/serverless/current/elasticsearch-ingest-data-through-logstash.html) with Elasticsearch input and output plugins to move data to and from serverless projects. +We are working on data migration tools! In the interim, [use Logstash](https://www.elastic.co/guide/en/serverless/current/elasticsearch-ingest-data-through-logstash.html) with {{es}} input and output plugins to move data to and from serverless projects. **How does serverless ensure compatibility between software versions?** diff --git a/deploy-manage/deploy/elastic-cloud/switch-from-apm-to-integrations-server-payload.md b/deploy-manage/deploy/elastic-cloud/switch-from-apm-to-integrations-server-payload.md index 7d8ceb694d..1dfeed3d9a 100644 --- a/deploy-manage/deploy/elastic-cloud/switch-from-apm-to-integrations-server-payload.md +++ b/deploy-manage/deploy/elastic-cloud/switch-from-apm-to-integrations-server-payload.md @@ -13,7 +13,7 @@ This example shows how to use the {{ecloud}} RESTful API to switch from using [A ### Requirements [ec_requirements_3] -Given a deployment that is using an APM & Fleet Server with Elastic Stack version 8.0 or later, it is possible to start using Integrations Server instead by updating the deployment with an Integrations Server payload. Switching from APM & Fleet Server to Integrations Server in this way ensures that the endpoints and credentials currently used by APM Server and Fleet Server remain the same after the switch. 
+Given a deployment that is using an APM & Fleet Server with {{stack}} version 8.0 or later, it is possible to start using Integrations Server instead by updating the deployment with an Integrations Server payload. Switching from APM & Fleet Server to Integrations Server in this way ensures that the endpoints and credentials currently used by APM Server and Fleet Server remain the same after the switch. In order to start using the Integrations Server payload, you first need to enable the APM integration for Elastic Agent by following the steps in [Switch to the Elastic APM integration](/solutions/observability/apps/switch-an-elastic-cloud-cluster-to-apm-integration.md). @@ -375,19 +375,19 @@ curl -XPUT \ ## Manage your APM & Fleet Server [ec-manage-apm-and-fleet] ::::{note} -Beginning with Elastic Stack version 8.0, [Integrations Server](manage-integrations-server.md) is replacing APM & Fleet Server. New deployments with version 8.0 will use Integrations Server automatically. Existing deployments using APM & Fleet Server will continue to use APM & Fleet Server after upgrading to version 8.0. +Beginning with {{stack}} version 8.0, [Integrations Server](manage-integrations-server.md) is replacing APM & Fleet Server. New deployments with version 8.0 will use Integrations Server automatically. Existing deployments using APM & Fleet Server will continue to use APM & Fleet Server after upgrading to version 8.0. :::: -You have the option to add a combined [Application Performance Monitoring (APM) Server](/solutions/observability/apps/application-performance-monitoring-apm.md) and [Fleet Server](/reference/fleet/index.md) to your deployment. APM allows you to monitor software services and applications in real time, turning that data into documents stored in the Elasticsearch cluster. Fleet allows you to centrally manage Elastic Agents on many hosts. 
+You have the option to add a combined [Application Performance Monitoring (APM) Server](/solutions/observability/apps/application-performance-monitoring-apm.md) and [Fleet Server](/reference/fleet/index.md) to your deployment. APM allows you to monitor software services and applications in real time, turning that data into documents stored in the {{es}} cluster. Fleet allows you to centrally manage Elastic Agents on many hosts. -As part of provisioning, the APM Server and Fleet Server are already configured to work with Elasticsearch and Kibana. At the end of provisioning, you are shown the secret token to configure communication between the APM Server and the backend [APM Agents](https://www.elastic.co/guide/en/apm/agent/index.html). The APM Agents get deployed within your services and applications. +As part of provisioning, the APM Server and Fleet Server are already configured to work with {{es}} and {{kib}}. At the end of provisioning, you are shown the secret token to configure communication between the APM Server and the backend [APM Agents](https://www.elastic.co/guide/en/apm/agent/index.html). The APM Agents get deployed within your services and applications. From the deployment **APM & Fleet** page you can also: * Get the URL to complete the APM agent configuration. -* Use the `elastic` credentials to go to the APM area of Kibana. Step by step instructions to configure a variety of agents are available right in Kibana. After that, you can use the pre-built, dedicated dashboards and the APM tab to visualize the data that is sent back from the APM Agents. -* Use the `elastic` credentials to go to the Fleet area of Kibana. Step by step instructions to download and install Elastic Agent on your hosts are available right in Kibana. After that, you can manage enrolled Elastic Agents on the **Agents** tab, and the data shipped back from those Elastic Agents on the **Data streams** tab. +* Use the `elastic` credentials to go to the APM area of {{kib}}. 
Step-by-step instructions to configure a variety of agents are available right in {{kib}}. After that, you can use the pre-built, dedicated dashboards and the APM tab to visualize the data that is sent back from the APM Agents. +* Use the `elastic` credentials to go to the Fleet area of {{kib}}. Step-by-step instructions to download and install Elastic Agent on your hosts are available right in {{kib}}. After that, you can manage enrolled Elastic Agents on the **Agents** tab, and the data shipped back from those Elastic Agents on the **Data streams** tab. * Reset the APM secret token. ::::{important} @@ -396,7 +396,7 @@ From the deployment **APM & Fleet** page you can also: * Access the APM & Fleet logs and metrics. * Stop and restart your APM & Fleet Server. -* Upgrade your APM & Fleet Server version if it is out of sync with your Elasticsearch cluster. +* Upgrade your APM & Fleet Server version if it is out of sync with your {{es}} cluster. * Fully remove the APM & Fleet Server, delete it from the disk, and stop the charges. diff --git a/deploy-manage/deploy/elastic-cloud/tools-apis.md b/deploy-manage/deploy/elastic-cloud/tools-apis.md index 666d6328c3..d1d8e571d9 100644 --- a/deploy-manage/deploy/elastic-cloud/tools-apis.md +++ b/deploy-manage/deploy/elastic-cloud/tools-apis.md @@ -81,7 +81,7 @@ serverless: unavailable ::: -## Elastic Cloud Control: command-line interface for {{ecloud}} +## {{ecloud}} Control: command-line interface for {{ecloud}} ```{applies_to} deployment: ess: ga diff --git a/deploy-manage/deploy/self-managed.md b/deploy-manage/deploy/self-managed.md index f88b707ae1..6ee9bcf7de 100644 --- a/deploy-manage/deploy/self-managed.md +++ b/deploy-manage/deploy/self-managed.md @@ -48,7 +48,7 @@ Learn how to install and configure {{es}}. {{es}} is the distributed search and * [](/deploy-manage/deploy/self-managed/plugins.md): Learn about how to extend {{es}} functionality with plugins.
:::{note} - For a complete list of settings that you can apply to your {{es}} cluster, refer to the [Elasticsearch configuration reference](elasticsearch://reference/elasticsearch/configuration-reference/index.md). + For a complete list of settings that you can apply to your {{es}} cluster, refer to the [{{es}} configuration reference](elasticsearch://reference/elasticsearch/configuration-reference/index.md). ::: ### Deploying Kibana diff --git a/deploy-manage/deploy/self-managed/_snippets/ca-fingerprint.md b/deploy-manage/deploy/self-managed/_snippets/ca-fingerprint.md index 6b5127f5f2..413d05829d 100644 --- a/deploy-manage/deploy/self-managed/_snippets/ca-fingerprint.md +++ b/deploy-manage/deploy/self-managed/_snippets/ca-fingerprint.md @@ -9,6 +9,6 @@ openssl x509 -fingerprint -sha256 -in config/certs/http_ca.crt The command returns the security certificate, including the fingerprint. The `issuer` should be `{{es}} security auto-configuration HTTP CA`. ```sh issuer= /CN=Elasticsearch security auto-configuration HTTP CA SHA256 Fingerprint= ``` \ No newline at end of file diff --git a/deploy-manage/deploy/self-managed/_snippets/enroll-systemd.md b/deploy-manage/deploy/self-managed/_snippets/enroll-systemd.md index d2437a4268..e0cbb45659 100644 --- a/deploy-manage/deploy/self-managed/_snippets/enroll-systemd.md +++ b/deploy-manage/deploy/self-managed/_snippets/enroll-systemd.md @@ -13,7 +13,7 @@ For example: ```sh Kibana has not been configured. Go to http://:5601/?code= to get started.
``` diff --git a/deploy-manage/deploy/self-managed/configure-elasticsearch.md b/deploy-manage/deploy/self-managed/configure-elasticsearch.md index b067865c69..697a653444 100644 --- a/deploy-manage/deploy/self-managed/configure-elasticsearch.md +++ b/deploy-manage/deploy/self-managed/configure-elasticsearch.md @@ -14,7 +14,7 @@ The configuration files should contain settings which are node-specific (such as ## Available settings -For a complete list of settings that you can apply to your {{es}} cluster, refer to the [Elasticsearch configuration reference](elasticsearch://reference/elasticsearch/configuration-reference/index.md). +For a complete list of settings that you can apply to your {{es}} cluster, refer to the [{{es}} configuration reference](elasticsearch://reference/elasticsearch/configuration-reference/index.md). For a list of settings that must be configured before using your cluster in production, refer to [](/deploy-manage/deploy/self-managed/important-settings-configuration.md). diff --git a/deploy-manage/deploy/self-managed/install-elasticsearch-docker-compose.md b/deploy-manage/deploy/self-managed/install-elasticsearch-docker-compose.md index f443916a38..89a0a61427 100644 --- a/deploy-manage/deploy/self-managed/install-elasticsearch-docker-compose.md +++ b/deploy-manage/deploy/self-managed/install-elasticsearch-docker-compose.md @@ -54,7 +54,7 @@ Use Docker Compose to start a three-node {{es}} cluster with {{kib}}. Docker Com ```txt ... # Port to expose Elasticsearch HTTP API to the host #ES_PORT=9200 ES_PORT=127.0.0.1:9200 ...
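To make the node-specific configuration guidance above concrete, here is a minimal `elasticsearch.yml` sketch; the node name, paths, and bind address are illustrative placeholders, not recommendations:

```yaml
# Node-specific settings belong in elasticsearch.yml; dynamic cluster-wide
# settings are usually better managed through the cluster settings API.
node.name: node-1                    # illustrative node name
path.data: /var/data/elasticsearch   # example data path
path.logs: /var/log/elasticsearch    # example log path
network.host: 192.168.1.10           # example bind address
```

Check the configuration reference linked above for the complete list of supported settings before adopting any of these.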
diff --git a/deploy-manage/deploy/self-managed/install-elasticsearch-with-rpm.md b/deploy-manage/deploy/self-managed/install-elasticsearch-with-rpm.md index 98f71a26df..598e25a116 100644 --- a/deploy-manage/deploy/self-managed/install-elasticsearch-with-rpm.md +++ b/deploy-manage/deploy/self-managed/install-elasticsearch-with-rpm.md @@ -57,7 +57,7 @@ Create a file called `elasticsearch.repo` in the `/etc/yum.repos.d/` directory f ```ini [elasticsearch] name=Elasticsearch repository for 9.x packages baseurl=https://artifacts.elastic.co/packages/9.x/yum gpgcheck=1 gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch diff --git a/deploy-manage/deploy/self-managed/install-kibana-from-archive-on-linux-macos.md b/deploy-manage/deploy/self-managed/install-kibana-from-archive-on-linux-macos.md index 79667e9122..d4e9ae5100 100644 --- a/deploy-manage/deploy/self-managed/install-kibana-from-archive-on-linux-macos.md +++ b/deploy-manage/deploy/self-managed/install-kibana-from-archive-on-linux-macos.md @@ -13,7 +13,7 @@ applies_to: # Install {{kib}} from archive on Linux or macOS [targz] -{{kib}} is provided for Linux and Darwin as a `.tar.gz` package. These packages are the easiest formats to use when trying out Kibana. +{{kib}} is provided for Linux and Darwin as a `.tar.gz` package. These packages are the easiest formats to use when trying out {{kib}}. :::{include} _snippets/trial.md ::: @@ -107,7 +107,7 @@ By default, {{kib}} runs in the foreground, prints its logs to the standard outp The `.tar.gz` packages are entirely self-contained. All files and directories are, by default, contained within `$KIBANA_HOME` — the directory created when unpacking the archive. -This is very convenient because you don’t have to create any directories to start using Kibana, and uninstalling {{kib}} is as easy as removing the `$KIBANA_HOME` directory.
However, it is advisable to change the default locations of the config and data directories so that you do not delete important data later on. +This is very convenient because you don’t have to create any directories to start using {{kib}}, and uninstalling {{kib}} is as easy as removing the `$KIBANA_HOME` directory. However, it is advisable to change the default locations of the config and data directories so that you do not delete important data later on. | Type | Description | Default Location | Setting | | --- | --- | --- | --- | diff --git a/deploy-manage/deploy/self-managed/install-kibana-on-windows.md b/deploy-manage/deploy/self-managed/install-kibana-on-windows.md index 31a5aed2f3..e1d71480a0 100644 --- a/deploy-manage/deploy/self-managed/install-kibana-on-windows.md +++ b/deploy-manage/deploy/self-managed/install-kibana-on-windows.md @@ -64,7 +64,7 @@ By default, {{kib}} runs in the foreground, prints its logs to `STDOUT`, and can The `.zip` package is entirely self-contained. All files and directories are, by default, contained within `$KIBANA_HOME` — the directory created when unpacking the archive. -This is very convenient because you don’t have to create any directories to start using Kibana, and uninstalling {{kib}} is as easy as removing the `$KIBANA_HOME` directory. However, it is advisable to change the default locations of the config and data directories so that you do not delete important data later on. +This is very convenient because you don’t have to create any directories to start using {{kib}}, and uninstalling {{kib}} is as easy as removing the `$KIBANA_HOME` directory. However, it is advisable to change the default locations of the config and data directories so that you do not delete important data later on. 
| Type | Description | Default Location | Setting | | --- | --- | --- | --- | diff --git a/deploy-manage/deploy/self-managed/networkaddress-cache-ttl.md b/deploy-manage/deploy/self-managed/networkaddress-cache-ttl.md index 37037724a9..1ac03d109d 100644 --- a/deploy-manage/deploy/self-managed/networkaddress-cache-ttl.md +++ b/deploy-manage/deploy/self-managed/networkaddress-cache-ttl.md @@ -8,5 +8,5 @@ applies_to: # DNS cache settings [networkaddress-cache-ttl] -{{es}} runs with a security manager in place. With a security manager in place, the JVM defaults to caching positive hostname resolutions indefinitely and defaults to caching negative hostname resolutions for ten seconds. {{es}} overrides this behavior with default values to cache positive lookups for sixty seconds, and to cache negative lookups for ten seconds. These values should be suitable for most environments, including environments where DNS resolutions vary with time. If not, you can edit the values `es.networkaddress.cache.ttl` and `es.networkaddress.cache.negative.ttl` in the [JVM options](elasticsearch://reference/elasticsearch/jvm-settings.md#set-jvm-options). Note that the values [`networkaddress.cache.ttl=`](https://docs.oracle.com/javase/8/docs/technotes/guides/net/properties.md) and [`networkaddress.cache.negative.ttl=`](https://docs.oracle.com/javase/8/docs/technotes/guides/net/properties.md) in the [Java security policy](https://docs.oracle.com/javase/8/docs/technotes/guides/security/PolicyFiles.md) are ignored by Elasticsearch unless you remove the settings for `es.networkaddress.cache.ttl` and `es.networkaddress.cache.negative.ttl`. +{{es}} runs with a security manager in place. With a security manager in place, the JVM defaults to caching positive hostname resolutions indefinitely and defaults to caching negative hostname resolutions for ten seconds. 
{{es}} overrides this behavior with default values to cache positive lookups for sixty seconds, and to cache negative lookups for ten seconds. These values should be suitable for most environments, including environments where DNS resolutions vary with time. If not, you can edit the values `es.networkaddress.cache.ttl` and `es.networkaddress.cache.negative.ttl` in the [JVM options](elasticsearch://reference/elasticsearch/jvm-settings.md#set-jvm-options). Note that the values [`networkaddress.cache.ttl=`](https://docs.oracle.com/javase/8/docs/technotes/guides/net/properties.md) and [`networkaddress.cache.negative.ttl=`](https://docs.oracle.com/javase/8/docs/technotes/guides/net/properties.md) in the [Java security policy](https://docs.oracle.com/javase/8/docs/technotes/guides/security/PolicyFiles.md) are ignored by {{es}} unless you remove the settings for `es.networkaddress.cache.ttl` and `es.networkaddress.cache.negative.ttl`. diff --git a/deploy-manage/distributed-architecture.md b/deploy-manage/distributed-architecture.md index 112109bdc2..f9cb22a41c 100644 --- a/deploy-manage/distributed-architecture.md +++ b/deploy-manage/distributed-architecture.md @@ -23,4 +23,4 @@ The topics in this section provide information about the architecture of {{es}} * [Shard allocation awareness](distributed-architecture/shard-allocation-relocation-recovery/shard-allocation-awareness.md): Learn how to use custom node attributes to distribute shards across different racks or availability zones. * [Discovery and cluster formation](distributed-architecture/discovery-cluster-formation.md): Learn about the cluster formation process including voting, adding nodes and publishing the cluster state. * [Shard request cache](/deploy-manage/distributed-architecture/shard-request-cache.md): Learn how {{es}} caches search requests to improve performance.
-* [Kibana task management](distributed-architecture/kibana-tasks-management.md): Learn how {{kib}} runs background tasks and distribute work across multiple {{kib}} instances to be persistent and scale with your deployment. \ No newline at end of file +* [{{kib}} task management](distributed-architecture/kibana-tasks-management.md): Learn how {{kib}} runs background tasks and distributes work across multiple {{kib}} instances to persist and scale with your deployment. \ No newline at end of file diff --git a/deploy-manage/distributed-architecture/clusters-nodes-shards.md b/deploy-manage/distributed-architecture/clusters-nodes-shards.md index e4d0f5b8d9..6bd772f5a4 100644 --- a/deploy-manage/distributed-architecture/clusters-nodes-shards.md +++ b/deploy-manage/distributed-architecture/clusters-nodes-shards.md @@ -12,8 +12,8 @@ applies_to: Nodes and shards are what make {{es}} distributed and scalable. These concepts aren’t essential if you’re just getting started. How you [deploy {{es}}](../../get-started/deployment-options.md) in production determines what you need to know: * **Self-managed {{es}}**: You are responsible for setting up and managing nodes, clusters, shards, and replicas. This includes managing the underlying infrastructure, scaling, and ensuring high availability through failover and backup strategies. -* **Elastic Cloud**: Elastic can autoscale resources in response to workload changes. Choose from different deployment types to apply sensible defaults for your use case.
A basic understanding of nodes, shards, and replicas is still important. +* **{{serverless-full}}**: You don’t need to worry about nodes, shards, or replicas. These resources are 100% automated on the serverless platform, which is designed to scale with your workload. :::: diff --git a/deploy-manage/distributed-architecture/discovery-cluster-formation/cluster-fault-detection.md b/deploy-manage/distributed-architecture/discovery-cluster-formation/cluster-fault-detection.md index 349644bb19..bcae661677 100644 --- a/deploy-manage/distributed-architecture/discovery-cluster-formation/cluster-fault-detection.md +++ b/deploy-manage/distributed-architecture/discovery-cluster-formation/cluster-fault-detection.md @@ -9,7 +9,7 @@ applies_to: The elected master periodically checks each of the nodes in the cluster to ensure that they are still connected and healthy. Each node in the cluster also periodically checks the health of the elected master. These checks are known respectively as *follower checks* and *leader checks*. -Elasticsearch allows these checks to occasionally fail or timeout without taking any action. It considers a node to be faulty only after a number of consecutive checks have failed. You can control fault detection behavior with [`cluster.fault_detection.*` settings](elasticsearch://reference/elasticsearch/configuration-reference/discovery-cluster-formation-settings.md). +{{es}} allows these checks to occasionally fail or time out without taking any action. It considers a node to be faulty only after a number of consecutive checks have failed. You can control fault detection behavior with [`cluster.fault_detection.*` settings](elasticsearch://reference/elasticsearch/configuration-reference/discovery-cluster-formation-settings.md). If the elected master detects that a node has disconnected, however, this situation is treated as an immediate failure. The master bypasses the timeout and retry setting values and attempts to remove the node from the cluster.
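As an illustration of the `cluster.fault_detection.*` settings mentioned above, the checks can be tuned in `elasticsearch.yml` along these lines. The values shown are the documented defaults; verify them against the linked settings reference before changing anything:

```yaml
# Follower checks: how the elected master probes the other nodes.
cluster.fault_detection.follower_check.interval: 1s
cluster.fault_detection.follower_check.timeout: 10s
cluster.fault_detection.follower_check.retry_count: 3
# Leader checks: how other nodes probe the elected master.
cluster.fault_detection.leader_check.interval: 1s
cluster.fault_detection.leader_check.timeout: 10s
cluster.fault_detection.leader_check.retry_count: 3
```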
Similarly, if a node detects that the elected master has disconnected, this situation is treated as an immediate failure. The node bypasses the timeout and retry settings and restarts its discovery phase to try and find or elect a new master. diff --git a/deploy-manage/distributed-architecture/discovery-cluster-formation/discovery-hosts-providers.md b/deploy-manage/distributed-architecture/discovery-cluster-formation/discovery-hosts-providers.md index 7ad63296d3..bc649cac74 100644 --- a/deploy-manage/distributed-architecture/discovery-cluster-formation/discovery-hosts-providers.md +++ b/deploy-manage/distributed-architecture/discovery-cluster-formation/discovery-hosts-providers.md @@ -7,7 +7,7 @@ applies_to: # Discovery [discovery-hosts-providers] -Discovery is the process by which the cluster formation module finds other nodes with which to form a cluster. This process runs when you start an Elasticsearch node or when a node believes the master node failed and continues until the master node is found or a new master node is elected. +Discovery is the process by which the cluster formation module finds other nodes with which to form a cluster. This process runs when you start an {{es}} node or when a node believes the master node failed and continues until the master node is found or a new master node is elected. This process starts with a list of *seed* addresses from one or more [seed hosts providers](#built-in-hosts-providers), together with the addresses of any master-eligible nodes that were in the last-known cluster. The process operates in two phases: First, each node probes the seed addresses by connecting to each address and attempting to identify the node to which it is connected and to verify that it is master-eligible. Secondly, if successful, it shares with the remote node a list of all of its known master-eligible peers and the remote node responds with *its* peers in turn. 
The node then probes all the new nodes that it just discovered, requests their peers, and so on. diff --git a/deploy-manage/distributed-architecture/discovery-cluster-formation/modules-discovery-bootstrap-cluster.md b/deploy-manage/distributed-architecture/discovery-cluster-formation/modules-discovery-bootstrap-cluster.md index 2ec224b321..23a6f578ae 100644 --- a/deploy-manage/distributed-architecture/discovery-cluster-formation/modules-discovery-bootstrap-cluster.md +++ b/deploy-manage/distributed-architecture/discovery-cluster-formation/modules-discovery-bootstrap-cluster.md @@ -7,7 +7,7 @@ applies_to: # Bootstrapping a cluster [modules-discovery-bootstrap-cluster] -Starting an Elasticsearch cluster for the very first time requires the initial set of [master-eligible nodes](../clusters-nodes-shards/node-roles.md#master-node-role) to be explicitly defined on one or more of the master-eligible nodes in the cluster. This is known as *cluster bootstrapping*. This is only required the first time a cluster starts up. Freshly-started nodes that are joining a running cluster obtain this information from the cluster’s elected master. +Starting an {{es}} cluster for the very first time requires the initial set of [master-eligible nodes](../clusters-nodes-shards/node-roles.md#master-node-role) to be explicitly defined on one or more of the master-eligible nodes in the cluster. This is known as *cluster bootstrapping*. This is only required the first time a cluster starts up. Freshly-started nodes that are joining a running cluster obtain this information from the cluster’s elected master. The initial set of master-eligible nodes is defined in the [`cluster.initial_master_nodes` setting](../../deploy/self-managed/important-settings-configuration.md#initial_master_nodes). 
This should be set to a list containing one of the following items for each master-eligible node: @@ -62,7 +62,7 @@ This message shows the node names `master-a.example.com` and `master-b.example.c ## Choosing a cluster name [bootstrap-cluster-name] -The [`cluster.name`](elasticsearch://reference/elasticsearch/configuration-reference/miscellaneous-cluster-settings.md#cluster-name) setting enables you to create multiple clusters which are separated from each other. Nodes verify that they agree on their cluster name when they first connect to each other, and Elasticsearch will only form a cluster from nodes that all have the same cluster name. The default value for the cluster name is `elasticsearch`, but it is recommended to change this to reflect the logical name of the cluster. +The [`cluster.name`](elasticsearch://reference/elasticsearch/configuration-reference/miscellaneous-cluster-settings.md#cluster-name) setting enables you to create multiple clusters which are separated from each other. Nodes verify that they agree on their cluster name when they first connect to each other, and {{es}} will only form a cluster from nodes that all have the same cluster name. The default value for the cluster name is `elasticsearch`, but it is recommended to change this to reflect the logical name of the cluster. 
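The bootstrap and cluster-name settings discussed above can be combined into one illustrative first-start `elasticsearch.yml` sketch. The cluster name is a placeholder, and the host names reuse the examples from this page:

```yaml
# Set cluster.initial_master_nodes only for the very first start of a brand-new
# cluster; remove it after the cluster has formed.
cluster.name: logging-prod            # placeholder; default is "elasticsearch"
cluster.initial_master_nodes:
  - master-a.example.com
  - master-b.example.com
```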
## Auto-bootstrapping in development mode [bootstrap-auto-bootstrap] diff --git a/deploy-manage/distributed-architecture/discovery-cluster-formation/modules-discovery-quorums.md b/deploy-manage/distributed-architecture/discovery-cluster-formation/modules-discovery-quorums.md index c7fb6699a6..ede117eae6 100644 --- a/deploy-manage/distributed-architecture/discovery-cluster-formation/modules-discovery-quorums.md +++ b/deploy-manage/distributed-architecture/discovery-cluster-formation/modules-discovery-quorums.md @@ -7,11 +7,11 @@ applies_to: # Quorum-based decision making [modules-discovery-quorums] -Electing a master node and changing the cluster state are the two fundamental tasks that master-eligible nodes must work together to perform. It is important that these activities work robustly even if some nodes have failed. Elasticsearch achieves this robustness by considering each action to have succeeded on receipt of responses from a *quorum*, which is a subset of the master-eligible nodes in the cluster. The advantage of requiring only a subset of the nodes to respond is that it means some of the nodes can fail without preventing the cluster from making progress. The quorums are carefully chosen so the cluster does not have a "split brain" scenario where it’s partitioned into two pieces such that each piece may make decisions that are inconsistent with those of the other piece. +Electing a master node and changing the cluster state are the two fundamental tasks that master-eligible nodes must work together to perform. It is important that these activities work robustly even if some nodes have failed. {{es}} achieves this robustness by considering each action to have succeeded on receipt of responses from a *quorum*, which is a subset of the master-eligible nodes in the cluster. The advantage of requiring only a subset of the nodes to respond is that it means some of the nodes can fail without preventing the cluster from making progress. 
The quorums are carefully chosen so the cluster does not have a "split brain" scenario where it’s partitioned into two pieces such that each piece may make decisions that are inconsistent with those of the other piece. -Elasticsearch allows you to add and remove master-eligible nodes to a running cluster. In many cases you can do this simply by starting or stopping the nodes as required. See [*Add and remove nodes in your cluster*](../../maintenance/add-and-remove-elasticsearch-nodes.md) for more information. +{{es}} allows you to add master-eligible nodes to and remove them from a running cluster. In many cases you can do this simply by starting or stopping the nodes as required. See [*Add and remove nodes in your cluster*](../../maintenance/add-and-remove-elasticsearch-nodes.md) for more information. -As nodes are added or removed Elasticsearch maintains an optimal level of fault tolerance by updating the cluster’s [voting configuration](modules-discovery-voting.md), which is the set of master-eligible nodes whose responses are counted when making decisions such as electing a new master or committing a new cluster state. A decision is made only after more than half of the nodes in the voting configuration have responded. Usually the voting configuration is the same as the set of all the master-eligible nodes that are currently in the cluster. +As nodes are added or removed, {{es}} maintains an optimal level of fault tolerance by updating the cluster’s [voting configuration](modules-discovery-voting.md), which is the set of master-eligible nodes whose responses are counted when making decisions such as electing a new master or committing a new cluster state. A decision is made only after more than half of the nodes in the voting configuration have responded. Usually the voting configuration is the same as the set of all the master-eligible nodes that are currently in the cluster.
However, there are some situations in which they may be different. ::::{important} To be sure that the cluster remains available you **must not stop half or more of the nodes in the voting configuration at the same time**. As long as more than half of the voting nodes are available the cluster can still work normally. This means that if there are three or four master-eligible nodes, the cluster can tolerate one of them being unavailable. If there are two or fewer master-eligible nodes, they must all remain available. @@ -24,7 +24,7 @@ After a master-eligible node has joined or left the cluster the elected master m ## Master elections [_master_elections] -Elasticsearch uses an election process to agree on an elected master node, both at startup and if the existing elected master fails. Any master-eligible node can start an election, and normally the first election that takes place will succeed. Elections only usually fail when two nodes both happen to start their elections at about the same time, so elections are scheduled randomly on each node to reduce the probability of this happening. Nodes will retry elections until a master is elected, backing off on failure, so that eventually an election will succeed (with arbitrarily high probability). The scheduling of master elections are controlled by the [master election settings](elasticsearch://reference/elasticsearch/configuration-reference/discovery-cluster-formation-settings.md#master-election-settings). +{{es}} uses an election process to agree on an elected master node, both at startup and if the existing elected master fails. Any master-eligible node can start an election, and normally the first election that takes place will succeed. Elections only usually fail when two nodes both happen to start their elections at about the same time, so elections are scheduled randomly on each node to reduce the probability of this happening. 
Nodes will retry elections until a master is elected, backing off on failure, so that eventually an election will succeed (with arbitrarily high probability). The scheduling of master elections is controlled by the [master election settings](elasticsearch://reference/elasticsearch/configuration-reference/discovery-cluster-formation-settings.md#master-election-settings). ## Cluster maintenance, rolling restarts and migrations [_cluster_maintenance_rolling_restarts_and_migrations] diff --git a/deploy-manage/distributed-architecture/discovery-cluster-formation/modules-discovery-voting.md b/deploy-manage/distributed-architecture/discovery-cluster-formation/modules-discovery-voting.md index f12195a401..d0fd94a63f 100644 --- a/deploy-manage/distributed-architecture/discovery-cluster-formation/modules-discovery-voting.md +++ b/deploy-manage/distributed-architecture/discovery-cluster-formation/modules-discovery-voting.md @@ -31,21 +31,21 @@ GET /_cluster/state?filter_path=metadata.cluster_coordination.last_committed_con The current voting configuration is not necessarily the same as the set of all available master-eligible nodes in the cluster. Altering the voting configuration involves taking a vote, so it takes some time to adjust the configuration as nodes join or leave the cluster. Also, there are situations where the most resilient configuration includes unavailable nodes or does not include some available nodes. In these situations, the voting configuration differs from the set of available master-eligible nodes in the cluster. :::: -Larger voting configurations are usually more resilient, so Elasticsearch normally prefers to add master-eligible nodes to the voting configuration after they join the cluster. Similarly, if a node in the voting configuration leaves the cluster and there is another master-eligible node in the cluster that is not in the voting configuration then it is preferable to swap these two nodes over.
The size of the voting configuration is thus unchanged but its resilience increases. +Larger voting configurations are usually more resilient, so {{es}} normally prefers to add master-eligible nodes to the voting configuration after they join the cluster. Similarly, if a node in the voting configuration leaves the cluster and there is another master-eligible node in the cluster that is not in the voting configuration then it is preferable to swap these two nodes over. The size of the voting configuration is thus unchanged but its resilience increases. It is not so straightforward to automatically remove nodes from the voting configuration after they have left the cluster. Different strategies have different benefits and drawbacks, so the right choice depends on how the cluster will be used. You can control whether the voting configuration automatically shrinks by using the [`cluster.auto_shrink_voting_configuration` setting](elasticsearch://reference/elasticsearch/configuration-reference/discovery-cluster-formation-settings.md). ::::{note} -If `cluster.auto_shrink_voting_configuration` is set to `true` (which is the default and recommended value) and there are at least three master-eligible nodes in the cluster, Elasticsearch remains capable of processing cluster state updates as long as all but one of its master-eligible nodes are healthy. +If `cluster.auto_shrink_voting_configuration` is set to `true` (which is the default and recommended value) and there are at least three master-eligible nodes in the cluster, {{es}} remains capable of processing cluster state updates as long as all but one of its master-eligible nodes are healthy. :::: -There are situations in which Elasticsearch might tolerate the loss of multiple nodes, but this is not guaranteed under all sequences of failures. If the `cluster.auto_shrink_voting_configuration` setting is `false`, you must remove departed nodes from the voting configuration manually. 
Use the [voting exclusions API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-post-voting-config-exclusions) to achieve the desired level of resilience. +There are situations in which {{es}} might tolerate the loss of multiple nodes, but this is not guaranteed under all sequences of failures. If the `cluster.auto_shrink_voting_configuration` setting is `false`, you must remove departed nodes from the voting configuration manually. Use the [voting exclusions API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-post-voting-config-exclusions) to achieve the desired level of resilience. -No matter how it is configured, Elasticsearch will not suffer from a "split-brain" inconsistency. The `cluster.auto_shrink_voting_configuration` setting affects only its availability in the event of the failure of some of its nodes and the administrative tasks that must be performed as nodes join and leave the cluster. +No matter how it is configured, {{es}} will not suffer from a "split-brain" inconsistency. The `cluster.auto_shrink_voting_configuration` setting affects only its availability in the event of the failure of some of its nodes and the administrative tasks that must be performed as nodes join and leave the cluster. ## Even numbers of master-eligible nodes [_even_numbers_of_master_eligible_nodes] -There should normally be an odd number of master-eligible nodes in a cluster. If there is an even number, Elasticsearch leaves one of them out of the voting configuration to ensure that it has an odd size. This omission does not decrease the failure-tolerance of the cluster. In fact, improves it slightly: if the cluster suffers from a network partition that divides it into two equally-sized halves then one of the halves will contain a majority of the voting configuration and will be able to keep operating. 
If all of the votes from master-eligible nodes were counted, neither side would contain a strict majority of the nodes and so the cluster would not be able to make any progress. +There should normally be an odd number of master-eligible nodes in a cluster. If there is an even number, {{es}} leaves one of them out of the voting configuration to ensure that it has an odd size. This omission does not decrease the failure-tolerance of the cluster. In fact, it improves it slightly: if the cluster suffers from a network partition that divides it into two equally-sized halves then one of the halves will contain a majority of the voting configuration and will be able to keep operating. If all of the votes from master-eligible nodes were counted, neither side would contain a strict majority of the nodes and so the cluster would not be able to make any progress. For instance if there are four master-eligible nodes in the cluster and the voting configuration contained all of them, any quorum-based decision would require votes from at least three of them. This situation means that the cluster can tolerate the loss of only a single master-eligible node. If this cluster were split into two equal halves, neither half would contain three master-eligible nodes and the cluster would not be able to make any progress. If the voting configuration contains only three of the four master-eligible nodes, however, the cluster is still only fully tolerant to the loss of one node, but quorum-based decisions require votes from two of the three voting nodes. In the event of an even split, one half will contain two of the three voting nodes so that half will remain available.
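The manual-exclusion workflow described above can be sketched with the cluster APIs. This is an illustrative sequence only; `node-3` is a hypothetical node name, not one from the original docs:

```console
# Inspect the current voting configuration
GET /_cluster/state?filter_path=metadata.cluster_coordination.last_committed_config

# Exclude a node that has permanently left the cluster from the voting configuration
POST /_cluster/voting_config_exclusions?node_names=node-3&timeout=1m

# Clear the exclusions once the node has been removed and the cluster is stable
DELETE /_cluster/voting_config_exclusions
```

Exclusions are intended to be temporary: clear them after the departed node is gone so that future configuration changes are not blocked.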
diff --git a/deploy-manage/distributed-architecture/kibana-tasks-management.md b/deploy-manage/distributed-architecture/kibana-tasks-management.md index bc37ebc0cb..3c64a661a3 100644 --- a/deploy-manage/distributed-architecture/kibana-tasks-management.md +++ b/deploy-manage/distributed-architecture/kibana-tasks-management.md @@ -5,12 +5,12 @@ applies_to: stack: --- -# Kibana task management [task-manager-production-considerations] +# {{kib}} task management [task-manager-production-considerations] {{kib}} Task Manager is used by features such as Alerting, Actions, and Reporting to run mission critical work as persistent background tasks. These background tasks distribute work across multiple {{kib}} instances. This has three major benefits: * **Persistence**: All task state and scheduling is stored in {{es}}, so if you restart {{kib}}, tasks will pick up where they left off. -* **Scaling**: Multiple {{kib}} instances can read from and update the same task queue in {{es}}, allowing the work load to be distributed across instances. If a {{kib}} instance no longer has capacity to run tasks, you can increase capacity by adding additional {{kib}} instances. For more information on scaling, see [Kibana task manager scaling considerations](../../deploy-manage/production-guidance/kibana-task-manager-scaling-considerations.md#task-manager-scaling-guidance). +* **Scaling**: Multiple {{kib}} instances can read from and update the same task queue in {{es}}, allowing the workload to be distributed across instances. If a {{kib}} instance no longer has capacity to run tasks, you can increase capacity by adding additional {{kib}} instances. For more information on scaling, see [{{kib}} task manager scaling considerations](../../deploy-manage/production-guidance/kibana-task-manager-scaling-considerations.md#task-manager-scaling-guidance).
* **Load Balancing**: Task Manager is equipped with a reactive self-healing mechanism, which allows it to reduce the amount of work it executes in reaction to an increased load related error rate in {{es}}. Additionally, when Task Manager experiences an increase in recurring tasks, it attempts to space out the work to better balance the load. ::::{important} diff --git a/deploy-manage/distributed-architecture/reading-and-writing-documents.md b/deploy-manage/distributed-architecture/reading-and-writing-documents.md index 10220a5e77..1febca4f48 100644 --- a/deploy-manage/distributed-architecture/reading-and-writing-documents.md +++ b/deploy-manage/distributed-architecture/reading-and-writing-documents.md @@ -10,21 +10,21 @@ applies_to: ## Introduction [_introduction] -Each index in Elasticsearch is [divided into shards](../../deploy-manage/index.md) and each shard can have multiple copies. These copies are known as a *replication group* and must be kept in sync when documents are added or removed. If we fail to do so, reading from one copy will result in very different results than reading from another. The process of keeping the shard copies in sync and serving reads from them is what we call the *data replication model*. +Each index in {{es}} is [divided into shards](../../deploy-manage/index.md) and each shard can have multiple copies. These copies are known as a *replication group* and must be kept in sync when documents are added or removed. If we fail to do so, reading from one copy will result in very different results than reading from another. The process of keeping the shard copies in sync and serving reads from them is what we call the *data replication model*. Elasticsearch’s data replication model is based on the *primary-backup model* and is described very well in the [PacificA paper](https://www.microsoft.com/en-us/research/publication/pacifica-replication-in-log-based-distributed-storage-systems/) of Microsoft Research. 
That model is based on having a single copy from the replication group that acts as the primary shard. The other copies are called *replica shards*. The primary serves as the main entry point for all indexing operations. It is in charge of validating them and making sure they are correct. Once an index operation has been accepted by the primary, the primary is also responsible for replicating the operation to the other copies. -This purpose of this section is to give a high level overview of the Elasticsearch replication model and discuss the implications it has for various interactions between write and read operations. +The purpose of this section is to give a high level overview of the {{es}} replication model and discuss the implications it has for various interactions between write and read operations. ## Basic write model [basic-write-model] -Every indexing operation in Elasticsearch is first resolved to a replication group using [routing](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-create), typically based on the document ID. Once the replication group has been determined, the operation is forwarded internally to the current *primary shard* of the group. This stage of indexing is referred to as the *coordinating stage*. +Every indexing operation in {{es}} is first resolved to a replication group using [routing](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-create), typically based on the document ID. Once the replication group has been determined, the operation is forwarded internally to the current *primary shard* of the group. This stage of indexing is referred to as the *coordinating stage*. :::{image} /deploy-manage/images/elasticsearch-reference-data_processing_flow.png :alt: An example of a basic write model. ::: -The next stage of indexing is the *primary stage*, performed on the primary shard. The primary shard is responsible for validating the operation and forwarding it to the other replicas.
Since replicas can be offline, the primary is not required to replicate to all replicas. Instead, Elasticsearch maintains a list of shard copies that should receive the operation. This list is called the *in-sync copies* and is maintained by the master node. As the name implies, these are the set of "good" shard copies that are guaranteed to have processed all of the index and delete operations that have been acknowledged to the user. The primary is responsible for maintaining this invariant and thus has to replicate all operations to each copy in this set. +The next stage of indexing is the *primary stage*, performed on the primary shard. The primary shard is responsible for validating the operation and forwarding it to the other replicas. Since replicas can be offline, the primary is not required to replicate to all replicas. Instead, {{es}} maintains a list of shard copies that should receive the operation. This list is called the *in-sync copies* and is maintained by the master node. As the name implies, these are the set of "good" shard copies that are guaranteed to have processed all of the index and delete operations that have been acknowledged to the user. The primary is responsible for maintaining this invariant and thus has to replicate all operations to each copy in this set. The primary shard follows this basic flow: @@ -55,7 +55,7 @@ This is a valid scenario that can happen due to index configuration or simply be ## Basic read model [_basic_read_model] -Reads in Elasticsearch can be very lightweight lookups by ID or a heavy search request with complex aggregations that take non-trivial CPU power. One of the beauties of the primary-backup model is that it keeps all shard copies identical (with the exception of in-flight operations). As such, a single in-sync copy is sufficient to serve read requests. +Reads in {{es}} can be very lightweight lookups by ID or a heavy search request with complex aggregations that take non-trivial CPU power. 
One of the beauties of the primary-backup model is that it keeps all shard copies identical (with the exception of in-flight operations). As such, a single in-sync copy is sufficient to serve read requests. When a read request is received by a node, that node is responsible for forwarding it to the nodes that hold the relevant shards, collating the responses, and responding to the client. We call that node the *coordinating node* for that request. The basic flow is as follows: @@ -78,7 +78,7 @@ Responses containing partial results still provide a `200 OK` HTTP status code. ## A few simple implications [_a_few_simple_implications] -Each of these basic flows determines how Elasticsearch behaves as a system for both reads and writes. Furthermore, since read and write requests can be executed concurrently, these two basic flows interact with each other. This has a few inherent implications: +Each of these basic flows determines how {{es}} behaves as a system for both reads and writes. Furthermore, since read and write requests can be executed concurrently, these two basic flows interact with each other. This has a few inherent implications: **Efficient reads**: Under normal operation each read operation is performed once for each relevant replication group. Only under failure conditions do multiple copies of the same shard execute the same search. @@ -94,8 +94,8 @@ A single shard can slow down indexing : Because the primary waits for all replicas in the in-sync copies set during each operation, a single slow shard can slow down the entire replication group. This is the price we pay for the read efficiency mentioned above. Of course a single slow shard will also slow down unlucky searches that have been routed to it. Dirty reads -: An isolated primary can expose writes that will not be acknowledged. This is caused by the fact that an isolated primary will only realize that it is isolated once it sends requests to its replicas or when reaching out to the master. 
At that point the operation is already indexed into the primary and can be read by a concurrent read. Elasticsearch mitigates this risk by pinging the master every second (by default) and rejecting indexing operations if no master is known. +: An isolated primary can expose writes that will not be acknowledged. This is caused by the fact that an isolated primary will only realize that it is isolated once it sends requests to its replicas or when reaching out to the master. At that point the operation is already indexed into the primary and can be read by a concurrent read. {{es}} mitigates this risk by pinging the master every second (by default) and rejecting indexing operations if no master is known. ## The tip of the iceberg [_the_tip_of_the_iceberg] -This document provides a high level overview of how Elasticsearch deals with data. Of course, there is much more going on under the hood. Things like primary terms, cluster state publishing, and master election all play a role in keeping this system behaving correctly. This document also doesn’t cover known and important bugs (both closed and open). We recognize that [GitHub is hard to keep up with](https://github.com/elastic/elasticsearch/issues?q=label%3Aresiliency). To help people stay on top of those, we maintain a dedicated [resiliency page](https://www.elastic.co/guide/en/elasticsearch/resiliency/current/index.html) on our website. We strongly advise reading it. +This document provides a high level overview of how {{es}} deals with data. Of course, there is much more going on under the hood. Things like primary terms, cluster state publishing, and master election all play a role in keeping this system behaving correctly. This document also doesn’t cover known and important bugs (both closed and open). We recognize that [GitHub is hard to keep up with](https://github.com/elastic/elasticsearch/issues?q=label%3Aresiliency). 
To help people stay on top of those, we maintain a dedicated [resiliency page](https://www.elastic.co/guide/en/elasticsearch/resiliency/current/index.html) on our website. We strongly advise reading it. diff --git a/deploy-manage/distributed-architecture/shard-allocation-relocation-recovery.md b/deploy-manage/distributed-architecture/shard-allocation-relocation-recovery.md index 5fa24342fb..5fed712a94 100644 --- a/deploy-manage/distributed-architecture/shard-allocation-relocation-recovery.md +++ b/deploy-manage/distributed-architecture/shard-allocation-relocation-recovery.md @@ -7,13 +7,13 @@ applies_to: # Shard allocation, relocation, and recovery [shard-allocation-relocation-recovery] -Each [index](../../manage-data/data-store/index-basics.md) in Elasticsearch is divided into one or more [shards](../../deploy-manage/index.md). Each document in an index belongs to a single shard. +Each [index](../../manage-data/data-store/index-basics.md) in {{es}} is divided into one or more [shards](../../deploy-manage/index.md). Each document in an index belongs to a single shard. A cluster can contain multiple copies of a shard. Each shard has one distinguished shard copy called the *primary*, and zero or more non-primary copies called *replicas*. The primary shard copy serves as the main entry point for all indexing operations. The operations on the primary shard copy are then forwarded to its replicas. Replicas maintain redundant copies of your data across the [nodes](elasticsearch://reference/elasticsearch/configuration-reference/node-settings.md) in your cluster, protecting against hardware failure and increasing capacity to serve read requests like searching or retrieving a document. If the primary shard copy fails, then a replica is promoted to primary and takes over the primary’s responsibilities. 
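As a concrete sketch of the primary/replica layout described above (the index name and settings values are illustrative, not from the original docs):

```console
# Create an index with one primary shard and two replica copies
PUT /my-index
{
  "settings": {
    "index.number_of_shards": 1,
    "index.number_of_replicas": 2
  }
}

# List the shard copies: "p" marks the primary, "r" marks each replica,
# along with the node each copy is allocated to
GET /_cat/shards/my-index?v
```

If the node holding the primary fails, one of the `r` copies shown here would be promoted to `p` and take over indexing for that shard.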
-Over the course of normal operation, Elasticsearch allocates shard copies to nodes, relocates shard copies across nodes to balance the cluster or satisfy new allocation constraints, and recovers shards to initialize new copies. In this topic, you’ll learn how these operations work and how you can control them. +Over the course of normal operation, {{es}} allocates shard copies to nodes, relocates shard copies across nodes to balance the cluster or satisfy new allocation constraints, and recovers shards to initialize new copies. In this topic, you’ll learn how these operations work and how you can control them. ::::{tip} To learn about optimizing the number and size of shards in your cluster, refer to [Size your shards](../production-guidance/optimize-performance/size-shards.md). To learn about how read and write operations are replicated across shards and shard copies, refer to [Reading and writing documents](reading-and-writing-documents.md). diff --git a/deploy-manage/distributed-architecture/shard-allocation-relocation-recovery/delaying-allocation-when-node-leaves.md b/deploy-manage/distributed-architecture/shard-allocation-relocation-recovery/delaying-allocation-when-node-leaves.md index dd07a2f942..482d396132 100644 --- a/deploy-manage/distributed-architecture/shard-allocation-relocation-recovery/delaying-allocation-when-node-leaves.md +++ b/deploy-manage/distributed-architecture/shard-allocation-relocation-recovery/delaying-allocation-when-node-leaves.md @@ -71,7 +71,7 @@ GET _cluster/health <1> ## Removing a node permanently [_removing_a_node_permanently] -If a node is not going to return and you would like Elasticsearch to allocate the missing shards immediately, just update the timeout to zero: +If a node is not going to return and you would like {{es}} to allocate the missing shards immediately, just update the timeout to zero: ```console PUT _all/_settings diff --git 
a/deploy-manage/distributed-architecture/shard-allocation-relocation-recovery/index-level-shard-allocation.md b/deploy-manage/distributed-architecture/shard-allocation-relocation-recovery/index-level-shard-allocation.md index 5549189f42..8042f726be 100644 --- a/deploy-manage/distributed-architecture/shard-allocation-relocation-recovery/index-level-shard-allocation.md +++ b/deploy-manage/distributed-architecture/shard-allocation-relocation-recovery/index-level-shard-allocation.md @@ -10,7 +10,7 @@ applies_to: # Index-level shard allocation -In Elasticsearch, per-index settings allow you to control the allocation of shards to nodes through index-level shard allocation settings. These settings enable you to specify preferences or constraints for where shards of a particular index should reside. This includes allocating shards to nodes with specific attributes or avoiding certain nodes. This level of control helps optimize resource utilization, balance load, and ensure data redundancy and availability according to your deployment's specific requirements. For additional details, check out: +In {{es}}, per-index settings allow you to control the allocation of shards to nodes through index-level shard allocation settings. These settings enable you to specify preferences or constraints for where shards of a particular index should reside. This includes allocating shards to nodes with specific attributes or avoiding certain nodes. This level of control helps optimize resource utilization, balance load, and ensure data redundancy and availability according to your deployment's specific requirements. For additional details, check out: * [Shard allocation filtering](../../../deploy-manage/distributed-architecture/shard-allocation-relocation-recovery/index-level-shard-allocation.md): Controlling which shards are allocated to which nodes. 
* [Delayed allocation](../../../deploy-manage/distributed-architecture/shard-allocation-relocation-recovery/delaying-allocation-when-node-leaves.md): Delaying allocation of unassigned shards caused by a node leaving. diff --git a/deploy-manage/distributed-architecture/shard-allocation-relocation-recovery/shard-allocation-awareness.md b/deploy-manage/distributed-architecture/shard-allocation-relocation-recovery/shard-allocation-awareness.md index 6a6516c14e..36f85038f0 100644 --- a/deploy-manage/distributed-architecture/shard-allocation-relocation-recovery/shard-allocation-awareness.md +++ b/deploy-manage/distributed-architecture/shard-allocation-relocation-recovery/shard-allocation-awareness.md @@ -26,7 +26,7 @@ Learn more about [designing resilient clusters](../../production-guidance/availa To enable shard allocation awareness: -1. Specify the location of each node with a [custom node attribute](elasticsearch://reference/elasticsearch/configuration-reference/node-settings.md#custom-node-attributes). For example, if you want Elasticsearch to distribute shards across different racks, you might use an awareness attribute called `rack_id`. +1. Specify the location of each node with a [custom node attribute](elasticsearch://reference/elasticsearch/configuration-reference/node-settings.md#custom-node-attributes). For example, if you want {{es}} to distribute shards across different racks, you might use an awareness attribute called `rack_id`. 
You can set custom attributes in two ways: diff --git a/deploy-manage/kibana-reporting-configuration.md b/deploy-manage/kibana-reporting-configuration.md index 40083a9905..4864f90e44 100644 --- a/deploy-manage/kibana-reporting-configuration.md +++ b/deploy-manage/kibana-reporting-configuration.md @@ -1,5 +1,5 @@ --- -navigation_title: Configure Kibana reporting +navigation_title: Configure {{kib}} reporting mapped_pages: - https://www.elastic.co/guide/en/kibana/current/secure-reporting.html applies_to: @@ -13,21 +13,21 @@ applies_to: % this anchor belongs to `kibana reporting production considerations doc` $$$reporting-chromium-sandbox$$$ -# Configure Kibana reporting [secure-reporting] +# Configure {{kib}} reporting [secure-reporting] {{kib}}'s reporting functionality offers multiple ways to share **Discover** sessions, dashboards, **Visualize Library** visualizations, and **Canvas** workpads. This section covers the necessary configuration to ensure reporting works correctly in your deployment. For guidance on using {{report-features}} effectively, refer to [](/explore-analyze/report-and-share.md). ::::{admonition} Note for self-managed deployments -Kibana PNG/PDF reporting uses a custom binary of headless Chromium, and support comes with special caveats: +{{kib}} PNG/PDF reporting uses a custom binary of headless Chromium, and support comes with special caveats: * The functionality requires special OS dependencies which may not be available for all distributions and configurations of Linux. * It is subject to system resource configurations such as the limited number of file descriptors, allowed processes, and types of processes. * Linux versions that are in end-of-life phase are not supported. * Linux systems with `SELinux` or `fapolicyd` are not supported. -Before upgrading Kibana in a production environment, we encourage you to test your screenshotting use cases in a pre-production environment to make sure your hosts support our latest build of Chromium. 
For the most reliable configuration of PDF/PNG {{report-features}}, consider installing {{kib}} using [Docker](/deploy-manage/deploy/self-managed/install-kibana-with-docker.md), or using [Elastic Cloud](https://cloud.elastic.co). +Before upgrading {{kib}} in a production environment, we encourage you to test your screenshotting use cases in a pre-production environment to make sure your hosts support our latest build of Chromium. For the most reliable configuration of PDF/PNG {{report-features}}, consider installing {{kib}} using [Docker](/deploy-manage/deploy/self-managed/install-kibana-with-docker.md), or using [{{ecloud}}](https://cloud.elastic.co). :::: ## Configuration overview @@ -90,7 +90,7 @@ When security is enabled, you grant users access to {{report-features}} with [{{ 3. Add the {{kib}} privileges. - 1. Click **Add Kibana privilege**. + 1. Click **Add {{kib}} privilege**. 2. Select one or more **Spaces**. 3. Click **Customize**, then click **Analytics**. 4. For each application, select **All**, or to customize the privileges, select **Read** and **Customize sub-feature privileges**. @@ -101,7 +101,7 @@ When security is enabled, you grant users access to {{report-features}} with [{{ :::{image} /deploy-manage/images/kibana-kibana-privileges-with-reporting.png - :alt: Kibana privileges with Reporting options, Gold or higher license + :alt: {{kib}} privileges with Reporting options, Gold or higher license :screenshot: ::: @@ -130,7 +130,7 @@ Granting the privilege to generate reports also grants the user the privilege to With [{{kib}} application privileges](#grant-user-access), you can use the [role APIs](https://www.elastic.co/docs/api/doc/kibana/group/endpoint-roles) to grant access to the {{report-features}}, using **All** privileges, or sub-feature privileges. :::{note} -This API request needs to be run against the [Kibana API endpoint](https://www.elastic.co/guide/en/kibana/current/api.html). 
+This API request needs to be run against the [{{kib}} API endpoint](https://www.elastic.co/guide/en/kibana/current/api.html). ::: ```console @@ -179,12 +179,12 @@ reporting_user: ::::{tab-item} Basic license -With a Basic license, sub-feature [application privileges](/deploy-manage/users-roles/cluster-or-deployment-auth/kibana-privileges.md) are unavailable, requiring you to select **All** privileges for the applications where users can create reports. You can grant users access through the Kibana UI or role API. +With a Basic license, sub-feature [application privileges](/deploy-manage/users-roles/cluster-or-deployment-auth/kibana-privileges.md) are unavailable, requiring you to select **All** privileges for the applications where users can create reports. You can grant users access through the {{kib}} UI or role API. -Example using Kibana UI: +Example using {{kib}} UI: :::{image} /deploy-manage/images/kibana-kibana-privileges-with-reporting-basic.png -:alt: Kibana privileges with Reporting options, Basic license +:alt: {{kib}} privileges with Reporting options, Basic license :screenshot: ::: diff --git a/deploy-manage/license/manage-your-license-in-ece.md b/deploy-manage/license/manage-your-license-in-ece.md index e222bc7333..6de3d01b75 100644 --- a/deploy-manage/license/manage-your-license-in-ece.md +++ b/deploy-manage/license/manage-your-license-in-ece.md @@ -8,9 +8,9 @@ mapped_pages: # Manage your license in {{ece}} [ece-add-license] -The use of Elastic Cloud Enterprise requires a valid license, which you can obtain from Elastic and add to your installation following the steps described in this document. When you first install ECE we automatically activate ECE with a trial license that is valid for 30 days. +The use of {{ece}} requires a valid license, which you can obtain from Elastic and add to your installation following the steps described in this document. 
When you first install ECE we automatically activate ECE with a trial license that is valid for 30 days. -Full ECE licenses that you obtain from Elastic enable all ECE hosted deployments with the same products, features, and support that are available at our Enterprise subscription level on Elastic Cloud for the respective stack version, as described on the [Subscriptions page](https://www.elastic.co/subscriptions/cloud). +Full ECE licenses that you obtain from Elastic enable all ECE hosted deployments with the same products, features, and support that are available at our Enterprise subscription level on {{ecloud}} for the respective stack version, as described on the [Subscriptions page](https://www.elastic.co/subscriptions/cloud). ::::{note} The licenses used to activate the deployments might have a different expiration date than the license used to activate ECE. ECE manages the licenses of the hosted deployments and will automatically update the deployment license when needed. @@ -18,21 +18,21 @@ The licenses used to activate the deployments might have a different expiration ::::{note} -If you have a license from 2018 or earlier, you might receive a warning that your cluster license is about to expire. Don’t panic, it isn’t really. Elastic Cloud Enterprise manages the cluster licenses so that you don’t have to. In rare cases, such as when a cluster is overloaded, it can take longer for Elastic Cloud Enterprise to reapply the cluster license. +If you have a license from 2018 or earlier, you might receive a warning that your cluster license is about to expire. Don’t panic, it isn’t really. {{ece}} manages the cluster licenses so that you don’t have to. In rare cases, such as when a cluster is overloaded, it can take longer for {{ece}} to reapply the cluster license. 
:::: ## Licenses Expiration [ece_licenses_expiration] -Elastic Cloud Enterprise Licenses contains two types of licenses - the actual license for Elastic Cloud Enterprise that is validated to enable Elastic Cloud Enterprise features and the *cluster licenses*, which Elastic Cloud Enterprise installs into the individual clusters. +{{ece}} licensing includes two types of licenses: the actual license for {{ece}} that is validated to enable {{ece}} features, and the *cluster licenses*, which {{ece}} installs into the individual clusters. -Elastic Cloud Enterprise installs those cluster licenses with an approximately 3 month window, and updates the cluster licenses automatically as they get within a month of expiration. +{{ece}} installs those cluster licenses with an approximately 3 month window, and updates the cluster licenses automatically as they get within a month of expiration. -When the Elastic Cloud Enterprise license expires, and consequently the cluster license that’s currently installed for all managed clusters since it has the same expiration date, the following takes place: +When the {{ece}} license expires, and consequently the cluster license that’s currently installed for all managed clusters since it has the same expiration date, the following takes place: * **Users cannot create new clusters or modify existing clusters**: They can only delete them. These clusters are still fully accessible for the client though. -* **X-Pack features are degraded**: For the details about what functionality will be reduced when cluster license expires, read more about the [Elastic Stack license expiration](https://www.elastic.co/guide/en/elastic-stack-overview/current/license-expiration.html). +* **X-Pack features are degraded**: For the details about what functionality will be reduced when cluster license expires, read more about the [{{stack}} license expiration](https://www.elastic.co/guide/en/elastic-stack-overview/current/license-expiration.html).
## Download a license [ece_download_a_license] @@ -70,7 +70,7 @@ To check your current license expiry date: To request a trial license extension from Elastic: -1. Fill in the form at [https://www.elastic.co/contact](https://www.elastic.co/contact). Make sure to choose Elastic Cloud Enterprise as the area of interest and state that you request a trial license extension. +1. Fill in the form at [https://www.elastic.co/contact](https://www.elastic.co/contact). Make sure to choose {{ece}} as the area of interest and state that you request a trial license extension. Someone from Elastic will be in touch to respond to your trial extension request. diff --git a/deploy-manage/license/manage-your-license-in-eck.md b/deploy-manage/license/manage-your-license-in-eck.md index c043024a76..a434c085de 100644 --- a/deploy-manage/license/manage-your-license-in-eck.md +++ b/deploy-manage/license/manage-your-license-in-eck.md @@ -9,10 +9,10 @@ mapped_pages: # Manage your license in {{eck}} [k8s-licensing] -When you install the default distribution of ECK, you receive a Basic license. Any Elastic stack application you manage through ECK will also be Basic licensed. Go to [https://www.elastic.co/subscriptions](https://www.elastic.co/subscriptions) to check which features are included in the Basic license for free. +When you install the default distribution of ECK, you receive a Basic license. Any {{stack}} application you manage through ECK will also be Basic licensed. Go to [https://www.elastic.co/subscriptions](https://www.elastic.co/subscriptions) to check which features are included in the Basic license for free. ::::{important} -ECK is only offered in two licensing tiers: Basic and Enterprise. Similar to the Elastic Stack, customers can download and use ECK with a Basic license for free. Basic license users can obtain support from GitHub or through our [community](https://discuss.elastic.co). A paid Enterprise subscription is required to engage the Elastic support team. 
For more details, check the [Elastic subscriptions](https://www.elastic.co/subscriptions). +ECK is only offered in two licensing tiers: Basic and Enterprise. Similar to the {{stack}}, customers can download and use ECK with a Basic license for free. Basic license users can obtain support from GitHub or through our [community](https://discuss.elastic.co). A paid Enterprise subscription is required to engage the Elastic support team. For more details, check the [Elastic subscriptions](https://www.elastic.co/subscriptions). :::: @@ -62,7 +62,7 @@ When downloading the license choose the "Orchestration license" option. :::: -The downloaded JSON file contains the Enterprise orchestration license which enables ECK Enterprise features. Embedded in the orchestration license are also Enterprise stack licenses for recent Elasticsearch versions and Platinum licenses for older Elasticsearch versions that do not support Enterprise licenses. +The downloaded JSON file contains the Enterprise orchestration license which enables ECK Enterprise features. Embedded in the orchestration license are also Enterprise stack licenses for recent {{es}} versions and Platinum licenses for older {{es}} versions that do not support Enterprise licenses. To add the license to your ECK installation, create a Kubernetes secret of the following form: @@ -89,10 +89,10 @@ kubectl create secret generic eck-license --from-file=my-license-file.json -n el kubectl label secret eck-license "license.k8s.elastic.co/scope"=operator -n elastic-system ``` -After you install a license into ECK, the Enterprise features of the operator are available, like Elasticsearch autoscaling and support for Elastic Maps Server. All the Elastic Stack applications you manage with ECK will have Platinum and Enterprise features enabled. 
The [`_license`](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-license-get) API reports that individual Elasticsearch clusters are running under an Enterprise license, and the [elastic-licensing](#k8s-get-usage-data) ConfigMap contains the current license level of the ECK operator. The applications created before you installed the license are upgraded to Platinum or Enterprise features without interruption of service after a short delay. +After you install a license into ECK, the Enterprise features of the operator are available, like {{es}} autoscaling and support for Elastic Maps Server. All the {{stack}} applications you manage with ECK will have Platinum and Enterprise features enabled. The [`_license`](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-license-get) API reports that individual {{es}} clusters are running under an Enterprise license, and the [elastic-licensing](#k8s-get-usage-data) ConfigMap contains the current license level of the ECK operator. The applications created before you installed the license are upgraded to Platinum or Enterprise features without interruption of service after a short delay. ::::{note} -The Elasticsearch `_license` API for versions before 8.0.0 reports a Platinum license level for backwards compatibility even if an Enterprise license is installed. +The {{es}} `_license` API for versions before 8.0.0 reports a Platinum license level for backwards compatibility even if an Enterprise license is installed. :::: @@ -102,11 +102,11 @@ The Elasticsearch `_license` API for versions before 8.0.0 reports a Platinum li Before your current Enterprise license expires, you will receive a new Enterprise license from Elastic (provided that your subscription is valid). ::::{note} -You can check the expiry date of your license in the [elastic-licensing](#k8s-get-usage-data) ConfigMap. 
Enterprise licenses are container licenses that include multiple licenses for individual Elasticsearch clusters with shorter expiry. Therefore, you get a different expiry in Kibana or through the Elasticsearch [`_license`](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-license-get) API. ECK automatically updates the Elasticsearch cluster licenses until the expiry date of the ECK Enterprise license is reached. +You can check the expiry date of your license in the [elastic-licensing](#k8s-get-usage-data) ConfigMap. Enterprise licenses are container licenses that include multiple licenses for individual {{es}} clusters with shorter expiry. Therefore, you get a different expiry in {{kib}} or through the {{es}} [`_license`](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-license-get) API. ECK automatically updates the {{es}} cluster licenses until the expiry date of the ECK Enterprise license is reached. :::: -To avoid any unintended downgrade of individual Elasticsearch clusters to a Basic license while installing the new license, we recommend installing the new Enterprise license as a new Kubernetes secret next to your existing Enterprise license. Just replace `eck-license` with a different name in the [Kubernetes secret example](#k8s-add-license). ECK will use the correct license automatically. +To avoid any unintended downgrade of individual {{es}} clusters to a Basic license while installing the new license, we recommend installing the new Enterprise license as a new Kubernetes secret next to your existing Enterprise license. Just replace `eck-license` with a different name in the [Kubernetes secret example](#k8s-add-license). ECK will use the correct license automatically. Once you have created the new license secret you can safely delete the old license secret. 
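The renewal flow described in the note can be scripted. A sketch, mirroring the secret-creation commands shown earlier; `eck-license-2025` is a hypothetical name for the new secret, and the external `kubectl` calls are only echoed here rather than executed:

```sh
# Compose the commands to install the renewed Enterprise license under a new secret name,
# so the old license stays in place until ECK picks up the new one.
NEW_SECRET="eck-license-2025"   # any name other than the existing eck-license
NS="elastic-system"
CREATE="kubectl create secret generic $NEW_SECRET --from-file=my-license-file.json -n $NS"
LABEL="kubectl label secret $NEW_SECRET license.k8s.elastic.co/scope=operator -n $NS"
echo "$CREATE"
echo "$LABEL"
# Once ECK is using the new license, the old secret can be removed:
# kubectl delete secret eck-license -n "$NS"
```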
@@ -151,10 +151,10 @@ elastic_licensing_enterprise_resource_units_total{license_level="enterprise"} 1 # HELP elastic_licensing_memory_gibibytes_apm Memory used by APM server in GiB # TYPE elastic_licensing_memory_gibibytes_apm gauge elastic_licensing_memory_gibibytes_apm{license_level="enterprise"} 0.5 -# HELP elastic_licensing_memory_gibibytes_elasticsearch Memory used by Elasticsearch in GiB +# HELP elastic_licensing_memory_gibibytes_elasticsearch Memory used by {{es}} in GiB # TYPE elastic_licensing_memory_gibibytes_elasticsearch gauge elastic_licensing_memory_gibibytes_elasticsearch{license_level="enterprise"} 18 -# HELP elastic_licensing_memory_gibibytes_kibana Memory used by Kibana in GiB +# HELP elastic_licensing_memory_gibibytes_kibana Memory used by {{kib}} in GiB # TYPE elastic_licensing_memory_gibibytes_kibana gauge elastic_licensing_memory_gibibytes_kibana{license_level="enterprise"} 1 # HELP elastic_licensing_memory_gibibytes_logstash Memory used by Logstash in GiB diff --git a/deploy-manage/maintenance.md b/deploy-manage/maintenance.md index aff0ce5cf0..692f2fe3a6 100644 --- a/deploy-manage/maintenance.md +++ b/deploy-manage/maintenance.md @@ -14,7 +14,7 @@ This section outlines the key tasks and processes required to maintain a healthy The topics covered include: -* **[](maintenance/ece.md)**: Explains the procedures for maintaining both the host infrastructure and {{es}} deployments within Elastic Cloud Enterprise (ECE). +* **[](maintenance/ece.md)**: Explains the procedures for maintaining both the host infrastructure and {{es}} deployments within {{ece}} (ECE). * **[](maintenance/start-stop-services.md)**: Provides step-by-step instructions on how to safely start and stop your {{es}} deployment or {{kib}} instance, particularly when performing actions that require a restart. * **[](maintenance/start-stop-routing-requests.md)**: Start or stop routing requests to an {{ech}} or {{ece}} deployment, or to all instances on an allocator in {{ece}}. 
* **[](maintenance/add-and-remove-elasticsearch-nodes.md)**: Guides you through the process of enrolling new nodes or safely removing existing ones from a self-managed {{es}} cluster to optimize resource utilization and cluster performance. diff --git a/deploy-manage/maintenance/add-and-remove-elasticsearch-nodes.md b/deploy-manage/maintenance/add-and-remove-elasticsearch-nodes.md index 104c88a271..1451adf50d 100644 --- a/deploy-manage/maintenance/add-and-remove-elasticsearch-nodes.md +++ b/deploy-manage/maintenance/add-and-remove-elasticsearch-nodes.md @@ -8,7 +8,7 @@ sub: slash: "\\" --- -# Add and Remove Elasticsearch nodes [add-elasticsearch-nodes] +# Add and remove {{es}} nodes [add-elasticsearch-nodes] When you start an instance of {{es}}, you are starting a *node*. An {{es}} *cluster* is a group of nodes that have the same `cluster.name` attribute. As nodes join or leave a cluster, the cluster automatically reorganizes itself to evenly distribute the data across the available nodes. @@ -49,13 +49,13 @@ If you installed your new {{es}} node using an [RPM](/deploy-manage/deploy/self- ## Master-eligible nodes [add-elasticsearch-nodes-master-eligible] -As nodes are added or removed Elasticsearch maintains an optimal level of fault tolerance by automatically updating the cluster’s *voting configuration*, which is the set of [master-eligible nodes](../distributed-architecture/clusters-nodes-shards/node-roles.md#master-node-role) whose responses are counted when making decisions such as electing a new master or committing a new cluster state. +As nodes are added or removed, {{es}} maintains an optimal level of fault tolerance by automatically updating the cluster’s *voting configuration*, which is the set of [master-eligible nodes](../distributed-architecture/clusters-nodes-shards/node-roles.md#master-node-role) whose responses are counted when making decisions such as electing a new master or committing a new cluster state.
It is recommended to have a small and fixed number of master-eligible nodes in a cluster, and to scale the cluster up and down by adding and removing master-ineligible nodes only. However there are situations in which it may be desirable to add or remove some master-eligible nodes to or from a cluster. ### Adding master-eligible nodes [modules-discovery-adding-nodes] -If you wish to add some nodes to your cluster, simply configure the new nodes to find the existing cluster and start them up. Elasticsearch adds the new nodes to the voting configuration if it is appropriate to do so. +If you wish to add some nodes to your cluster, simply configure the new nodes to find the existing cluster and start them up. {{es}} adds the new nodes to the voting configuration if it is appropriate to do so. During master election or when joining an existing formed cluster, a node sends a join request to the master in order to be officially added to the cluster. diff --git a/deploy-manage/maintenance/ece.md b/deploy-manage/maintenance/ece.md index 56deb2ccb0..e724f8084b 100644 --- a/deploy-manage/maintenance/ece.md +++ b/deploy-manage/maintenance/ece.md @@ -6,7 +6,7 @@ applies_to: # ECE maintenance -Elastic Cloud Enterprise (ECE), being a self-managed Elastic Stack deployment platform, abstracts much of the complexity of running {{es}}, but still requires regular maintenance at both the platform and deployment levels. Maintenance activities range from managing individual deployments to performing infrastructure-level updates on ECE hosts. +{{ece}} (ECE), being a self-managed {{stack}} deployment platform, abstracts much of the complexity of running {{es}}, but still requires regular maintenance at both the platform and deployment levels. Maintenance activities range from managing individual deployments to performing infrastructure-level updates on ECE hosts. 
## Deployment maintenance and host infrastructure maintenance [ece-deployment-host-infra-maintenance] diff --git a/deploy-manage/maintenance/ece/delete-ece-hosts.md b/deploy-manage/maintenance/ece/delete-ece-hosts.md index ecb816436e..60414a7d5d 100644 --- a/deploy-manage/maintenance/ece/delete-ece-hosts.md +++ b/deploy-manage/maintenance/ece/delete-ece-hosts.md @@ -10,10 +10,10 @@ applies_to: You might need to delete hosts for several reasons: -* To remove some resources from your Elastic Cloud Enterprise installation if they are no longer required. -* To remove a faulty host from the Cloud UI so that it is no longer part of your Elastic Cloud Enterprise installation. +* To remove some resources from your {{ece}} installation if they are no longer required. +* To remove a faulty host from the Cloud UI so that it is no longer part of your {{ece}} installation. -Deleting a host only removes the host from your installation, it does not [remove the Elastic Cloud Enterprise software from the host](../../uninstall/uninstall-elastic-cloud-enterprise.md). After the host has been deleted, you can repurpose or troubleshoot the physical host on which the Elastic Cloud Enterprise host was located. +Deleting a host only removes the host from your installation, it does not [remove the {{ece}} software from the host](../../uninstall/uninstall-elastic-cloud-enterprise.md). After the host has been deleted, you can repurpose or troubleshoot the physical host on which the {{ece}} host was located. To delete hosts: diff --git a/deploy-manage/maintenance/ece/enable-maintenance-mode.md b/deploy-manage/maintenance/ece/enable-maintenance-mode.md index 2595b6a181..c94d56b566 100644 --- a/deploy-manage/maintenance/ece/enable-maintenance-mode.md +++ b/deploy-manage/maintenance/ece/enable-maintenance-mode.md @@ -8,7 +8,7 @@ applies_to: # Enable maintenance mode [ece-maintenance-mode] -Maintenance mode lets you perform actions on an allocator safely that might otherwise carry some risk. 
For example, if you want to remove the allocator role from a host, enabling maintenance mode prevents new Elasticsearch clusters and Kibana instances from being provisioned on the allocator whilst you are moving the existing nodes to another allocator or whilst you are removing the role. +Maintenance mode lets you safely perform actions on an allocator that might otherwise carry some risk. For example, if you want to remove the allocator role from a host, enabling maintenance mode prevents new {{es}} clusters and {{kib}} instances from being provisioned on the allocator while you are moving the existing nodes to another allocator or removing the role. To put an allocator into maintenance mode: @@ -17,7 +17,7 @@ To put an allocator into maintenance mode: 3. Choose the allocator you want to work with and select **Enable Maintenance Mode**. Confirm the action. Narrow the list by name, ID, or choose from several other filters. To further define the list, use a combination of filters. -After the allocator enters maintenance mode, no new Elasticsearch nodes or Kibana instances will be started on the allocator. Existing nodes will continue to work as expected. You can now safely perform actions like [moving nodes off the allocator](move-nodes-instances-from-allocators.md). +After the allocator enters maintenance mode, no new {{es}} nodes or {{kib}} instances will be started on the allocator. Existing nodes will continue to work as expected. You can now safely perform actions like [moving nodes off the allocator](move-nodes-instances-from-allocators.md). If you want to make the allocator fully active again, select **Disable Maintenance Mode**. Confirm the action.
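Maintenance mode can also be toggled through the ECE RESTful API. A hedged sketch: the path follows the allocator maintenance-mode endpoints in the ECE API reference, but verify it against your ECE version; the coordinator host, credentials, and allocator ID below are placeholders:

```sh
# Compose the maintenance-mode request for an allocator (all values are placeholders).
COORDINATOR="https://adminconsole:12443"
ALLOCATOR_ID="allocator-192.168.44.10"
URL="$COORDINATOR/api/v1/platform/infrastructure/allocators/$ALLOCATOR_ID/maintenance-mode/_start"
echo "$URL"
# curl -k -X POST -u admin:"$ECE_PASSWORD" "$URL"
# To disable maintenance mode afterwards, POST to .../maintenance-mode/_stop instead.
```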
diff --git a/deploy-manage/maintenance/ece/maintenance-activities.md b/deploy-manage/maintenance/ece/maintenance-activities.md index 52c320aa7f..36ac5839e5 100644 --- a/deploy-manage/maintenance/ece/maintenance-activities.md +++ b/deploy-manage/maintenance/ece/maintenance-activities.md @@ -11,7 +11,7 @@ Maintenance activities ensure the smooth operation and scalability of your {{es} ### [Enable maintenance mode](enable-maintenance-mode.md) -Before performing maintenance on an allocator, you should enable maintenance mode to prevent new Elasticsearch clusters and Kibana instances from being provisioned. This ensures that existing deployments can be safely moved to other allocators or adjusted without disruption. +Before performing maintenance on an allocator, you should enable maintenance mode to prevent new {{es}} clusters and {{kib}} instances from being provisioned. This ensures that existing deployments can be safely moved to other allocators or adjusted without disruption. ### [Scale out installation](scale-out-installation.md) @@ -27,7 +27,7 @@ Maintaining ECE hosts is critical for applying system patches, performing hardwa ### [Delete ECE hosts](delete-ece-hosts.md) -If a host is no longer required or is faulty, it can be removed from the Elastic Cloud Enterprise installation. Deleting a host only removes it from the installation but does not uninstall the software from the physical machine. Before deletion, allocators should be placed in maintenance mode, and nodes should be migrated to avoid disruption. +If a host is no longer required or is faulty, it can be removed from the {{ece}} installation. Deleting a host only removes it from the installation but does not uninstall the software from the physical machine. Before deletion, allocators should be placed in maintenance mode, and nodes should be migrated to avoid disruption. 
## Best practices for maintenance diff --git a/deploy-manage/maintenance/ece/move-nodes-instances-from-allocators.md b/deploy-manage/maintenance/ece/move-nodes-instances-from-allocators.md index 02ace2127b..db3d3d1be7 100644 --- a/deploy-manage/maintenance/ece/move-nodes-instances-from-allocators.md +++ b/deploy-manage/maintenance/ece/move-nodes-instances-from-allocators.md @@ -8,9 +8,9 @@ applies_to: # Move nodes or instances from allocators [ece-move-nodes] -You might need to move Elasticsearch nodes, Kibana instances, and other components of the Elastic Stack between allocators from time to time for a number of reasons: +You might need to move {{es}} nodes, {{kib}} instances, and other components of the {{stack}} between allocators from time to time for a number of reasons: -* To prepare for removing the allocator role from the first host on which you installed Elastic Cloud Enterprise. +* To prepare for removing the allocator role from the first host on which you installed {{ece}}. * To avoid downtime during maintenance: You can create a new allocator, move all deployments from an existing allocator to the new one, and then deal with the allocator that needs maintenance. * To make room on an allocator: You can move some smaller deployments to another allocator if you need additional room for a larger one on an allocator. * To move deployments after a failure: When host failures happen, you can move all deployments from the affected allocator to a healthy allocator quickly before spending any time on fixing the failure. @@ -21,11 +21,11 @@ When you move all nodes from an existing allocator to the new one, ECE migrates ## Before you begin [ece_before_you_begin_9] -Before you move the nodes and instances that are part of a deployment, you need to make sure that you have sufficient capacity on another allocator. 
For example: If you have a deployment with a single 32 GB Elasticsearch node and a 4 GB Kibana instance, the allocator that you are moving the deployment to needs to have at least 36 GB of capacity. Note that moving nodes does not actually move the same node onto a different allocator. Under the covers, Elastic Cloud Enterprise creates a new node and then migrates the data for you. +Before you move the nodes and instances that are part of a deployment, you need to make sure that you have sufficient capacity on another allocator. For example, if you have a deployment with a single 32 GB {{es}} node and a 4 GB {{kib}} instance, the allocator that you are moving the deployment to needs to have at least 36 GB of capacity. Note that moving nodes does not actually move the same node onto a different allocator. Under the covers, {{ece}} creates a new node and then migrates the data for you. -Elastic Cloud Enterprise will adhere to the high availability configuration when moving nodes, so make sure you have the additional capacity available in the relevant availability zone. For example: If you selected to deploy your cluster accross 3 availability zones, nodes can only move to an allocator in the same availability zone as the failed allocator. This is meant to ensure that the cluster can tolerate the failure of 2 availability zones. +{{ece}} adheres to the high availability configuration when moving nodes, so make sure you have the additional capacity available in the relevant availability zone. For example, if you chose to deploy your cluster across 3 availability zones, nodes can only move to an allocator in the same availability zone as the failed allocator. This ensures that the cluster can tolerate the failure of 2 availability zones.
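The capacity check from the example is simple arithmetic. A minimal sketch using the sizes from the text, one 32 GB Elasticsearch node plus one 4 GB Kibana instance, with sizes expressed in MB:

```sh
# Sum the instance sizes of the deployment being moved to get the
# minimum free capacity required on the target allocator.
es_node_mb=32768     # one 32 GB Elasticsearch node
kibana_mb=4096       # one 4 GB Kibana instance
required_mb=$((es_node_mb + kibana_mb))
echo "Target allocator needs at least $((required_mb / 1024)) GB of free capacity"
# → Target allocator needs at least 36 GB of free capacity
```

Remember that in a highly available deployment this free capacity must exist in the same availability zone as the allocator being vacated.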
-If you followed our recommendation and [tagged your allocators](../../deploy/cloud-enterprise/ece-configuring-ece-tag-allocators.md) to indicate what allocators you want components of the Elastic Stack to run on, the spare capacity you plan to use must be available on an allocator with the same tags. If you did not tag your allocators and edit the default instance configurations, ECE will move nodes and instances to wherever there is space. +If you followed our recommendation and [tagged your allocators](../../deploy/cloud-enterprise/ece-configuring-ece-tag-allocators.md) to indicate what allocators you want components of the {{stack}} to run on, the spare capacity you plan to use must be available on an allocator with the same tags. If you did not tag your allocators and edit the default instance configurations, ECE will move nodes and instances to wherever there is space. When you move all nodes from an existing allocator to the new one, ECE migrates the data to new nodes. The migration can take some time, especially when clusters contain large amounts of data and have a heavy workload. Is your cluster under a heavy workload? You might need to [stop routing requests](deployments-maintenance.md) first. diff --git a/deploy-manage/maintenance/ece/perform-ece-hosts-maintenance.md b/deploy-manage/maintenance/ece/perform-ece-hosts-maintenance.md index 7a218030ab..a60b770fb4 100644 --- a/deploy-manage/maintenance/ece/perform-ece-hosts-maintenance.md +++ b/deploy-manage/maintenance/ece/perform-ece-hosts-maintenance.md @@ -8,7 +8,7 @@ applies_to: # Perform ECE hosts maintenance [ece-perform-host-maintenance] -These steps show how you can safely perform maintenance on hosts in your ECE installation. Host maintenance refers to actions that are not part of taking care of Elastic Cloud Enterprise itself and that you might need to perform for a number of different reasons, including: +These steps show how you can safely perform maintenance on hosts in your ECE installation. 
Host maintenance refers to actions that are not part of taking care of {{ece}} itself and that you might need to perform for a number of different reasons, including: * To apply urgent operating system patches or hot fixes * To perform regularly scheduled software or hardware upgrades @@ -21,17 +21,17 @@ You can perform these maintenance actions on the hosts in your ECE installation * [By deleting the host (destructive)](#ece-perform-host-maintenance-delete-runner) * [By shutting down the host (less destructive)](#ece-perform-host-maintenance-delete-runner) -Which method you choose depends on how invasive your host maintenance needs to be. If your host maintenance could affect ECE, use the destructive method that first deletes the host from your installation. These methods include a step that moves any hosted Elasticsearch clusters and Kibana instances off the affected hosts and are generally considered safe, provided that your ECE installation still has sufficient resources available to operate after the host has been removed. +Which method you choose depends on how invasive your host maintenance needs to be. If your host maintenance could affect ECE, use the destructive method that first deletes the host from your installation. These methods include a step that moves any hosted {{es}} clusters and {{kib}} instances off the affected hosts and are generally considered safe, provided that your ECE installation still has sufficient resources available to operate after the host has been removed. ## By disabling the Docker daemon [ece-perform-host-maintenance-docker-disable] -This method lets you perform maintenance actions on hosts without first removing the associated host from your Elastic Cloud Enterprise installation. It works by disabling the Docker daemon. The host remains a part of your ECE installation throughout these steps but will be offline and the resources it provides will not be available. 
+This method lets you perform maintenance actions on hosts without first removing the associated host from your {{ece}} installation. It works by disabling the Docker daemon. The host remains a part of your ECE installation throughout these steps but will be offline and the resources it provides will not be available. To perform host maintenance: 1. Recommended: If the host holds the allocator role and you have enough spare capacity: 1. [Enable maintenance mode](enable-maintenance-mode.md) on the allocator. - 2. [Move all nodes off the allocator](move-nodes-instances-from-allocators.md) and to other allocators in your installation. Moving all nodes lets you retain the same level of redundancy for highly available Elasticsearch clusters and ensures that other clusters without high availability remain available. + 2. [Move all nodes off the allocator](move-nodes-instances-from-allocators.md) and to other allocators in your installation. Moving all nodes lets you retain the same level of redundancy for highly available {{es}} clusters and ensures that other clusters without high availability remain available. ::::{important} Skipping Step 1 will affect the availability of clusters with nodes on the allocator. :::: @@ -71,7 +71,7 @@ After the host shows a green status in the Cloud UI, it is fully functional agai ## By deleting the host (destructive) [ece-perform-host-maintenance-delete-runner] -This method lets you perform potentially destructive maintenance actions on hosts. It works by deleting the associated host, which removes the host from your Elastic Cloud Enterprise installation. To add the host to your ECE installation again after host maintenance is complete, you must reinstall ECE. +This method lets you perform potentially destructive maintenance actions on hosts. It works by deleting the associated host, which removes the host from your {{ece}} installation. 
To add the host to your ECE installation again after host maintenance is complete, you must reinstall ECE. To perform host maintenance: diff --git a/deploy-manage/maintenance/ece/scale-out-installation.md b/deploy-manage/maintenance/ece/scale-out-installation.md index 367e793bbf..9c1ffb2643 100644 --- a/deploy-manage/maintenance/ece/scale-out-installation.md +++ b/deploy-manage/maintenance/ece/scale-out-installation.md @@ -8,7 +8,7 @@ applies_to: # Scale out your installation [ece-add-capacity] -Elastic Cloud Enterprise scales to whatever capacity you need. If you need more processing capacity because your allocators are close to being maxed out or because you want to enable high availability and need an additional availability zone, simply add more capacity and change your deployment configuration to make use of it. +{{ece}} scales to whatever capacity you need. If you need more processing capacity because your allocators are close to being maxed out or because you want to enable high availability and need an additional availability zone, simply add more capacity and change your deployment configuration to make use of it. Check the available capacity: @@ -22,7 +22,7 @@ Check the available capacity: If this is not sufficient, add more capacity to your installation: -1. [Install Elastic Cloud Enterprise on additional hosts](../../deploy/cloud-enterprise/install-ece-on-additional-hosts.md) to create additional capacity. +1. [Install {{ece}} on additional hosts](../../deploy/cloud-enterprise/install-ece-on-additional-hosts.md) to create additional capacity. 2. [Add capacity](https://www.elastic.co/docs/api/doc/cloud-enterprise/operation/operation-set-allocator-settings) to existing allocators by updating the allocator settings when adding memory to the host. 3. [Assign roles](../../deploy/cloud-enterprise/assign-roles-to-hosts.md) to the additional hosts. If you need to handle a larger search or logging workload, assign the new hosts the allocator role. 4. 
(Optional) [Tag allocators](../../deploy/cloud-enterprise/ece-configuring-ece-tag-allocators.md) to the new host to indicate what kind of hardware you have available. diff --git a/deploy-manage/maintenance/start-stop-routing-requests.md b/deploy-manage/maintenance/start-stop-routing-requests.md index ee5ebcf940..38c8b472a0 100644 --- a/deploy-manage/maintenance/start-stop-routing-requests.md +++ b/deploy-manage/maintenance/start-stop-routing-requests.md @@ -23,7 +23,7 @@ It might be helpful to temporarily block upstream requests in order to protect s * ECE and ECH will automatically set and remove routing blocks during plan changes. Elastic recommends avoiding manually overriding these settings for a deployment while its plans are pending. * The [{{es}} API console](/explore-analyze/query-filter/tools/console.md) bypasses cloud proxy routing blocks against {{es}} to enable administrative tasks while plan changes are pending. You should generally default traffic to the {{es}} endpoint. However, if you enable **Stop routing requests** across all {{es}} nodes, you need to use this UI to administer your cluster. -* While {{es}} has **Stop routing requests** set across all nodes, other products with the deployment may become unhealthy. This is because {{es}} is a prerequisite for those other products, such as {{kib}}. In {{kib}}, this results in a [**Kibana server is not ready yet**](/troubleshoot/kibana/error-server-not-ready.md) message. +* While {{es}} has **Stop routing requests** set across all nodes, other products with the deployment may become unhealthy. This is because {{es}} is a prerequisite for those other products, such as {{kib}}. In {{kib}}, this results in a [**{{kib}} server is not ready yet**](/troubleshoot/kibana/error-server-not-ready.md) message. * In {{ech}}, enabling **Stop routing requests** does not affect your [billing](/deploy-manage/cloud-organization/billing.md). 
If needed, you can stop charges for a deployment by [deleting the deployment](/deploy-manage/uninstall/delete-a-cloud-deployment.md). ## Stop routing requests [stop-routing-requests] diff --git a/deploy-manage/maintenance/start-stop-services.md b/deploy-manage/maintenance/start-stop-services.md index f0173b0ebf..219f4d8c30 100644 --- a/deploy-manage/maintenance/start-stop-services.md +++ b/deploy-manage/maintenance/start-stop-services.md @@ -10,8 +10,8 @@ applies_to: This section covers the recommended procedures for: -* [starting and stopping self-managed Elasticsearch nodes](start-stop-services/start-stop-elasticsearch.md) -* [starting and stopping self-managed Kibana instances](start-stop-services/start-stop-kibana.md) +* [starting and stopping self-managed {{es}} nodes](start-stop-services/start-stop-elasticsearch.md) +* [starting and stopping self-managed {{kib}} instances](start-stop-services/start-stop-kibana.md) * [restarting an ECE deployment](start-stop-services/restart-an-ece-deployment.md) * [restarting {{ech}} deployments](start-stop-services/restart-cloud-hosted-deployment.md) * [full cluster and rolling restarts for self-managed clusters](start-stop-services/full-cluster-restart-rolling-restart-procedures.md) diff --git a/deploy-manage/maintenance/start-stop-services/restart-cloud-hosted-deployment.md b/deploy-manage/maintenance/start-stop-services/restart-cloud-hosted-deployment.md index a1d06faa4d..1b6a0e1f00 100644 --- a/deploy-manage/maintenance/start-stop-services/restart-cloud-hosted-deployment.md +++ b/deploy-manage/maintenance/start-stop-services/restart-cloud-hosted-deployment.md @@ -23,7 +23,7 @@ Note that if you are looking to restart {{es}} to clear out [deployment activity ## Restart an {{es}} resource by using an API [ec_restart_an_elasticsearch_resource] -Restart an Elasticsearch resource by calling the following API request: +Restart an {{es}} resource by calling the following API request: ```sh curl -XPOST \ diff --git 
a/deploy-manage/maintenance/start-stop-services/start-stop-kibana.md b/deploy-manage/maintenance/start-stop-services/start-stop-kibana.md index 9390bc9c0f..112a697a55 100644 --- a/deploy-manage/maintenance/start-stop-services/start-stop-kibana.md +++ b/deploy-manage/maintenance/start-stop-services/start-stop-kibana.md @@ -6,7 +6,7 @@ applies_to: self: --- -# Start and stop Kibana [start-stop] +# Start and stop {{kib}} [start-stop] The method for starting and stopping {{kib}} varies depending on how you installed it. If a password protected keystore is used, the environment variable `KBN_KEYSTORE_PASSPHRASE_FILE` can be used to point to a file containing the password, the environment variable `KEYSTORE_PASSWORD` can be defined, or you will be prompted to enter to enter the password on startup, @@ -16,13 +16,13 @@ If you installed {{kib}} on Linux or Darwin with a `.tar.gz` package, you can st ### Run {{kib}} from the command line [run-kibana-from-command-line] -Kibana can be started from the command line as follows: +{{kib}} can be started from the command line as follows: ```sh ./bin/kibana ``` -By default, Kibana runs in the foreground, prints its logs to the standard output (`stdout`), and can be stopped by pressing **Ctrl-C**. +By default, {{kib}} runs in the foreground, prints its logs to the standard output (`stdout`), and can be stopped by pressing **Ctrl-C**. If this is the first time you’re starting {{kib}}, this command generates a unique link in your terminal to enroll your {{kib}} instance with {{es}}. @@ -41,13 +41,13 @@ If you installed {{kib}} on Windows with a `.zip` package, you can stop and star ### Run {{kib}} from the command line [_run_kib_from_the_command_line_2] -Kibana can be started from the command line as follows: +{{kib}} can be started from the command line as follows: ```sh .\bin\kibana.bat ``` -By default, Kibana runs in the foreground, prints its logs to `STDOUT`, and can be stopped by pressing **Ctrl-C**. 
+By default, {{kib}} runs in the foreground, prints its logs to `STDOUT`, and can be stopped by pressing **Ctrl-C**. If this is the first time you’re starting {{kib}}, this command generates a unique link in your terminal to enroll your {{kib}} instance with {{es}}. diff --git a/deploy-manage/monitor/autoops.md b/deploy-manage/monitor/autoops.md index d374d8f170..500c14fb9e 100644 --- a/deploy-manage/monitor/autoops.md +++ b/deploy-manage/monitor/autoops.md @@ -8,7 +8,7 @@ applies_to: # AutoOps [ec-autoops] -AutoOps diagnoses issues in Elasticsearch by analyzing hundreds of metrics, providing root-cause analysis and accurate resolution paths. With AutoOps, customers can prevent and resolve issues, cut down administration time, and optimize resource utilization. +AutoOps diagnoses issues in {{es}} by analyzing hundreds of metrics, providing root-cause analysis and accurate resolution paths. With AutoOps, customers can prevent and resolve issues, cut down administration time, and optimize resource utilization. AutoOps is currently only available for [{{ech}} deployments](/deploy-manage/deploy/elastic-cloud/cloud-hosted.md). diff --git a/deploy-manage/monitor/autoops/ec-autoops-faq.md b/deploy-manage/monitor/autoops/ec-autoops-faq.md index 71331cafdb..8a17197aa2 100644 --- a/deploy-manage/monitor/autoops/ec-autoops-faq.md +++ b/deploy-manage/monitor/autoops/ec-autoops-faq.md @@ -11,22 +11,22 @@ applies_to: This frequently-asked-questions list answers some of your more common questions about AutoOps. $$$faq-what-is-autoops$$$What is AutoOps? What does it do? -: AutoOps for Elasticsearch significantly simplifies cluster management with performance recommendations, resource utilization and cost insights, real-time issue detection and resolution paths. By analyzing hundreds of Elasticsearch metrics, your configuration, and usage patterns, AutoOps recommends operational and monitoring insights that deliver savings in administration time and hardware costs. 
+: AutoOps for {{es}} significantly simplifies cluster management with performance recommendations, resource utilization and cost insights, real-time issue detection and resolution paths. By analyzing hundreds of {{es}} metrics, your configuration, and usage patterns, AutoOps recommends operational and monitoring insights that deliver savings in administration time and hardware costs. $$$faq-autoops-availability$$$When will AutoOps be available for Self-hosted and Serverless users? : AutoOps will be available for Self-hosted and Serverless customers with a different set of capabilities in the future. -$$$faq-autoops-monitoring$$$Does AutoOps monitor the entire Elastic Stack? +$$$faq-autoops-monitoring$$$Does AutoOps monitor the entire {{stack}}? : AutoOps is currently limited to {{es}} (not {{kib}}, Logstash and Beats). -$$$faq-autoops-supported-versions$$$What versions of Elasticsearch are supported for Elastic Cloud Hosted? -: AutoOps supports {es} versions according to the [supported Elastic Stack versions](https://www.elastic.co/support/eol). +$$$faq-autoops-supported-versions$$$What versions of {{es}} are supported for {{ech}}? +: AutoOps supports {{es}} versions according to the [supported {{stack}} versions](https://www.elastic.co/support/eol). $$$faq-autoops-license$$$How is AutoOps currently licensed? -: AutoOps current feature set is available to Elastic Cloud Hosted customers at all subscription tiers. For more information please refer to the [subscription page](https://www.elastic.co/subscriptions/cloud). +: AutoOps' current feature set is available to {{ech}} customers at all subscription tiers. For more information, refer to the [subscription page](https://www.elastic.co/subscriptions/cloud). $$$faq-autoops-installation$$$How does AutoOps get installed and why may I not see AutoOps available on specific deployments? -: AutoOps is automatically applied to {{es}} clusters on Elastic Cloud, rolling out in phases across CSPs and regions. 
Read more about AutoOps [roll out](ec-autoops-regions.md) status. +: AutoOps is automatically applied to {{es}} clusters on {{ecloud}}, rolling out in phases across CSPs and regions. Read more about AutoOps [roll out](ec-autoops-regions.md) status. $$$faq-autoops-issue-resolution$$$Can AutoOps currently automatically resolve issues? : AutoOps only analyzes metrics, and is a “read-only” solution. diff --git a/deploy-manage/monitor/autoops/ec-autoops-how-to-access.md b/deploy-manage/monitor/autoops/ec-autoops-how-to-access.md index 107e55e42e..6d105da0ad 100644 --- a/deploy-manage/monitor/autoops/ec-autoops-how-to-access.md +++ b/deploy-manage/monitor/autoops/ec-autoops-how-to-access.md @@ -9,13 +9,13 @@ applies_to: # How to access AutoOps [ec-autoops-how-to-access] ::::{note} -AutoOps supports {{es}} versions according to the [supported Elastic Stack versions](https://www.elastic.co/support/eol). +AutoOps supports {{es}} versions according to the [supported {{stack}} versions](https://www.elastic.co/support/eol). :::: -To access AutoOps from your Elastic Cloud console, follow these steps: +To access AutoOps from your {{ecloud}} console, follow these steps: -1. Log in to your Elastic Cloud Hosted account. Use your credentials to access your Elastic Cloud dashboard. +1. Log in to your {{ech}} account. Use your credentials to access your {{ecloud}} dashboard. 2. Navigate through your list of deployments and locate the one you want to manage. 3. Click **Manage** on the right side of the selected deployment. 4. On the deployment details page, click **Open AutoOps**. 
diff --git a/deploy-manage/monitor/autoops/ec-autoops-overview-view.md b/deploy-manage/monitor/autoops/ec-autoops-overview-view.md index 5c92ff3d5c..80808754b3 100644 --- a/deploy-manage/monitor/autoops/ec-autoops-overview-view.md +++ b/deploy-manage/monitor/autoops/ec-autoops-overview-view.md @@ -8,7 +8,7 @@ applies_to: # Overview [ec-autoops-overview-view] -The **Overview** page displays the current status of customer deployments in Elastic Cloud Hosted that are linked to the same Elastic organization. +The **Overview** page displays the current status of customer deployments in {{ech}} that are linked to the same Elastic organization. :::{image} /deploy-manage/images/cloud-autoops-overview-page.png :alt: The Overview page diff --git a/deploy-manage/monitor/cloud-health-perf.md b/deploy-manage/monitor/cloud-health-perf.md index c83c333dc4..8a5436ddd1 100644 --- a/deploy-manage/monitor/cloud-health-perf.md +++ b/deploy-manage/monitor/cloud-health-perf.md @@ -118,7 +118,7 @@ deployment: ess: ``` -Elastic Cloud allows smaller instance sizes to get temporarily boosted vCPU when under heavy load. [vCPU boosting](/deploy-manage/deploy/elastic-cloud/ec-vcpu-boost-instance.md) is governed by vCPU credits that instances can earn over time when vCPU usage is less than the assigned amount. +{{ecloud}} allows smaller instance sizes to get temporarily boosted vCPU when under heavy load. [vCPU boosting](/deploy-manage/deploy/elastic-cloud/ec-vcpu-boost-instance.md) is governed by vCPU credits that instances can earn over time when vCPU usage is less than the assigned amount. You can check the **Monitoring > Performance > CPU Credits** section of the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body), and find the related metrics: @@ -139,7 +139,7 @@ We’ve compiled some guidelines to help you ensure the health of your deploymen : Provide instructions on how to troubleshoot issues related to unassigned shards. 
[](/troubleshoot/monitoring/performance.md) -: Address performance degradation on a smaller size Elasticsearch cluster. +: Address performance degradation on a smaller-sized {{es}} cluster. [](/troubleshoot/monitoring/high-availability.md) : High availability involves more than setting multiple availability zones (although that’s really important!). Learn how to assess performance and workloads to determine if your deployment has adequate resources to mitigate a potential node failure. diff --git a/deploy-manage/monitor/kibana-task-manager-health-monitoring.md b/deploy-manage/monitor/kibana-task-manager-health-monitoring.md index cb81fe7b02..a4f9810e9b 100644 --- a/deploy-manage/monitor/kibana-task-manager-health-monitoring.md +++ b/deploy-manage/monitor/kibana-task-manager-health-monitoring.md @@ -1,12 +1,12 @@ --- -navigation_title: "Kibana task manager monitoring" +navigation_title: "{{kib}} task manager monitoring" mapped_pages: - https://www.elastic.co/guide/en/kibana/current/task-manager-health-monitoring.html applies_to: stack: preview --- -# Kibana task manager health monitoring [task-manager-health-monitoring] +# {{kib}} task manager health monitoring [task-manager-health-monitoring] ::::{warning} @@ -86,7 +86,7 @@ By default, the health API runs at a regular cadence, and each time it runs, it This message looks like: ```txt -Detected potential performance issue with Task Manager. Set 'xpack.task_manager.monitored_stats_health_verbose_log.enabled: true' in your Kibana.yml to enable debug logging` +Detected potential performance issue with Task Manager. Set 'xpack.task_manager.monitored_stats_health_verbose_log.enabled: true' in your Kibana.yml to enable debug logging` ``` If this message appears, set [`xpack.task_manager.monitored_stats_health_verbose_log.enabled`](kibana://reference/configuration-reference/task-manager-settings.md#task-manager-settings) to `true` in your `kibana.yml`. 
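For reference, enabling this in `kibana.yml` is a one-line change (a minimal sketch; restart {{kib}} afterward for the setting to take effect):

```yaml
# kibana.yml — enable verbose task manager health logging
xpack.task_manager.monitored_stats_health_verbose_log.enabled: true
```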
This will start logging the health metrics at either a `warn` or `error` log level, depending on the detected severity of the potential problem. diff --git a/deploy-manage/monitor/logging-configuration/elasticsearch-deprecation-logs.md b/deploy-manage/monitor/logging-configuration/elasticsearch-deprecation-logs.md index 6b3e5d9820..24286d004f 100644 --- a/deploy-manage/monitor/logging-configuration/elasticsearch-deprecation-logs.md +++ b/deploy-manage/monitor/logging-configuration/elasticsearch-deprecation-logs.md @@ -9,7 +9,7 @@ applies_to: self: all --- -# Elasticsearch deprecation logs [logging] +# {{es}} deprecation logs [logging] {{es}} writes deprecation logs to the [log directory](/deploy-manage/monitor/logging-configuration.md#access-kib-and-es-logs). These logs record a message when you use deprecated {{es}} functionality. You can use the deprecation logs to update your application before upgrading {{es}} to a new major version. diff --git a/deploy-manage/monitor/logging-configuration/kibana-logging.md b/deploy-manage/monitor/logging-configuration/kibana-logging.md index 6780693d45..788f64d8a3 100644 --- a/deploy-manage/monitor/logging-configuration/kibana-logging.md +++ b/deploy-manage/monitor/logging-configuration/kibana-logging.md @@ -9,7 +9,7 @@ applies_to: ess: --- -# Kibana logging [logging-configuration] +# {{kib}} logging [logging-configuration] $$$pattern-layout$$$ $$$time-interval-triggering-policy$$$ @@ -17,7 +17,7 @@ $$$size-limit-triggering-policy$$$ $$$logging-appenders$$$ $$$dedicated-loggers$$$ -You do not need to configure any additional settings to use the logging features in Kibana. Logging is enabled by default. +You do not need to configure any additional settings to use the logging features in {{kib}}. Logging is enabled by default. In all deployment types, you might want to change the log level for {{kib}}. 
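As an illustration of what such a customization can look like in a self-managed `kibana.yml` (a sketch only; the `file` appender name and log path are assumptions for this example):

```yaml
# kibana.yml — write JSON logs to a file in addition to the default appender
logging:
  appenders:
    file:
      type: file
      fileName: /var/log/kibana/kibana.json   # illustrative path
      layout:
        type: json
  root:
    appenders: [default, file]
    level: info
```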
In a self-managed, ECE, or ECK deployment, you might want to further customize your logging settings to define where log messages are displayed, stored, and formatted, or provide granular settings for different loggers. diff --git a/deploy-manage/monitor/logging-configuration/update-elasticsearch-logging-levels.md b/deploy-manage/monitor/logging-configuration/update-elasticsearch-logging-levels.md index df67b0780c..77486ad86d 100644 --- a/deploy-manage/monitor/logging-configuration/update-elasticsearch-logging-levels.md +++ b/deploy-manage/monitor/logging-configuration/update-elasticsearch-logging-levels.md @@ -9,7 +9,7 @@ applies_to: self: all --- -# Update Elasticsearch logging levels [logging] +# Update {{es}} logging levels [logging] $$$deprecation-logging$$$ $$$_deprecation_logs_throttling$$$ diff --git a/deploy-manage/monitor/monitoring-data/configure-stack-monitoring-alerts.md b/deploy-manage/monitor/monitoring-data/configure-stack-monitoring-alerts.md index 4c10e71f8f..d24a87f331 100644 --- a/deploy-manage/monitor/monitoring-data/configure-stack-monitoring-alerts.md +++ b/deploy-manage/monitor/monitoring-data/configure-stack-monitoring-alerts.md @@ -142,6 +142,6 @@ An action is triggered if any of the following conditions are met within the las The 60-day and 30-day thresholds are skipped for Trial licenses, which are only valid for 30 days. :::{note} -For the `Elasticsearch nodes changed` alert, if you have only one master node in your cluster, during the master node vacate no notification will be sent. Kibana needs to communicate with the master node in order to send a notification. One way to avoid this is by shipping your deployment metrics to a dedicated monitoring cluster. +For the `Elasticsearch nodes changed` alert, if you have only one master node in your cluster, no notification will be sent while the master node is being vacated. {{kib}} needs to communicate with the master node in order to send a notification. 
One way to avoid this is by shipping your deployment metrics to a dedicated monitoring cluster. ::: :::: \ No newline at end of file diff --git a/deploy-manage/monitor/monitoring-data/elasticsearch-metrics.md b/deploy-manage/monitor/monitoring-data/elasticsearch-metrics.md index d034bf37b8..dda0e98150 100644 --- a/deploy-manage/monitor/monitoring-data/elasticsearch-metrics.md +++ b/deploy-manage/monitor/monitoring-data/elasticsearch-metrics.md @@ -11,7 +11,7 @@ applies_to: -# Elasticsearch metrics [elasticsearch-metrics] +# {{es}} metrics [elasticsearch-metrics] You can drill down into the status of your {{es}} cluster in {{kib}} by clicking the [Overview](#cluster-overview-page), [Nodes](#nodes-page), [Indices](#indices-overview-page) and [Logs](#logs-monitor-page) links on the **Stack Monitoring** page. @@ -36,7 +36,7 @@ Conditions that require your attention are listed at the top of the Clusters pag The panel at the top shows the current cluster statistics, the charts show the search and indexing performance over time, and the table at the bottom shows information about any shards that are being recovered. If you use {{filebeat}} to collect log data from this cluster, you can also see its recent logs. :::{image} /deploy-manage/images/kibana-monitoring-overview.png -:alt: Elasticsearch Cluster Overview +:alt: {{es}} Cluster Overview :screenshot: ::: diff --git a/deploy-manage/monitor/monitoring-data/kibana-page.md b/deploy-manage/monitor/monitoring-data/kibana-page.md index 6bae59aa30..d631408bfd 100644 --- a/deploy-manage/monitor/monitoring-data/kibana-page.md +++ b/deploy-manage/monitor/monitoring-data/kibana-page.md @@ -11,13 +11,13 @@ applies_to: -# Kibana metrics [kibana-page] +# {{kib}} metrics [kibana-page] To view the key metrics that indicate the overall health of {{kib}} itself, click **Overview** in the {{kib}} section of the **Stack Monitoring** page. 
:::{image} /deploy-manage/images/kibana-monitoring-kibana-overview.png -:alt: Kibana Overview +:alt: {{kib}} Overview :screenshot: ::: diff --git a/deploy-manage/monitor/monitoring-data/visualizing-monitoring-data.md b/deploy-manage/monitor/monitoring-data/visualizing-monitoring-data.md index 9e31924758..08e1fe4aa9 100644 --- a/deploy-manage/monitor/monitoring-data/visualizing-monitoring-data.md +++ b/deploy-manage/monitor/monitoring-data/visualizing-monitoring-data.md @@ -22,7 +22,7 @@ Review the following topics to learn more about the metrics and visualizations a * [](/deploy-manage/monitor/monitoring-data/beats-page.md) * [](/deploy-manage/monitor/monitoring-data/logstash-page.md) -From the **Stack Monitoring** section, you can also configure [Kibana alerts](/deploy-manage/monitor/monitoring-data/configure-stack-monitoring-alerts.md) for stack monitoring metrics. +From the **Stack Monitoring** section, you can also configure [{{kib}} alerts](/deploy-manage/monitor/monitoring-data/configure-stack-monitoring-alerts.md) for stack monitoring metrics. If you're having trouble accessing your monitoring data within the **Stack Monitoring** section, then refer to [](/deploy-manage/monitor/monitoring-data/monitor-troubleshooting.md). diff --git a/deploy-manage/monitor/orchestrators/ece-monitoring-ece-access.md b/deploy-manage/monitor/orchestrators/ece-monitoring-ece-access.md index fe854796ff..d540568bb8 100644 --- a/deploy-manage/monitor/orchestrators/ece-monitoring-ece-access.md +++ b/deploy-manage/monitor/orchestrators/ece-monitoring-ece-access.md @@ -36,7 +36,7 @@ To access logs and metrics for your deployment: 3. Depending on the logs or metrics that you want to access, click on the **Elasticsearch** or **Kibana** page. ::::{tip} - If you have a license from 2018 or earlier, you might receive a warning that your cluster license is about to expire. Don’t panic, it isn’t really. Elastic Cloud Enterprise manages the cluster licenses so that you don’t have to. 
In rare cases, such as when a cluster is overloaded, it can take longer for Elastic Cloud Enterprise to reapply the cluster license. If you have a license from 2019 and later, you’ll receive a warning only when your full platform license is about to expire, which you’ll need to renew. + If you have a license from 2018 or earlier, you might receive a warning that your cluster license is about to expire. Don’t panic, it isn’t really expiring. {{ece}} manages the cluster licenses so that you don’t have to. In rare cases, such as when a cluster is overloaded, it can take longer for {{ece}} to reapply the cluster license. If you have a license from 2019 or later, you’ll receive a warning only when your full platform license is about to expire, which you’ll need to renew. :::: 4. Select one of the links and log in with the `elastic` user of the `logging-and-metrics` deployment. If you do not know the password, you can [reset it first](../../users-roles/cluster-or-deployment-auth/built-in-users.md). diff --git a/deploy-manage/monitor/orchestrators/ece-monitoring-ece-set-retention.md b/deploy-manage/monitor/orchestrators/ece-monitoring-ece-set-retention.md index 7bcef52379..0d41177a3b 100644 --- a/deploy-manage/monitor/orchestrators/ece-monitoring-ece-set-retention.md +++ b/deploy-manage/monitor/orchestrators/ece-monitoring-ece-set-retention.md @@ -8,7 +8,7 @@ applies_to: # Set the retention period for logging and metrics indices [ece-monitoring-ece-set-retention] -Elastic Cloud Enterprise sets up default index lifecycle management (ILM) policies on the logging and metrics indices it collects. By default, metrics indices are kept for one day and logging indices are kept for seven days. This retention period can be adjusted. +{{ece}} sets up default index lifecycle management (ILM) policies on the logging and metrics indices it collects. By default, metrics indices are kept for one day and logging indices are kept for seven days. This retention period can be adjusted. 
You might need to adjust the retention period for one of the following reasons: diff --git a/deploy-manage/monitor/orchestrators/ece-platform-monitoring.md b/deploy-manage/monitor/orchestrators/ece-platform-monitoring.md index 7fa1064ca9..db4e39ea71 100644 --- a/deploy-manage/monitor/orchestrators/ece-platform-monitoring.md +++ b/deploy-manage/monitor/orchestrators/ece-platform-monitoring.md @@ -26,5 +26,5 @@ In this section, you'll learn the following about using ECE platform monitoring: For information about troubleshooting {{ECE}} using these metrics, and guidance on capturing other diagnostic information like heap dumps and thread dumps, refer to [](/troubleshoot/deployments/cloud-enterprise/cloud-enterprise.md). ::::{important} -The `logging-and-metrics` deployment is for use by your ECE installation only. You must not use this deployment to index monitoring data from your own Elasticsearch clusters or use it to index data from Beats and Logstash. Always create a separate, dedicated monitoring deployment for your own use. +The `logging-and-metrics` deployment is for use by your ECE installation only. You must not use this deployment to index monitoring data from your own {{es}} clusters or use it to index data from Beats and Logstash. Always create a separate, dedicated monitoring deployment for your own use. :::: \ No newline at end of file diff --git a/deploy-manage/monitor/stack-monitoring/collecting-monitoring-data-with-elastic-agent.md b/deploy-manage/monitor/stack-monitoring/collecting-monitoring-data-with-elastic-agent.md index d672494ab6..a32b933b3a 100644 --- a/deploy-manage/monitor/stack-monitoring/collecting-monitoring-data-with-elastic-agent.md +++ b/deploy-manage/monitor/stack-monitoring/collecting-monitoring-data-with-elastic-agent.md @@ -34,8 +34,8 @@ To collect {{es}} monitoring data, add an {{es}} integration to an {{agent}} and 5. Configure the integration name and optionally add a description. 
Make sure you configure all required settings: - 1. Under **Collect Elasticsearch logs**, modify the log paths to match your {{es}} environment. - 2. Under **Collect Elasticsearch metrics**, make sure the hosts setting points to your {{es}} host URLs. By default, the integration collects {{es}} monitoring metrics from `localhost:9200`. If that host and port number are not correct, update the `hosts` setting. If you configured {{es}} to use encrypted communications, you must access it via HTTPS. For example, use a `hosts` setting like `https://localhost:9200`. + 1. Under **Collect {{es}} logs**, modify the log paths to match your {{es}} environment. + 2. Under **Collect {{es}} metrics**, make sure the hosts setting points to your {{es}} host URLs. By default, the integration collects {{es}} monitoring metrics from `localhost:9200`. If that host and port number are not correct, update the `hosts` setting. If you configured {{es}} to use encrypted communications, you must access it via HTTPS. For example, use a `hosts` setting like `https://localhost:9200`. 3. Expand **Advanced options**. If the Elastic {{security-features}} are enabled, enter the username and password of a user that has the `remote_monitoring_collector` role. 4. Specify the scope: diff --git a/deploy-manage/monitor/stack-monitoring/ece-ech-stack-monitoring.md b/deploy-manage/monitor/stack-monitoring/ece-ech-stack-monitoring.md index 4826cb6a33..2c67b8a04e 100644 --- a/deploy-manage/monitor/stack-monitoring/ece-ech-stack-monitoring.md +++ b/deploy-manage/monitor/stack-monitoring/ece-ech-stack-monitoring.md @@ -30,7 +30,7 @@ Monitoring consists of two components: With logging and monitoring enabled for a deployment, metrics are collected for {{es}}, {{kib}}, and APM with Fleet Server. :::{admonition} Simplify monitoring with AutoOps -If you’re using Elastic Cloud Hosted, then you can use AutoOps to monitor your cluster. 
AutoOps significantly simplifies cluster management with performance recommendations, resource utilization visibility, real-time issue detection and resolution paths. +If you’re using {{ech}}, then you can use AutoOps to monitor your cluster. AutoOps significantly simplifies cluster management with performance recommendations, resource utilization visibility, real-time issue detection and resolution paths. For more information, refer to [Monitor with AutoOps](/deploy-manage/monitor/autoops.md). ::: @@ -38,7 +38,7 @@ For more information, refer to [Monitor with AutoOps](/deploy-manage/monitor/aut ## Before you begin [logging-and-monitoring-limitations] * Some limitations apply when you use monitoring on ECH or ECE. To learn more, check the monitoring [restrictions and limitations](#restrictions-monitoring). -* Enabling logs and monitoring consumes extra resources on a deployment. For production systems, we recommend sizing deployments with logs and monitoring enabled to at least 4 GB of RAM on each Elasticsearch instance. +* Enabling logs and monitoring consumes extra resources on a deployment. For production systems, we recommend sizing deployments with logs and monitoring enabled to at least 4 GB of RAM on each {{es}} instance. ## Monitoring for production use [logging-and-monitoring-production] @@ -67,7 +67,7 @@ Elastic manages the installation and configuration of the monitoring agent for y **Before you start** -- Enabling logging and monitoring increases the resource consumption of the deployment. For production systems, we recommend sizing deployments with logging and monitoring enabled to at least 4 GB of RAM on each Elasticsearch instance. +- Enabling logging and monitoring increases the resource consumption of the deployment. For production systems, we recommend sizing deployments with logging and monitoring enabled to at least 4 GB of RAM on each {{es}} instance. - Enabling logging and monitoring can trigger a plan change on your deployment. 
You can monitor the plan change progress from the deployment's **Activity** page. To enable monitoring on your deployment: @@ -116,7 +116,7 @@ When shipping logs to a monitoring deployment there are more logging features av * [Slow query and index logging](elasticsearch://reference/elasticsearch/index-settings/slow-log.md) - helps find and debug slow queries and indexing * Verbose logging - helps debug stack issues by increasing component logs -After you’ve enabled log delivery on your deployment, you can [add the Elasticsearch user settings](/deploy-manage/deploy/cloud-enterprise/edit-stack-settings.md) to enable these features. +After you’ve enabled log delivery on your deployment, you can [add the {{es}} user settings](/deploy-manage/deploy/cloud-enterprise/edit-stack-settings.md) to enable these features. ### For {{kib}} [extra-logging-features-kibana] diff --git a/deploy-manage/monitor/stack-monitoring/eck-stack-monitoring.md b/deploy-manage/monitor/stack-monitoring/eck-stack-monitoring.md index a5732a7830..1878ef7fa2 100644 --- a/deploy-manage/monitor/stack-monitoring/eck-stack-monitoring.md +++ b/deploy-manage/monitor/stack-monitoring/eck-stack-monitoring.md @@ -14,7 +14,7 @@ applies_to: # Enable stack monitoring on ECK deployments [k8s-stack-monitoring] -You can enable [Stack Monitoring](/deploy-manage/monitor.md) on Elasticsearch, Kibana, Beats and Logstash to collect and ship their metrics and logs to a monitoring cluster. Although self-monitoring is possible, it is advised to use a [separate monitoring cluster](/deploy-manage/monitor/stack-monitoring.md). +You can enable [Stack Monitoring](/deploy-manage/monitor.md) on {{es}}, {{kib}}, Beats and Logstash to collect and ship their metrics and logs to a monitoring cluster. Although self-monitoring is possible, it is advised to use a [separate monitoring cluster](/deploy-manage/monitor/stack-monitoring.md). ## How stack monitoring works in ECK @@ -107,7 +107,7 @@ spec: 2. 
The use of `namespace` is optional if the monitoring {{es}} cluster and the monitored {{stack}} resource are running in the same namespace. ::::{note} -If stack monitoring is configured for a Beat, but the corresponding {{es}} cluster is not monitored, the Kibana stack monitoring page will not show the Beats data. +If stack monitoring is configured for a Beat, but the corresponding {{es}} cluster is not monitored, the {{kib}} stack monitoring page will not show the Beats data. :::: ::::{note} diff --git a/deploy-manage/monitor/stack-monitoring/es-monitoring-collectors.md b/deploy-manage/monitor/stack-monitoring/es-monitoring-collectors.md index ff816bedaa..90083b3618 100644 --- a/deploy-manage/monitor/stack-monitoring/es-monitoring-collectors.md +++ b/deploy-manage/monitor/stack-monitoring/es-monitoring-collectors.md @@ -43,9 +43,9 @@ Collection is currently done serially, rather than in parallel, to avoid extra o For more information about the configuration options for the collectors, see [Monitoring collection settings](elasticsearch://reference/elasticsearch/configuration-reference/monitoring-settings.md#monitoring-collection-settings). -## Collecting data from across the Elastic Stack [es-monitoring-stack] +## Collecting data from across the {{stack}} [es-monitoring-stack] -{{es}} {{monitor-features}} also receive monitoring data from other parts of the Elastic Stack. In this way, it serves as an unscheduled monitoring data collector for the stack. +{{es}} {{monitor-features}} also receive monitoring data from other parts of the {{stack}}. In this way, it serves as an unscheduled monitoring data collector for the stack. By default, data collection is disabled. {{es}} monitoring data is not collected and all monitoring data from other sources such as {{kib}}, Beats, and Logstash is ignored. You must set `xpack.monitoring.collection.enabled` to `true` to enable the collection of monitoring data. 
See [Monitoring settings](elasticsearch://reference/elasticsearch/configuration-reference/monitoring-settings.md). diff --git a/deploy-manage/monitor/stack-monitoring/es-monitoring-exporters.md b/deploy-manage/monitor/stack-monitoring/es-monitoring-exporters.md index cba50e865d..92b4343beb 100644 --- a/deploy-manage/monitor/stack-monitoring/es-monitoring-exporters.md +++ b/deploy-manage/monitor/stack-monitoring/es-monitoring-exporters.md @@ -12,7 +12,7 @@ applies_to: :::{include} _snippets/legacy-warning.md ::: -The purpose of exporters is to take data collected from any Elastic Stack source and route it to the monitoring cluster. It is possible to configure more than one exporter, but the general and default setup is to use a single exporter. +The purpose of exporters is to take data collected from any {{stack}} source and route it to the monitoring cluster. It is possible to configure more than one exporter, but the general and default setup is to use a single exporter. There are two types of exporters in {{es}}: diff --git a/deploy-manage/monitor/stack-monitoring/kibana-monitoring-elastic-agent.md b/deploy-manage/monitor/stack-monitoring/kibana-monitoring-elastic-agent.md index 9d00dfdbe4..8491b9f954 100644 --- a/deploy-manage/monitor/stack-monitoring/kibana-monitoring-elastic-agent.md +++ b/deploy-manage/monitor/stack-monitoring/kibana-monitoring-elastic-agent.md @@ -41,8 +41,8 @@ To collect {{kib}} monitoring data, add a {{kib}} integration to an {{agent}} an 5. Configure the integration name and optionally add a description. Make sure you configure all required settings: - * Under **Collect Kibana logs**, modify the log paths to match your {{kib}} environment. - * Under **Collect Kibana metrics**, make sure the hosts setting points to your Kibana host URLs. By default, the integration collects {{kib}} monitoring metrics from `localhost:5601`. If that host and port number are not correct, update the `hosts` setting. 
If you configured {{kib}} to use encrypted communications, you must access it via HTTPS. For example, use a `hosts` setting like `https://localhost:5601`. + * Under **Collect {{kib}} logs**, modify the log paths to match your {{kib}} environment. + * Under **Collect {{kib}} metrics**, make sure the hosts setting points to your {{kib}} host URLs. By default, the integration collects {{kib}} monitoring metrics from `localhost:5601`. If that host and port number are not correct, update the `hosts` setting. If you configured {{kib}} to use encrypted communications, you must access it via HTTPS. For example, use a `hosts` setting like `https://localhost:5601`. * If the Elastic {{security-features}} are enabled, expand **Advanced options** under the Hosts setting and enter the username and password of a user that has the `remote_monitoring_collector` role. 6. Choose where to add the integration policy. Click **New hosts** to add it to a new agent policy or **Existing hosts** to add it to an existing agent policy.
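For orientation, the settings filled in above end up in the agent policy as something roughly like the following (a hypothetical sketch — the exact key names and layout depend on the integration version, and the credentials are placeholders):

```yaml
kibana.stack_monitoring.metrics:
  hosts: ["https://localhost:5601"]
  username: "remote_monitoring_user"
  password: "changeme"
```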
diff --git a/deploy-manage/monitor/stack-monitoring/kibana-monitoring-self-managed.md b/deploy-manage/monitor/stack-monitoring/kibana-monitoring-self-managed.md index 50778f39c5..5d3885c873 100644 --- a/deploy-manage/monitor/stack-monitoring/kibana-monitoring-self-managed.md +++ b/deploy-manage/monitor/stack-monitoring/kibana-monitoring-self-managed.md @@ -8,7 +8,7 @@ applies_to: -# Kibana monitoring self-managed [configuring-monitoring] +# {{kib}} monitoring self-managed [configuring-monitoring] If you enable the {{monitor-features}} in your cluster, there are a few methods available to collect metrics about {{kib}}: diff --git a/deploy-manage/production-guidance.md b/deploy-manage/production-guidance.md index 6d8bf80a9c..8333ef247e 100644 --- a/deploy-manage/production-guidance.md +++ b/deploy-manage/production-guidance.md @@ -9,14 +9,14 @@ applies_to: self: all --- -% scope: the scope of this page is just a brief introduction to prod guidance at elastic stack level, links to ES and KIB, +% scope: the scope of this page is just a brief introduction to prod guidance at {{stack}} level, links to ES and KIB, # Production guidance Running the {{stack}} in production requires careful planning to ensure resilience, performance, and scalability. This section outlines best practices and recommendations for optimizing {{es}} and {{kib}} in production environments. You’ll learn how to design highly available and resilient deployments, implement best practices for managing workloads, and apply performance optimizations to handle scaling demands efficiently. -For {{es}}, this includes strategies for fault tolerance, data replication, and workload distribution to maintain stability under load. For {{kib}}, you’ll explore how to deploy multiple Kibana instances within the same environment and make informed decisions about scaling horizontally or vertically based on the task manager metrics, which provide insights into background task execution and resource consumption. 
+For {{es}}, this includes strategies for fault tolerance, data replication, and workload distribution to maintain stability under load. For {{kib}}, you’ll explore how to deploy multiple {{kib}} instances within the same environment and make informed decisions about scaling horizontally or vertically based on the task manager metrics, which provide insights into background task execution and resource consumption. By following this guidance, you can ensure your {{stack}} deployment is robust, efficient, and prepared for production-scale workloads. diff --git a/deploy-manage/production-guidance/availability-and-resilience/resilience-in-larger-clusters.md b/deploy-manage/production-guidance/availability-and-resilience/resilience-in-larger-clusters.md index 01c4774603..4f64e8d0c1 100644 --- a/deploy-manage/production-guidance/availability-and-resilience/resilience-in-larger-clusters.md +++ b/deploy-manage/production-guidance/availability-and-resilience/resilience-in-larger-clusters.md @@ -68,6 +68,6 @@ The cluster will be resilient to the loss of any zone as long as: * Every index that is not a [searchable snapshot index](../../tools/snapshot-and-restore/searchable-snapshots.md) has at least one replica of each shard, in addition to the primary. * [Shard allocation awareness](../../distributed-architecture/shard-allocation-relocation-recovery/shard-allocation-awareness.md) is configured to avoid concentrating all copies of a shard within a single zone. * The cluster has at least three master-eligible nodes. At least two of these nodes are not [voting-only master-eligible nodes](../../distributed-architecture/clusters-nodes-shards/node-roles.md#voting-only-node), and they are spread evenly across at least three zones. -* Clients are configured to send their requests to nodes in more than one zone or are configured to use a load balancer that balances the requests across an appropriate set of nodes. 
The [Elastic Cloud](https://cloud.elastic.co/registration?page=docs&placement=docs-body) service provides such a load balancer. +* Clients are configured to send their requests to nodes in more than one zone or are configured to use a load balancer that balances the requests across an appropriate set of nodes. The [{{ecloud}}](https://cloud.elastic.co/registration?page=docs&placement=docs-body) service provides such a load balancer. diff --git a/deploy-manage/production-guidance/availability-and-resilience/resilience-in-small-clusters.md b/deploy-manage/production-guidance/availability-and-resilience/resilience-in-small-clusters.md index 8dd1209b66..de168ad578 100644 --- a/deploy-manage/production-guidance/availability-and-resilience/resilience-in-small-clusters.md +++ b/deploy-manage/production-guidance/availability-and-resilience/resilience-in-small-clusters.md @@ -51,7 +51,7 @@ The two original nodes should not be voting-only master-eligible nodes since a r We recommend assigning both non-tiebreaker nodes all other roles. This creates redundancy by ensuring any task in the cluster can be handled by either node. -You should not send any client requests to the dedicated tiebreaker node. You should also avoid sending client requests to just one of the other two nodes. If you do, and this node fails, then any requests will not receive responses, even if the remaining nodes form a healthy cluster. Ideally, you should balance your client requests across both of the non-tiebreaker nodes. You can do this by specifying the address of both nodes when configuring your client to connect to your cluster. Alternatively, you can use a resilient load balancer to balance client requests across the appropriate nodes in your cluster. The [Elastic Cloud](https://cloud.elastic.co/registration?page=docs&placement=docs-body) service provides such a load balancer. +You should not send any client requests to the dedicated tiebreaker node. 
You should also avoid sending client requests to just one of the other two nodes. If you do, and this node fails, then any requests will not receive responses, even if the remaining nodes form a healthy cluster. Ideally, you should balance your client requests across both of the non-tiebreaker nodes. You can do this by specifying the address of both nodes when configuring your client to connect to your cluster. Alternatively, you can use a resilient load balancer to balance client requests across the appropriate nodes in your cluster. The [{{ecloud}}](https://cloud.elastic.co/registration?page=docs&placement=docs-body) service provides such a load balancer. A two-node cluster with an additional tiebreaker node is the smallest possible cluster that is suitable for production deployments. @@ -60,7 +60,7 @@ A two-node cluster with an additional tiebreaker node is the smallest possible c If you have three nodes, we recommend they all be [data nodes](../../distributed-architecture/clusters-nodes-shards/node-roles.md#data-node-role) and every index that is not a [searchable snapshot index](../../tools/snapshot-and-restore/searchable-snapshots.md) should have at least one replica. Nodes are data nodes by default. You may prefer for some indices to have two replicas so that each node has a copy of each shard in those indices. You should also configure each node to be [master-eligible](../../distributed-architecture/clusters-nodes-shards/node-roles.md#master-node-role) so that any two of them can hold a master election without needing to communicate with the third node. Nodes are master-eligible by default. This cluster will be resilient to the loss of any single node. -You should avoid sending client requests to just one of your nodes. If you do, and this node fails, then any requests will not receive responses even if the remaining two nodes form a healthy cluster. Ideally, you should balance your client requests across all three nodes. 
You can do this by specifying the address of multiple nodes when configuring your client to connect to your cluster. Alternatively you can use a resilient load balancer to balance client requests across your cluster. The [Elastic Cloud](https://cloud.elastic.co/registration?page=docs&placement=docs-body) service provides such a load balancer. +You should avoid sending client requests to just one of your nodes. If you do, and this node fails, then any requests will not receive responses even if the remaining two nodes form a healthy cluster. Ideally, you should balance your client requests across all three nodes. You can do this by specifying the address of multiple nodes when configuring your client to connect to your cluster. Alternatively you can use a resilient load balancer to balance client requests across your cluster. The [{{ecloud}}](https://cloud.elastic.co/registration?page=docs&placement=docs-body) service provides such a load balancer. ## Clusters with more than three nodes [high-availability-cluster-design-three-plus-nodes] @@ -80,6 +80,6 @@ The cluster will be resilient to the loss of any node as long as: * There are at least two data nodes. * Every index that is not a [searchable snapshot index](../../tools/snapshot-and-restore/searchable-snapshots.md) has at least one replica of each shard, in addition to the primary. * The cluster has at least three master-eligible nodes, as long as at least two of these nodes are not voting-only master-eligible nodes. -* Clients are configured to send their requests to more than one node or are configured to use a load balancer that balances the requests across an appropriate set of nodes. The [Elastic Cloud](https://cloud.elastic.co/registration?page=docs&placement=docs-body) service provides such a load balancer. +* Clients are configured to send their requests to more than one node or are configured to use a load balancer that balances the requests across an appropriate set of nodes. 
The [{{ecloud}}](https://cloud.elastic.co/registration?page=docs&placement=docs-body) service provides such a load balancer. diff --git a/deploy-manage/production-guidance/elasticsearch-in-production-environments.md b/deploy-manage/production-guidance/elasticsearch-in-production-environments.md index 6392809b73..9d4fcc21af 100644 --- a/deploy-manage/production-guidance/elasticsearch-in-production-environments.md +++ b/deploy-manage/production-guidance/elasticsearch-in-production-environments.md @@ -1,5 +1,5 @@ --- -navigation_title: Run Elasticsearch in production +navigation_title: Run {{es}} in production mapped_pages: - https://www.elastic.co/guide/en/elasticsearch/reference/current/scalability.html - https://www.elastic.co/guide/en/cloud/current/ec-planning.html diff --git a/deploy-manage/production-guidance/general-recommendations.md b/deploy-manage/production-guidance/general-recommendations.md index 285c16e07c..c149d48e47 100644 --- a/deploy-manage/production-guidance/general-recommendations.md +++ b/deploy-manage/production-guidance/general-recommendations.md @@ -15,13 +15,13 @@ This page offers general best practices to improve performance and avoid common ## Don’t return large result sets [large-size] -Elasticsearch is designed as a search engine, which makes it very good at getting back the top documents that match a query. However, it is not as good for workloads that fall into the database domain, such as retrieving all documents that match a particular query. If you need to do this, make sure to use the [Scroll](elasticsearch://reference/elasticsearch/rest-apis/paginate-search-results.md#scroll-search-results) API. +{{es}} is designed as a search engine, which makes it very good at getting back the top documents that match a query. However, it is not as good for workloads that fall into the database domain, such as retrieving all documents that match a particular query. 
If you need to do this, make sure to use the [Scroll](elasticsearch://reference/elasticsearch/rest-apis/paginate-search-results.md#scroll-search-results) API. ## Avoid large documents [maximum-document-size] -Given that the default [`http.max_content_length`](elasticsearch://reference/elasticsearch/configuration-reference/networking-settings.md#http-settings) is set to 100MB, Elasticsearch will refuse to index any document that is larger than that. You might decide to increase that particular setting, but Lucene still has a limit of about 2GB. +Given that the default [`http.max_content_length`](elasticsearch://reference/elasticsearch/configuration-reference/networking-settings.md#http-settings) is set to 100MB, {{es}} will refuse to index any document that is larger than that. You might decide to increase that particular setting, but Lucene still has a limit of about 2GB. -Even without considering hard limits, large documents are usually not practical. Large documents put more stress on network, memory usage and disk, even for search requests that do not request the `_source` since Elasticsearch needs to fetch the `_id` of the document in all cases, and the cost of getting this field is bigger for large documents due to how the filesystem cache works. Indexing this document can use an amount of memory that is a multiplier of the original size of the document. Proximity search (phrase queries for instance) and [highlighting](elasticsearch://reference/elasticsearch/rest-apis/highlighting.md) also become more expensive since their cost directly depends on the size of the original document. +Even without considering hard limits, large documents are usually not practical. Large documents put more stress on network, memory usage and disk, even for search requests that do not request the `_source` since {{es}} needs to fetch the `_id` of the document in all cases, and the cost of getting this field is bigger for large documents due to how the filesystem cache works. 
Indexing this document can use an amount of memory that is a multiplier of the original size of the document. Proximity search (phrase queries for instance) and [highlighting](elasticsearch://reference/elasticsearch/rest-apis/highlighting.md) also become more expensive since their cost directly depends on the size of the original document. It is sometimes useful to reconsider what the unit of information should be. For instance, the fact you want to make books searchable doesn’t necessarily mean that a document should consist of a whole book. It might be a better idea to use chapters or even paragraphs as documents, and then have a property in these documents that identifies which book they belong to. This does not only avoid the issues with large documents, it also makes the search experience better. For instance if a user searches for two words `foo` and `bar`, a match across different chapters is probably very poor, while a match within the same paragraph is likely good. diff --git a/deploy-manage/production-guidance/kibana-alerting-production-considerations.md b/deploy-manage/production-guidance/kibana-alerting-production-considerations.md index b1a574bb9d..6b54ce2296 100644 --- a/deploy-manage/production-guidance/kibana-alerting-production-considerations.md +++ b/deploy-manage/production-guidance/kibana-alerting-production-considerations.md @@ -12,7 +12,7 @@ applies_to: -# Kibana alerting: performance and scaling [alerting-production-considerations] +# {{kib}} alerting: performance and scaling [alerting-production-considerations] Alerting runs both rule checks and actions as persistent background tasks managed by the Task Manager. @@ -62,7 +62,7 @@ This functionality is in technical preview and may be changed or removed in a fu :::: -Alerts and actions log activity in a set of "event log" data streams, one per Kibana version, named `.kibana-event-log-{{VERSION}}`. These data streams are configured with a lifecycle data retention of 90 days. 
This can be updated to other values via the standard data stream lifecycle APIs. Note that the event log data contains the data shown in the alerting pages in {{kib}}, so reducing the data retention period will result in less data being available to view. +Alerts and actions log activity in a set of "event log" data streams, one per {{kib}} version, named `.kibana-event-log-{{VERSION}}`. These data streams are configured with a lifecycle data retention of 90 days. This can be updated to other values via the standard data stream lifecycle APIs. Note that the event log data contains the data shown in the alerting pages in {{kib}}, so reducing the data retention period will result in less data being available to view. For more information on data stream lifecycle management, see: [Data stream lifecycle](../../manage-data/lifecycle/data-stream.md). diff --git a/deploy-manage/production-guidance/kibana-configure-memory.md b/deploy-manage/production-guidance/kibana-configure-memory.md index 8213e41aa3..9cbd4222d7 100644 --- a/deploy-manage/production-guidance/kibana-configure-memory.md +++ b/deploy-manage/production-guidance/kibana-configure-memory.md @@ -5,9 +5,9 @@ applies_to: self: all --- -# Configure Kibana memory +# Configure {{kib}} memory -Kibana has a default memory limit that scales based on total memory available. In some scenarios, such as large reporting jobs, detection rules, managing alerting workflows, or working with SLOs, it may make sense to tweak memory limits to meet more specific performance requirements. +{{kib}} has a default memory limit that scales based on total memory available. In some scenarios, such as large reporting jobs, detection rules, managing alerting workflows, or working with SLOs, it may make sense to tweak memory limits to meet more specific performance requirements. 
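As an example of such tweaking on a self-managed install, the underlying Node.js heap limit can be raised in the `config/node.options` file (a sketch; the value is in megabytes and should fit comfortably within the host's available memory):

```text
# config/node.options — raise the Node.js heap limit to 2 GB (value in MB)
--max-old-space-size=2048
```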
## Limit memory usage [memory] diff --git a/deploy-manage/production-guidance/kibana-in-production-environments.md b/deploy-manage/production-guidance/kibana-in-production-environments.md index d77276ed9a..39f840feca 100644 --- a/deploy-manage/production-guidance/kibana-in-production-environments.md +++ b/deploy-manage/production-guidance/kibana-in-production-environments.md @@ -1,5 +1,5 @@ --- -navigation_title: "Run Kibana in production" +navigation_title: "Run {{kib}} in production" applies_to: deployment: ess: all @@ -8,17 +8,17 @@ self: all --- -# Run Kibana in production +# Run {{kib}} in production How you deploy {{kib}} largely depends on your use case. If you are the only user, you can run {{kib}} on your local machine and configure it to point to whatever {{es}} instance you want to interact with. Conversely, if you have a large number of heavy {{kib}} users, you might need to load balance across multiple {{kib}} instances that are all connected to the same {{es}} cluster or deployment. ## Scalability -Historically, Kibana’s scalability was primarily influenced by the number of concurrent users and the complexity of dashboards and visualizations. However, with the introduction of new capabilities such as [Kibana Alerting](/explore-analyze/alerts-cases.md) and the [Detection Rules](/solutions/security/detect-and-alert.md) engine, critical components for [Observability](/solutions/observability.md) and [Security](/solutions/security.md) solutions, the scalability factors have evolved significantly. +Historically, {{kib}}’s scalability was primarily influenced by the number of concurrent users and the complexity of dashboards and visualizations.
However, with the introduction of new capabilities such as [{{kib}} Alerting](/explore-analyze/alerts-cases.md) and the [Detection Rules](/solutions/security/detect-and-alert.md) engine, critical components for [Observability](/solutions/observability.md) and [Security](/solutions/security.md) solutions, the scalability factors have evolved significantly. -Now, Kibana’s resource requirements extend beyond user activity. The system must also handle workloads generated by automated processes, such as scheduled alerts, background detection rules, and other periodic tasks. These operations are managed by [Kibana Task Manager](./kibana-task-manager-scaling-considerations.md), which is responsible for scheduling, executing, and coordinating all background tasks. +Now, {{kib}}’s resource requirements extend beyond user activity. The system must also handle workloads generated by automated processes, such as scheduled alerts, background detection rules, and other periodic tasks. These operations are managed by [{{kib}} Task Manager](./kibana-task-manager-scaling-considerations.md), which is responsible for scheduling, executing, and coordinating all background tasks. -Additionally, the task manager enables distributed coordination across multiple Kibana instances, allowing Kibana to function as a logical cluster in certain aspects. +Additionally, the task manager enables distributed coordination across multiple {{kib}} instances, allowing {{kib}} to function as a logical cluster in certain aspects. ::::{important} * {{kib}} does not support rolling [upgrades](/deploy-manage/upgrade/deployment-or-cluster/kibana.md), and deploying mixed versions of {{kib}} can result in data loss or upgrade failures. Please shut down all instances of {{kib}} before performing an upgrade, and ensure all running {{kib}} instances have matching versions.
@@ -39,7 +39,7 @@ Topics covered include: * [Optimize {{kib}} alerting performance](./kibana-alerting-production-considerations.md): Learn how {{kib}} runs alerting rules and actions using background tasks, and how to scale alerting by tuning task throughput and circuit breakers. Applicable to all deployment types. -* [Kibana reporting production setup](./kibana-reporting-production-considerations.md): Learn how {{kib}} generates reports using a headless version of Chromium, and how to configure your environment securely for production, including sandboxing and OS compatibility. +* [{{kib}} reporting production setup](./kibana-reporting-production-considerations.md): Learn how {{kib}} generates reports using a headless version of Chromium, and how to configure your environment securely for production, including sandboxing and OS compatibility. ::::{note} diff --git a/deploy-manage/production-guidance/kibana-load-balance-traffic.md b/deploy-manage/production-guidance/kibana-load-balance-traffic.md index 5609195815..f3afe0412b 100644 --- a/deploy-manage/production-guidance/kibana-load-balance-traffic.md +++ b/deploy-manage/production-guidance/kibana-load-balance-traffic.md @@ -72,7 +72,7 @@ To access multiple load-balanced {{kib}} deployments from the same browser, expl This prevents cookie conflicts between {{kib}} instances, ensuring seamless high availability and maintaining the session active in case of an instance failure. ::::{note} -In this context, a Kibana cluster or deployment refers to multiple Kibana instances connected to the same {{es}} cluster. +In this context, a {{kib}} cluster or deployment refers to multiple {{kib}} instances connected to the same {{es}} cluster. 
:::: ## High availability across multiple {{es}} nodes [high-availability] diff --git a/deploy-manage/production-guidance/kibana-reporting-production-considerations.md b/deploy-manage/production-guidance/kibana-reporting-production-considerations.md index d0e7feb7a7..0883e19158 100644 --- a/deploy-manage/production-guidance/kibana-reporting-production-considerations.md +++ b/deploy-manage/production-guidance/kibana-reporting-production-considerations.md @@ -7,9 +7,9 @@ applies_to: self: all --- -# Kibana reporting in production environments [reporting-production-considerations] +# {{kib}} reporting in production environments [reporting-production-considerations] -In production environments, it's important to ensure that Kibana reporting is both reliable and secure. To generate [reports](/explore-analyze/report-and-share.md), {{kib}} uses the Chromium web browser, which runs on the server in headless mode (without a graphical user interface). +In production environments, it's important to ensure that {{kib}} reporting is both reliable and secure. To generate [reports](/explore-analyze/report-and-share.md), {{kib}} uses the Chromium web browser, which runs on the server in headless mode (without a graphical user interface). Because this process requires launching a browser within your server environment, you should pay special attention to operating system compatibility, sandboxing, and dependencies. 
diff --git a/deploy-manage/production-guidance/kibana-task-manager-scaling-considerations.md b/deploy-manage/production-guidance/kibana-task-manager-scaling-considerations.md index 339ee02498..f95a59176a 100644 --- a/deploy-manage/production-guidance/kibana-task-manager-scaling-considerations.md +++ b/deploy-manage/production-guidance/kibana-task-manager-scaling-considerations.md @@ -10,7 +10,7 @@ applies_to: self: all --- -# Kibana task manager: performance and scaling guide [task-manager-production-considerations] +# {{kib}} task manager: performance and scaling guide [task-manager-production-considerations] {{kib}} Task Manager is leveraged by features such as [alerting](/explore-analyze/alerts-cases/alerts.md), [actions](/explore-analyze/alerts-cases/alerts.md#rules-actions), and [reporting](/explore-analyze/report-and-share.md) to run mission critical work as persistent background tasks. These background tasks distribute work across multiple {{kib}} instances. This has three major benefits: @@ -106,7 +106,7 @@ A recommended strategy is to follow these steps: 3. If the throughput is insufficient, and {{kib}} instances exhibit low resource usage, incrementally scale vertically while [monitoring](../monitor/monitoring-data/kibana-page.md) the impact of these changes. 4. If the throughput is insufficient, and {{kib}} instances are exhibiting high resource usage, incrementally scale horizontally by provisioning new {{kib}} instances and reassess. -Task Manager, like the rest of the Elastic Stack, is designed to scale horizontally. Take advantage of this ability to ensure mission critical services, such as Alerting, Actions, and Reporting, always have the capacity they need. +Task Manager, like the rest of the {{stack}}, is designed to scale horizontally. Take advantage of this ability to ensure mission critical services, such as Alerting, Actions, and Reporting, always have the capacity they need. 
Scaling horizontally requires a higher degree of coordination between {{kib}} instances. One way Task Manager coordinates with other instances is by delaying its polling schedule to avoid conflicts with other instances. By using [health monitoring](../monitor/kibana-task-manager-health-monitoring.md) to evaluate the [date of the `last_polling_delay`](../../troubleshoot/kibana/task-manager.md#task-manager-health-evaluate-the-runtime) across a deployment, you can estimate the frequency at which Task Manager resets its delay mechanism. A higher frequency suggests {{kib}} instances conflict at a high rate, which you can address by scaling vertically rather than horizontally, reducing the required coordination. diff --git a/deploy-manage/production-guidance/optimize-performance/approximate-knn-search.md b/deploy-manage/production-guidance/optimize-performance/approximate-knn-search.md index 16e474bce8..d0d4b67657 100644 --- a/deploy-manage/production-guidance/optimize-performance/approximate-knn-search.md +++ b/deploy-manage/production-guidance/optimize-performance/approximate-knn-search.md @@ -64,7 +64,7 @@ The data nodes should also leave a buffer for other ways that RAM is needed. For ## Warm up the filesystem cache [dense-vector-preloading] -If the machine running Elasticsearch is restarted, the filesystem cache will be empty, so it will take some time before the operating system loads hot regions of the index into memory so that search operations are fast. You can explicitly tell the operating system which files should be loaded into memory eagerly depending on the file extension using the [`index.store.preload`](elasticsearch://reference/elasticsearch/index-settings/preloading-data-into-file-system-cache.md) setting. +If the machine running {{es}} is restarted, the filesystem cache will be empty, so it will take some time before the operating system loads hot regions of the index into memory so that search operations are fast. 
You can explicitly tell the operating system which files should be loaded into memory eagerly depending on the file extension using the [`index.store.preload`](elasticsearch://reference/elasticsearch/index-settings/preloading-data-into-file-system-cache.md) setting. ::::{warning} Loading data into the filesystem cache eagerly on too many indices or too many files will make search *slower* if the filesystem cache is not large enough to hold all the data. Use with caution. @@ -111,7 +111,7 @@ When possible, it’s best to avoid heavy indexing during approximate kNN search Search can cause a lot of randomized read I/O. When the underlying block device has a high readahead value, there may be a lot of unnecessary read I/O done, especially when files are accessed using memory mapping (see [storage types](elasticsearch://reference/elasticsearch/index-settings/store.md#file-system)). -Most Linux distributions use a sensible readahead value of `128KiB` for a single plain device, however, when using software raid, LVM or dm-crypt the resulting block device (backing Elasticsearch [path.data](../../deploy/self-managed/important-settings-configuration.md#path-settings)) may end up having a very large readahead value (in the range of several MiB). This usually results in severe page (filesystem) cache thrashing adversely affecting search (or [update](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-document)) performance. +Most Linux distributions use a sensible readahead value of `128KiB` for a single plain device, however, when using software raid, LVM or dm-crypt the resulting block device (backing {{es}} [path.data](../../deploy/self-managed/important-settings-configuration.md#path-settings)) may end up having a very large readahead value (in the range of several MiB). This usually results in severe page (filesystem) cache thrashing adversely affecting search (or [update](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-document)) performance. 
You can check the current value in `KiB` using `lsblk -o NAME,RA,MOUNTPOINT,TYPE,SIZE`. Consult the documentation of your distribution on how to alter this value (for example with a `udev` rule to persist across reboots, or via [blockdev --setra](https://man7.org/linux/man-pages/man8/blockdev.8.html) as a transient setting). We recommend a value of `128KiB` for readahead. diff --git a/deploy-manage/production-guidance/optimize-performance/disk-usage.md b/deploy-manage/production-guidance/optimize-performance/disk-usage.md index f33b8ee05d..b7ff869a64 100644 --- a/deploy-manage/production-guidance/optimize-performance/disk-usage.md +++ b/deploy-manage/production-guidance/optimize-performance/disk-usage.md @@ -11,7 +11,7 @@ applies_to: # Tune for disk usage [tune-for-disk-usage] -This page provides strategies to reduce the storage footprint of your Elasticsearch indices. Disk usage is influenced by field mappings, index settings, document structure, and how you manage segments and shards. Use these recommendations to improve compression, eliminate unnecessary data, and optimize storage for your specific use case. +This page provides strategies to reduce the storage footprint of your {{es}} indices. Disk usage is influenced by field mappings, index settings, document structure, and how you manage segments and shards. Use these recommendations to improve compression, eliminate unnecessary data, and optimize storage for your specific use case. ## Disable the features you do not need [_disable_the_features_you_do_not_need] @@ -81,7 +81,7 @@ The `_source` and stored fields can easily take a non negligible amount of disk ## Force merge [_force_merge] -Indices in Elasticsearch are stored in one or more shards. Each shard is a Lucene index and made up of one or more segments - the actual files on disk. Larger segments are more efficient for storing data. +Indices in {{es}} are stored in one or more shards. 
Each shard is a Lucene index and made up of one or more segments - the actual files on disk. Larger segments are more efficient for storing data. The [force merge API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-forcemerge) can be used to reduce the number of segments per shard. In many cases, the number of segments can be reduced to one per shard by setting `max_num_segments=1`. @@ -103,7 +103,7 @@ The type that you pick for [numeric data](elasticsearch://reference/elasticsearc ## Use index sorting to colocate similar documents [_use_index_sorting_to_colocate_similar_documents] -When Elasticsearch stores `_source`, it compresses multiple documents at once in order to improve the overall compression ratio. For instance it is very common that documents share the same field names, and quite common that they share some field values, especially on fields that have a low cardinality or a [zipfian](https://en.wikipedia.org/wiki/Zipf%27s_law) distribution. +When {{es}} stores `_source`, it compresses multiple documents at once in order to improve the overall compression ratio. For instance it is very common that documents share the same field names, and quite common that they share some field values, especially on fields that have a low cardinality or a [zipfian](https://en.wikipedia.org/wiki/Zipf%27s_law) distribution. By default documents are compressed together in the order that they are added to the index. If you enabled [index sorting](elasticsearch://reference/elasticsearch/index-settings/sorting.md) then instead they are compressed in sorted order. Sorting documents with similar structure, fields, and values together should improve the compression ratio. 
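As a sketch, index sorting could be enabled at index creation time like this (the index and field names are illustrative; the sort field must have doc values, which `keyword` fields do by default):

```console
PUT my-index
{
  "settings": {
    "index": {
      "sort.field": "brand",
      "sort.order": "asc"
    }
  },
  "mappings": {
    "properties": {
      "brand": { "type": "keyword" }
    }
  }
}
```

Note that index sorting can only be defined when the index is created; it cannot be added to an existing index.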
diff --git a/deploy-manage/production-guidance/optimize-performance/indexing-speed.md b/deploy-manage/production-guidance/optimize-performance/indexing-speed.md index 0e208a286f..f50e4ea176 100644 --- a/deploy-manage/production-guidance/optimize-performance/indexing-speed.md +++ b/deploy-manage/production-guidance/optimize-performance/indexing-speed.md @@ -24,13 +24,13 @@ Make sure to consider also your cluster’s shard count, index layout, and overa Bulk requests will yield much better performance than single-document index requests. In order to know the optimal size of a bulk request, you should run a benchmark on a single node with a single shard. First try to index 100 documents at once, then 200, then 400, etc. doubling the number of documents in a bulk request in every benchmark run. When the indexing speed starts to plateau then you know you reached the optimal size of a bulk request for your data. In case of tie, it is better to err in the direction of too few rather than too many documents. Beware that too large bulk requests might put the cluster under memory pressure when many of them are sent concurrently, so it is advisable to avoid going beyond a couple tens of megabytes per request even if larger requests seem to perform better. -## Use multiple workers/threads to send data to Elasticsearch [multiple-workers-threads] +## Use multiple workers/threads to send data to {{es}} [multiple-workers-threads] -A single thread sending bulk requests is unlikely to be able to max out the indexing capacity of an Elasticsearch cluster. In order to use all resources of the cluster, you should send data from multiple threads or processes. In addition to making better use of the resources of the cluster, this should help reduce the cost of each fsync. +A single thread sending bulk requests is unlikely to be able to max out the indexing capacity of an {{es}} cluster. In order to use all resources of the cluster, you should send data from multiple threads or processes. 
In addition to making better use of the resources of the cluster, this should help reduce the cost of each fsync. On the other hand, sending data to a single shard from too many concurrent threads or processes can overwhelm the cluster. If the indexing load exceeds what {{es}} can handle, it may become a bottleneck and start rejecting requests or slowing down overall performance. -Make sure to watch for `TOO_MANY_REQUESTS (429)` response codes (`EsRejectedExecutionException` with the Java client), which is the way that Elasticsearch tells you that it cannot keep up with the current indexing rate. When it happens, you should pause indexing a bit before trying again, ideally with randomized exponential backoff. +Make sure to watch for `TOO_MANY_REQUESTS (429)` response codes (`EsRejectedExecutionException` with the Java client), which is the way that {{es}} tells you that it cannot keep up with the current indexing rate. When it happens, you should pause indexing a bit before trying again, ideally with randomized exponential backoff. Similarly to sizing bulk requests, only testing can tell what the optimal number of workers is. This can be tested by progressively increasing the number of workers until either I/O or CPU is saturated on the cluster. @@ -38,16 +38,16 @@ Similarly to sizing bulk requests, only testing can tell what the optimal number The operation that consists of making changes visible to search - called a [refresh](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-refresh) - is costly, and calling it often while there is ongoing indexing activity can hurt indexing speed. -By default, Elasticsearch periodically refreshes indices every second, but only on indices that have received one search request or more in the last 30 seconds. +By default, {{es}} periodically refreshes indices every second, but only on indices that have received one search request or more in the last 30 seconds. 
This is the optimal configuration if you have no or very little search traffic (e.g. less than one search request every 5 minutes) and want to optimize for indexing speed. This behavior aims to automatically optimize bulk indexing in the default case when no searches are performed. In order to opt out of this behavior set the refresh interval explicitly. -On the other hand, if your index experiences regular search requests, this default behavior means that Elasticsearch will refresh your index every 1 second. If you can afford to increase the amount of time between when a document gets indexed and when it becomes visible, increasing the [`index.refresh_interval`](elasticsearch://reference/elasticsearch/index-settings/index-modules.md#index-refresh-interval-setting) to a larger value, e.g. `30s`, might help improve indexing speed. +On the other hand, if your index experiences regular search requests, this default behavior means that {{es}} will refresh your index every 1 second. If you can afford to increase the amount of time between when a document gets indexed and when it becomes visible, increasing the [`index.refresh_interval`](elasticsearch://reference/elasticsearch/index-settings/index-modules.md#index-refresh-interval-setting) to a larger value, e.g. `30s`, might help improve indexing speed. ## Disable replicas for initial loads [_disable_replicas_for_initial_loads] -If you have a large amount of data that you want to load all at once into Elasticsearch, it may be beneficial to set `index.number_of_replicas` to `0` in order to speed up indexing. Having no replicas means that losing a single node may incur data loss, so it is important that the data lives elsewhere so that this initial load can be retried in case of an issue. Once the initial load is finished, you can set `index.number_of_replicas` back to its original value. 
+If you have a large amount of data that you want to load all at once into {{es}}, it may be beneficial to set `index.number_of_replicas` to `0` in order to speed up indexing. Having no replicas means that losing a single node may incur data loss, so it is important that the data lives elsewhere so that this initial load can be retried in case of an issue. Once the initial load is finished, you can set `index.number_of_replicas` back to its original value. If `index.refresh_interval` is configured in the index settings, it may further help to unset it during this initial load and setting it back to its original value once the initial load is finished. @@ -78,12 +78,12 @@ On Linux, the filesystem cache uses any memory not actively used by applications ## Use auto-generated ids [_use_auto_generated_ids] -When indexing a document that has an explicit id, Elasticsearch needs to check whether a document with the same id already exists within the same shard, which is a costly operation and gets even more costly as the index grows. By using auto-generated ids, Elasticsearch can skip this check, which makes indexing faster. +When indexing a document that has an explicit id, {{es}} needs to check whether a document with the same id already exists within the same shard, which is a costly operation and gets even more costly as the index grows. By using auto-generated ids, {{es}} can skip this check, which makes indexing faster. ## Use faster hardware [indexing-use-faster-hardware] -If indexing is I/O-bound, consider increasing the size of the filesystem cache (see above) or using faster storage. Elasticsearch generally creates individual files with sequential writes. However, indexing involves writing multiple files concurrently, and a mix of random and sequential reads too, so SSD drives tend to perform better than spinning disks. +If indexing is I/O-bound, consider increasing the size of the filesystem cache (see above) or using faster storage. 
{{es}} generally creates individual files with sequential writes. However, indexing involves writing multiple files concurrently, and a mix of random and sequential reads too, so SSD drives tend to perform better than spinning disks. Stripe your index across multiple SSDs by configuring a RAID 0 array. Remember that it will increase the risk of failure since the failure of any one SSD destroys the index. However this is typically the right tradeoff to make: optimize single shards for maximum performance, and then add replicas across different nodes so there’s redundancy for any node failures. You can also use [snapshot and restore](../../tools/snapshot-and-restore.md) to backup the index for further insurance. @@ -109,7 +109,7 @@ For {{eck}} deployments, refer to the [ECK storage recommendations](/deploy-mana ## Indexing buffer size [_indexing_buffer_size] -If your node is doing only heavy indexing, be sure [`indices.memory.index_buffer_size`](elasticsearch://reference/elasticsearch/configuration-reference/indexing-buffer-settings.md) is large enough to give at most 512 MB indexing buffer per shard doing heavy indexing (beyond that indexing performance does not typically improve). Elasticsearch takes that setting (a percentage of the java heap or an absolute byte-size), and uses it as a shared buffer across all active shards. Very active shards will naturally use this buffer more than shards that are performing lightweight indexing. +If your node is doing only heavy indexing, be sure [`indices.memory.index_buffer_size`](elasticsearch://reference/elasticsearch/configuration-reference/indexing-buffer-settings.md) is large enough to give at most 512 MB indexing buffer per shard doing heavy indexing (beyond that indexing performance does not typically improve). {{es}} takes that setting (a percentage of the java heap or an absolute byte-size), and uses it as a shared buffer across all active shards. 
Very active shards will naturally use this buffer more than shards that are performing lightweight indexing.

The default is `10%` which is often plenty: for example, if you give the JVM 10GB of memory, it will give 1GB to the index buffer, which is enough to host two shards that are heavily indexing.

diff --git a/deploy-manage/production-guidance/optimize-performance/search-speed.md b/deploy-manage/production-guidance/optimize-performance/search-speed.md
index 2d837179b3..056b4f0f92 100644
--- a/deploy-manage/production-guidance/optimize-performance/search-speed.md
+++ b/deploy-manage/production-guidance/optimize-performance/search-speed.md
@@ -47,7 +47,7 @@ deployment:

Search can cause a lot of randomized read I/O. When the underlying block device has a high readahead value, there may be a lot of unnecessary read I/O done, especially when files are accessed using memory mapping (see [storage types](elasticsearch://reference/elasticsearch/index-settings/store.md#file-system)).

-Most Linux distributions use a sensible readahead value of `128KiB` for a single plain device, however, when using software raid, LVM or dm-crypt the resulting block device (backing Elasticsearch [path.data](../../deploy/self-managed/important-settings-configuration.md#path-settings)) may end up having a very large readahead value (in the range of several MiB). This usually results in severe page (filesystem) cache thrashing adversely affecting search (or [update](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-document)) performance.
+Most Linux distributions use a sensible readahead value of `128KiB` for a single plain device. However, when using software RAID, LVM, or dm-crypt, the resulting block device (backing {{es}} [path.data](../../deploy/self-managed/important-settings-configuration.md#path-settings)) may end up having a very large readahead value (in the range of several MiB). This usually results in severe page (filesystem) cache thrashing, adversely affecting search (or [update](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-document)) performance.
@@ -386,7 +386,7 @@ The [`text`](elasticsearch://reference/elasticsearch/mapping-reference/text.md) There is a general rule that the cost of a filter is mostly a function of the number of matched documents. Imagine that you have an index containing cycles. There are a large number of bicycles and many searches perform a filter on `cycle_type: bicycle`. This very common filter is unfortunately also very costly since it matches most documents. There is a simple way to avoid running this filter: move bicycles to their own index and filter bicycles by searching this index instead of adding a filter to the query. -Unfortunately this can make client-side logic tricky, which is where `constant_keyword` helps. By mapping `cycle_type` as a `constant_keyword` with value `bicycle` on the index that contains bicycles, clients can keep running the exact same queries as they used to run on the monolithic index and Elasticsearch will do the right thing on the bicycles index by ignoring filters on `cycle_type` if the value is `bicycle` and returning no hits otherwise. +Unfortunately this can make client-side logic tricky, which is where `constant_keyword` helps. By mapping `cycle_type` as a `constant_keyword` with value `bicycle` on the index that contains bicycles, clients can keep running the exact same queries as they used to run on the monolithic index and {{es}} will do the right thing on the bicycles index by ignoring filters on `cycle_type` if the value is `bicycle` and returning no hits otherwise. 
Here is what mappings could look like: @@ -443,7 +443,7 @@ GET bicycles,other_cycles/_search } ``` -On the `bicycles` index, Elasticsearch will simply ignore the `cycle_type` filter and rewrite the search request to the one below: +On the `bicycles` index, {{es}} will simply ignore the `cycle_type` filter and rewrite the search request to the one below: ```console GET bicycles,other_cycles/_search @@ -456,7 +456,7 @@ GET bicycles,other_cycles/_search } ``` -On the `other_cycles` index, Elasticsearch will quickly figure out that `bicycle` doesn’t exist in the terms dictionary of the `cycle_type` field and return a search response with no hits. +On the `other_cycles` index, {{es}} will quickly figure out that `bicycle` doesn’t exist in the terms dictionary of the `cycle_type` field and return a search response with no hits. This is a powerful way of making queries cheaper by putting common values in a dedicated index. This idea can also be combined across multiple fields: for instance if you track the color of each cycle and your `bicycles` index ends up having a majority of black bikes, you could split it into a `bicycles-black` and a `bicycles-other-colors` indices. diff --git a/deploy-manage/production-guidance/optimize-performance/size-shards.md b/deploy-manage/production-guidance/optimize-performance/size-shards.md index d0838439b5..47c7c8137a 100644 --- a/deploy-manage/production-guidance/optimize-performance/size-shards.md +++ b/deploy-manage/production-guidance/optimize-performance/size-shards.md @@ -378,7 +378,7 @@ See this [fixing "max shards open" video](https://www.youtube.com/watch?v=tZKbDe Each {{es}} shard is a separate Lucene index, so it shares Lucene’s [`MAX_DOC` limit](https://github.com/apache/lucene/issues/5176) of having at most 2,147,483,519 (`(2^31)-129`) documents. 
This per-shard limit applies to the sum of `docs.count` plus `docs.deleted` as reported by the [Index stats API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-stats). Exceeding this limit will result in errors like the following:

```txt
Elasticsearch exception [type=illegal_argument_exception, reason=Number of documents in the shard cannot exceed [2147483519]]
```

::::{tip}
diff --git a/deploy-manage/production-guidance/scaling-considerations.md b/deploy-manage/production-guidance/scaling-considerations.md
index 8cbd7ec47d..94fcc58e5b 100644
--- a/deploy-manage/production-guidance/scaling-considerations.md
+++ b/deploy-manage/production-guidance/scaling-considerations.md
@@ -60,9 +60,9 @@ These platforms offer built-in autoscaling and flexible resource management:

* [Configure {{ech}} deployments](/deploy-manage/deploy/elastic-cloud/configure.md): Change instance sizes, increase capacity, or [select a different hardware profile](/deploy-manage/deploy/elastic-cloud/ec-change-hardware-profile.md).

-* [Resize an ECE deployment](/deploy-manage/deploy/cloud-enterprise/resize-deployment.md): Adjust deployment size or topology in the Elastic Cloud Enterprise console.
+* [Resize an ECE deployment](/deploy-manage/deploy/cloud-enterprise/resize-deployment.md): Adjust deployment size or topology in the {{ece}} console.

-* [Configure ECK deployments](/deploy-manage/deploy/cloud-on-k8s/configure-deployments.md): Change the number of Elasticsearch nodes, [customize compute resources](/deploy-manage/deploy/cloud-on-k8s/manage-compute-resources.md), or configure `nodeSets` to adjust the cluster architecture and topology.
+* [Configure ECK deployments](/deploy-manage/deploy/cloud-on-k8s/configure-deployments.md): Change the number of {{es}} nodes, [customize compute resources](/deploy-manage/deploy/cloud-on-k8s/manage-compute-resources.md), or configure `nodeSets` to adjust the cluster architecture and topology.

### Self-managed deployments

diff --git a/deploy-manage/reference-architectures.md b/deploy-manage/reference-architectures.md
index 56f3764b0e..3ad1abeef4 100644
--- a/deploy-manage/reference-architectures.md
+++ b/deploy-manage/reference-architectures.md
@@ -12,7 +12,7 @@ applies_to:

# Reference architectures [reference-architectures-overview]

-Elasticsearch reference architectures are blueprints for deploying Elasticsearch clusters tailored to different use cases. Whether you’re handling logs or metrics these reference architectures focus on scalability, reliability, and cost efficiency. Use these guidelines to deploy Elasticsearch for your use case.
+{{es}} reference architectures are blueprints for deploying {{es}} clusters tailored to different use cases. Whether you’re handling logs or metrics, these reference architectures focus on scalability, reliability, and cost efficiency. Use these guidelines to deploy {{es}} for your use case.

These architectures are designed by architects and engineers to provide standardized, proven solutions that help you to follow best practices when deploying {{es}}.

diff --git a/deploy-manage/reference-architectures/hotfrozen-high-availability.md b/deploy-manage/reference-architectures/hotfrozen-high-availability.md
index 1cb7421a25..24cd973c46 100644
--- a/deploy-manage/reference-architectures/hotfrozen-high-availability.md
+++ b/deploy-manage/reference-architectures/hotfrozen-high-availability.md
@@ -68,7 +68,7 @@ This table shows our specific recommendations for nodes in a Hot/Frozen architec

| ![Frozen data node](/deploy-manage/images/reference-architectures-frozen.png "") | i3en | e8dsv4 | N2 | 8 vCPU<br>64 GB RAM<br>6-20+ TB NVMe SSD<br>Depending on days cached |
| ![Machine learning node](/deploy-manage/images/reference-architectures-machine-learning.png "") | m6gd | f16sv2 | N2 | 16 vCPU<br>64 GB RAM<br>256 GB SSD |
| ![Master node](/deploy-manage/images/reference-architectures-master.png "") | c5d | f16sv2 | N2 | 8 vCPU<br>16 GB RAM<br>256 GB SSD |
-| ![Kibana node](/deploy-manage/images/reference-architectures-kibana.png "") | c6gd | f16sv2 | N2 | 8-16 vCPU<br>8 GB RAM<br>256 GB SSD |
+| ![{{kib}} node](/deploy-manage/images/reference-architectures-kibana.png "") | c6gd | f16sv2 | N2 | 8-16 vCPU<br>8 GB RAM<br>256 GB SSD |

## Important considerations [hot-frozen-considerations]

@@ -90,7 +90,7 @@ This table shows our specific recommendations for nodes in a Hot/Frozen architec

* If auditable or business critical events are being logged, a backup is necessary. The choice to back up data will depend on each individual business’s needs and requirements. Refer to our [snapshot repository](/deploy-manage/tools/snapshot-and-restore/self-managed.md) documentation to learn more.
* To automate snapshots and attach to Index lifecycle management policies, refer to [SLM (Snapshot lifecycle management)](/deploy-manage/tools/snapshot-and-restore/create-snapshots.md#automate-snapshots-slm).

-**Kibana:**
+**{{kib}}:**

* If self-deploying outside of {{ech}}, ensure that {{kib}} is configured for [high availability](/deploy-manage/production-guidance/kibana-load-balance-traffic.md#high-availability).

diff --git a/deploy-manage/remote-clusters/eck-remote-clusters.md b/deploy-manage/remote-clusters/eck-remote-clusters.md
index 8a002da7c6..f28eddc1ae 100644
--- a/deploy-manage/remote-clusters/eck-remote-clusters.md
+++ b/deploy-manage/remote-clusters/eck-remote-clusters.md
@@ -9,7 +9,7 @@ mapped_pages:

# Remote clusters with {{eck}} [k8s-remote-clusters]

-The [remote clusters module](/deploy-manage/remote-clusters.md) in Elasticsearch enables you to establish uni-directional connections to a remote cluster. This functionality is used in cross-cluster replication and cross-cluster search.
+The [remote clusters module](/deploy-manage/remote-clusters.md) in {{es}} enables you to establish uni-directional connections to a remote cluster. This functionality is used in cross-cluster replication and cross-cluster search.

When using remote cluster connections with ECK, the setup process depends on where the remote cluster is deployed.
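When both clusters are managed by the same operator, the connection is typically declared directly on the {{es}} resource. A minimal sketch, assuming two clusters in the same operator scope (names and version are placeholders):

```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: cluster-one
spec:
  version: 8.16.1
  nodeSets:
  - name: default
    count: 3
  remoteClusters:
  - name: cluster-two          # alias used in cross-cluster search/replication requests
    elasticsearchRef:
      name: cluster-two        # the remote Elasticsearch resource managed by the same operator
```

A cross-cluster search from `cluster-one` can then address remote indices as `cluster-two:index-name`.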
diff --git a/deploy-manage/security.md b/deploy-manage/security.md
index 9e99f99df7..41f5b0bdd6 100644
--- a/deploy-manage/security.md
+++ b/deploy-manage/security.md
@@ -25,7 +25,7 @@ The availability and configurability of security features vary by deployment typ

:::{include} /deploy-manage/security/_snippets/complete-security.md
:::

-## Managed security in Elastic Cloud [managed-security-in-elastic-cloud]
+## Managed security in {{ecloud}} [managed-security-in-elastic-cloud]

```yaml {applies_to}
deployment:
  ess: all
diff --git a/deploy-manage/security/_snippets/cluster-communication-network.md b/deploy-manage/security/_snippets/cluster-communication-network.md
index d4751d1f58..7d16451e88 100644
--- a/deploy-manage/security/_snippets/cluster-communication-network.md
+++ b/deploy-manage/security/_snippets/cluster-communication-network.md
@@ -1,7 +1,7 @@
* [Manage TLS certificates](/deploy-manage/security/secure-cluster-communications.md): TLS certificates apply security controls to network communications. Elastic uses TLS certificates to secure communications in two places:
  * **The HTTP layer**: Used for communication between your cluster or deployment and the internet.
  * **The transport layer**: Used mainly for inter-node communications, and in certain cases for cluster to cluster communication.
-  * In self-managed {{es}} clusters, you can also [Configure Kibana and Elasticsearch to use mutual TLS](/deploy-manage/security/kibana-es-mutual-tls.md).
+  * In self-managed {{es}} clusters, you can also [Configure {{kib}} and {{es}} to use mutual TLS](/deploy-manage/security/kibana-es-mutual-tls.md).
* [Enable cipher suites for stronger encryption](/deploy-manage/security/enabling-cipher-suites-for-stronger-encryption.md): The TLS and SSL protocols use a cipher suite that determines the strength of encryption used to protect the data.
You may want to enable the use of additional cipher suites, so you can use different cipher suites for your TLS communications or communications with authentication providers. * [Restrict connections using traffic filtering](/deploy-manage/security/traffic-filtering.md): Traffic filtering allows you to limit how your deployments can be accessed. Add another layer of security to your installation and deployments by restricting inbound traffic to only the sources that you trust. Restrict access based on IP addresses or CIDR ranges, or, in {{ech}} deployments, secure connectivity through AWS PrivateLink, Azure Private Link, or GCP Private Service Connect. * [Allow or deny {{ech}} IP ranges](/deploy-manage/security/elastic-cloud-static-ips.md): {{ecloud}} publishes a list of IP addresses used by its {{ech}} services for both incoming and outgoing traffic. Users can use these lists to configure their network firewalls as needed to allow or restrict traffic related to {{ech}} services. \ No newline at end of file diff --git a/deploy-manage/security/aws-privatelink-traffic-filters.md b/deploy-manage/security/aws-privatelink-traffic-filters.md index eb12c3bc57..07e79ebd65 100644 --- a/deploy-manage/security/aws-privatelink-traffic-filters.md +++ b/deploy-manage/security/aws-privatelink-traffic-filters.md @@ -36,7 +36,7 @@ Check [interface endpoint availability zone considerations](https://docs.aws.ama Elastic [charges](/deploy-manage/cloud-organization/billing/cloud-hosted-deployment-billing-dimensions.md) for inter-node traffic regardless of whether nodes are in the same or different availability zones (AZ). As a result, placing the deployment nodes within a single AZ, instead of two or three, does not reduce inter-node costs. 
-On the customer VPC side, the inter-availability zone data transfer, within the same AWS region, towards AWS PrivateLink endpoints, [is free of charge](https://aws.amazon.com/about-aws/whats-new/2022/04/aws-data-transfer-price-reduction-privatelink-transit-gateway-client-vpn-services/). As a result, you do not incur charges for cross-AZ data transfer within your VPC when the target is the AWS Privatelink Elastic Cloud service endpoint. We recommend you set up the VPC endpoints in all supported Elastic Cloud AZs for a particular region for maximum traffic throughput and resiliency.
+On the customer VPC side, inter-availability zone data transfer within the same AWS region towards AWS PrivateLink endpoints [is free of charge](https://aws.amazon.com/about-aws/whats-new/2022/04/aws-data-transfer-price-reduction-privatelink-transit-gateway-client-vpn-services/). As a result, you do not incur charges for cross-AZ data transfer within your VPC when the target is the AWS PrivateLink {{ecloud}} service endpoint. We recommend you set up the VPC endpoints in all supported {{ecloud}} AZs for a particular region for maximum traffic throughput and resiliency.

If Elastic and your VPC overlap in two AZs or less, you can create subnets and VPC PrivateLink endpoints in your VPC within the same availability zones where Elastic PrivateLink service has presence.

diff --git a/deploy-manage/security/k8s-https-settings.md b/deploy-manage/security/k8s-https-settings.md
index bc6af7fa2f..cada17e5dc 100644
--- a/deploy-manage/security/k8s-https-settings.md
+++ b/deploy-manage/security/k8s-https-settings.md
@@ -40,13 +40,13 @@ QLYL4zLEby3vRxq65+xofVBJAaM=

### Custom HTTP certificate [k8s-custom-http-certificate]

-You can provide your own CA and certificates instead of the self-signed certificate to connect to Elastic stack applications through HTTPS using a Kubernetes secret.
+You can provide your own CA and certificates instead of the self-signed certificate to connect to {{stack}} applications through HTTPS using a Kubernetes secret. Check [Setup your own certificate](./set-up-basic-security-plus-https.md#encrypt-http-communication) to learn how to do that with `elasticsearch-certutil` tool. #### Custom self-signed certificate using OpenSSL [k8s_custom_self_signed_certificate_using_openssl] -This example illustrates how to create your own self-signed certificate for the [quickstart Elasticsearch cluster](/deploy-manage/deploy/cloud-on-k8s/elasticsearch-deployment-quickstart.md) using the OpenSSL command line utility. Note the subject alternative name (SAN) entry for `quickstart-es-http.default.svc`. +This example illustrates how to create your own self-signed certificate for the [quickstart {{es}} cluster](/deploy-manage/deploy/cloud-on-k8s/elasticsearch-deployment-quickstart.md) using the OpenSSL command line utility. Note the subject alternative name (SAN) entry for `quickstart-es-http.default.svc`. ```sh $ openssl req -x509 -sha256 -nodes -newkey rsa:4096 -days 365 -subj "/CN=quickstart-es-http" -addext "subjectAltName=DNS:quickstart-es-http.default.svc" -keyout tls.key -out tls.crt @@ -55,7 +55,7 @@ $ kubectl create secret generic quickstart-es-cert --from-file=ca.crt=tls.crt -- #### Custom self-signed certificate using cert-manager [k8s_custom_self_signed_certificate_using_cert_manager] -This example illustrates how to issue a self-signed certificate for the [quickstart Elasticsearch cluster](/deploy-manage/deploy/cloud-on-k8s/elasticsearch-deployment-quickstart.md) using a [cert-manager](https://cert-manager.io) self-signed issuer. +This example illustrates how to issue a self-signed certificate for the [quickstart {{es}} cluster](/deploy-manage/deploy/cloud-on-k8s/elasticsearch-deployment-quickstart.md) using a [cert-manager](https://cert-manager.io) self-signed issuer. 
```yaml --- @@ -85,7 +85,7 @@ spec: - quickstart ``` -Here is how to issue multiple Elasticsearch certificates from a single self-signed CA. This is useful for example for [Remote clusters](/deploy-manage/remote-clusters/eck-remote-clusters.md) which need to trust each other’s CA, in order to avoid mounting N CAs when a cluster is connected to N other clusters. +Here is how to issue multiple {{es}} certificates from a single self-signed CA. This is useful for example for [Remote clusters](/deploy-manage/remote-clusters/eck-remote-clusters.md) which need to trust each other’s CA, in order to avoid mounting N CAs when a cluster is connected to N other clusters. ```yaml apiVersion: cert-manager.io/v1 @@ -160,7 +160,7 @@ spec: ### Provide your own certificate [k8s-setting-up-your-own-certificate] -You can bring your own certificate to configure TLS to ensure that communication between HTTP clients and the Elastic Stack application is encrypted. +You can bring your own certificate to configure TLS to ensure that communication between HTTP clients and the {{stack}} application is encrypted. Create a Kubernetes secret with: @@ -197,7 +197,7 @@ spec: secretName: my-cert ``` -## Kibana HTTP configuration in ECK [k8s-kibana-http-configuration] +## {{kib}} HTTP configuration in ECK [k8s-kibana-http-configuration] By default, ECK creates a `ClusterIP` [Service](https://kubernetes.io/docs/concepts/services-networking/service/) and associates it with the {{kib}} deployment. 
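The custom HTTP certificate workflow changed above can be dry-run locally with OpenSSL alone, before any cluster is involved. A minimal sketch, assuming a hypothetical `my-es-http` endpoint name (substitute the DNS name of your own Service); the `kubectl` step is commented out because it requires a live cluster:

```shell
# Generate a self-signed key pair carrying the SAN the HTTP endpoint will serve.
# "my-es-http.default.svc" is an assumed name; use your own Service DNS name.
openssl req -x509 -sha256 -nodes -newkey rsa:2048 -days 365 \
  -subj "/CN=my-es-http" \
  -addext "subjectAltName=DNS:my-es-http.default.svc" \
  -keyout tls.key -out tls.crt

# Verify the SAN made it into the certificate before packaging it.
openssl x509 -in tls.crt -noout -text | grep -A1 "Subject Alternative Name"

# Package key, certificate, and CA into the secret ECK expects (needs a cluster):
# kubectl create secret generic my-cert \
#   --from-file=ca.crt=tls.crt --from-file=tls.crt --from-file=tls.key
```

If the printed SAN does not match the name clients will actually use, TLS verification fails at connect time, so it is worth checking before creating the secret.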
diff --git a/deploy-manage/security/k8s-network-policies.md b/deploy-manage/security/k8s-network-policies.md
index 012c0ff8e4..d212578317 100644
--- a/deploy-manage/security/k8s-network-policies.md
+++ b/deploy-manage/security/k8s-network-policies.md
@@ -109,7 +109,7 @@ spec:
```

-## Isolating Elasticsearch [k8s-network-policies-elasticsearch-isolation]
+## Isolating {{es}} [k8s-network-policies-elasticsearch-isolation]

| | |
| --- | --- |
@@ -171,7 +171,7 @@ spec:
```

-## Isolating Kibana [k8s-network-policies-kibana-isolation]
+## Isolating {{kib}} [k8s-network-policies-kibana-isolation]

| | |
| --- | --- |
@@ -201,7 +201,7 @@ spec:
  - ports:
    - port: 53
      protocol: UDP
    # [Optional] If Agent is deployed, this is to allow Kibana to access the Elastic Package Registry (https://epr.elastic.co).
    # - port: 443
    #   protocol: TCP
  ingress:
diff --git a/deploy-manage/security/k8s-secure-settings.md b/deploy-manage/security/k8s-secure-settings.md
index 57a6c444f2..d4a5490a02 100644
--- a/deploy-manage/security/k8s-secure-settings.md
+++ b/deploy-manage/security/k8s-secure-settings.md
@@ -15,7 +15,7 @@ The secrets should contain a key-value pair for each secure setting you want to

To allow the operator to inject the settings into the application, you must reference your secrets in the `spec.secureSettings` field of your {{es}} or {{kib}} object definition. Next, you’ll find examples for both {{es}} and {{kib}}.

-## Elasticsearch basic usage [k8s_basic_usage]
+## {{es}} basic usage [k8s_basic_usage]

It is possible to reference several secrets:

@@ -126,7 +126,7 @@ stringData:
  }
```

-## Kibana secure settings [k8s-kibana-secure-settings]
+## {{kib}} secure settings [k8s-kibana-secure-settings]

Similar to {{es}} secure settings, you can use Kubernetes secrets to manage keystore settings for {{kib}}.
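To make the secure-settings pattern above concrete (one key-value pair per secure setting, then a reference under `spec.secureSettings`), the sketch below writes an illustrative Secret manifest plus the matching spec fragment. The secret name is an assumption, and `s3.client.default.access_key` merely stands in for whichever secure setting you need:

```shell
# Write a Secret carrying one secure setting as a key-value pair.
cat > es-secure-settings.yaml <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: es-secure-settings
stringData:
  s3.client.default.access_key: AKIAIOSFODNN7EXAMPLE
EOF

# The Elasticsearch resource then references the Secret by name:
cat > es-secure-settings-ref.yaml <<'EOF'
spec:
  secureSettings:
  - secretName: es-secure-settings
EOF

# kubectl apply -f es-secure-settings.yaml   # needs a live cluster
```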
diff --git a/deploy-manage/security/k8s-transport-settings.md b/deploy-manage/security/k8s-transport-settings.md index ec22d23ad8..a736c710e5 100644 --- a/deploy-manage/security/k8s-transport-settings.md +++ b/deploy-manage/security/k8s-transport-settings.md @@ -8,11 +8,11 @@ mapped_pages: # Manage transport certificates on ECK [k8s-transport-settings] -The transport module in Elasticsearch is used for internal communication between nodes within the cluster as well as communication between remote clusters. Check the [Elasticsearch documentation](elasticsearch://reference/elasticsearch/configuration-reference/networking-settings.md) for details. For customization options of the HTTP layer, check [Access services](../deploy/cloud-on-k8s/accessing-services.md) and [HTTP TLS certificates](./k8s-https-settings.md). +The transport module in {{es}} is used for internal communication between nodes within the cluster as well as communication between remote clusters. Check the [{{es}} documentation](elasticsearch://reference/elasticsearch/configuration-reference/networking-settings.md) for details. For customization options of the HTTP layer, check [Access services](../deploy/cloud-on-k8s/accessing-services.md) and [HTTP TLS certificates](./k8s-https-settings.md). ## Customize the Transport Service [k8s_customize_the_transport_service] -In the `spec.transport.service.` section, you can change the Kubernetes service used to expose the Elasticsearch transport module: +In the `spec.transport.service.` section, you can change the Kubernetes service used to expose the {{es}} transport module: ```yaml spec: @@ -34,7 +34,7 @@ When you change the `clusterIP` setting of the service, ECK deletes and re-creat ## Configure a custom Certificate Authority [k8s-transport-ca] -Elasticsearch uses X.509 certificates to establish encrypted and authenticated connections across nodes in the cluster. 
By default, ECK creates a self-signed CA certificate to issue a certificate [for each node in the cluster](/deploy-manage/security/set-up-basic-security.md#encrypt-internode-communication). +{{es}} uses X.509 certificates to establish encrypted and authenticated connections across nodes in the cluster. By default, ECK creates a self-signed CA certificate to issue a certificate [for each node in the cluster](/deploy-manage/security/set-up-basic-security.md#encrypt-internode-communication). You can use a Kubernetes secret to provide your own CA instead of the self-signed certificate that ECK will then use to create node certificates for transport connections. The CA certificate must be stored in the secret under `ca.crt` and the private key must be stored under `ca.key`. @@ -75,7 +75,7 @@ spec: When following the instructions in [Configure a custom Certificate Authority](#k8s-transport-ca) the issuance of certificates is orchestrated by the ECK operator and the operator needs access to the CAs private key. If this is undesirable it is also possible to configure node transport certificates without involving the ECK operator. The following two pre-requisites apply: -1. The tooling used must be able to issue individual certificates for each Elasticsearch node and dynamically add or remove certificates as the cluster scales up and down. +1. The tooling used must be able to issue individual certificates for each {{es}} node and dynamically add or remove certificates as the cluster scales up and down. 2. The ECK operator must be configured to be aware of the CA in use for the [remote cluster](../remote-clusters/eck-remote-clusters.md#k8s-remote-clusters-connect-external) support to work. The following example configuration using [cert-manager csi-driver](https://cert-manager.io/docs/projects/csi-driver/) and [trust-manager](https://cert-manager.io/docs/projects/trust-manager/) meets these two requirements: @@ -118,7 +118,7 @@ spec: ``` 1. 
Disables the self-signed certificates generated by ECK for the transport layer. -2. The example assumes that a `ClusterIssuer` by the name of `ca-cluster-issuer` exists and a PEM encoded version of the CA certificate is available in a ConfigMap (in the example named `trust`). The CA certificate must be in a file called `ca.crt` inside the ConfigMap in the same namespace as the Elasticsearch resource. +2. The example assumes that a `ClusterIssuer` by the name of `ca-cluster-issuer` exists and a PEM encoded version of the CA certificate is available in a ConfigMap (in the example named `trust`). The CA certificate must be in a file called `ca.crt` inside the ConfigMap in the same namespace as the {{es}} resource. 3. If the remote cluster server is enabled, then the DNS names must also include both:* The DNS name for the related Kubernetes `Service`: `-es-remote-cluster.${POD_NAMESPACE}.svc` * The Pod DNS name: `${POD_NAME}.-es-.${POD_NAMESPACE}.svc` @@ -155,7 +155,7 @@ spec: ... ``` -1. This example uses a self-signed issuer for the root CA and a second issuer for the Elasticsearch cluster transport certificates as the cert-manager CSI driver does not support self-signed CAs. +1. This example uses a self-signed issuer for the root CA and a second issuer for the {{es}} cluster transport certificates as the cert-manager CSI driver does not support self-signed CAs. When transitioning from a configuration that uses externally provisioned certificates back to ECK-managed self-signed transport certificates it is important to ensure that the externally provisioned CA remains configured as a trusted CA through the `.spec.transport.tls.certificateAuthorities` attribute until all nodes in the cluster have been updated to use the ECK-managed certificates. When transitioning from ECK-managed certificates to externally provisioned ones, ECK ensures automatically that the ECK CA remains configured until the transition has been completed. 
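The custom transport CA described above can be prepared with OpenSSL. This is only a sketch with placeholder names; the important part is that the secret exposes exactly the keys `ca.crt` and `ca.key`:

```shell
# Create a CA key and a self-signed CA certificate (names are placeholders).
openssl genrsa -out ca.key 2048
openssl req -x509 -sha256 -new -nodes -days 3650 \
  -subj "/CN=my-transport-ca" \
  -addext "basicConstraints=critical,CA:TRUE" \
  -key ca.key -out ca.crt

# Sanity-check that the certificate really is a CA.
openssl x509 -in ca.crt -noout -text | grep "CA:TRUE"

# Store it under the exact keys ECK looks for (needs a cluster):
# kubectl create secret generic my-transport-ca \
#   --from-file=ca.crt --from-file=ca.key
```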
diff --git a/deploy-manage/security/logging-configuration/auditing-search-queries.md b/deploy-manage/security/logging-configuration/auditing-search-queries.md index 6df7c14710..3df3a50cd5 100644 --- a/deploy-manage/security/logging-configuration/auditing-search-queries.md +++ b/deploy-manage/security/logging-configuration/auditing-search-queries.md @@ -10,7 +10,7 @@ applies_to: serverless: unavailable --- -# Audit Elasticsearch search queries [auditing-search-queries] +# Audit {{es}} search queries [auditing-search-queries] There is no [audit event type](elasticsearch://reference/elasticsearch/elasticsearch-audit-events.md) specifically dedicated to search queries. Search queries are analyzed and then processed; the processing triggers authorization actions that are audited. However, the original raw query, as submitted by the client, is not accessible downstream when authorization auditing occurs. diff --git a/deploy-manage/security/logging-configuration/configuring-audit-logs.md b/deploy-manage/security/logging-configuration/configuring-audit-logs.md index 4286c13bb8..2e65efcc1a 100644 --- a/deploy-manage/security/logging-configuration/configuring-audit-logs.md +++ b/deploy-manage/security/logging-configuration/configuring-audit-logs.md @@ -12,7 +12,7 @@ applies_to: When auditing security events, a single client request might generate multiple audit events across multiple cluster nodes, potentially leading to a high volume of log data and I/O operations. To maintain clarity and ensure logs remain actionable, {{es}} and {{kib}} provide configuration mechanisms to control what events are logged and which can be ignored. 
-### Elasticsearch auditing configuration
+### {{es}} auditing configuration

{{es}} configuration options include:

@@ -28,9 +28,9 @@ For a complete description of event details and format, refer to the following r

* [{{es}} audit events details and schema](elasticsearch://reference/elasticsearch/elasticsearch-audit-events.md)
* [{{es}} log entry output format](./logfile-audit-output.md#audit-log-entry-format)

-### Kibana auditing configuration
+### {{kib}} auditing configuration

-To control the logs that are outputted by Kibana, you can use [{{kib}} ignore filters](kibana://reference/configuration-reference/security-settings.md#audit-logging-ignore-filters). These are a list of filters that determine which events should be excluded from the audit log.
+To control the logs that {{kib}} outputs, you can use [{{kib}} ignore filters](kibana://reference/configuration-reference/security-settings.md#audit-logging-ignore-filters): a list of filters that determine which events should be excluded from the audit log.

In self-managed systems, you can optionally configure audit logs location, and file/rolling file using [{{kib}} audit logging settings](kibana://reference/configuration-reference/security-settings.md#audit-logging-settings).

diff --git a/deploy-manage/security/logging-configuration/enabling-audit-logs.md b/deploy-manage/security/logging-configuration/enabling-audit-logs.md
index f977098e3c..924944d658 100644
--- a/deploy-manage/security/logging-configuration/enabling-audit-logs.md
+++ b/deploy-manage/security/logging-configuration/enabling-audit-logs.md
@@ -61,7 +61,7 @@ Audit logs are disabled by default and must be explicitly enabled.

To learn how to consume these logs in an {{es}} cluster, refer to [](/deploy-manage/monitor/stack-monitoring/collecting-log-data-with-filebeat.md).
::::: -:::::{tab-item} Elastic Cloud Hosted +:::::{tab-item} {{ech}} To enable audit logging in an {{ech}} deployment: @@ -71,11 +71,11 @@ To enable audit logging in an {{ech}} deployment: 3. From your deployment menu, go to the **Edit** page. -4. To enable auditing for Elasticsearch: +4. To enable auditing for {{es}}: * In the **Elasticsearch** section, select **Manage user settings and extensions**. For deployments with existing user settings, you may have to expand the **Edit elasticsearch.yml** caret for each node instead. * Add the setting `xpack.security.audit.enabled: true`. -5. To enable auditing for Kibana: +5. To enable auditing for {{kib}}: * In the **Kibana** section, select **Edit user settings**. For deployments with existing user settings, you may have to expand the **Edit kibana.yml** caret instead. * Add the setting `xpack.security.audit.enabled: true`. @@ -102,7 +102,7 @@ To enable audit logging in an ECE deployment: 5. To enable auditing for {{kib}}: * In the **Kibana** section, select **Edit user settings**. For deployments with existing user settings, you may have to expand the **Edit kibana.yml** caret instead. * Add the setting `xpack.security.audit.enabled: true`. - * If your Elastic Stack version is below 7.6.0, add the setting `logging.quiet: false`. + * If your {{stack}} version is below 7.6.0, add the setting `logging.quiet: false`. 6. Select **Save**. 
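On self-managed nodes, the same toggle used in the cloud tabs above is a single line in each configuration file. A minimal sketch, with relative paths standing in for your real `elasticsearch.yml` and `kibana.yml`:

```shell
# Placeholder paths; point these at your real config directories.
ES_CONF=./elasticsearch.yml
KBN_CONF=./kibana.yml
touch "$ES_CONF" "$KBN_CONF"

# Audit logging is disabled by default; enable it explicitly on both sides.
echo 'xpack.security.audit.enabled: true' >> "$ES_CONF"
echo 'xpack.security.audit.enabled: true' >> "$KBN_CONF"

# Restart Elasticsearch and Kibana for the setting to take effect.
```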
diff --git a/deploy-manage/security/logging-configuration/logfile-audit-events-ignore-policies.md b/deploy-manage/security/logging-configuration/logfile-audit-events-ignore-policies.md index 9b19cca887..3633c1de53 100644 --- a/deploy-manage/security/logging-configuration/logfile-audit-events-ignore-policies.md +++ b/deploy-manage/security/logging-configuration/logfile-audit-events-ignore-policies.md @@ -11,7 +11,7 @@ applies_to: serverless: unavailable --- -# Elasticsearch audit events ignore policies [audit-log-ignore-policy] +# {{es}} audit events ignore policies [audit-log-ignore-policy] The comprehensive audit trail is necessary to ensure accountability. It offers tremendous value during incident response and can even be required for demonstrating compliance. diff --git a/deploy-manage/security/logging-configuration/logfile-audit-output.md b/deploy-manage/security/logging-configuration/logfile-audit-output.md index f7c8b6346b..eb94f5f12a 100644 --- a/deploy-manage/security/logging-configuration/logfile-audit-output.md +++ b/deploy-manage/security/logging-configuration/logfile-audit-output.md @@ -11,7 +11,7 @@ applies_to: serverless: unavailable --- -# Elasticsearch logfile audit output [audit-log-output] +# {{es}} logfile audit output [audit-log-output] The `logfile` audit output is the only output for auditing. By default, it writes data to the `_audit.json` file in the logs directory. The file is also rotated and archived daily or upon reaching the 1GB file size limit. 
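The ignore policies renamed above are driven by plain `elasticsearch.yml` settings. A sketch of one policy that drops events generated by a single system user (the policy name `example1` and the user are illustrative):

```shell
# Append an illustrative ignore policy to a placeholder elasticsearch.yml.
cat >> ./elasticsearch.yml <<'EOF'
xpack.security.audit.logfile.events.ignore_filters:
  example1:
    users: ["kibana_system"]
EOF

# Show the policy that was added.
grep -A2 "ignore_filters" ./elasticsearch.yml
```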
diff --git a/deploy-manage/security/logging-configuration/security-event-audit-logging.md b/deploy-manage/security/logging-configuration/security-event-audit-logging.md index 564ad3a952..15c609c7a8 100644 --- a/deploy-manage/security/logging-configuration/security-event-audit-logging.md +++ b/deploy-manage/security/logging-configuration/security-event-audit-logging.md @@ -33,5 +33,5 @@ By following these guidelines, you can effectively audit system activity, enhanc For a complete description of audit event details and format, refer to: -* [Elasticsearch audit events](elasticsearch://reference/elasticsearch/elasticsearch-audit-events.md) -* [Kibana audit events](kibana://reference/kibana-audit-events.md) +* [{{es}} audit events](elasticsearch://reference/elasticsearch/elasticsearch-audit-events.md) +* [{{kib}} audit events](kibana://reference/kibana-audit-events.md) diff --git a/deploy-manage/security/secure-cluster-communications.md b/deploy-manage/security/secure-cluster-communications.md index f73cf415dc..ec0cb29031 100644 --- a/deploy-manage/security/secure-cluster-communications.md +++ b/deploy-manage/security/secure-cluster-communications.md @@ -10,7 +10,7 @@ mapped_pages: - https://www.elastic.co/guide/en/kibana/current/elasticsearch-mutual-tls.html --- -% Scope: landing page for manually handling TLS certificates, and for information about TLS in Elastic Stack in general. +% Scope: landing page for manually handling TLS certificates, and for information about TLS in {{stack}} in general. # TLS encryption for cluster communications This page explains how to secure communications and set up TLS certificates in your {{stack}} deployments. @@ -34,7 +34,7 @@ Keep this distinction in mind when configuring security settings. Both {{es}} and {{kib}}, the core components of the {{stack}}, expose service endpoints that must be secured. 
{{es}} handles traffic at two levels: * The **transport layer** (defaults to port `9300`), used for internal communication between nodes in the cluster. -* The **HTTP layer** (defaults to port `9200`), used by external clients — including Kibana — to send requests using the REST API. +* The **HTTP layer** (defaults to port `9200`), used by external clients — including {{kib}} — to send requests using the REST API. Additionally, {{kib}} functions as a web server, exposing its own **HTTP endpoint** (defaults to port `5601`) to users, and also acts as a client when sending requests to {{es}}. diff --git a/deploy-manage/security/secure-hosting-environment.md b/deploy-manage/security/secure-hosting-environment.md index e487812b2f..71d029fae2 100644 --- a/deploy-manage/security/secure-hosting-environment.md +++ b/deploy-manage/security/secure-hosting-environment.md @@ -16,5 +16,5 @@ This section covers security measures specific to: :::{note} * For security features at deployment level, refer to [](./secure-your-cluster-deployment.md). -* For details on security mechanisms applied by default in Elastic Cloud SaaS platform, refer to [](../security.md#managed-security-in-elastic-cloud). +* For details on security mechanisms applied by default in {{ecloud}} SaaS platform, refer to [](../security.md#managed-security-in-elastic-cloud). ::: diff --git a/deploy-manage/security/secure-settings.md b/deploy-manage/security/secure-settings.md index 61ddf5d025..ef56f97315 100644 --- a/deploy-manage/security/secure-settings.md +++ b/deploy-manage/security/secure-settings.md @@ -56,7 +56,7 @@ The instructions below cover how to manage {{es}} keystore settings for each dep ::::{tab-item} ECH and ECE :sync: cloud % ### ECE and ECH -You can manage {{es}} secure settings in the **Security > {{es}} keystore** section of your deployment page in the Elastic Cloud Console or ECE Cloud UI. 
+You can manage {{es}} secure settings in the **Security > {{es}} keystore** section of your deployment page in the {{ecloud}} Console or ECE Cloud UI. :::{note} {{ece}} also supports managing {{es}} keystore of your deployments through its [RESTful API](https://www.elastic.co/docs/api/doc/cloud-enterprise/). Refer to [Configure {{es}} keystore through ECE API](cloud://reference/cloud-enterprise/ece-restful-api-examples-configuring-keystore.md) for an example. @@ -111,7 +111,7 @@ Refer to [Configure secure settings on ECK](./k8s-secure-settings.md) for detail ::::{tab-item} Self-managed :sync: self-managed % ### Self-managed -In self-managed deployments, you're responsible for configuring and maintaining the Elasticsearch keystore on each node individually. +In self-managed deployments, you're responsible for configuring and maintaining the {{es}} keystore on each node individually. {{es}} provides the [`elasticsearch-keystore`](elasticsearch://reference/elasticsearch/command-line-tools/elasticsearch-keystore.md) command-line tool to help with this task. It allows you to list, add, remove, and update secure settings, as well as protect the keystore with a password if wanted. @@ -123,7 +123,7 @@ Changes to the keystore take effect only after restarting {{es}}, except for [re #### Create the keystore [creating-keystore] -To create the Elasticsearch keystore, use the `create` command: +To create the {{es}} keystore, use the `create` command: ```sh bin/elasticsearch-keystore create -p @@ -204,7 +204,7 @@ deployment: {{kib}} supports secure settings through its own keystore, similar to the {{es}} keystore. It provides a way to protect sensitive configuration values, such as authentication credentials or encryption keys, by storing them securely outside of `kibana.yml`. ::::{note} -Unlike with {{es}} keystore, any valid {{kib}} setting can be stored securely in the keystore. 
However, as with {{es}}, adding invalid, unsupported, or extraneous settings will cause Kibana to fail to start. Always ensure the setting exists and is properly formatted in the [configuration reference](kibana://reference/configuration-reference.md) before adding it to the keystore. +Unlike with {{es}} keystore, any valid {{kib}} setting can be stored securely in the keystore. However, as with {{es}}, adding invalid, unsupported, or extraneous settings will cause {{kib}} to fail to start. Always ensure the setting exists and is properly formatted in the [configuration reference](kibana://reference/configuration-reference.md) before adding it to the keystore. :::: This section provides examples of {{kib}} secure settings handling using the `kibana-keystore` command-line tool for self-managed deployments. diff --git a/deploy-manage/security/secure-your-elastic-cloud-enterprise-installation/allow-x509-certificates-signed-with-sha-1.md b/deploy-manage/security/secure-your-elastic-cloud-enterprise-installation/allow-x509-certificates-signed-with-sha-1.md index e9f7ccc8dd..b1cf620369 100644 --- a/deploy-manage/security/secure-your-elastic-cloud-enterprise-installation/allow-x509-certificates-signed-with-sha-1.md +++ b/deploy-manage/security/secure-your-elastic-cloud-enterprise-installation/allow-x509-certificates-signed-with-sha-1.md @@ -8,7 +8,7 @@ mapped_pages: # Allow x509 Certificates Signed with SHA-1 [ece-allow-x509-sha1] -Elastic Cloud Enterprise 3.5.0 and later defaults to rejecting x509 certificates signed with the SHA-1 hash function. This does not apply to self-signed root certificates. Practical attacks against SHA-1 have been demonstrated since 2017 and publicly trusted Certificate Authorities have not issues SHA-1 certificates since 2015. +{{ece}} 3.5.0 and later defaults to rejecting x509 certificates signed with the SHA-1 hash function. This does not apply to self-signed root certificates. 
Practical attacks against SHA-1 have been demonstrated since 2017 and publicly trusted Certificate Authorities have not issued SHA-1 certificates since 2015.

You can temporarily bring back the legacy behavior by running the following script. Note that this requires a proxy restart, and support for x509 SHA-1 certificates will be entirely removed in a future release.

diff --git a/deploy-manage/security/secure-your-elastic-cloud-enterprise-installation/configure-tls-version.md b/deploy-manage/security/secure-your-elastic-cloud-enterprise-installation/configure-tls-version.md
index 471599ab5d..6f8a794824 100644
--- a/deploy-manage/security/secure-your-elastic-cloud-enterprise-installation/configure-tls-version.md
+++ b/deploy-manage/security/secure-your-elastic-cloud-enterprise-installation/configure-tls-version.md
@@ -8,9 +8,9 @@ mapped_pages:

# Configure the TLS version [ece-configure-tls-version]

-Elastic Cloud Enterprise 2.4.0 and later defaults to minimum TLS version 1.2 with a modern set of cipher suites.
+{{ece}} 2.4.0 and later defaults to minimum TLS version 1.2 with a modern set of cipher suites.
-| Elastic Cloud Enterprise version | Default minimum TLS version | Default allowed cipher suites | +| {{ece}} version | Default minimum TLS version | Default allowed cipher suites | | --- | --- | --- | | 2.4.0 and later | TLS 1.2 | `ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256` | | 2.3.1 and earlier | TLS 1.0 | `CDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:ECDHE-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA` | diff --git a/deploy-manage/security/secure-your-elastic-cloud-enterprise-installation/manage-security-certificates.md b/deploy-manage/security/secure-your-elastic-cloud-enterprise-installation/manage-security-certificates.md index a9c90eede8..387829983b 100644 --- a/deploy-manage/security/secure-your-elastic-cloud-enterprise-installation/manage-security-certificates.md +++ b/deploy-manage/security/secure-your-elastic-cloud-enterprise-installation/manage-security-certificates.md @@ -8,7 +8,7 @@ mapped_pages: # Manage security certificates [ece-manage-certificates] -During installation, Elastic Cloud Enterprise generates certificates so that you can connect to your installation securely. In order to connect securely, you must first download and trust the CA certificates generated during installation before issuing any requests to ECE. If your organization operates as its own certificate authority, you can provide your certificates for ECE to avoid a security warning when connecting to the Cloud UI over HTTPS. 
+During installation, {{ece}} generates certificates so that you can connect to your installation securely. In order to connect securely, you must first download and trust the CA certificates generated during installation before issuing any requests to ECE. If your organization operates as its own certificate authority, you can provide your certificates for ECE to avoid a security warning when connecting to the Cloud UI over HTTPS. In these instructions, we show you how you can download the security certificate that gets generated during the ECE installation and use it to add your own TLS/SSL certificates. You can add your TLS/SSL certificates any time after you have installed ECE on your hosts. In addition to the steps shown here, you might also need to import your CA certificate into your browser certificate chain, if you don’t already use the same certificate within your organization. @@ -18,7 +18,7 @@ Cloud UI certificate : Used to connect securely to the Cloud UI and to make RESTful API calls. Proxy certificate -: Used to connect securely to Elasticsearch clusters and Kibana. You should use a wildcard certificate rooted at the [cluster endpoint that you set](../../deploy/cloud-enterprise/change-endpoint-urls.md) (`*.example.com`, for example). A wildcard certificate is required, because the first label of the DNS address is distinct for Elasticsearch clusters and Kibana (`bc898abb421843918ebc31a513169a.example.com`, for example). +: Used to connect securely to {{es}} clusters and {{kib}}. You should use a wildcard certificate rooted at the [cluster endpoint that you set](../../deploy/cloud-enterprise/change-endpoint-urls.md) (`*.example.com`, for example). A wildcard certificate is required, because the first label of the DNS address is distinct for {{es}} clusters and {{kib}} (`bc898abb421843918ebc31a513169a.example.com`, for example). 
If you wish to enable [custom endpoint aliases](../../deploy/cloud-enterprise/enable-custom-endpoint-aliases.md) in ECE 2.10 or later, please also follow the directions for adding Subject Alternative Name (SAN) entries to support these aliases. @@ -27,7 +27,7 @@ Proxy certificate :::: - After the certificates have been installed, connecting securely to Elasticsearch, Kibana, and the Cloud UI or making secure RESTful API calls to ECE should not result in any security warnings or errors. + After the certificates have been installed, connecting securely to {{es}}, {{kib}}, and the Cloud UI or making secure RESTful API calls to ECE should not result in any security warnings or errors. @@ -283,8 +283,8 @@ To add a proxy certificate from the command line: curl --cacert CA_CERTIFICATE_FILENAME -H 'Content-Type: application/json' --data-binary @PROXY_PEM_FILENAME --user "admin:PASSWORD" "https://admin:12343/api/v1/platform/configuration/security/tls/proxy" ``` -2. Log out of any Kibana instances you might be logged into and log in again. -3. Verify that you are now using the new certificate chain. There should be no security warnings when you connect to the Elasticsearch or Kibana endpoints over HTTPS in your web browser. +2. Log out of any {{kib}} instances you might be logged into and log in again. +3. Verify that you are now using the new certificate chain. There should be no security warnings when you connect to the {{es}} or {{kib}} endpoints over HTTPS in your web browser. 
Alternatively, you can also use the openssl command to check the proxy certificates, where HOSTNAME_OR_IP is the hostname or IP address of the proxy host: diff --git a/deploy-manage/security/self-auto-setup.md b/deploy-manage/security/self-auto-setup.md index 039c66fb14..44bf0ae8aa 100644 --- a/deploy-manage/security/self-auto-setup.md +++ b/deploy-manage/security/self-auto-setup.md @@ -11,7 +11,7 @@ mapped_pages: --- % Scope: Automatic setup -% Original title: Start the Elastic Stack with security enabled automatically +% Original title: Start the {{stack}} with security enabled automatically # Automatic security setup [configuring-stack-security] :::{include} /deploy-manage/deploy/self-managed/_snippets/auto-security-config.md diff --git a/deploy-manage/security/self-setup.md b/deploy-manage/security/self-setup.md index 55044f9afc..cabf058b0d 100644 --- a/deploy-manage/security/self-setup.md +++ b/deploy-manage/security/self-setup.md @@ -50,7 +50,7 @@ Each step builds on the previous one. For production environments, it’s strong For additional TLS configuration options, refer to [](./self-tls.md). 
-## Kibana security configuration +## {{kib}} security configuration Refer to [](./using-kibana-with-security.md) to learn how to implement the following security best practices for {{kib}}: diff --git a/deploy-manage/security/set-up-basic-security-plus-https.md b/deploy-manage/security/set-up-basic-security-plus-https.md index 93680e2b62..0721d2fb47 100644 --- a/deploy-manage/security/set-up-basic-security-plus-https.md +++ b/deploy-manage/security/set-up-basic-security-plus-https.md @@ -8,7 +8,7 @@ mapped_pages: --- % Scope: HTTP certificates setup / manual configuration / multi or single node cluster -% original title: Set up basic security for the Elastic Stack plus secured HTTPS traffic +% original title: Set up basic security for the {{stack}} plus secured HTTPS traffic # Set up HTTPS [security-basic-setup-https] Enabling TLS on the HTTP layer, widely known as HTTPS, ensures that all client communications with your cluster are encrypted, adding a critical layer of security. diff --git a/deploy-manage/security/set-up-basic-security.md b/deploy-manage/security/set-up-basic-security.md index 97c3aafcdb..bfc0648675 100644 --- a/deploy-manage/security/set-up-basic-security.md +++ b/deploy-manage/security/set-up-basic-security.md @@ -8,7 +8,7 @@ mapped_pages: --- % Scope: TLS certificates setup / multi-node cluster / manual configuration -% original title: Set up basic security for the Elastic Stack +% original title: Set up basic security for the {{stack}} # Set up transport TLS [security-basic-setup] Configuring TLS between nodes is the basic security setup to prevent unauthorized nodes from accessing to your {{es}} cluster, and it's required by multi-node clusters. [Production mode](../deploy/self-managed/bootstrap-checks.md#dev-vs-prod-mode) clusters will not start if you do not enable TLS. 
diff --git a/deploy-manage/tools.md b/deploy-manage/tools.md index 55d6569c9a..132709b200 100644 --- a/deploy-manage/tools.md +++ b/deploy-manage/tools.md @@ -19,19 +19,19 @@ Elastic provides comprehensive tools to safeguard data, ensure continuous availa For strategies to design resilient clusters, see **[Availability and resilience](production-guidance/availability-and-resilience.md)**. ::::{note} -The snapshot and restore and cross-cluster replication features are currently not available for Elastic Cloud Serverless projects. These features will be introduced in the future. For more information, refer to [Serverless differences](/deploy-manage/deploy/elastic-cloud/differences-from-other-elasticsearch-offerings.md#elasticsearch-differences-serverless-feature-categories). +The snapshot and restore and cross-cluster replication features are currently not available for {{serverless-full}} projects. These features will be introduced in the future. For more information, refer to [Serverless differences](/deploy-manage/deploy/elastic-cloud/differences-from-other-elasticsearch-offerings.md#elasticsearch-differences-serverless-feature-categories). :::: ## Snapshot and restore -Snapshots in Elasticsearch are point-in-time backups that include your cluster's data, settings, and overall state. They capture all the information necessary to restore your cluster to a specific moment in time, making them essential for protecting data, recovering from unexpected issues, and transferring data between clusters. Snapshots are a reliable way to ensure the safety of your data and maintain the continuity of your operations. +Snapshots in {{es}} are point-in-time backups that include your cluster's data, settings, and overall state. They capture all the information necessary to restore your cluster to a specific moment in time, making them essential for protecting data, recovering from unexpected issues, and transferring data between clusters. 
Snapshots are a reliable way to ensure the safety of your data and maintain the continuity of your operations. You can perform the following tasks to manage snapshots and snapshot repositories: - **[Register a repository](tools/snapshot-and-restore/manage-snapshot-repositories.md):** Configure storage repositories (for example, S3, Azure, Google Cloud) to store snapshots. The way that you register repositories differs depending on your deployment method: - - **[Elastic Cloud Hosted](tools/snapshot-and-restore/elastic-cloud-hosted.md):** Deployments come with a preconfigured S3 repository for automatic backups, simplifying the setup process. You can also register external repositories, such as Azure, and Google Cloud, for more flexibility. - - **[Elastic Cloud Enterprise](tools/snapshot-and-restore/cloud-enterprise.md):** Repository configuration is managed through the Elastic Cloud Enterprise user interface and automatically linked to deployments. - - **[Elastic Cloud on Kubernetes](tools/snapshot-and-restore/cloud-on-k8s.md) and [self-managed](tools/snapshot-and-restore/self-managed.md) deployments:** Repositories must be configured manually. + - **[{{ech}}](tools/snapshot-and-restore/elastic-cloud-hosted.md):** Deployments come with a preconfigured S3 repository for automatic backups, simplifying the setup process. You can also register external repositories, such as Azure and Google Cloud, for more flexibility. + - **[{{ece}}](tools/snapshot-and-restore/cloud-enterprise.md):** Repository configuration is managed through the {{ece}} user interface and automatically linked to deployments. + - **[{{eck}}](tools/snapshot-and-restore/cloud-on-k8s.md) and [self-managed](tools/snapshot-and-restore/self-managed.md) deployments:** Repositories must be configured manually. - **[Create snapshots](tools/snapshot-and-restore/create-snapshots.md):** Manually or automatically create backups of your cluster.
- **[Restore a snapshot](tools/snapshot-and-restore/restore-snapshot.md):** Recover indices, data streams, or the entire cluster to revert to a previous state. You can choose to restore specific parts of a snapshot, such as a single index, or perform a full restore. @@ -39,12 +39,12 @@ You can perform the following tasks to manage snapshots and snapshot repositorie To reduce storage costs for infrequently accessed data while maintaining access, you can also create **[searchable snapshots](tools/snapshot-and-restore/searchable-snapshots.md)**. ::::{note} -Snapshot configurations vary across Elastic Cloud Hosted, Elastic Cloud Enterprise (ECE), Elastic Cloud on Kubernetes (ECK), and self-managed deployments. +Snapshot configurations vary across {{ech}}, {{ece}} (ECE), {{eck}} (ECK), and self-managed deployments. :::: ## Cross-cluster replication (CCR) -**[Cross-cluster replication (CCR)](tools/cross-cluster-replication.md)** is a feature in Elasticsearch that allows you to replicate data in real time from a leader cluster to one or more follower clusters. This replication ensures that data is synchronized across clusters, providing continuity, redundancy, and enhanced data accessibility. +**[Cross-cluster replication (CCR)](tools/cross-cluster-replication.md)** is a feature in {{es}} that allows you to replicate data in real time from a leader cluster to one or more follower clusters. This replication ensures that data is synchronized across clusters, providing continuity, redundancy, and enhanced data accessibility. CCR provides a way to automatically synchronize indices from a leader cluster to a follower cluster. This cluster could be in a different data center or even a different continent from the leader cluster. If the primary cluster fails, the secondary cluster can take over. 
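The leader-to-follower flow described above is configured per index with the {{ccr}} follow API. A minimal sketch (the `_ccr/follow` endpoint is real; the names `leader`, `leader_index`, and `follower_index` are illustrative placeholders, and the request assumes the remote cluster connection is already configured):

```shell
# Create a follower index that replicates leader_index from the remote
# cluster registered under the alias "leader" (all names are examples).
curl -X PUT "localhost:9200/follower_index/_ccr/follow?wait_for_active_shards=1" \
  -H 'Content-Type: application/json' -d'
{
  "remote_cluster": "leader",
  "leader_index": "leader_index"
}'
```

Once the follower index is created, it stays read-only on the follower cluster and continuously pulls changes from the leader.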
diff --git a/deploy-manage/tools/cross-cluster-replication.md b/deploy-manage/tools/cross-cluster-replication.md index 8606862f8e..dcf5e8b2c5 100644 --- a/deploy-manage/tools/cross-cluster-replication.md +++ b/deploy-manage/tools/cross-cluster-replication.md @@ -58,7 +58,7 @@ In a uni-directional configuration, the cluster containing follower indices must ## Multi-cluster architectures [ccr-multi-cluster-architectures] -Use {{ccr}} to construct several multi-cluster architectures within the Elastic Stack: +Use {{ccr}} to construct several multi-cluster architectures within the {{stack}}: * [Disaster recovery](#ccr-disaster-recovery) in case a primary cluster fails, with a secondary cluster serving as a hot backup * [Data locality](#ccr-data-locality) to maintain multiple copies of the dataset close to the application servers (and users), and reduce costly latency diff --git a/deploy-manage/tools/snapshot-and-restore.md b/deploy-manage/tools/snapshot-and-restore.md index 009b094a17..3e05f98687 100644 --- a/deploy-manage/tools/snapshot-and-restore.md +++ b/deploy-manage/tools/snapshot-and-restore.md @@ -16,7 +16,7 @@ applies_to: # Snapshot and restore -A snapshot is a backup of a running Elasticsearch cluster. You can use snapshots to: +A snapshot is a backup of a running {{es}} cluster. You can use snapshots to: - Regularly back up a cluster with no downtime - Recover data after deletion or a hardware failure @@ -25,18 +25,18 @@ A snapshot is a backup of a running Elasticsearch cluster. You can use snapshots ## Snapshot workflow -Elasticsearch stores snapshots in an off-cluster storage location called a **snapshot repository**. Before you can take or restore snapshots, you must [register a snapshot repository](snapshot-and-restore/self-managed.md#manage-snapshot-repos) on the cluster. Elasticsearch supports different repository types depending on your deployment type: +{{es}} stores snapshots in an off-cluster storage location called a **snapshot repository**. 
Before you can take or restore snapshots, you must [register a snapshot repository](snapshot-and-restore/self-managed.md#manage-snapshot-repos) on the cluster. {{es}} supports different repository types depending on your deployment type: -* [**Elastic Cloud Hosted repository types**](/deploy-manage/tools/snapshot-and-restore/elastic-cloud-hosted.md) +* [**{{ech}} repository types**](/deploy-manage/tools/snapshot-and-restore/elastic-cloud-hosted.md) * [**Self-managed repository types**](/deploy-manage/tools/snapshot-and-restore/self-managed.md) After you register a snapshot repository, you can use [snapshot lifecycle management (SLM)](snapshot-and-restore/create-snapshots.md#automate-snapshots-slm) to automatically take and manage snapshots. You can then [restore a snapshot](snapshot-and-restore/restore-snapshot.md) to recover or transfer its data. ::::{note} -While the majority of snapshot-related operations are similar across all deployment types, Elastic Cloud Hosted, Elastic Cloud Enterprise (ECE), and Elastic Cloud on Kubernetes (ECK) offer additional capabilities, as described below. +While the majority of snapshot-related operations are similar across all deployment types, {{ech}}, {{ece}} (ECE), and {{eck}} (ECK) offer additional capabilities, as described below. :::: -::::{dropdown} Elastic Cloud Hosted +::::{dropdown} {{ech}} When you create a deployment, a default repository called `found-snapshots` is automatically added to the {{es}} cluster. This repository is specific to that cluster: the `cluster ID` is part of the repository’s `base_path`, such as `/snapshots/[cluster-id]`. :::{note} @@ -49,27 +49,27 @@ The default policy and repository are used when: - Restoring a snapshot to a different deployment - Taking automated snapshots in case of deployment changes -In Elastic Cloud Hosted, you can [restore snapshots](snapshot-and-restore/restore-snapshot.md) across clusters, but only within the same region. 
+In {{ech}}, you can [restore snapshots](snapshot-and-restore/restore-snapshot.md) across clusters, but only within the same region. You can customize the snapshot retention settings in that policy to adjust them to your needs. To use a custom snapshot repository, [register a new snapshot repository](snapshot-and-restore/self-managed.md#manage-snapshot-repos) and [create another SLM policy](snapshot-and-restore/create-snapshots.md#automate-snapshots-slm). :::: -::::{dropdown} Elastic Cloud Enterprise -To enable snapshots for your Elasticsearch clusters, you must first [configure a repository](snapshot-and-restore/cloud-enterprise.md) at the platform level in ECE and then associate it with your deployments. Once configured, snapshots are taken every 30 minutes or at the interval you specify. +::::{dropdown} {{ece}} +To enable snapshots for your {{es}} clusters, you must first [configure a repository](snapshot-and-restore/cloud-enterprise.md) at the platform level in ECE and then associate it with your deployments. Once configured, snapshots are taken every 30 minutes or at the interval you specify. -Use **Kibana** to manage your snapshots. In Kibana, you can: +Use **{{kib}}** to manage your snapshots. In {{kib}}, you can: -- Set up additional repositories where snapshots are stored (other than the one managed by Elastic Cloud Enterprise) +- Set up additional repositories where snapshots are stored (other than the one managed by {{ece}}) - View and delete snapshots - Configure a snapshot lifecycle management (SLM) policy to automate when snapshots are created and deleted -In **Elastic Cloud Enterprise**, you can also [restore snapshots](snapshot-and-restore/restore-snapshot.md) across clusters. +In **{{ece}}**, you can also [restore snapshots](snapshot-and-restore/restore-snapshot.md) across clusters. :::: -::::{dropdown} Elastic Cloud on Kubernetes (ECK) -On Elastic Cloud on Kubernetes, you must manually configure snapshot repositories.
The system does not create **Snapshot Lifecycle Management (SLM) policies** or **automatic snapshots** by default. +::::{dropdown} {{eck}} (ECK) +On {{eck}}, you must manually configure snapshot repositories. The system does not create **Snapshot Lifecycle Management (SLM) policies** or **automatic snapshots** by default. For detailed configuration steps, refer to [Configuring snapshots on ECK](snapshot-and-restore/cloud-on-k8s.md). :::: @@ -101,7 +101,7 @@ Snapshots don’t contain or back up: ### Feature states [feature-state] -A **feature state** contains the indices and data streams used to store configurations, history, and other data for an Elastic feature, such as **Elasticsearch security** or **Kibana**. +A **feature state** contains the indices and data streams used to store configurations, history, and other data for an Elastic feature, such as **{{es}} security** or **{{kib}}**. ::::{note} To retrieve a list of feature states, use the [Features API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-features-get-features). @@ -109,19 +109,19 @@ To retrieve a list of feature states, use the [Features API](https://www.elastic A feature state typically includes one or more [system indices or system data streams](elasticsearch://reference/elasticsearch/rest-apis/api-conventions.md#system-indices). It may also include regular indices and data streams used by the feature. For example, a feature state may include a regular index that contains the feature’s execution history. Storing this history in a regular index lets you more easily search it. -In Elasticsearch 8.0 and later versions, feature states are the only way to back up and restore system indices and system data streams. +In {{es}} 8.0 and later versions, feature states are the only way to back up and restore system indices and system data streams. ## How snapshots work Snapshots are **automatically deduplicated** to save storage space and reduce network transfer costs.
To back up an index, a snapshot makes a copy of the index’s [segments](/manage-data/data-store/near-real-time-search.md) and stores them in the snapshot repository. Since segments are immutable, the snapshot only needs to copy any new segments created since the repository’s last snapshot. -Each snapshot is **logically independent**. When you delete a snapshot, Elasticsearch only deletes the segments used exclusively by that snapshot. Elasticsearch doesn’t delete segments used by other snapshots in the repository. +Each snapshot is **logically independent**. When you delete a snapshot, {{es}} only deletes the segments used exclusively by that snapshot. {{es}} doesn’t delete segments used by other snapshots in the repository. ### Snapshots and shard allocation [snapshots-shard-allocation] -A snapshot copies segments from an index’s primary shards. When you start a snapshot, Elasticsearch immediately starts copying the segments of any available primary shards. If a shard is starting or relocating, Elasticsearch will wait for these processes to complete before copying the shard’s segments. If one or more primary shards aren’t available, the snapshot attempt fails. +A snapshot copies segments from an index’s primary shards. When you start a snapshot, {{es}} immediately starts copying the segments of any available primary shards. If a shard is starting or relocating, {{es}} will wait for these processes to complete before copying the shard’s segments. If one or more primary shards aren’t available, the snapshot attempt fails. -Once a snapshot begins copying a shard’s segments, Elasticsearch won’t move the shard to another node, even if rebalancing or shard allocation settings would typically trigger reallocation. Elasticsearch will only move the shard after the snapshot finishes copying the shard’s data. 
+Once a snapshot begins copying a shard’s segments, {{es}} won’t move the shard to another node, even if rebalancing or shard allocation settings would typically trigger reallocation. {{es}} will only move the shard after the snapshot finishes copying the shard’s data. ### Snapshot start and stop times @@ -133,7 +133,7 @@ To restore a snapshot to a cluster, the versions for the snapshot, cluster, and ### Snapshot version compatibility [snapshot-restore-version-compatibility] -You can’t restore a snapshot to an earlier version of Elasticsearch. For example, you can’t restore a snapshot taken in 7.6.0 to a cluster running 7.5.0. +You can’t restore a snapshot to an earlier version of {{es}}. For example, you can’t restore a snapshot taken in 7.6.0 to a cluster running 7.5.0. ### Index compatibility @@ -150,11 +150,11 @@ Any index you restore from a snapshot must also be compatible with the current c [¹] Supported with [archive indices](/deploy-manage/upgrade/deployment-or-cluster/reading-indices-from-older-elasticsearch-versions.md). -You can’t restore an index to an earlier version of Elasticsearch. For example, you can’t restore an index created in 7.6.0 to a cluster running 7.5.0. +You can’t restore an index to an earlier version of {{es}}. For example, you can’t restore an index created in 7.6.0 to a cluster running 7.5.0. A compatible snapshot can contain indices created in an older incompatible version. For example, a snapshot of a 7.17 cluster can contain an index created in 6.8. Restoring the 6.8 index to an 8.17 cluster fails unless you can use the [archive functionality](/deploy-manage/upgrade/deployment-or-cluster/reading-indices-from-older-elasticsearch-versions.md). Keep this in mind if you take a snapshot before upgrading a cluster. -As a workaround, you can first restore the index to another cluster running the latest version of Elasticsearch that’s compatible with both the index and your current cluster. 
You can then use [reindex-from-remote](https://www.elastic.co/guide/en/elasticsearch/reference/8.17/docs-reindex.html#reindex-from-remote) to rebuild the index on your current cluster. Reindex from remote is only possible if the index’s [`_source`](elasticsearch://reference/elasticsearch/mapping-reference/mapping-source-field.md) is enabled. +As a workaround, you can first restore the index to another cluster running the latest version of {{es}} that’s compatible with both the index and your current cluster. You can then use [reindex-from-remote](https://www.elastic.co/guide/en/elasticsearch/reference/8.17/docs-reindex.html#reindex-from-remote) to rebuild the index on your current cluster. Reindex from remote is only possible if the index’s [`_source`](elasticsearch://reference/elasticsearch/mapping-reference/mapping-source-field.md) is enabled. Reindexing from remote can take significantly longer than restoring a snapshot. Before you start, test the reindex from remote process with a subset of the data to estimate your time requirements. @@ -162,19 +162,19 @@ Reindexing from remote can take significantly longer than restoring a snapshot. ### Other backup methods -**Taking a snapshot is the only reliable and supported way to back up a cluster.** You cannot back up an Elasticsearch cluster by making copies of the data directories of its nodes. There are no supported methods to restore any data from a filesystem-level backup. If you try to restore a cluster from such a backup, it may fail with reports of corruption or missing files or other data inconsistencies, or it may appear to have succeeded having silently lost some of your data. +**Taking a snapshot is the only reliable and supported way to back up a cluster.** You cannot back up an {{es}} cluster by making copies of the data directories of its nodes. There are no supported methods to restore any data from a filesystem-level backup. 
If you try to restore a cluster from such a backup, it may fail with reports of corruption or missing files or other data inconsistencies, or it may appear to have succeeded while silently losing some of your data. -A copy of the data directories of a cluster’s nodes does not work as a backup because it is not a consistent representation of their contents at a single point in time. You cannot fix this by shutting down nodes while making the copies, nor by taking atomic filesystem-level snapshots, because Elasticsearch has consistency requirements that span the whole cluster. You must use the built-in snapshot functionality for cluster backups. +A copy of the data directories of a cluster’s nodes does not work as a backup because it is not a consistent representation of their contents at a single point in time. You cannot fix this by shutting down nodes while making the copies, nor by taking atomic filesystem-level snapshots, because {{es}} has consistency requirements that span the whole cluster. You must use the built-in snapshot functionality for cluster backups. ### Repository contents [snapshot-repository-contents] -**Don’t modify anything within the repository or run processes that might interfere with its contents.** If something other than Elasticsearch modifies the contents of the repository then future snapshot or restore operations may fail, reporting corruption or other data inconsistencies, or may appear to succeed having silently lost some of your data. +**Don’t modify anything within the repository or run processes that might interfere with its contents.** If something other than {{es}} modifies the contents of the repository, then future snapshot or restore operations may fail, reporting corruption or other data inconsistencies, or may appear to succeed while silently losing some of your data. You may, however, safely [restore a repository from a backup](snapshot-and-restore/self-managed.md#snapshots-repository-backup) as long as -1.
The repository is not registered with Elasticsearch while you are restoring its contents. +1. The repository is not registered with {{es}} while you are restoring its contents. 2. When you have finished restoring the repository its contents are exactly as they were when you took the backup. -If you no longer need any of the snapshots in a repository, unregister it from Elasticsearch before deleting its contents from the underlying storage. +If you no longer need any of the snapshots in a repository, unregister it from {{es}} before deleting its contents from the underlying storage. Additionally, snapshots may contain security-sensitive information, which you may wish to [store in a dedicated repository](snapshot-and-restore/create-snapshots.md#cluster-state-snapshots). diff --git a/deploy-manage/tools/snapshot-and-restore/access-isolation-for-found-snapshots-repository.md b/deploy-manage/tools/snapshot-and-restore/access-isolation-for-found-snapshots-repository.md index 7b54128021..6b5550a7c8 100644 --- a/deploy-manage/tools/snapshot-and-restore/access-isolation-for-found-snapshots-repository.md +++ b/deploy-manage/tools/snapshot-and-restore/access-isolation-for-found-snapshots-repository.md @@ -12,7 +12,7 @@ In {{ech}}, access isolation ensures that each deployment can access only its ow Any newly created deployment has snapshot isolation set up by default. The guides in these section apply only to older deployments created before this default was implemented, where deployments within the same region may still have access to each other’s snapshots. -If a deployment can access the snapshots of other deployments, a notification will appear in the deployments menu under **Elasticsearch > Snapshots**, prompting you to set up access isolation. +If a deployment can access the snapshots of other deployments, a notification will appear in the deployments menu under **{{es}} > Snapshots**, prompting you to set up access isolation. 
The process for enabling access isolation depends on your cloud provider: diff --git a/deploy-manage/tools/snapshot-and-restore/azure-storage-repository.md b/deploy-manage/tools/snapshot-and-restore/azure-storage-repository.md index c32f0f3e50..8fd32f80db 100644 --- a/deploy-manage/tools/snapshot-and-restore/azure-storage-repository.md +++ b/deploy-manage/tools/snapshot-and-restore/azure-storage-repository.md @@ -8,7 +8,7 @@ applies_to: # Azure Storage repository [ece-configure-azure-snapshotting] -With Elastic Cloud Enterprise, you can enable your Elasticsearch clusters to regularly snapshot data to Microsoft Azure Storage. +With {{ece}}, you can enable your {{es}} clusters to regularly snapshot data to Microsoft Azure Storage. ## Add the Azure repository [ece_add_the_azure_repository] @@ -32,7 +32,7 @@ Add your Azure Storage Container as a repository to the platform: ``` -Snapshots are stored in the container you provide. Use the repository name you define here to configure your Elasticsearch clusters for snapshotting to this repository. +Snapshots are stored in the container you provide. Use the repository name you define here to configure your {{es}} clusters for snapshotting to this repository. 
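After the repository is registered and associated with a deployment, you can confirm that the cluster's nodes can actually reach the storage container with the {{es}} snapshot repository verify API. A sketch (the `_verify` endpoint is real; `my-azure-repo` is a placeholder for the repository name you defined):

```shell
# Verify that all data and master nodes can write to the registered
# repository (replace my-azure-repo with your repository name).
curl -X POST "localhost:9200/_snapshot/my-azure-repo/_verify"
```

A successful response lists the nodes that verified the repository; a failure usually points to credential or container-access problems.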
## Configure your deployment for Azure snapshots [ece_configure_your_deployment_for_azure_snapshots] diff --git a/deploy-manage/tools/snapshot-and-restore/cloud-enterprise.md b/deploy-manage/tools/snapshot-and-restore/cloud-enterprise.md index 8760cde90e..a3a5712de3 100644 --- a/deploy-manage/tools/snapshot-and-restore/cloud-enterprise.md +++ b/deploy-manage/tools/snapshot-and-restore/cloud-enterprise.md @@ -1,21 +1,21 @@ --- mapped_pages: - https://www.elastic.co/guide/en/cloud-enterprise/current/ece-manage-repositories.html -navigation_title: "Elastic Cloud Enterprise" +navigation_title: "{{ece}}" applies_to: deployment: ece: --- -# Manage snapshot repositories in Elastic Cloud Enterprise [ece-manage-repositories] +# Manage snapshot repositories in {{ece}} [ece-manage-repositories] -Snapshots enable you to back up and restore Elasticsearch indices, protecting data from accidental deletion and supporting migration between clusters. In Elastic Cloud Enterprise (ECE), snapshot repositories are managed at the platform level and can be assigned to individual deployments. +Snapshots enable you to back up and restore {{es}} indices, protecting data from accidental deletion and supporting migration between clusters. In {{ece}} (ECE), snapshot repositories are managed at the platform level and can be assigned to individual deployments. When a repository is assigned to a deployment, a snapshot is taken every 30 minutes by default. The snapshot interval can be adjusted per deployment. ## Supported repository types -Elastic Cloud Enterprise installations support the following {{es}} snapshot repository types: +{{ece}} installations support the following {{es}} snapshot repository types: * AWS S3 * Azure Blob Storage @@ -23,11 +23,11 @@ Elastic Cloud Enterprise installations support the following {{es}} snapshot rep * Minio S3 ::::{note} -No repository types other than those listed are supported in the Elastic Cloud Enterprise platform, even if they are supported by {{es}}. 
+No repository types other than those listed are supported in the {{ece}} platform, even if they are supported by {{es}}. :::: -For more details about how snapshots are used with Elasticsearch, check [Snapshot and Restore](/deploy-manage/tools/snapshot-and-restore.md). You can also review the official documentation for these storage repository options: +For more details about how snapshots are used with {{es}}, check [Snapshot and Restore](/deploy-manage/tools/snapshot-and-restore.md). You can also review the official documentation for these storage repository options: * [Amazon S3 documentation](https://docs.aws.amazon.com/s3/) * [Microsoft Azure Blob Storage documentation](https://learn.microsoft.com/en-us/azure/storage/blobs/storage-blobs-introduction) @@ -49,7 +49,7 @@ The following guides provide instructions on adding a snapshot repository in ECE ## Edit snapshot repository configurations [ece_edit_snapshot_repository_configurations] -To edit a snapshot repository configuration from your Elastic Cloud Enterprise installation: +To edit a snapshot repository configuration from your {{ece}} installation: 1. [Log into the Cloud UI](../../deploy/cloud-enterprise/log-into-cloud-ui.md). 2. From the **Platform** menu, select **Repositories**. @@ -59,9 +59,9 @@ To edit a snapshot repository configuration from your Elastic Cloud Enterprise i ## Delete snapshot repository configurations [ece_delete_snapshot_repository_configurations] -Deleting a snapshot repository configuration does not remove the snapshot repository itself from S3. Only the configuration that enables Elastic Cloud Enterprise to access the repository is removed. Existing snapshots are also retained and need to be deleted separately if you no longer need them. +Deleting a snapshot repository configuration does not remove the snapshot repository itself from S3. Only the configuration that enables {{ece}} to access the repository is removed. 
Existing snapshots are also retained and need to be deleted separately if you no longer need them. -To delete a snapshot repository configuration from your Elastic Cloud Enterprise installation: +To delete a snapshot repository configuration from your {{ece}} installation: 1. [Log into the Cloud UI](../../deploy/cloud-enterprise/log-into-cloud-ui.md). 2. From the **Platform** menu, select **Repositories**. @@ -78,14 +78,14 @@ To delete a snapshot repository configuration from your Elastic Cloud Enterprise -## Manage Elasticsearch cluster repositories [ece-manage-repositories-clusters] +## Manage {{es}} cluster repositories [ece-manage-repositories-clusters] -You might need to update existing Elasticsearch clusters to use a different snapshot repository for one of the following reasons: +You might need to update existing {{es}} clusters to use a different snapshot repository for one of the following reasons: -* If you do not want all snapshots for a specific Elasticsearch cluster to go into the same bucket as your other clusters, you can add a new snapshot repository configuration with separate permissions and then change your Elasticsearch cluster to use the new repository. -* If you created an Elasticsearch cluster with no snapshot repository configured, you can add a repository later on. Elastic Cloud Enterprise will start taking snapshots of the cluster automatically. +* If you do not want all snapshots for a specific {{es}} cluster to go into the same bucket as your other clusters, you can add a new snapshot repository configuration with separate permissions and then change your {{es}} cluster to use the new repository. +* If you created an {{es}} cluster with no snapshot repository configured, you can add a repository later on. {{ece}} will start taking snapshots of the cluster automatically. -To change the snapshot repository for an existing Elasticsearch cluster: +To change the snapshot repository for an existing {{es}} cluster: 1. 
[Log into the Cloud UI](../../deploy/cloud-enterprise/log-into-cloud-ui.md). 2. Optional: If you need to use a repository that is not yet listed, add a snapshot repository configuration first. diff --git a/deploy-manage/tools/snapshot-and-restore/cloud-on-k8s.md b/deploy-manage/tools/snapshot-and-restore/cloud-on-k8s.md index 54ef907d80..e78b7fb830 100644 --- a/deploy-manage/tools/snapshot-and-restore/cloud-on-k8s.md +++ b/deploy-manage/tools/snapshot-and-restore/cloud-on-k8s.md @@ -1,35 +1,35 @@ --- mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-snapshots.html -navigation_title: "Elastic Cloud on Kubernetes" +navigation_title: "{{eck}}" applies_to: deployment: eck: --- -# Manage snapshot repositories in Elastic Cloud on Kubernetes [k8s-snapshots] +# Manage snapshot repositories in {{eck}} [k8s-snapshots] -Snapshots allow you to back up and restore Elasticsearch indices, helping protect data from accidental deletion and enabling migration between clusters. In Elastic Cloud on Kubernetes (ECK), you can register snapshot repositories and configure snapshot lifecycle policies to automate backups. +Snapshots allow you to back up and restore {{es}} indices, helping protect data from accidental deletion and enabling migration between clusters. In {{eck}} (ECK), you can register snapshot repositories and configure snapshot lifecycle policies to automate backups. -To set up automated snapshots for Elasticsearch on Kubernetes you have to: +To set up automated snapshots for {{es}} on Kubernetes you have to: -1. Register the snapshot repository with the Elasticsearch API. -2. Set up a Snapshot Lifecycle Management Policy through [API](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-slm) or the [Kibana UI](/deploy-manage/tools/snapshot-and-restore.md) +1. Register the snapshot repository with the {{es}} API. +2. 
Set up a Snapshot Lifecycle Management Policy through the [API](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-slm) or the [{{kib}} UI](/deploy-manage/tools/snapshot-and-restore.md).
::::{note}
-Support for S3, GCS and Azure repositories is bundled in Elasticsearch by default from version 8.0. On older versions of Elasticsearch, or if another snapshot repository plugin should be used, you have to [Install a snapshot repository plugin](#k8s-install-plugin).
+Support for S3, GCS, and Azure repositories is bundled in {{es}} by default from version 8.0. On older versions of {{es}}, or if another snapshot repository plugin should be used, you have to [install a snapshot repository plugin](#k8s-install-plugin).
::::
-For more information on Elasticsearch snapshots, check [Snapshot and Restore](/deploy-manage/tools/snapshot-and-restore.md) in the Elasticsearch documentation.
+For more information on {{es}} snapshots, check [Snapshot and Restore](/deploy-manage/tools/snapshot-and-restore.md) in the {{es}} documentation.
## Configuration examples [k8s_configuration_examples]
-What follows is a non-exhaustive list of configuration examples. The first example might be worth reading even if you are targeting a Cloud provider other than GCP as it covers adding snapshot repository credentials to the Elasticsearch keystore and illustrates the basic workflow of setting up a snapshot repository:
+What follows is a non-exhaustive list of configuration examples.
The first example might be worth reading even if you are targeting a cloud provider other than GCP, as it covers adding snapshot repository credentials to the {{es}} keystore and illustrates the basic workflow of setting up a snapshot repository:
* [Basic snapshot repository setup using GCS as an example](#k8s-basic-snapshot-gcs)
-The following examples cover approaches that use Cloud-provider specific means to leverage Kubernetes service accounts to avoid having to configure snapshot repository credentials in Elasticsearch:
+The following examples cover approaches that use cloud-provider-specific means to leverage Kubernetes service accounts to avoid having to configure snapshot repository credentials in {{es}}:
* [Use GKE Workload Identity](#k8s-gke-workload-identiy)
* [Use AWS IAM roles for service accounts (IRSA)](#k8s-iam-service-accounts)
@@ -41,11 +41,11 @@ The final example illustrates how to configure secure and trusted communication
### Basic snapshot repository setup using GCS as an example [k8s-basic-snapshot-gcs]
-#### Configure GCS credentials through the Elasticsearch keystore [k8s-secure-settings]
+#### Configure GCS credentials through the {{es}} keystore [k8s-secure-settings]
-The Elasticsearch GCS repository plugin requires a JSON file that contains service account credentials. These need to be added as secure settings to the Elasticsearch keystore. For more details, check [Google Cloud Storage Repository](/deploy-manage/tools/snapshot-and-restore/google-cloud-storage-repository.md).
+The {{es}} GCS repository plugin requires a JSON file that contains service account credentials. These need to be added as secure settings to the {{es}} keystore. For more details, check [Google Cloud Storage Repository](/deploy-manage/tools/snapshot-and-restore/google-cloud-storage-repository.md).
-Using ECK, you can automatically inject secure settings into a cluster node by providing them through a secret in the Elasticsearch Spec.
+Using ECK, you can automatically inject secure settings into a cluster node by providing them through a secret in the {{es}} Spec.
1. Create a file containing the GCS credentials. For this example, name it `gcs.client.default.credentials_file`. The file name is important as it is reflected in the secure setting.
@@ -70,7 +70,7 @@ Using ECK, you can automatically inject secure settings into a cluster node by p
kubectl create secret generic gcs-credentials --from-file=gcs.client.default.credentials_file
```
-3. Edit the `secureSettings` section of the Elasticsearch resource:
+3. Edit the `secureSettings` section of the {{es}} resource:
```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
@@ -79,7 +79,7 @@ Using ECK, you can automatically inject secure settings into a cluster node by p
name: elasticsearch-sample
spec:
version: 8.16.1
# Inject secure settings into Elasticsearch nodes from a k8s secret reference
secureSettings:
- secretName: gcs-credentials
```
@@ -93,12 +93,12 @@ Using ECK, you can automatically inject secure settings into a cluster node by p
```
-GCS credentials are automatically propagated into each Elasticsearch node’s keystore. It can take up to a few minutes, depending on the number of secrets in the keystore. You don’t have to restart the nodes.
+GCS credentials are automatically propagated into each {{es}} node’s keystore. It can take up to a few minutes, depending on the number of secrets in the keystore. You don’t have to restart the nodes.
-#### Register the repository in Elasticsearch [k8s-create-repository]
+#### Register the repository in {{es}} [k8s-create-repository]
-1. Create the GCS snapshot repository in Elasticsearch. You can either use the [Snapshot and Restore UI](/deploy-manage/tools/snapshot-and-restore.md) in Kibana version 7.4.0 or higher, or follow the procedure described in [Snapshot and Restore](/deploy-manage/tools/snapshot-and-restore.md):
+1.
Create the GCS snapshot repository in {{es}}. You can either use the [Snapshot and Restore UI](/deploy-manage/tools/snapshot-and-restore.md) in {{kib}} version 7.4.0 or higher, or follow the procedure described in [Snapshot and Restore](/deploy-manage/tools/snapshot-and-restore.md): ```sh PUT /_snapshot/my_gcs_repository @@ -122,7 +122,7 @@ GCS credentials are automatically propagated into each Elasticsearch node’s ke ### Use GKE Workload Identity [k8s-gke-workload-identiy] -GKE Workload Identity allows a Kubernetes service account to impersonate a Google Cloud IAM service account and therefore to configure a snapshot repository in Elasticsearch without storing Google Cloud credentials in Elasticsearch itself. This feature requires your Kubernetes cluster to run on GKE and your Elasticsearch cluster to run at least [version 7.13](https://github.com/elastic/elasticsearch/pull/71239) and [version 8.1](https://github.com/elastic/elasticsearch/pull/82974) when using searchable snapshots. +GKE Workload Identity allows a Kubernetes service account to impersonate a Google Cloud IAM service account and therefore to configure a snapshot repository in {{es}} without storing Google Cloud credentials in {{es}} itself. This feature requires your Kubernetes cluster to run on GKE and your {{es}} cluster to run at least [version 7.13](https://github.com/elastic/elasticsearch/pull/71239) and [version 8.1](https://github.com/elastic/elasticsearch/pull/82974) when using searchable snapshots. Follow the instructions in the [GKE documentation](https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity) to configure workload identity, specifically: @@ -145,7 +145,7 @@ Follow the instructions in the [GKE documentation](https://cloud.google.com/kube iam.gke.io/gcp-service-account=gcp-sa@PROJECT_ID.iam.gserviceaccount.com ``` -6. Create an Elasticsearch cluster, referencing the Kubernetes service account +6. 
Create an {{es}} cluster, referencing the Kubernetes service account
```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
@@ -169,7 +169,7 @@ Follow the instructions in the [GKE documentation](https://cloud.google.com/kube
### Use AWS IAM roles for service accounts (IRSA) [k8s-iam-service-accounts]
-The AWS IAM roles for service accounts feature allows you to give Elasticsearch restricted access to a S3 bucket without having to expose and store AWS credentials directly in Elasticsearch. This requires you to run the ECK operator on Amazon’s EKS offering and an [Elasticsearch cluster running at least version 8.1](https://www.elastic.co/guide/en/elasticsearch/reference/8.1/repository-s3.html#iam-kubernetes-service-accounts).
+The AWS IAM roles for service accounts feature allows you to give {{es}} restricted access to an S3 bucket without having to expose and store AWS credentials directly in {{es}}. This requires you to run the ECK operator on Amazon’s EKS offering and an [{{es}} cluster running at least version 8.1](https://www.elastic.co/guide/en/elasticsearch/reference/8.1/repository-s3.html#iam-kubernetes-service-accounts).
Follow [the AWS documentation](https://aws.amazon.com/premiumsupport/knowledge-center/eks-restrict-s3-bucket/) to set this feature up. Specifically you need to:
@@ -228,7 +228,7 @@ Follow [the AWS documentation](https://aws.amazon.com/premiumsupport/knowledge-c
1. Replace `YOUR_CLUSTER` with your actual EKS cluster name
2. Replace with the actual AWS IAM ARN for the policy you just created
-4. Create an Elasticsearch cluster referencing the service account
+4. Create an {{es}} cluster referencing the service account
```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
@@ -263,7 +263,7 @@ Follow [the AWS documentation](https://aws.amazon.com/premiumsupport/knowledge-c
path: aws-web-identity-token-file
```
- 1. Elasticsearch expects the service account token to be projected to exactly this path
+ 1.
{{es}} expects the service account token to be projected to exactly this path
2. Replace with the actual `AWS_ROLE_ARN` for the IAM role you created in step 3
5. Create the snapshot repository as described in [Register the repository in Elasticsearch](#k8s-create-repository) but of type `s3`
@@ -282,7 +282,7 @@ Follow [the AWS documentation](https://aws.amazon.com/premiumsupport/knowledge-c
### Use Azure Workload Identity [k8s-azure-workload-identity]
-Starting with version 8.16 Elasticsearch supports Azure Workload identity which allows the use of Azure blob storage for Elasticsearch snapshots without exposing Azure credentials directly to Elasticsearch.
+Starting with version 8.16, {{es}} supports Azure Workload Identity, which allows the use of Azure Blob storage for {{es}} snapshots without exposing Azure credentials directly to {{es}}.
Follow the [Azure documentation](https://learn.microsoft.com/en-us/azure/aks/workload-identity-deploy-cluster) for setting up workload identity for the first five steps:
@@ -307,7 +307,7 @@ Follow the [Azure documentation](https://learn.microsoft.com/en-us/azure/aks/wor
--sku Standard_ZRS <1>
```
- 1. This can be any of the supported storage account types `Standard_LRS`, `Standard_ZRS`, `Standard_GRS`, `Standard_RAGRS` but not `Premium_LRS` see [the Elasticsearch documentation](/deploy-manage/tools/snapshot-and-restore/azure-repository.md) for details.
+ 1. This can be any of the supported storage account types `Standard_LRS`, `Standard_ZRS`, `Standard_GRS`, `Standard_RAGRS`, but not `Premium_LRS`; see [the {{es}} documentation](/deploy-manage/tools/snapshot-and-restore/azure-repository.md) for details.
7. Create a container in the storage account, for this example `es-snapshots`.
@@ -337,14 +337,14 @@ Follow the [Azure documentation](https://learn.microsoft.com/en-us/azure/aks/wor
1.
The storage account ID needs to be specified as the scope for the role assignment without the leading slash returned by the `az storage account show` command. -9. Create a Kubernetes secret, called `keystore` in this example, with the storage account name. This is necessary to be able to specify the account name as a secure setting in Elasticsearch in the next step. +9. Create a Kubernetes secret, called `keystore` in this example, with the storage account name. This is necessary to be able to specify the account name as a secure setting in {{es}} in the next step. ```sh kubectl create secret generic keystore \ --from-literal=azure.client.default.account=${STORAGE_ACCOUNT_NAME} ``` -10. Create an Elasticsearch cluster that uses the Kubernetes service account created earlier. +10. Create an {{es}} cluster that uses the Kubernetes service account created earlier. ```yaml apiVersion: elasticsearch.k8s.elastic.co/v1 @@ -376,9 +376,9 @@ Follow the [Azure documentation](https://learn.microsoft.com/en-us/azure/aks/wor 1. Specify the Kubernetes secret created in the previous step to configure the Azure storage account name as a secure setting. 2. This is the service account created earlier in the steps from the [Azure Workload Identity](https://learn.microsoft.com/en-us/azure/aks/workload-identity-deploy-cluster#create-a-kubernetes-service-account) tutorial. - 3. The corresponding volume is injected by the [Azure Workload Identity Mutating Admission Webhook](https://azure.github.io/azure-workload-identity/docs/installation/mutating-admission-webhook.html). For Elasticsearch to be able to access the token, the mount needs to be in a sub-directory of the Elasticsearch config directory. The corresponding environment variable needs to be adjusted as well. + 3. The corresponding volume is injected by the [Azure Workload Identity Mutating Admission Webhook](https://azure.github.io/azure-workload-identity/docs/installation/mutating-admission-webhook.html). 
For {{es}} to be able to access the token, the mount needs to be in a sub-directory of the {{es}} config directory. The corresponding environment variable needs to be adjusted as well.
-11. Create a snapshot repository of type `azure` through the Elasticsearch API, or through [*Elastic Stack configuration policies*](../../deploy/cloud-on-k8s/elastic-stack-configuration-policies.md).
+11. Create a snapshot repository of type `azure` through the {{es}} API, or through [*{{stack}} configuration policies*](../../deploy/cloud-on-k8s/elastic-stack-configuration-policies.md).
```sh
POST _snapshot/my_azure_repository
@@ -394,7 +394,7 @@ Follow the [Azure documentation](https://learn.microsoft.com/en-us/azure/aks/wor
### Use S3-compatible services [k8s-s3-compatible]
-The following example assumes that you have deployed and configured a S3 compatible object store like [MinIO](https://min.io) that can be reached from the Kubernetes cluster, and also that you have created a bucket in said service, called `es-repo` in this example. The example also assumes an Elasticsearch cluster named `es` is deployed within the cluster. Most importantly the steps describing how to customize the JVM trust store are only necessary if your S3-compatible service is using TLS certificates that are not issued by a well known certificate authority.
+The following example assumes that you have deployed and configured an S3-compatible object store like [MinIO](https://min.io) that can be reached from the Kubernetes cluster, and also that you have created a bucket in said service, called `es-repo` in this example. The example also assumes an {{es}} cluster named `es` is deployed within the cluster. Most importantly, the steps describing how to customize the JVM trust store are only necessary if your S3-compatible service is using TLS certificates that are not issued by a well-known certificate authority.
```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
@@ -408,14 +408,14 @@ spec:
count: 3
```
-1.
Extract the cacerts JVM trust store from one of the running Elasticsearch nodes.
+1. Extract the cacerts JVM trust store from one of the running {{es}} nodes.
```sh
kubectl cp es-es-mixed-0:/usr/share/elasticsearch/jdk/lib/security/cacerts cacerts
```
::::{note}
- You can skip this step if you want to create a new trust store that does not contain any well known CAs that Elasticsearch trusts by default. Be aware that this limits Elasticsearch’s ability to communicate with TLS secured endpoints to those for which you add CA certificates in the next steps.
+ You can skip this step if you want to create a new trust store that does not contain any well-known CAs that {{es}} trusts by default. Be aware that this limits the ability of {{es}} to communicate with TLS-secured endpoints to those for which you add CA certificates in the next steps.
::::
2. Obtain the CA certificate used to sign the certificate of your S3-compatible service. We assume it is called `tls.crt`
@@ -443,7 +443,7 @@ spec:
--from-literal=s3.client.default.secret_key=$YOUR_SECRET_ACCESS_KEY
```
-6. Update your Elasticsearch cluster to use the trust store and credentials from the Kubernetes secrets
+6. Update your {{es}} cluster to use the trust store and credentials from the Kubernetes secrets
```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
@@ -494,9 +494,9 @@ spec:
### Install a snapshot repository plugin [k8s-install-plugin]
-If you are running a version of Elasticsearch before 8.0 or you need a snapshot repository plugin that is not already pre-installed you have to install the plugin yourself. To install the snapshot repository plugin, you can either use a [custom image](../../deploy/cloud-on-k8s/create-custom-images.md) or [add your own init container](../../deploy/cloud-on-k8s/init-containers-for-plugin-downloads.md) which installs the plugin when the Pod is created.
+If you are running a version of {{es}} before 8.0, or you need a snapshot repository plugin that is not already pre-installed, you have to install the plugin yourself. To install the snapshot repository plugin, you can either use a [custom image](../../deploy/cloud-on-k8s/create-custom-images.md) or [add your own init container](../../deploy/cloud-on-k8s/init-containers-for-plugin-downloads.md), which installs the plugin when the Pod is created.
-To use your own custom image with all necessary plugins pre-installed, use an Elasticsearch resource like the following:
+To use your own custom image with all necessary plugins pre-installed, use an {{es}} resource like the following:
```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
@@ -535,7 +535,7 @@ spec:
bin/elasticsearch-plugin install --batch repository-gcs
```
-Assuming you stored this in a file called `elasticsearch.yaml` you can in both cases create the Elasticsearch cluster with:
+Assuming you stored this in a file called `elasticsearch.yaml`, you can in both cases create the {{es}} cluster with:
```sh
kubectl apply -f elasticsearch.yaml
diff --git a/deploy-manage/tools/snapshot-and-restore/create-snapshots.md b/deploy-manage/tools/snapshot-and-restore/create-snapshots.md
index 1aed2270d1..707546b6d4 100644
--- a/deploy-manage/tools/snapshot-and-restore/create-snapshots.md
+++ b/deploy-manage/tools/snapshot-and-restore/create-snapshots.md
@@ -56,7 +56,7 @@ The guide also provides tips for creating dedicated cluster state snapshots and
{{slm-cap}} ({{slm-init}}) is the easiest way to regularly back up a cluster. An {{slm-init}} policy automatically takes snapshots on a preset schedule. The policy can also delete snapshots based on retention rules you define.
::::{tip}
-Elastic Cloud Hosted deployments automatically include the `cloud-snapshot-policy` {{slm-init}} policy. Elastic Cloud Hosted uses this policy to take periodic snapshots of your cluster.
For more information, see the [Manage snapshot repositories in Elastic Cloud Hosted documentation](/deploy-manage/tools/snapshot-and-restore/elastic-cloud-hosted.md).
+{{ech}} deployments automatically include the `cloud-snapshot-policy` {{slm-init}} policy. {{ech}} uses this policy to take periodic snapshots of your cluster. For more information, see the [Manage snapshot repositories in {{ech}} documentation](/deploy-manage/tools/snapshot-and-restore/elastic-cloud-hosted.md).
::::
### {{slm-init}} security [slm-security]
@@ -282,7 +282,7 @@ The API returns:
},
{
"name": "kibana",
"description": "Manages Kibana configuration and reports"
},
{
"name": "security",
diff --git a/deploy-manage/tools/snapshot-and-restore/ec-aws-custom-repository.md b/deploy-manage/tools/snapshot-and-restore/ec-aws-custom-repository.md
index be73f6817b..f993c2535c 100644
--- a/deploy-manage/tools/snapshot-and-restore/ec-aws-custom-repository.md
+++ b/deploy-manage/tools/snapshot-and-restore/ec-aws-custom-repository.md
@@ -54,7 +54,7 @@ You can use the {{es}} keystore to store the credentials to access your AWS acco
1. Log in to the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body).
2. Navigate to the **Security** page of the deployment you wish to configure.
-3. Locate **Elasticsearch keystore** and select **Add settings**.
+3. Locate **{{es}} keystore** and select **Add settings**.
4. With **Type** set to **Single string**, add the following keys and their values:
* `s3.client.secondary.access_key`
@@ -65,7 +65,7 @@ You can use the {{es}} keystore to store the credentials to access your AWS acco
## Create the repository [ec-create-aws-repository]
-1. Open Kibana and go to **Management** > **Snapshot and Restore**.
+1. Open {{kib}} and go to **Management** > **Snapshot and Restore**.
2. On the **Repositories** tab, select **Register a repository**.
3.
Provide a name for your repository and select type **AWS S3**. 4. Provide the following settings: @@ -77,5 +77,5 @@ You can use the {{es}} keystore to store the credentials to access your AWS acco 6. Select **Register**. 7. Select **Verify** to confirm that your settings are correct and the deployment can connect to your repository. -Your snapshot repository is now set up using S3! You can use Kibana to manage your snapshots and begin sending Elasticsearch snapshots to your own bucket. For details refer to the [Snapshot and Restore](create-snapshots.md) documentation. +Your snapshot repository is now set up using S3! You can use {{kib}} to manage your snapshots and begin sending {{es}} snapshots to your own bucket. For details refer to the [Snapshot and Restore](create-snapshots.md) documentation. diff --git a/deploy-manage/tools/snapshot-and-restore/ec-azure-snapshotting.md b/deploy-manage/tools/snapshot-and-restore/ec-azure-snapshotting.md index 966059b2b8..bce6aeccb5 100644 --- a/deploy-manage/tools/snapshot-and-restore/ec-azure-snapshotting.md +++ b/deploy-manage/tools/snapshot-and-restore/ec-azure-snapshotting.md @@ -17,11 +17,11 @@ Configure a custom snapshot repository using your Azure Blob storage account. Follow the Microsoft documentation to [set up an Azure storage account](https://docs.microsoft.com/en-us/azure/storage/common/storage-account-create) with an access key, and then [create a container](https://docs.microsoft.com/en-us/azure/storage/blobs/storage-quickstart-blobs-portal). -## Enable the `repository-azure` plugin in Elastic Stack 7.17 and earlier [ec-enable-azure-plugin] +## Enable the `repository-azure` plugin in {{stack}} 7.17 and earlier [ec-enable-azure-plugin] -For deployments with **Elastic Stack version 7.17 and earlier**, you’ll need to enable the `repository-azure` plugin to use the Azure repository type. On the Azure platform, the plugin is enabled by default. 
If your deployment is on AWS or GCP, follow these steps to enable the `repository-azure` plugin: +For deployments with **{{stack}} version 7.17 and earlier**, you’ll need to enable the `repository-azure` plugin to use the Azure repository type. On the Azure platform, the plugin is enabled by default. If your deployment is on AWS or GCP, follow these steps to enable the `repository-azure` plugin: -1. Refer to [Azure Repository Plugin](https://www.elastic.co/guide/en/elasticsearch/plugins/7.17/repository-azure.html) to download the version of the plugin that matches your Elastic Stack version. +1. Refer to [Azure Repository Plugin](https://www.elastic.co/guide/en/elasticsearch/plugins/7.17/repository-azure.html) to download the version of the plugin that matches your {{stack}} version. 2. Upload the plugin to your deployment: 1. Log in to the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body). @@ -37,11 +37,11 @@ For deployments with **Elastic Stack version 7.17 and earlier**, you’ll need t ### Configure the keystore [ec-configure-azure-keystore] -Create an entry for the Azure client in the Elasticsearch keystore: +Create an entry for the Azure client in the {{es}} keystore: 1. Log in to the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body). 2. Navigate to the **Security** page of the deployment you wish to configure. -3. Locate **Elasticsearch keystore** and select **Add settings**. +3. Locate **{{es}} keystore** and select **Add settings**. 4. With **Type** set to **Single string**, add the following keys and their values: * `azure.client.secondary.account` @@ -52,7 +52,7 @@ Create an entry for the Azure client in the Elasticsearch keystore: ### Create the repository [ec-create-azure-repository] -1. Open Kibana and go to **Management** > **Snapshot and Restore**. +1. Open {{kib}} and go to **Management** > **Snapshot and Restore**. 2. On the **Repositories** tab, select **Register a repository**. 3. 
Provide a name for your repository and select type **Azure**. 4. Provide the following settings: @@ -70,5 +70,5 @@ Create an entry for the Azure client in the Elasticsearch keystore: 6. Select Register. 7. Select **Verify** to confirm that your settings are correct and the deployment can connect to your repository. -Your snapshot repository is now set up using Azure Blob storage! You can use Kibana to manage your snapshots and begin sending Elasticsearch snapshots to your own container. For details, check the [Snapshot and Restore](create-snapshots.md) documentation. +Your snapshot repository is now set up using Azure Blob storage! You can use {{kib}} to manage your snapshots and begin sending {{es}} snapshots to your own container. For details, check the [Snapshot and Restore](create-snapshots.md) documentation. diff --git a/deploy-manage/tools/snapshot-and-restore/ec-gcs-snapshotting.md b/deploy-manage/tools/snapshot-and-restore/ec-gcs-snapshotting.md index f3d9431fbd..5feedaae4b 100644 --- a/deploy-manage/tools/snapshot-and-restore/ec-gcs-snapshotting.md +++ b/deploy-manage/tools/snapshot-and-restore/ec-gcs-snapshotting.md @@ -18,9 +18,9 @@ You’ll need to have an existing Google Cloud account and have the appropriate 1. Create a [service account key](https://cloud.google.com/iam/docs/creating-managing-service-account-keys) in your Google Cloud project. - The service account should be configured to have permission to read, write, and list the bucket objects. For more information, refer to [Recommended bucket permission](/deploy-manage/tools/snapshot-and-restore/google-cloud-storage-repository.md#repository-gcs-bucket-permission) in the Elasticsearch docs. + The service account should be configured to have permission to read, write, and list the bucket objects. For more information, refer to [Recommended bucket permission](/deploy-manage/tools/snapshot-and-restore/google-cloud-storage-repository.md#repository-gcs-bucket-permission) in the {{es}} docs. -2. 
Save the service account key in JSON file format. You are going to use it later to configure your Elasticsearch deployment for snapshotting. +2. Save the service account key in JSON file format. You are going to use it later to configure your {{es}} deployment for snapshotting. For more detailed information on the JSON account service key, refer to [Using a Service Account](/deploy-manage/tools/snapshot-and-restore/google-cloud-storage-repository.md#repository-gcs-using-service-account). @@ -30,11 +30,11 @@ For more detailed information on the JSON account service key, refer to [Using a Follow the Google Cloud Storage documentation to [create a GCS bucket](https://cloud.google.com/storage/docs/creating-buckets). -## Enable the `repository-gcs` plugin in Elastic Stack 7.17 and earlier [ec-enable-gcs-plugin] +## Enable the `repository-gcs` plugin in {{stack}} 7.17 and earlier [ec-enable-gcs-plugin] -For deployments with **Elastic Stack version 7.17 and earlier**, you’ll need to enable the `repository-gcs` plugin to use the Google Cloud Storage repository type. On Google Cloud Platform, the plugin is enabled by default. If your deployment is on AWS or Azure, follow these steps to enable the `repository-gcs` plugin: +For deployments with **{{stack}} version 7.17 and earlier**, you’ll need to enable the `repository-gcs` plugin to use the Google Cloud Storage repository type. On Google Cloud Platform, the plugin is enabled by default. If your deployment is on AWS or Azure, follow these steps to enable the `repository-gcs` plugin: -1. Refer to [Google Cloud Storage Repository Plugin](https://www.elastic.co/guide/en/elasticsearch/plugins/7.17/repository-gcs.html) to download the version of the plugin that matches your Elastic Stack version. +1. Refer to [Google Cloud Storage Repository Plugin](https://www.elastic.co/guide/en/elasticsearch/plugins/7.17/repository-gcs.html) to download the version of the plugin that matches your {{stack}} version. 2. 
Upload the plugin to your deployment: 1. Log in to the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body). @@ -50,11 +50,11 @@ For deployments with **Elastic Stack version 7.17 and earlier**, you’ll need t ### Configure the keystore [ec-configure-gcs-keystore] -Create an entry for the GCS client in the Elasticsearch keystore: +Create an entry for the GCS client in the {{es}} keystore: 1. Log in to the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body). 2. Navigate to the **Security** page of the deployment you wish to configure. -3. Locate **Elasticsearch keystore** and select **Add settings**. +3. Locate **{{es}} keystore** and select **Add settings**. 4. Enter the **Setting name** `gcs.client.secondary.credentials_file`. 5. With **Type** set to **JSON block / file**, add your [GCS service account key JSON file](#ec-gcs-service-account-key). 6. Select **Save**. @@ -62,7 +62,7 @@ Create an entry for the GCS client in the Elasticsearch keystore: ### Create the repository [ec-create-gcs-repository] -1. Open Kibana and go to **Management** > **Snapshot and Restore**. +1. Open {{kib}} and go to **Management** > **Snapshot and Restore**. 2. On the **Repositories** tab, select **Register a repository**. 3. Provide a name for your repository and select type **Google Cloud Storage**. 4. Provide the following settings: @@ -77,5 +77,5 @@ Create an entry for the GCS client in the Elasticsearch keystore: 6. Select **Register**. 7. Select **Verify** to confirm that your settings are correct and the deployment can connect to your repository. -Your snapshot repository is now set up using GCS! You can use Kibana to manage your snapshots and begin sending Elasticsearch snapshots to your own bucket. For details, check the [Snapshot and Restore](create-snapshots.md) documentation. +Your snapshot repository is now set up using GCS! You can use {{kib}} to manage your snapshots and begin sending {{es}} snapshots to your own bucket. 
For details, check the [Snapshot and Restore](create-snapshots.md) documentation. diff --git a/deploy-manage/tools/snapshot-and-restore/ece-aws-custom-repository.md b/deploy-manage/tools/snapshot-and-restore/ece-aws-custom-repository.md index 3a84be9ed2..0a382f1bf6 100644 --- a/deploy-manage/tools/snapshot-and-restore/ece-aws-custom-repository.md +++ b/deploy-manage/tools/snapshot-and-restore/ece-aws-custom-repository.md @@ -8,7 +8,7 @@ applies_to: # Configure a snapshot repository using AWS S3 [ece-aws-custom-repository] -To store Elasticsearch snapshots in AWS S3, you need to configure a snapshot repository in Elastic Cloud Enterprise (ECE). This guide explains how to add an Amazon S3 repository using the Cloud UI and provides details on required settings and advanced configurations. +To store {{es}} snapshots in AWS S3, you need to configure a snapshot repository in {{ece}} (ECE). This guide explains how to add an Amazon S3 repository using the Cloud UI and provides details on required settings and advanced configurations. To add a snapshot repository: diff --git a/deploy-manage/tools/snapshot-and-restore/ece-restore-across-clusters.md b/deploy-manage/tools/snapshot-and-restore/ece-restore-across-clusters.md index e7c194092e..8b812e2ff9 100644 --- a/deploy-manage/tools/snapshot-and-restore/ece-restore-across-clusters.md +++ b/deploy-manage/tools/snapshot-and-restore/ece-restore-across-clusters.md @@ -10,10 +10,10 @@ applies_to: # Restore a snapshot across clusters [ece-restore-across-clusters] -Snapshots can be restored to either the same Elasticsearch cluster or to another cluster. If you are restoring all indices to another cluster, you can [clone](/deploy-manage/tools/snapshot-and-restore/ece-restore-snapshots-into-new-deployment.md) a cluster. +Snapshots can be restored to either the same {{es}} cluster or to another cluster. 
If you are restoring all indices to another cluster, you can [clone](/deploy-manage/tools/snapshot-and-restore/ece-restore-snapshots-into-new-deployment.md) a cluster. ::::{note} -Users created using the X-Pack security features or using Shield are not included when you restore across clusters, only data from Elasticsearch indices is restored. If you do want to create a cloned cluster with the same users as your old cluster, you need to recreate the users manually on the new cluster. +Users created using the X-Pack security features or using Shield are not included when you restore across clusters; only data from {{es}} indices is restored. If you do want to create a cloned cluster with the same users as your old cluster, you need to recreate the users manually on the new cluster. :::: @@ -29,18 +29,18 @@ Testing mapping changes : Mapping changes almost always require reindexing. Unless your data volume is trivial, reindexing requires time, and tweaking the parameters to achieve the best reindexing performance usually takes a little trial and error. While this use case could also be handled by running the scan and scroll query directly against the source cluster, a long-lived scroll has the side effect of blocking merges even if the scan query is very lightweight. Integration testing -: Test your application against a real live Elasticsearch cluster with actual data. If you automate this, you could also aggregate performance metrics from the tests and use those metrics to detect if a change in your application has introduced a performance degradation. +: Test your application against a real live {{es}} cluster with actual data. If you automate this, you could also aggregate performance metrics from the tests and use those metrics to detect if a change in your application has introduced a performance degradation.
-In **Elastic Cloud Hosted**, a cluster is eligible as a destination for a snapshot restore if it meets these criteria: +In **{{ech}}**, a cluster is eligible as a destination for a snapshot restore if it meets these criteria: - The cluster is in the same region. For example, a snapshot taken in `eu-west-1` cannot be restored to `us-east-1` at this point. If you need to restore snapshots across regions, create the destination deployment, connect to the [custom repository](/deploy-manage/tools/snapshot-and-restore/elastic-cloud-hosted.md#ess-repo-types), and then [restore from a snapshot](/deploy-manage/tools/snapshot-and-restore/restore-snapshot.md). -- The destination cluster is able to read the indices. You can generally restore to your Elasticsearch cluster snapshots of indices created back to the previous major version, but see the [version matrix](../snapshot-and-restore.md#snapshot-restore-version-compatibility) for all the details. +- The destination cluster is able to read the indices. You can generally restore to your {{es}} cluster snapshots of indices created back to the previous major version, but see the [version matrix](../snapshot-and-restore.md#snapshot-restore-version-compatibility) for all the details. The list of available snapshots can be found in the `found-snapshots` repository. -In **Elastic Cloud Enterprise**, a cluster is eligible as a destination for a snapshot restore if it meets these criteria: +In **{{ece}}**, a cluster is eligible as a destination for a snapshot restore if it meets these criteria: - The destination cluster is able to read the indices.
You can generally restore to your {{es}} cluster snapshots of indices created back to the previous major version, but see the [version matrix](../snapshot-and-restore.md#snapshot-restore-version-compatibility) for all the details. To restore built-in snapshots across clusters, there are two options: diff --git a/deploy-manage/tools/snapshot-and-restore/ece-restore-snapshots-containing-searchable-snapshots-indices-across-clusters.md b/deploy-manage/tools/snapshot-and-restore/ece-restore-snapshots-containing-searchable-snapshots-indices-across-clusters.md index ecbeb9042a..8748af533b 100644 --- a/deploy-manage/tools/snapshot-and-restore/ece-restore-snapshots-containing-searchable-snapshots-indices-across-clusters.md +++ b/deploy-manage/tools/snapshot-and-restore/ece-restore-snapshots-containing-searchable-snapshots-indices-across-clusters.md @@ -12,15 +12,15 @@ applies_to: If you are restoring a snapshot from one cluster to another, and that snapshot contains indices that are backed by searchable snapshots, there are a few more requirements to be aware of. -The first versions of Elasticsearch that supported searchable snapshots required that the repository name in the destination cluster, where the snapshot is to be restored from, match the repository name in the source cluster, where the snapshot was taken. Beginning with Elasticsearch version 7.12 this requirement is no longer present, but there are other prerequisites that need to be met. +The first versions of {{es}} that supported searchable snapshots required that the repository name in the destination cluster, where the snapshot is to be restored from, match the repository name in the source cluster, where the snapshot was taken. Beginning with {{es}} version 7.12 this requirement is no longer present, but there are other prerequisites that need to be met. 
Prerequisites for restoring snapshots containing searchable snapshot indices across clusters: * The source cluster must have been created as version 7.12.0 or higher. - To be more precise the requirement is on the `found-snapshots` repository settings at the time the snapshots were taken. The repository must have a `uuid` field, which is supported only in Elasticsearch versions 7.12.0 and higher. If the cluster was created with a previous version and later upgraded to 7.12.0 or higher, the repository may not have the required `uuid` field and therefore cannot be used to restore onto another cluster. + To be more precise, the requirement is on the `found-snapshots` repository settings at the time the snapshots were taken. The repository must have a `uuid` field, which is supported only in {{es}} versions 7.12.0 and higher. If the cluster was created with a previous version and later upgraded to 7.12.0 or higher, the repository may not have the required `uuid` field and therefore cannot be used to restore onto another cluster. - To be sure, you can send a `GET /_snapshot/found-snapshots` request to your Elasticsearch cluster and check if the `uuid` field is present. + To be sure, you can send a `GET /_snapshot/found-snapshots` request to your {{es}} cluster and check if the `uuid` field is present. * The destination cluster must be version 7.13.2 or higher.
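The `uuid` check mentioned above is a single read-only request:

```console
GET /_snapshot/found-snapshots
```

If the repository metadata in the response includes a `uuid` field, the repository was written by a recent enough version and its snapshots can be restored onto another cluster; if the field is absent, they cannot.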
diff --git a/deploy-manage/tools/snapshot-and-restore/elastic-cloud-hosted.md b/deploy-manage/tools/snapshot-and-restore/elastic-cloud-hosted.md index e6f6e48f3a..a8165e45b3 100644 --- a/deploy-manage/tools/snapshot-and-restore/elastic-cloud-hosted.md +++ b/deploy-manage/tools/snapshot-and-restore/elastic-cloud-hosted.md @@ -2,27 +2,27 @@ mapped_pages: - https://www.elastic.co/guide/en/cloud/current/ec-custom-repository.html - https://www.elastic.co/guide/en/cloud-heroku/current/ech-custom-repository.html -navigation_title: "Elastic Cloud Hosted" +navigation_title: "{{ech}}" applies_to: deployment: ess: --- -# Manage snapshot repositories in Elastic Cloud Hosted +# Manage snapshot repositories in {{ech}} -Snapshot repositories allow you to back up and restore your Elasticsearch data efficiently. In Elastic Cloud Hosted, repositories are automatically registered and managed within your deployment, ensuring data security, long-term archiving, and seamless recovery. +Snapshot repositories allow you to back up and restore your {{es}} data efficiently. In {{ech}}, repositories are automatically registered and managed within your deployment, ensuring data security, long-term archiving, and seamless recovery. -By default, Elastic Cloud Hosted takes a snapshot of all the indices in your Elasticsearch cluster every 30 minutes. You can set a different snapshot interval if needed for your environment. You can also take snapshots on demand, without having to wait for the next interval. Taking a snapshot on demand does not affect the retention schedule for existing snapshots; it just adds an additional snapshot to the repository. This might be helpful if you are about to make a deployment change and you don’t have a current snapshot. +By default, {{ech}} takes a snapshot of all the indices in your {{es}} cluster every 30 minutes. You can set a different snapshot interval if needed for your environment. 
You can also take snapshots on demand, without having to wait for the next interval. Taking a snapshot on demand does not affect the retention schedule for existing snapshots; it just adds an additional snapshot to the repository. This might be helpful if you are about to make a deployment change and you don’t have a current snapshot. -Use Kibana to manage your snapshots. In Kibana, you can set up additional repositories where the snapshots are stored, other than the one currently managed by Elastic Cloud Hosted. You can view and delete snapshots, and configure a snapshot lifecycle management (SLM) policy to automate when snapshots are created and deleted. +Use {{kib}} to manage your snapshots. In {{kib}}, you can set up additional repositories where the snapshots are stored, other than the one currently managed by {{ech}}. You can view and delete snapshots, and configure a snapshot lifecycle management (SLM) policy to automate when snapshots are created and deleted. Snapshots back up only open indices. If you close an index, it is not included in snapshots and you will not be able to restore the data. A snapshot taken using the default `found-snapshots` repository can only be restored to deployments in the same region. If you need to restore snapshots across regions, create the destination deployment, connect to the custom repository, and then restore from a snapshot. -From within Elastic Cloud Hosted, you can restore a snapshot from a different deployment in the same region. +From within {{ech}}, you can restore a snapshot from a different deployment in the same region. 
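An on-demand snapshot like the one described above can also be taken with the create snapshot API; the snapshot name below is a placeholder:

```console
PUT _snapshot/found-snapshots/my-on-demand-snapshot?wait_for_completion=true
```

`wait_for_completion=true` makes the request block until the snapshot finishes; omit it to return as soon as the snapshot is initialized.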
-## Prerequisites for Elastic Cloud Hosted +## Prerequisites for {{ech}} To use Kibana's Snapshot and Restore feature, you must have the following permissions: @@ -33,16 +33,16 @@ To register a snapshot repository, the cluster’s global metadata must be writa ## Considerations -When working with snapshot repositories in Elastic Cloud Hosted, keep the following in mind: +When working with snapshot repositories in {{ech}}, keep the following in mind: -- Each snapshot repository is separate and independent. Elasticsearch doesn’t share data between repositories. +- Each snapshot repository is separate and independent. {{es}} doesn’t share data between repositories. - Clusters should only register a particular snapshot repository bucket once. If you register the same snapshot repository with multiple clusters, only one cluster should have write access to the repository. On other clusters, register the repository as read-only. -- This prevents multiple clusters from writing to the repository at the same time and corrupting the repository’s contents. It also prevents Elasticsearch from caching the repository’s contents, which means that changes made by other clusters will become visible straight away. -- When upgrading Elasticsearch to a newer version, you can continue to use the same repository you were using before the upgrade. If the repository is accessed by multiple clusters, they should all have the same version. Once a repository has been modified by a particular version of Elasticsearch, it may not work correctly when accessed by older versions. However, you will be able to recover from a failed upgrade by restoring a snapshot taken before the upgrade into a cluster running the pre-upgrade version, even if you have taken more snapshots during or after the upgrade. +- This prevents multiple clusters from writing to the repository at the same time and corrupting the repository’s contents. 
It also prevents {{es}} from caching the repository’s contents, which means that changes made by other clusters will become visible straight away. +- When upgrading {{es}} to a newer version, you can continue to use the same repository you were using before the upgrade. If the repository is accessed by multiple clusters, they should all have the same version. Once a repository has been modified by a particular version of {{es}}, it may not work correctly when accessed by older versions. However, you will be able to recover from a failed upgrade by restoring a snapshot taken before the upgrade into a cluster running the pre-upgrade version, even if you have taken more snapshots during or after the upgrade. -## Elastic Cloud Hosted snapshot repository types [ess-repo-types] +## {{ech}} snapshot repository types [ess-repo-types] -[Elastic Cloud Hosted deployments](https://cloud.elastic.co/registration?page=docs&placement=docs-body) automatically register the [`found-snapshots`](../snapshot-and-restore.md) repository. {{ech}} uses this repository and the `cloud-snapshot-policy` to take periodic snapshots of your cluster. You can also use the `found-snapshots` repository for your own [{{slm-init}} policies](create-snapshots.md#automate-snapshots-slm) or to store searchable snapshots. +[{{ech}} deployments](https://cloud.elastic.co/registration?page=docs&placement=docs-body) automatically register the [`found-snapshots`](../snapshot-and-restore.md) repository. {{ech}} uses this repository and the `cloud-snapshot-policy` to take periodic snapshots of your cluster. You can also use the `found-snapshots` repository for your own [{{slm-init}} policies](create-snapshots.md#automate-snapshots-slm) or to store searchable snapshots. The `found-snapshots` repository is specific to each deployment. However, you can restore snapshots from another deployment’s `found-snapshots` repository if the deployments are under the same account and in the same region. 
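Registering a shared repository as read-only on secondary clusters, as recommended above, can be sketched as follows. Repository and bucket names are placeholders, and the example assumes an S3 bucket; the same `readonly` setting applies to other repository types:

```console
PUT _snapshot/shared-snapshots
{
  "type": "s3",
  "settings": {
    "bucket": "shared-snapshot-bucket",
    "readonly": true
  }
}
```

Only the single cluster registered without `readonly: true` should write snapshots to the bucket.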
See the Cloud [Snapshot and restore](../snapshot-and-restore.md) documentation to learn more. @@ -53,9 +53,9 @@ The `found-snapshots` repository is specific to each deployment. However, you ca * [AWS S3](ec-aws-custom-repository.md) * [Source-only](source-only-repository.md) -## Register a snapshot repository in Elastic Cloud Hosted [register-snapshot-repos-ech] +## Register a snapshot repository in {{ech}} [register-snapshot-repos-ech] -In **Elastic Cloud Hosted**, snapshot repositories are automatically registered for you, but you can create additional repositories if needed. +In **{{ech}}**, snapshot repositories are automatically registered for you, but you can create additional repositories if needed. * {{kib}}'s **Snapshot and Restore** feature * {{es}}'s [snapshot repository management APIs](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-snapshot) diff --git a/deploy-manage/tools/snapshot-and-restore/google-cloud-storage-gcs-repository.md b/deploy-manage/tools/snapshot-and-restore/google-cloud-storage-gcs-repository.md index 6ae5ae1267..49627b413f 100644 --- a/deploy-manage/tools/snapshot-and-restore/google-cloud-storage-gcs-repository.md +++ b/deploy-manage/tools/snapshot-and-restore/google-cloud-storage-gcs-repository.md @@ -14,11 +14,11 @@ Snapshots to GCS are supported using an [advanced repository configuration](clou You’ll need to have an existing Google Cloud account and have the appropriate permissions to generate credentials: -1. Create [service account credentials](https://cloud.google.com/iam/docs/creating-managing-service-account-keys) in your Google Cloud project where Elastic Cloud Enterprise is running. +1. Create [service account credentials](https://cloud.google.com/iam/docs/creating-managing-service-account-keys) in your Google Cloud project where {{ece}} is running. 
- The service account should be [granted the role of `storage.admin`](https://cloud.google.com/iam/docs/granting-roles-to-service-accounts) so that Elasticsearch clusters can read, write, and list the bucket objects. + The service account should be [granted the role of `storage.admin`](https://cloud.google.com/iam/docs/granting-roles-to-service-accounts) so that {{es}} clusters can read, write, and list the bucket objects. -2. Save the service account key in JSON file format. You are going to use it later to configure your Elasticsearch deployment for snapshotting. +2. Save the service account key in JSON file format. You are going to use it later to configure your {{es}} deployment for snapshotting. ## Add the GCS repository [ece_add_the_gcs_repository] @@ -41,7 +41,7 @@ Add your Google Cloud Storage bucket as a repository to the platform: ``` -Snapshots are stored in the bucket you provide. Use the repository name you define here to configure your Elasticsearch clusters for snapshotting to this repository. +Snapshots are stored in the bucket you provide. Use the repository name you define here to configure your {{es}} clusters for snapshotting to this repository. ## Configure your deployment for GCS snapshots [ece_configure_your_deployment_for_gcs_snapshots] diff --git a/deploy-manage/tools/snapshot-and-restore/google-cloud-storage-repository.md b/deploy-manage/tools/snapshot-and-restore/google-cloud-storage-repository.md index ddf066a7f5..a71759c9a3 100644 --- a/deploy-manage/tools/snapshot-and-restore/google-cloud-storage-repository.md +++ b/deploy-manage/tools/snapshot-and-restore/google-cloud-storage-repository.md @@ -34,7 +34,7 @@ For more detailed instructions, see the [Google Cloud documentation](https://clo ### Service authentication [repository-gcs-service-authentication] -The repository must authenticate the requests it makes to the Google Cloud Storage service. 
It is common for Google client libraries to employ a strategy named [application default credentials](https://cloud.google.com/docs/authentication/production#providing_credentials_to_your_application). However, that strategy is only **partially supported** by Elasticsearch. The repository operates under the Elasticsearch process, which runs with the security manager enabled. The security manager obstructs the "automatic" credential discovery when the environment variable `GOOGLE_APPLICATION_CREDENTIALS` is used to point to a local file on disk. It can, however, retrieve the service account that is attached to the resource that is running Elasticsearch, or fall back to the default service account that Compute Engine, Kubernetes Engine or App Engine provide. Alternatively, you must configure [service account](#repository-gcs-using-service-account) credentials if you are using an environment that does not support automatic credential discovery. +The repository must authenticate the requests it makes to the Google Cloud Storage service. It is common for Google client libraries to employ a strategy named [application default credentials](https://cloud.google.com/docs/authentication/production#providing_credentials_to_your_application). However, that strategy is only **partially supported** by {{es}}. The repository operates under the {{es}} process, which runs with the security manager enabled. The security manager obstructs the "automatic" credential discovery when the environment variable `GOOGLE_APPLICATION_CREDENTIALS` is used to point to a local file on disk. It can, however, retrieve the service account that is attached to the resource that is running {{es}}, or fall back to the default service account that Compute Engine, Kubernetes Engine or App Engine provide. Alternatively, you must configure [service account](#repository-gcs-using-service-account) credentials if you are using an environment that does not support automatic credential discovery. 
### Using a service account [repository-gcs-using-service-account] @@ -70,7 +70,7 @@ A JSON service account file looks like this: } ``` -To provide this file to the repository, it must be stored in the [Elasticsearch keystore](../../security/secure-settings.md). You must add a `file` setting with the name `gcs.client.NAME.credentials_file` using the `add-file` subcommand. `NAME` is the name of the client configuration for the repository. The implicit client name is `default`, but a different client name can be specified in the repository settings with the `client` key. +To provide this file to the repository, it must be stored in the [{{es}} keystore](../../security/secure-settings.md). You must add a `file` setting with the name `gcs.client.NAME.credentials_file` using the `add-file` subcommand. `NAME` is the name of the client configuration for the repository. The implicit client name is `default`, but a different client name can be specified in the repository settings with the `client` key. ::::{note} Passing the file path via the GOOGLE_APPLICATION_CREDENTIALS environment variable is **not** supported. @@ -118,7 +118,7 @@ PUT _snapshot/my_gcs_repository } ``` -Some settings are sensitive and must be stored in the [Elasticsearch keystore](../../security/secure-settings.md). This is the case for the service account file: +Some settings are sensitive and must be stored in the [{{es}} keystore](../../security/secure-settings.md). 
This is the case for the service account file: ```sh bin/elasticsearch-keystore add-file gcs.client.default.credentials_file /path/service-account.json diff --git a/deploy-manage/tools/snapshot-and-restore/manage-snapshot-repositories.md b/deploy-manage/tools/snapshot-and-restore/manage-snapshot-repositories.md index e62b8f6de9..1830908347 100644 --- a/deploy-manage/tools/snapshot-and-restore/manage-snapshot-repositories.md +++ b/deploy-manage/tools/snapshot-and-restore/manage-snapshot-repositories.md @@ -9,13 +9,13 @@ applies_to: # Manage snapshot repositories -Snapshot repositories allow you to back up and restore your Elasticsearch data efficiently. Whether you're using [{{ech}}](#elastic-cloud-hosted), [{{ece}} (ECE)](#elastic-cloud-enterprise-ece), [{{eck}} (ECK)](#elastic-cloud-on-kubernetes-eck), or managing your own [{{es}} cluster](#self-managed), configuring a snapshot repository ensures data security, long-term archiving, and seamless migration across environments. +Snapshot repositories allow you to back up and restore your {{es}} data efficiently. Whether you're using [{{ech}}](#elastic-cloud-hosted), [{{ece}} (ECE)](#elastic-cloud-enterprise), [{{eck}} (ECK)](#elastic-cloud-on-kubernetes), or managing your own [{{es}} cluster](#self-managed), configuring a snapshot repository ensures data security, long-term archiving, and seamless migration across environments. 
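For instance, on a self-managed cluster a repository is registered with a single API call. A minimal sketch using a shared file system repository, where the location is a placeholder path that must first be allow-listed with the `path.repo` setting in `elasticsearch.yml`:

```console
PUT _snapshot/my_fs_backup
{
  "type": "fs",
  "settings": {
    "location": "/mnt/es-backups"
  }
}
```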
## Supported repository types ### Self-managed -If you manage your own Elasticsearch cluster, you can use the following built-in snapshot repository types: +If you manage your own {{es}} cluster, you can use the following built-in snapshot repository types: * [Azure](/deploy-manage/tools/snapshot-and-restore/azure-repository.md) * [Google Cloud Storage](/deploy-manage/tools/snapshot-and-restore/google-cloud-storage-repository.md) @@ -28,28 +28,28 @@ Other repository types are available through official plugins: * [Hadoop Distributed File System (HDFS)](elasticsearch://reference/elasticsearch-plugins/repository-hdfs.md) -### Elastic Cloud Hosted +### {{ech}} [elastic-cloud-hosted] {{ech}} deployments automatically register a repository named `found-snapshots` in {{es}} clusters. These repositories are used together with the `cloud-snapshot-policy` SLM policy to take periodic snapshots of your {{es}} clusters. You can also use the `found-snapshots` repository for your own [SLM policies](/deploy-manage/tools/snapshot-and-restore/create-snapshots.md#automate-snapshots-slm) or to store searchable snapshots. The `found-snapshots` repository is specific to each deployment. However, you can restore snapshots from another deployment’s found-snapshots repository if the deployments are under the same account and in the same region. -Elastic Cloud Hosted deployments also support the following repository types: +{{ech}} deployments also support the following repository types: * [Azure](/deploy-manage/tools/snapshot-and-restore/ec-azure-snapshotting.md) * [Google Cloud Storage](/deploy-manage/tools/snapshot-and-restore/ec-gcs-snapshotting.md) * [AWS S3](/deploy-manage/tools/snapshot-and-restore/ec-aws-custom-repository.md) * [Source-only](/deploy-manage/tools/snapshot-and-restore/source-only-repository.md) -For more details, refer to [Registering snapshot repositories in Elastic Cloud Hosted](/deploy-manage/tools/snapshot-and-restore/elastic-cloud-hosted.md). 
+For more details, refer to [Registering snapshot repositories in {{ech}}](/deploy-manage/tools/snapshot-and-restore/elastic-cloud-hosted.md). -### Elastic Cloud Enterprise (ECE) +### {{ece}} (ECE) [elastic-cloud-enterprise] -Snapshot repositories are managed at the platform level in Elastic Cloud Enterprise (ECE) and can be associated with deployments as needed. ECE supports the creation and maintenance of multiple repositories, but each deployment can be linked to only one repository for automatic snapshots. +Snapshot repositories are managed at the platform level in {{ece}} (ECE) and can be associated with deployments as needed. ECE supports the creation and maintenance of multiple repositories, but each deployment can be linked to only one repository for automatic snapshots. -When a platform-level repository is associated with a deployment, the `found-snapshots` repository is added to the {{es}} cluster, and, similar to Elastic Cloud, a snapshot is taken every 30 minutes by default. The interval can be adjusted on per deployment basis. +When a platform-level repository is associated with a deployment, the `found-snapshots` repository is added to the {{es}} cluster, and, similar to {{ech}}, a snapshot is taken every 30 minutes by default. The interval can be adjusted on a per-deployment basis.
-Elastic Cloud Enterprise installations support the following Elasticsearch snapshot repository types: +{{ece}} installations support the following {{es}} snapshot repository types: * [AWS S3](/deploy-manage/tools/snapshot-and-restore/ece-aws-custom-repository.md) * [Azure](/deploy-manage/tools/snapshot-and-restore/azure-storage-repository.md) @@ -57,12 +57,12 @@ Elastic Cloud Enterprise installations support the following Elasticsearch snaps * [Minio](/deploy-manage/tools/snapshot-and-restore/minio-on-premise-repository.md) :::{note} -No repository types other than those listed are supported in the Elastic Cloud Enterprise platform, even if they are supported by Elasticsearch. +No repository types other than those listed are supported in the {{ece}} platform, even if they are supported by {{es}}. ::: -For more details, refer to [Managing snapshot repositories in Elastic Cloud Enterprise](/deploy-manage/tools/snapshot-and-restore/cloud-enterprise.md). +For more details, refer to [Managing snapshot repositories in {{ece}}](/deploy-manage/tools/snapshot-and-restore/cloud-enterprise.md). -### Elastic Cloud on Kubernetes (ECK) +### {{eck}} (ECK) [elastic-cloud-on-kubernetes] {{es}} clusters deployed through ECK support the same repository types as self-managed {{es}} clusters. ECK does not currently provide any automation or functionality to facilitate the integration of snapshot repositories within the {{es}} clusters.
You can use it with Elastic Cloud Enterprise installations when you want to store your Elasticsearch snapshots locally. +Minio is a popular, open-source distributed object storage server compatible with the Amazon AWS S3 API. You can use it with {{ece}} installations when you want to store your {{es}} snapshots locally. ## Create a test environment [ece-minio-test] @@ -30,7 +30,7 @@ There are a number of different ways of orchestrating the Minio deployment (Dock We recommend: -* Using a single Minio endpoint with the Elastic Cloud Enterprise installation, to simplify repository management. +* Using a single Minio endpoint with the {{ece}} installation, to simplify repository management. * Securing access to the Minio endpoint with TLS. :::{image} /deploy-manage/images/cloud-enterprise-ece-minio-large-arch.png @@ -41,7 +41,7 @@ We recommend: ## Create an offline installation [ece-minio-offline-installation] -If you are installing MinIO offline, the process is very similar to the [offline installation of Elastic Cloud Enterprise](../../deploy/cloud-enterprise/air-gapped-install.md). There are two options: +If you are installing MinIO offline, the process is very similar to the [offline installation of {{ece}}](../../deploy/cloud-enterprise/air-gapped-install.md). There are two options: * Use a private Docker repository and [install the Minio images in the private repository](https://docs.docker.com/registry/deploying/). * Download the Minio images from an internet-connected machine, then use `docker save` to bundle the images into tar files. Copy the TAR files to the target hosts and use `docker load` to install. @@ -53,14 +53,14 @@ Gather the following after your installation: * Endpoint URL ::::{tip} -Minio might report various Endpoint URLs, be sure to choose the one that will be routable from your Elasticsearch Docker containers. +Minio might report various Endpoint URLs; be sure to choose the one that will be routable from your {{es}} Docker containers.
:::: ## Create the S3 bucket [ece-minio-create-s3-bucket] -How you create the AWS S3 bucket depends on what version of Elasticsearch you are using: +How you create the AWS S3 bucket depends on what version of {{es}} you are using: * For version 7.x: @@ -75,14 +75,14 @@ Don’t forget to make the bucket name DNS-friendly, for example no underscores -## Elastic Cloud Enterprise configuration [ece-install-with-minio] +## {{ece}} configuration [ece-install-with-minio] You can configure existing deployments, or create new ones, with the following changes to use Minio storage. -### Add the repository to Elastic Cloud Enterprise [ece-add-repository] +### Add the repository to {{ece}} [ece-add-repository] -You must add the new repository to Elastic Cloud Enterprise before it can be used with your Elasticsearch clusters. +You must add the new repository to {{ece}} before it can be used with your {{es}} clusters. 1. [Log into the Cloud UI](../../deploy/cloud-enterprise/log-into-cloud-ui.md). 2. From the **Platform** menu, select **Repositories**. @@ -121,18 +121,18 @@ The Minio repository is now available from the drop-down list of repositories wh ### Additional settings for 6.x clusters [ece-6.x-settings] -For Elasticsearch versions 6.0 and later, after selecting the repository, you also need to set your **User Settings** YAML to specify the endpoint and protocol. For example: +For {{es}} versions 6.0 and later, after selecting the repository, you also need to set your **User Settings** YAML to specify the endpoint and protocol. For example: ``` s3.client.default.endpoint: ":9000" s3.client.default.protocol: http ``` -Check the [Elasticsearch S3 plugin details](https://www.elastic.co/guide/en/elasticsearch/plugins/6.8/repository-s3-client.html) for more information. +Check the [{{es}} S3 plugin details](https://www.elastic.co/guide/en/elasticsearch/plugins/6.8/repository-s3-client.html) for more information. 
-## Upgrade from 5.x to 6.x Elasticsearch clusters [ece-upgrade-minio]
+## Upgrade from 5.x to 6.x {{es}} clusters [ece-upgrade-minio]

-The configuration options for the Elasticsearch S3 repository plugin have changed from 5.x to 6.x versions and you must copy the endpoint and protocol values from your repository configuration to your **User Settings** YAML before you upgrade.
+The configuration options for the {{es}} S3 repository plugin have changed from 5.x to 6.x versions, and you must copy the endpoint and protocol values from your repository configuration to your **User Settings** YAML before you upgrade.

## Verify snapshots [ece-minio-verify-snapshot]

@@ -148,10 +148,10 @@ As an extra verification step, you can restore a cluster using the snapshots tha

Narrow the list by name, ID, or choose from several other filters. To further define the list, use a combination of filters.

- 2. From your deployment menu, go to the **Edit** page then go to the bottom of the page and select **advanced Elasticsearch configuration**.
+ 2. From your deployment menu, go to the **Edit** page, then go to the bottom of the page and select **advanced {{es}} configuration**.
 3. Copy the JSON format under the **Deployment configuration** heading.

-3. Create a new Elasticsearch cluster as your target.
+3. Create a new {{es}} cluster as your target.
4. On the new cluster, open the advanced cluster configuration editor. In the transient section, add the `restore_snapshot` settings to the plan.
```json diff --git a/deploy-manage/tools/snapshot-and-restore/restore-snapshot.md b/deploy-manage/tools/snapshot-and-restore/restore-snapshot.md index 264a42f432..6b91c16ea3 100644 --- a/deploy-manage/tools/snapshot-and-restore/restore-snapshot.md +++ b/deploy-manage/tools/snapshot-and-restore/restore-snapshot.md @@ -51,7 +51,7 @@ When restoring data from a snapshot, keep the following in mind: ## Get a list of available snapshots -To view a list of available snapshots in Kibana, go to **Stack Management > Snapshot and Restore**. +To view a list of available snapshots in {{kib}}, go to **Stack Management > Snapshot and Restore**. You can also use the get repository API and the get snapshot API to find snapshots that are available to restore. First, use the get repository API to fetch a list of registered snapshot repositories. diff --git a/deploy-manage/tools/snapshot-and-restore/s3-repository.md b/deploy-manage/tools/snapshot-and-restore/s3-repository.md index be3efbe19c..a50256e10a 100644 --- a/deploy-manage/tools/snapshot-and-restore/s3-repository.md +++ b/deploy-manage/tools/snapshot-and-restore/s3-repository.md @@ -11,7 +11,7 @@ applies_to: You can use AWS S3 as a repository for [Snapshot/Restore](../snapshot-and-restore.md). ::::{note} -If you are looking for a hosted solution of Elasticsearch on AWS, please visit [https://www.elastic.co/cloud/](https://www.elastic.co/cloud/). +If you are looking for a hosted solution of {{es}} on AWS, please visit [https://www.elastic.co/cloud/](https://www.elastic.co/cloud/). :::: See [this video](https://www.youtube.com/watch?v=ACqfyzWf-xs) for a walkthrough of connecting an AWS S3 repository. @@ -272,7 +272,7 @@ For more information about S3 storage classes, see [AWS Storage Classes Guide](h ## Recommended S3 permissions [repository-s3-permissions] -In order to restrict the Elasticsearch snapshot process to the minimum required resources, we recommend using Amazon IAM in conjunction with pre-existing S3 buckets. 
Here is an example policy which will allow the snapshot access to an S3 bucket named "snaps.example.com". This may be configured through the AWS IAM console, by creating a Custom Policy, and using a Policy Document similar to this (changing snaps.example.com to your bucket name).
+In order to restrict the {{es}} snapshot process to the minimum required resources, we recommend using Amazon IAM in conjunction with pre-existing S3 buckets. Here is an example policy which will allow the snapshot process access to an S3 bucket named "snaps.example.com". This may be configured through the AWS IAM console, by creating a Custom Policy, and using a Policy Document similar to this (changing snaps.example.com to your bucket name).

```js
{
@@ -371,7 +371,7 @@ If the symlink exists, it will be used by default by all S3 repositories that do

## AWS VPC bandwidth settings [repository-s3-aws-vpc]

-AWS instances resolve S3 endpoints to a public IP. If the Elasticsearch instances reside in a private subnet in an AWS VPC then all traffic to S3 will go through the VPC’s NAT instance. If your VPC’s NAT instance is a smaller instance size (e.g. a t2.micro) or is handling a high volume of network traffic your bandwidth to S3 may be limited by that NAT instance’s networking bandwidth limitations. Instead we recommend creating a [VPC endpoint](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-endpoints.html) that enables connecting to S3 in instances that reside in a private subnet in an AWS VPC. This will eliminate any limitations imposed by the network bandwidth of your VPC’s NAT instance.
+AWS instances resolve S3 endpoints to a public IP. If the {{es}} instances reside in a private subnet in an AWS VPC, then all traffic to S3 will go through the VPC’s NAT instance. If your VPC’s NAT instance is a smaller instance size (e.g. a t2.micro) or is handling a high volume of network traffic, your bandwidth to S3 may be limited by that NAT instance’s networking bandwidth limitations.
Instead, we recommend creating a [VPC endpoint](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-endpoints.html) that enables connecting to S3 in instances that reside in a private subnet in an AWS VPC. This will eliminate any limitations imposed by the network bandwidth of your VPC’s NAT instance.

Instances residing in a public subnet in an AWS VPC will connect to S3 via the VPC’s internet gateway and not be bandwidth limited by the VPC’s NAT instance.

@@ -403,7 +403,7 @@ PUT /_cluster/settings
}
```

-Collect the Elasticsearch logs covering the time period of the failed analysis from all nodes in your cluster and share them with the supplier of your storage system along with the analysis response so they can use them to determine the problem. See the [AWS Java SDK](https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/java-dg-logging.html) documentation for further information, including details about other loggers that can be used to obtain even more verbose logs. When you have finished collecting the logs needed by your supplier, set the logger settings back to `null` to return to the default logging configuration and disable insecure network trace logging again. See [Logger](elasticsearch://reference/elasticsearch/configuration-reference/miscellaneous-cluster-settings.md#cluster-logger) and [Cluster update settings](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-put-settings) for more information.
+Collect the {{es}} logs covering the time period of the failed analysis from all nodes in your cluster and share them with the supplier of your storage system along with the analysis response so they can use them to determine the problem. See the [AWS Java SDK](https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/java-dg-logging.html) documentation for further information, including details about other loggers that can be used to obtain even more verbose logs.
When you have finished collecting the logs needed by your supplier, set the logger settings back to `null` to return to the default logging configuration and disable insecure network trace logging again. See [Logger](elasticsearch://reference/elasticsearch/configuration-reference/miscellaneous-cluster-settings.md#cluster-logger) and [Cluster update settings](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-put-settings) for more information. ## Linearizable register implementation [repository-s3-linearizable-registers] diff --git a/deploy-manage/uninstall/delete-a-cloud-deployment.md b/deploy-manage/uninstall/delete-a-cloud-deployment.md index 9c0851425b..3968d56aab 100644 --- a/deploy-manage/uninstall/delete-a-cloud-deployment.md +++ b/deploy-manage/uninstall/delete-a-cloud-deployment.md @@ -67,7 +67,7 @@ Deleting a deployment cannot be undone. ### Terminate a deployment -Terminating a deployment stops all running instances and **deletes all data**. Only configuration information is saved so that you can restore the deployment in the future. If there is [a snapshot repository associated](/deploy-manage/tools/snapshot-and-restore/cloud-enterprise.md) with the Elasticsearch cluster and at least one snapshot has been taken, you can restore the cluster with the same indices later. +Terminating a deployment stops all running instances and **deletes all data**. Only configuration information is saved so that you can restore the deployment in the future. If there is [a snapshot repository associated](/deploy-manage/tools/snapshot-and-restore/cloud-enterprise.md) with the {{es}} cluster and at least one snapshot has been taken, you can restore the cluster with the same indices later. 
To terminate an {{ece}} deployment, diff --git a/deploy-manage/uninstall/uninstall-elastic-cloud-on-kubernetes.md b/deploy-manage/uninstall/uninstall-elastic-cloud-on-kubernetes.md index ff09cf2532..f53ada1cc7 100644 --- a/deploy-manage/uninstall/uninstall-elastic-cloud-on-kubernetes.md +++ b/deploy-manage/uninstall/uninstall-elastic-cloud-on-kubernetes.md @@ -19,7 +19,7 @@ To uninstall the operator: | xargs -n1 kubectl delete elastic --all -n ``` - This deletes all underlying Elastic Stack resources, including their Pods, Secrets, Services, and so on. + This deletes all underlying {{stack}} resources, including their Pods, Secrets, Services, and so on. 2. Uninstall the operator: @@ -29,5 +29,5 @@ To uninstall the operator: ``` ::::{warning} -Deleting CRDs will trigger deletion of all custom resources (Elasticsearch, Kibana, APM Server, Beats, Elastic Agent, Elastic Maps Server, and Logstash) in all namespaces of the cluster, regardless of whether they are managed by a single operator or multiple operators. +Deleting CRDs will trigger deletion of all custom resources ({{es}}, {{kib}}, APM Server, Beats, Elastic Agent, Elastic Maps Server, and Logstash) in all namespaces of the cluster, regardless of whether they are managed by a single operator or multiple operators. :::: \ No newline at end of file diff --git a/deploy-manage/upgrade/orchestrator/re-running-the-ece-upgrade.md b/deploy-manage/upgrade/orchestrator/re-running-the-ece-upgrade.md index f6c65f51aa..8ecf23fa17 100644 --- a/deploy-manage/upgrade/orchestrator/re-running-the-ece-upgrade.md +++ b/deploy-manage/upgrade/orchestrator/re-running-the-ece-upgrade.md @@ -8,8 +8,8 @@ applies_to: By default, re-running the `elastic-cloud-enterprise.sh upgrade` command resumes the previous ECE upgrade process. 
However, if your previous ECE upgrade attempt got stuck (for example, due to infrastructure problems, a host going offline, or similar) and re-attempting the upgrade still results in the upgrade process being blocked, run `elastic-cloud-enterprise.sh upgrade --force-upgrade` to ensure that any existing upgrade state gets cleared before starting the new ECE upgrade process. -The `--force-upgrade` parameter is also helpful in situations where the Elastic Cloud Enterprise platform is already upgraded to the desired target version but there are containers still running at the old version. These rare situations can also be caused by infrastructure issues, for example, runners temporarily going offline and not being "seen" by the upgrade process. Running `elastic-cloud-enterprise.sh upgrade --force-upgrade` with the same target version makes the Elastic Cloud Enterprise upgrader perform the upgrade procedure anyway, thereby covering any containers that failed to upgrade previously. +The `--force-upgrade` parameter is also helpful in situations where the {{ece}} platform is already upgraded to the desired target version but there are containers still running at the old version. These rare situations can also be caused by infrastructure issues, for example, runners temporarily going offline and not being "seen" by the upgrade process. Running `elastic-cloud-enterprise.sh upgrade --force-upgrade` with the same target version makes the {{ece}} upgrader perform the upgrade procedure anyway, thereby covering any containers that failed to upgrade previously. -If the Elastic Cloud Enterprise platform was upgraded successfully and yet one or more system deployments were not upgraded to a higher Elastic Stack version during the very last phase of the Elastic Cloud Enterprise upgrade, we recommend running the Elastic Cloud Enterprise upgrader again without the `--force-upgrade` parameter. 
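
For quick reference, the two invocations described above can be sketched as:

```sh
# Resume a previously started ECE upgrade (default behavior)
elastic-cloud-enterprise.sh upgrade

# Clear any stale upgrade state before starting a new upgrade; also re-runs
# the procedure when the platform is already at the desired target version
elastic-cloud-enterprise.sh upgrade --force-upgrade
```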
The Elastic Cloud Enterprise upgrader will recognize that the platform is already at the desired target version and will apply upgrade plans to system deployments.
+If the {{ece}} platform was upgraded successfully and yet one or more system deployments were not upgraded to a higher {{stack}} version during the very last phase of the {{ece}} upgrade, we recommend running the {{ece}} upgrader again without the `--force-upgrade` parameter. The {{ece}} upgrader will recognize that the platform is already at the desired target version and will apply upgrade plans to system deployments.

Refer to [](/deploy-manage/deploy/cloud-enterprise/default-system-deployment-versions.md) for details on the system deployment versions associated with each {{ece}} version.
\ No newline at end of file
diff --git a/deploy-manage/upgrade/orchestrator/upgrade-cloud-enterprise.md b/deploy-manage/upgrade/orchestrator/upgrade-cloud-enterprise.md
index cd2952cc4e..f3a01f99ca 100644
--- a/deploy-manage/upgrade/orchestrator/upgrade-cloud-enterprise.md
+++ b/deploy-manage/upgrade/orchestrator/upgrade-cloud-enterprise.md
@@ -7,13 +7,15 @@ applies_to:
  ece:
---

-# Upgrade Elastic Cloud Enterprise [ece-upgrade]
+% The upgrade procedure is expected to change with the ECE 3.8.0 release. This document is currently a temporary draft, pending refinement.
+
+# Upgrade {{ece}} [ece-upgrade]

This page provides instructions on how to upgrade the ECE operator.

-To learn how to upgrade {{stack}} applications like {{es}} or {{kib}}, refer to [Upgrade the Elastic Stack version](../deployment-or-cluster.md).
+To learn how to upgrade {{stack}} applications like {{es}} or {{kib}}, refer to [Upgrade the {{stack}} version](../deployment-or-cluster.md).

-Periodically, you might need to upgrade an Elastic Cloud Enterprise installation as new versions with additional features become available.
The upgrade process updates all hosts that are part of an Elastic Cloud Enterprise installation to the latest version of ECE, with little or no downtime for managed deployments. To upgrade your deployment to {{stack}} 9.x, the minimum required ECE version is 4.0.0. +Periodically, you might need to upgrade an {{ece}} installation as new versions with additional features become available. The upgrade process updates all hosts that are part of an {{ece}} installation to the latest version of ECE, with little or no downtime for managed deployments. To upgrade your deployment to {{stack}} 9.x, the minimum required ECE version is 4.0.0. Before initiating the ECE upgrade process, review the [Support matrix](https://www.elastic.co/support/matrix#elastic-cloud-enterprise) to ensure the operating system (OS), Docker, or Podman versions you're running are compatible with the ECE version you’re upgrading to. We recommend that Docker, Podman, and the operating system are at the target version before starting the ECE upgrade. @@ -23,7 +25,7 @@ During the upgrade window, there might be a short time period when you run a com ## The upgrade version matrix [ece-upgrade-version-matrix] -The following table shows the recommended upgrade paths from older Elastic Cloud Enterprise versions to 4.0.0. +The following table shows the recommended upgrade paths from older {{ece}} versions to 4.0.0. | Upgrade from | Recommended upgrade path to 4.0 | | --- | --- | @@ -44,12 +46,12 @@ If you have to upgrade to any of the intermediate versions, follow the upgrade i ## The upgrade process [ece-upgrade-overview] -Upgrading Elastic Cloud Enterprise works by replacing the [containers](/reference/glossary/index.md#glossary-container) that ECE itself requires to run on each host. Upgrading ECE does not touch any of the containers that run {{es}} clusters and {{kib}} instances. 
Each container that needs to be upgraded is renamed and stopped, followed by the creation of a new container with an upgraded instance of the ECE software and its dependencies. When the upgrade process has completed successfully, it cleans up after itself and removes the old containers. +Upgrading {{ece}} works by replacing the [containers](/reference/glossary/index.md#glossary-container) that ECE itself requires to run on each host. Upgrading ECE does not touch any of the containers that run {{es}} clusters and {{kib}} instances. Each container that needs to be upgraded is renamed and stopped, followed by the creation of a new container with an upgraded instance of the ECE software and its dependencies. When the upgrade process has completed successfully, it cleans up after itself and removes the old containers. The upgrade process creates a `frc-upgraders-monitor` container on the host where you initiate the process that performs the following actions: * Back up the ZooKeeper transaction log to `HOST_STORAGE_PATH/RUNNER_ID/services/zookeeper/data/backup`, where `HOST_STORAGE_PATH` and `RUNNER_ID` are specific to an ECE installation. -* Configure Elastic Cloud Enterprise to perform the individual container upgrades by creating a `frc-upgraders-upgrader` container on each host that is part of the installation. +* Configure {{ece}} to perform the individual container upgrades by creating a `frc-upgraders-upgrader` container on each host that is part of the installation. * Monitor the upgrade process to ensure that all `frc-upgraders-upgrader` containers perform their part of the upgrade as expected and report their status. * After all hosts have been upgraded successfully, clean up temporary artifacts created during the upgrade process, and remove the old containers. 
@@ -61,26 +63,26 @@ Before starting the upgrade process, check which of the following recommendation * We strongly recommend that you routinely update your ECE installation to the most current version so that any bugs and security issues are fixed promptly. If you need to upgrade but are currently experiencing any issues with your platform, note that as long as ZooKeeper is running and healthy you should be able to upgrade (you can use the [get runners API](https://www.elastic.co/docs/api/doc/cloud-enterprise/operation/operation-get-runners) to easily verify the health of the runners on the [ECE allocators](../../../deploy-manage/deploy/cloud-enterprise/ece-architecture.md#ece-architecture-allocators)). That is, healthy system deployments are not required in order to perform an upgrade successfully. -* Before upgrading to Elastic Cloud Enterprise 4.0, refer to the [lists of removals](cloud://release-notes/cloud-enterprise/breaking-changes.md) to find out about features and API endpoints that are no longer supported. +* Before upgrading to {{ece}} 4.0, refer to the [lists of removals](cloud://release-notes/cloud-enterprise/breaking-changes.md) to find out about features and API endpoints that are no longer supported. * We strongly recommend that you do not attempt to perform certain actions during the upgrade process, such as: - * Creating or changing Elasticsearch clusters and Kibana instances + * Creating or changing {{es}} clusters and {{kib}} instances * Adding new hosts to an installation or removing existing hosts -* As a precaution, we recommend taking snapshots of your Elasticsearch clusters. +* As a precaution, we recommend taking snapshots of your {{es}} clusters. * We recommend that you take a backup snapshot of the `security` [system deployment](../../../deploy-manage/deploy/cloud-enterprise/system-deployments-configuration.md). 
This cluster stores [role-based access control configurations](../../../deploy-manage/users-roles/cloud-enterprise-orchestrator/manage-users-roles.md), and a snapshot will allow you to restore those in case the upgrade fails.

## Requirements [ece-upgrade-prereqs]

Before starting the upgrade process, verify that your setup meets the following requirements:

-- XFS with quotas enabled on all allocators. You must use XFS and have quotas enabled on all allocators, otherwise disk usage won’t display correctly. To enable XFS quotas, modify the entry for the XFS volume in the `/etc/fstab file` to add pquota and `prjquota`. The default filesystem path used by Elastic Cloud Enterprise is `/mnt/data`.
+- XFS with quotas enabled on all allocators. You must use XFS and have quotas enabled on all allocators, otherwise disk usage won’t display correctly. To enable XFS quotas, modify the entry for the XFS volume in the `/etc/fstab` file to add `pquota` and `prjquota`. The default filesystem path used by {{ece}} is `/mnt/data`.

-- **Supported Docker / Podman version**. Make sure that you’re running a supported Docker or Podman version for the version of ECE that you are going to upgrade to. An overview of compatible versions can be found in the [support matrix](https://www.elastic.co/support/matrix#matrix_os&#elastic-cloud-enterprise) and install instructions are available under [Installing Elastic Cloud Enterprise](../../../deploy-manage/deploy/cloud-enterprise/install.md).
-- **Required user, roles and groups**. To run the script to upgrade Elastic Cloud Enterprise, log in as the user used to run Elastic Cloud Enterprise (by default called `elastic` with UID/GID 1000). Initiate the upgrade process by running the `elastic-cloud-enterprise.sh` script with the upgrade action on a single host. The host that the script is run on must be a host that holds the director role. You do not need to run the script on additional hosts.
-- **Available disk space**.
Each host in the Elastic Cloud Enterprise installation must have at least 5 GB of disk space available to ensure that the upgrade process can complete successfully. -- **Proxies and load balancing**. To avoid any downtime for Elastic Cloud Enterprise, the installation must include more than one proxy and must use a load balancer as recommended. If only a single proxy is configured or if the installation is not using a load balancer, some downtime is expected when the containers on the proxies are upgraded. Each container upgrade typically takes five to ten seconds, times the number of containers on a typical host. -- **For *offline* or *air-gapped* installations**. Additional steps are required to upgrade Elastic Cloud Enterprise. After downloading the installation script for the new version, pull and load the required container images and push them to a private Docker registry. To learn more about pulling and loading Docker images, check Install [ECE offline](../../../deploy-manage/deploy/cloud-enterprise/air-gapped-install.md). +- **Supported Docker / Podman version**. Make sure that you’re running a supported Docker or Podman version for the version of ECE that you are going to upgrade to. An overview of compatible versions can be found in the [support matrix](https://www.elastic.co/support/matrix#matrix_os&#elastic-cloud-enterprise) and install instructions are available under [Installing {{ece}}](../../../deploy-manage/deploy/cloud-enterprise/install.md). +- **Required user, roles and groups**. To run the script to upgrade {{ece}}, log in as the user used to run {{ece}} (by default called `elastic` with UID/GID 1000). Initiate the upgrade process by running the `elastic-cloud-enterprise.sh` script with the upgrade action on a single host. The host that the script is run on must be a host that holds the director role. You do not need to run the script on additional hosts. +- **Available disk space**. 
Each host in the {{ece}} installation must have at least 5 GB of disk space available to ensure that the upgrade process can complete successfully.
+- **Proxies and load balancing**. To avoid any downtime for {{ece}}, the installation must include more than one proxy and must use a load balancer as recommended. If only a single proxy is configured or if the installation is not using a load balancer, some downtime is expected when the containers on the proxies are upgraded. Each container upgrade typically takes five to ten seconds, multiplied by the number of containers on a typical host.
+- **For *offline* or *air-gapped* installations**. Additional steps are required to upgrade {{ece}}. After downloading the installation script for the new version, pull and load the required container images and push them to a private Docker registry. To learn more about pulling and loading Docker images, check [Install ECE offline](../../../deploy-manage/deploy/cloud-enterprise/air-gapped-install.md).
- Check the security cluster’s zone count. Due to internal limitations in ECE, the built-in security cluster cannot be scaled to two zones during the ECE upgrade procedure. If the zone count is set to 2 zones, scale the cluster to 3 or 1 zone(s) before upgrading ECE.
- **[Verify if you can upgrade directly](#ece-upgrade-version-matrix)**. When upgrading to ECE 4.0 or a higher version:
  - You need to first upgrade to ECE 3.8.0 or later. Refer to the [ECE version 3.8.0 upgrade instructions](https://www.elastic.co/guide/en/cloud-enterprise/3.8/ece-upgrade.html) for details.
@@ -89,17 +91,17 @@ Before starting the upgrade process, verify that your setup meets the following

Don’t manually upgrade your system deployments if you are on ECE version 2.7.0 or a later version, as it can cause issues and you may lose access to the Cloud UI. Note that the only exception to that rule is when you’re upgrading to ECE 3.6.0 and your system deployments are at a version lower than 7.17.0.
::: - - Ensure that your system deployments are at their [expected versions](/deploy-manage/deploy/cloud-enterprise/default-system-deployment-versions.md). Since ECE 3.6.1, the upgrade process automatically upgrades system deployments to the required version. If the Elastic Cloud Enterprise platform was upgraded successfully and yet one or more system deployments were not upgraded to [their expected version](/deploy-manage/deploy/cloud-enterprise/default-system-deployment-versions.md) during the very last phase of the Elastic Cloud Enterprise upgrade, you can re-run the `elastic-cloud-enterprise.sh upgrade --cloud-enterprise-version ` command to retry system deployment upgrade only. - - Check that your deployments are running on Elastic Stack version 7.0.0 or above. + - Ensure that your system deployments are at their [expected versions](/deploy-manage/deploy/cloud-enterprise/default-system-deployment-versions.md). Since ECE 3.6.1, the upgrade process automatically upgrades system deployments to the required version. If the {{ece}} platform was upgraded successfully and yet one or more system deployments were not upgraded to [their expected version](/deploy-manage/deploy/cloud-enterprise/default-system-deployment-versions.md) during the very last phase of the {{ece}} upgrade, you can re-run the `elastic-cloud-enterprise.sh upgrade --cloud-enterprise-version ` command to retry system deployment upgrade only. + - Check that your deployments are running on {{stack}} version 7.0.0 or above. - Before running the upgrade command, ensure that you include the same installation flags that were used during the initial setup. Some deployment configurations, such as those using Podman or SELinux, require specific flags to be passed again during the upgrade. Failure to do so may result in compatibility errors. 
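
As an illustration of the XFS quota requirement above, the corresponding `/etc/fstab` entry might look like the following (the block device name is hypothetical; `/mnt/data` is the default ECE data path):

```
# Hypothetical device; pquota and prjquota enable XFS project quotas
/dev/xvdb1  /mnt/data  xfs  defaults,pquota,prjquota  0 0
```

The options take effect the next time the volume is mounted.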
## Certificate rotation [ece-upgrade-certificates] -If your ECE installation is still using the default, auto-generated certificates, we recommend performing one of the following steps to avoid trust errors related to the proxy server certificate after the upgrade. The proxy server certificate is used when connecting to Kibana and Elasticsearch clusters. During the upgrade, the ECE certificate authority generates a new certificate. As with any server certificate rotation, you must add an exception for the new proxy server certificate, unless the certificate authority is present in the trust store of the system or browser. You can perform either of these steps before or after the upgrade: +If your ECE installation is still using the default, auto-generated certificates, we recommend performing one of the following steps to avoid trust errors related to the proxy server certificate after the upgrade. The proxy server certificate is used when connecting to {{kib}} and {{es}} clusters. During the upgrade, the ECE certificate authority generates a new certificate. As with any server certificate rotation, you must add an exception for the new proxy server certificate, unless the certificate authority is present in the trust store of the system or browser. You can perform either of these steps before or after the upgrade: -- Recommended: [Add your organization’s own certificate](../../../deploy-manage/security/secure-your-elastic-cloud-enterprise-installation/manage-security-certificates.md) to Elastic Cloud Enterprise. The upgrade process ensures that the certificates you add do not change, which avoids the trust errors. +- Recommended: [Add your organization’s own certificate](../../../deploy-manage/security/secure-your-elastic-cloud-enterprise-installation/manage-security-certificates.md) to {{ece}}. The upgrade process ensures that the certificates you add do not change, which avoids the trust errors. 
- Add the default CA certificate to the trust store of your system or of your browser. Only the server certificate changes during upgrade, but the CA certificate remains the same. Adding the CA certificate to your trust store alone is sufficient to avoid the trust errors.

-- Apply a valid license. It is required to have an `Enterprise resource unit`-compatible license applied before upgrading to ECE 2.7 or later. The most reliable way to check if your license is compatible is to use the Elastic Cloud Enterprise API and check the value of the license version field:
+- Apply a valid license. You must have an `Enterprise resource unit`-compatible license applied before upgrading to ECE 2.7 or later. The most reliable way to check if your license is compatible is to use the {{ece}} API and check the value of the license version field:

```sh
curl -X GET -u admin:PASSWORD -k https://COORDINATOR_HOST:12443/api/v1/platform/license
@@ -111,14 +113,14 @@ If your ECE installation is still using the default, auto-generated certificates
}
```

-If the license version is not 4 or higher, you must request an updated license from [Elastic Support](/troubleshoot/index.md#contact-us). Once you receive your new license, make sure Elastic Cloud Enterprise is upgraded to at least version 2.5.0, and then upload the new license in the Settings page under the Platform menu.
+If the license version is not 4 or higher, you must request an updated license from [Elastic Support](/troubleshoot/index.md#contact-us). Once you receive your new license, make sure {{ece}} is upgraded to at least version 2.5.0, and then upload the new license in the Settings page under the Platform menu.

In versions from 2.6 to 2.10 included, some or all platform certificates are generated with a 398-day expiration. Installations that ran on these versions, even temporarily, must have their certificates rotated manually before expiry.
For details, check our [KB article](https://support.elastic.co/knowledge/ECE-Certificate-Rotation). ## Perform the upgrade [ece-upgrade-steps] -To upgrade an Elastic Cloud Enterprise installation, download the latest installation script. Log in as the user used to run Elastic Cloud Enterprise (by default called `elastic` with UID/GID 1000), and run the script with the `upgrade` action on a single host that holds the director role: +To upgrade an {{ece}} installation, download the latest installation script. Log in as the user used to run {{ece}} (by default called `elastic` with UID/GID 1000), and run the script with the `upgrade` action on a single host that holds the director role: ::::{important} * If your ECE installation was set up using **Podman** instead of Docker, append the `--podman` flag when running the upgrade command. @@ -130,7 +132,7 @@ To upgrade an Elastic Cloud Enterprise installation, download the latest install bash <(curl -fsSL https://download.elastic.co/cloud/elastic-cloud-enterprise.sh) upgrade ``` -You can follow along while each container for Elastic Cloud Enterprise is upgraded on the hosts that are part of the installation. +You can follow along while each container for {{ece}} is upgraded on the hosts that are part of the installation. ## Upgrade to a specific version diff --git a/deploy-manage/upgrade/orchestrator/upgrade-cloud-on-k8s.md b/deploy-manage/upgrade/orchestrator/upgrade-cloud-on-k8s.md index 8817564099..8711c98f5b 100644 --- a/deploy-manage/upgrade/orchestrator/upgrade-cloud-on-k8s.md +++ b/deploy-manage/upgrade/orchestrator/upgrade-cloud-on-k8s.md @@ -6,16 +6,16 @@ applies_to: eck: ga 3.0.0 --- -# Upgrade Elastic Cloud on Kubernetes [k8s-upgrading-eck] +# Upgrade {{eck}} [k8s-upgrading-eck] This page provides instructions on how to upgrade the ECK operator. -To learn how to upgrade {{stack}} applications like {{es}} or {{kib}}, refer to [Upgrade the Elastic Stack version](../deployment-or-cluster.md). 
+To learn how to upgrade {{stack}} applications like {{es}} or {{kib}}, refer to [Upgrade the {{stack}} version](../deployment-or-cluster.md). ## Before you upgrade to ECK 3.0.0 [k8s-ga-upgrade] -The upgrade process results in an update to all the existing managed resources. This potentially triggers a rolling restart of all Elasticsearch and Kibana pods. This [list](#k8s-beta-to-ga-rolling-restart) details the affected target versions that will cause a rolling restart. If you have a large Elasticsearch cluster or multiple Elastic Stack deployments, the rolling restart could cause a performance degradation. When you plan to upgrade ECK for production workloads, take into consideration the time required to upgrade the ECK operator plus the time required to roll all managed workloads and Elasticsearch clusters. For more details on controlling rolling restarts during the upgrade, refer to the [control the rolling restarts during the upgrade](#k8s-beta-to-ga-rolling-restart) section. +The upgrade process results in an update to all the existing managed resources. This potentially triggers a rolling restart of all {{es}} and {{kib}} pods. This [list](#k8s-beta-to-ga-rolling-restart) details the affected target versions that will cause a rolling restart. If you have a large {{es}} cluster or multiple {{stack}} deployments, the rolling restart could cause a performance degradation. When you plan to upgrade ECK for production workloads, take into consideration the time required to upgrade the ECK operator plus the time required to roll all managed workloads and {{es}} clusters. For more details on controlling rolling restarts during the upgrade, refer to the [control the rolling restarts during the upgrade](#k8s-beta-to-ga-rolling-restart) section. Before upgrading, refer to the [release notes](cloud-on-k8s://release-notes/index.md) to make sure that the release does not contain any breaking changes that could affect you. 
The [release highlights document](cloud-on-k8s://release-notes/index.md) provides more details and possible workarounds for any breaking changes or known issues in each release. @@ -71,7 +71,7 @@ If you are using ECK through an OLM-managed distribution channel like [operatorh ### Upgrading from ECK 1.9 or earlier [k8s_upgrading_from_eck_1_9_or_earlier] -Operator Lifecycle Manager (OLM) and OpenShift OperatorHub users that run with automatic upgrades enabled, are advised to set the `set-default-security-context` [operator flag](/deploy-manage/deploy/cloud-on-k8s/configure-eck.md) explicitly before upgrading to ECK 2.0 or later. If not set, ECK can fail to [auto-detect](https://github.com/elastic/cloud-on-k8s/issues/5061) the correct security context configuration and Elasticsearch Pods may not be allowed to run. +Operator Lifecycle Manager (OLM) and OpenShift OperatorHub users that run with automatic upgrades enabled, are advised to set the `set-default-security-context` [operator flag](/deploy-manage/deploy/cloud-on-k8s/configure-eck.md) explicitly before upgrading to ECK 2.0 or later. If not set, ECK can fail to [auto-detect](https://github.com/elastic/cloud-on-k8s/issues/5061) the correct security context configuration and {{es}} Pods may not be allowed to run. ### Upgrading from ECK 2.0 or later [k8s_upgrading_from_eck_2_0_or_later] @@ -106,10 +106,10 @@ Stepping over one of these versions, for example, upgrading ECK from 2.6 to 2.9, :::: -If you have a very large Elasticsearch cluster or multiple Elastic Stack deployments, this rolling restart might be disruptive or inconvenient. To have more control over when the pods belonging to a particular deployment should be restarted, you can [add an annotation](../../../troubleshoot/deployments/cloud-on-k8s/troubleshooting-methods.md#k8s-exclude-resource) to the corresponding resources to temporarily exclude them from being managed by the operator. 
When the time is convenient, you can remove the annotation and let the rolling restart go through.
+If you have a very large {{es}} cluster or multiple {{stack}} deployments, this rolling restart might be disruptive or inconvenient. To have more control over when the pods belonging to a particular deployment should be restarted, you can [add an annotation](../../../troubleshoot/deployments/cloud-on-k8s/troubleshooting-methods.md#k8s-exclude-resource) to the corresponding resources to temporarily exclude them from being managed by the operator. When the time is convenient, you can remove the annotation and let the rolling restart go through.

::::{warning}
-Once a resource is excluded from being managed by ECK, you will not be able to add/remove nodes, upgrade Stack version, or perform other [orchestration tasks](../../deploy/cloud-on-k8s/configure-deployments.md) by updating the resource manifest. You must remember to remove the exclusion to ensure that your Elastic Stack deployment is continually monitored and managed by the operator.
+Once a resource is excluded from being managed by ECK, you will not be able to add/remove nodes, upgrade the {{stack}} version, or perform other [orchestration tasks](../../deploy/cloud-on-k8s/configure-deployments.md) by updating the resource manifest. You must remember to remove the exclusion to ensure that your {{stack}} deployment is continually monitored and managed by the operator.
::::

Exclude Elastic resources from being managed by the operator:

@@ -118,7 +118,7 @@ Exclude Elastic resources from being managed by the operator:

```shell
ANNOTATION='eck.k8s.elastic.co/managed=false'

# Exclude a single Elasticsearch resource named "quickstart"
kubectl annotate --overwrite elasticsearch quickstart $ANNOTATION

# Exclude all resources in the current namespace
@@ -133,7 +133,7 @@ Once the operator has been upgraded and you are ready to let the resource become

```shell
RM_ANNOTATION='eck.k8s.elastic.co/managed-'

# Resume management of a single Elasticsearch cluster named "quickstart"
kubectl annotate elasticsearch quickstart $RM_ANNOTATION
```

diff --git a/deploy-manage/upgrade/prepare-to-upgrade/upgrade-assistant.md b/deploy-manage/upgrade/prepare-to-upgrade/upgrade-assistant.md
index b7fc01ac59..bebc33ad87 100644
--- a/deploy-manage/upgrade/prepare-to-upgrade/upgrade-assistant.md
+++ b/deploy-manage/upgrade/prepare-to-upgrade/upgrade-assistant.md
@@ -17,16 +17,16 @@ The `manage` cluster privilege is required to access the **Upgrade assistant**.

## Feature set [_feature_set]

-Some features of the Upgrade assistant are only needed when upgrading to a new major version. The feature set enabled by default are those for the very next version from the one Kibana currently runs on.
+Some features of the Upgrade assistant are only needed when upgrading to a new major version. By default, the enabled feature set is the one for the next version after the one {{kib}} currently runs on.
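The deprecation sources that the Upgrade assistant reads from can also be queried directly. Below is a minimal sketch that scans a saved response from the {{es}} deprecation info API (`GET /_migration/deprecations`) for critical issues; the JSON is an illustrative shape only, and the host and credentials in the comment are placeholders:

```shell
# Illustrative response shape; fetch a real one with:
#   curl -u USER:PASSWORD "https://ES_HOST:9200/_migration/deprecations"
cat > /tmp/deprecations.json <<'EOF'
{"cluster_settings":[{"level":"warning","message":"example deprecation"}],"node_settings":[],"index_settings":{},"ml_settings":[]}
EOF

# Count critical deprecations; anything above zero should be resolved
# before upgrading.
CRITICAL=$(grep -o '"level":"critical"' /tmp/deprecations.json | wc -l)
echo "critical deprecations: $CRITICAL"
```

The same check can be scripted per index by iterating over the `index_settings` object in the response.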
## Deprecations [_deprecations] The Upgrade assistant pulls information about deprecations from the following sources: -* Elasticsearch Deprecation Info API -* Elasticsearch deprecation logs -* Kibana deprecations API +* {{es}} Deprecation Info API +* {{es}} deprecation logs +* {{kib}} deprecations API For more information about Upgrade Assistant APIs, refer to [Upgrade Assistant APIs](https://www.elastic.co/guide/en/kibana/current/upgrade-assistant-api.html). diff --git a/deploy-manage/users-roles.md b/deploy-manage/users-roles.md index c7a333c6c8..e8130cf022 100644 --- a/deploy-manage/users-roles.md +++ b/deploy-manage/users-roles.md @@ -98,11 +98,11 @@ deployment: self: all ``` -Set up authentication and authorization at the cluster or deployment level, and learn about the underlying security technologies that Elasticsearch uses to authenticate and authorize requests internally and across services. +Set up authentication and authorization at the cluster or deployment level, and learn about the underlying security technologies that {{es}} uses to authenticate and authorize requests internally and across services. ### User authentication -Set up methods to identify users to the Elasticsearch cluster. +Set up methods to identify users to the {{es}} cluster. Key tasks for managing user authentication include: @@ -110,7 +110,7 @@ Key tasks for managing user authentication include: * [Managing users natively](/deploy-manage/users-roles/cluster-or-deployment-auth/native.md) * [Integrating with external authentication providers](/deploy-manage/users-roles/cluster-or-deployment-auth/external-authentication.md) -You can also learn the basics of Elasticsearch authentication, learn about accounts used to communicate within an Elasticsearch cluster and across services, and perform advanced tasks. +You can also learn the basics of {{es}} authentication, learn about accounts used to communicate within an {{es}} cluster and across services, and perform advanced tasks. 
[View all user authentication docs](/deploy-manage/users-roles/cluster-or-deployment-auth/user-authentication.md) @@ -124,7 +124,7 @@ Key tasks for managing user authorization include: * [Mapping users and groups to roles](/deploy-manage/users-roles/cluster-or-deployment-auth/mapping-users-groups-to-roles.md) * [Setting up field- and document-level security](/deploy-manage/users-roles/cluster-or-deployment-auth/controlling-access-at-document-field-level.md) -You can also learn the basics of Elasticsearch authorization, and perform advanced tasks. +You can also learn the basics of {{es}} authorization, and perform advanced tasks. ::::{tip} User roles are also used to control access to [{{kib}} spaces](/deploy-manage/manage-spaces.md). diff --git a/deploy-manage/users-roles/cloud-enterprise-orchestrator.md b/deploy-manage/users-roles/cloud-enterprise-orchestrator.md index 00a762fd98..6da19b4d86 100644 --- a/deploy-manage/users-roles/cloud-enterprise-orchestrator.md +++ b/deploy-manage/users-roles/cloud-enterprise-orchestrator.md @@ -5,7 +5,7 @@ applies_to: ece: all --- -# Elastic Cloud Enterprise orchestrator users +# {{ece}} orchestrator users Control access to your {{ece}} [orchestrator](/deploy-manage/deploy/cloud-enterprise/deploy-an-orchestrator.md) and deployments. diff --git a/deploy-manage/users-roles/cloud-enterprise-orchestrator/manage-users-roles.md b/deploy-manage/users-roles/cloud-enterprise-orchestrator/manage-users-roles.md index 3bcbdf3c11..70320be480 100644 --- a/deploy-manage/users-roles/cloud-enterprise-orchestrator/manage-users-roles.md +++ b/deploy-manage/users-roles/cloud-enterprise-orchestrator/manage-users-roles.md @@ -32,7 +32,7 @@ With RBAC, interacting with API endpoints now requires a [bearer token](cloud:// ## Before you begin [ece_before_you_begin_8] -To prepare for RBAC, you should review the Elastic Cloud Enterprise [limitations and known issues](cloud://release-notes/cloud-enterprise/known-issues.md). 
+To prepare for RBAC, you should review the {{ece}} [limitations and known issues](cloud://release-notes/cloud-enterprise/known-issues.md). ## Available roles and permissions [ece-user-role-permissions] diff --git a/deploy-manage/users-roles/cloud-organization/configure-saml-authentication.md b/deploy-manage/users-roles/cloud-organization/configure-saml-authentication.md index 19ce231cbc..d2eca106b4 100644 --- a/deploy-manage/users-roles/cloud-organization/configure-saml-authentication.md +++ b/deploy-manage/users-roles/cloud-organization/configure-saml-authentication.md @@ -23,7 +23,7 @@ On this page, you'll learn the following: * The [risks and considerations for using SAML SSO](#ec_risks_and_considerations) * How to [implement and test SAML SSO](#set-up-sso) * How to [enforce SAML SSO](#enforce-sso) for your organization -* How to [map groups returned by your IdP to Elastic Cloud roles](#role-mappings) +* How to [map groups returned by your IdP to {{ecloud}} roles](#role-mappings) * How to [disable SAML SSO](#ec_disable_sso) For detailed examples of implementing SAML SSO using common identity providers, refer to the following topics: diff --git a/deploy-manage/users-roles/cluster-or-deployment-auth.md b/deploy-manage/users-roles/cluster-or-deployment-auth.md index 0a2d611e15..d5f1e729ca 100644 --- a/deploy-manage/users-roles/cluster-or-deployment-auth.md +++ b/deploy-manage/users-roles/cluster-or-deployment-auth.md @@ -15,17 +15,17 @@ applies_to: To prevent unauthorized access to your Elastic resources, you need a way to identify users and validate that a user is who they claim to be (*authentication*), and control what data users can access and what tasks they can perform (*authorization*). -In this section, you’ll learn how to set up authentication and authorization at the cluster or deployment level, and learn about the underlying security technologies that Elasticsearch uses to authenticate and authorize requests internally and across services. 
+In this section, you’ll learn how to set up authentication and authorization at the cluster or deployment level, and learn about the underlying security technologies that {{es}} uses to authenticate and authorize requests internally and across services. This section only covers direct access to and communications with an {{es}} cluster - sometimes known as a deployment - as well as the related {{kib}} instance. To learn about managing access to your {{ecloud}} organization or {{ece}} orchestrator, or to learn how to use single sign-on to access a cluster using your {{ecloud}} credentials, refer to [Manage users and roles](/deploy-manage/users-roles.md). ## Quickstart -If you plan to use native Elasticsearch user and role management, then [follow our quickstart](/deploy-manage/users-roles/cluster-or-deployment-auth/quickstart.md) to learn how to set up basic authentication and authorization features, including [spaces](/deploy-manage/manage-spaces.md), [roles](/deploy-manage/users-roles/cluster-or-deployment-auth/user-roles.md), and [native users](/deploy-manage/users-roles/cluster-or-deployment-auth/native.md). +If you plan to use native {{es}} user and role management, then [follow our quickstart](/deploy-manage/users-roles/cluster-or-deployment-auth/quickstart.md) to learn how to set up basic authentication and authorization features, including [spaces](/deploy-manage/manage-spaces.md), [roles](/deploy-manage/users-roles/cluster-or-deployment-auth/user-roles.md), and [native users](/deploy-manage/users-roles/cluster-or-deployment-auth/native.md). ### User authentication -Set up methods to identify users to the Elasticsearch cluster. +Set up methods to identify users to the {{es}} cluster. 
Key tasks for managing user authentication include: @@ -33,7 +33,7 @@ Key tasks for managing user authentication include: * [Managing users natively](/deploy-manage/users-roles/cluster-or-deployment-auth/native.md) * [Integrating with external authentication providers](/deploy-manage/users-roles/cluster-or-deployment-auth/external-authentication.md) -You can also learn the basics of Elasticsearch authentication, learn about accounts used to communicate within an Elasticsearch cluster and across services, and perform advanced tasks. +You can also learn the basics of {{es}} authentication, learn about accounts used to communicate within an {{es}} cluster and across services, and perform advanced tasks. [View all user authentication docs](/deploy-manage/users-roles/cluster-or-deployment-auth/user-authentication.md) @@ -48,7 +48,7 @@ Key tasks for managing user authorization include: * Creating [mappings of users and groups to roles](/deploy-manage/users-roles/cluster-or-deployment-auth/mapping-users-groups-to-roles.md) for external authentication providers * [Setting up field- and document-level security](/deploy-manage/users-roles/cluster-or-deployment-auth/controlling-access-at-document-field-level.md) -You can also learn the basics of Elasticsearch authorization, and perform advanced tasks. +You can also learn the basics of {{es}} authorization, and perform advanced tasks. ::::{tip} User roles are also used to control access to [{{kib}} spaces](/deploy-manage/manage-spaces.md). 
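To make the field- and document-level security controls concrete, here is a hedged sketch of a role body that grants read access to a hypothetical index pattern while restricting which fields and documents are visible. The role name, index pattern, fields, and query are all illustrative; the body would be sent to the cluster with `POST /_security/role/events_reader`:

```shell
# Hypothetical role: read-only on events-*, limited to three fields and to
# documents where department == "engineering".
cat > /tmp/events_reader_role.json <<'EOF'
{
  "indices": [
    {
      "names": ["events-*"],
      "privileges": ["read"],
      "field_security": { "grant": ["@timestamp", "message", "user.name"] },
      "query": { "term": { "department": "engineering" } }
    }
  ]
}
EOF

# Validate the JSON locally before sending it to the cluster.
python3 -m json.tool /tmp/events_reader_role.json > /dev/null && echo "role body is valid JSON"
```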
diff --git a/deploy-manage/users-roles/cluster-or-deployment-auth/built-in-roles.md b/deploy-manage/users-roles/cluster-or-deployment-auth/built-in-roles.md index 48ac31112d..7e6814b4fa 100644 --- a/deploy-manage/users-roles/cluster-or-deployment-auth/built-in-roles.md +++ b/deploy-manage/users-roles/cluster-or-deployment-auth/built-in-roles.md @@ -104,7 +104,7 @@ $$$built-in-roles-ml-user$$$ `machine_learning_user` : Grants the minimum privileges required to view {{ml}} configuration, status, and work with results. This role grants `monitor_ml` cluster privileges, read access to the `.ml-notifications` and `.ml-anomalies*` indices (which store {{ml}} results), and write access to `.ml-annotations*` indices. {{ml-cap}} users also need index privileges for source and destination indices and roles that grant access to {{kib}}. See [{{ml-cap}} security privileges](../../../explore-analyze/machine-learning/setting-up-machine-learning.md#setup-privileges). $$$built-in-roles-monitoring-user$$$ `monitoring_user` -: Grants the minimum privileges required for any user of {{monitoring}} other than those required to use {{kib}}. This role grants access to the monitoring indices and grants privileges necessary for reading basic cluster information. This role also includes all [Kibana privileges](../../../deploy-manage/users-roles/cluster-or-deployment-auth/kibana-privileges.md) for the {{stack-monitor-features}}. Monitoring users should also be assigned the `kibana_admin` role, or another role with [access to the {{kib}} instance](../../../deploy-manage/users-roles/cluster-or-deployment-auth/built-in-roles.md). +: Grants the minimum privileges required for any user of {{monitoring}} other than those required to use {{kib}}. This role grants access to the monitoring indices and grants privileges necessary for reading basic cluster information. 
This role also includes all [{{kib}} privileges](../../../deploy-manage/users-roles/cluster-or-deployment-auth/kibana-privileges.md) for the {{stack-monitor-features}}. Monitoring users should also be assigned the `kibana_admin` role, or another role with [access to the {{kib}} instance](../../../deploy-manage/users-roles/cluster-or-deployment-auth/built-in-roles.md). $$$built-in-roles-remote-monitoring-agent$$$ `remote_monitoring_agent` : Grants the minimum privileges required to write data into the monitoring indices (`.monitoring-*`). This role also has the privileges necessary to create {{metricbeat}} indices (`metricbeat-*`) and write data into them. @@ -113,7 +113,7 @@ $$$built-in-roles-remote-monitoring-collector$$$ `remote_monitoring_collector` : Grants the minimum privileges required to collect monitoring data for the {{stack}}. $$$built-in-roles-reporting-user$$$ `reporting_user` -: Grants the necessary privileges required to use {{reporting}} features in {{kib}}, including generating and downloading reports. This role implicitly grants access to all Kibana reporting features, with each user having access only to their own reports. Note that reporting users should also be assigned additional roles that grant read access to the [indices](/deploy-manage/users-roles/cluster-or-deployment-auth/role-structure.md#roles-indices-priv) that will be used to generate reports. +: Grants the necessary privileges required to use {{reporting}} features in {{kib}}, including generating and downloading reports. This role implicitly grants access to all {{kib}} reporting features, with each user having access only to their own reports. Note that reporting users should also be assigned additional roles that grant read access to the [indices](/deploy-manage/users-roles/cluster-or-deployment-auth/role-structure.md#roles-indices-priv) that will be used to generate reports. 
$$$built-in-roles-rollup-admin$$$ `rollup_admin` : Grants `manage_rollup` cluster privileges, which enable you to manage and execute all rollup actions. diff --git a/deploy-manage/users-roles/cluster-or-deployment-auth/built-in-sm.md b/deploy-manage/users-roles/cluster-or-deployment-auth/built-in-sm.md index d49a167060..4a5618f760 100644 --- a/deploy-manage/users-roles/cluster-or-deployment-auth/built-in-sm.md +++ b/deploy-manage/users-roles/cluster-or-deployment-auth/built-in-sm.md @@ -47,7 +47,7 @@ POST /_security/user/user1/_password ## Using the `user` API [native-users-api] -You can manage users through the Elasticsearch `user` API. +You can manage users through the {{es}} `user` API. For example, you can change a user's password: diff --git a/deploy-manage/users-roles/cluster-or-deployment-auth/built-in-users.md b/deploy-manage/users-roles/cluster-or-deployment-auth/built-in-users.md index 8dc2e7e192..b8ed6f8a91 100644 --- a/deploy-manage/users-roles/cluster-or-deployment-auth/built-in-users.md +++ b/deploy-manage/users-roles/cluster-or-deployment-auth/built-in-users.md @@ -31,7 +31,7 @@ The following built-in users are available: Anyone who can log in as the `elastic` user has direct read-only access to restricted indices, such as `.security`. This user also has the ability to manage security and create roles with unlimited privileges. `kibana_system` -: The user Kibana uses to connect and communicate with {{es}}. +: The user {{kib}} uses to connect and communicate with {{es}}. `logstash_system` : The user Logstash uses when storing monitoring information in {{es}}. 
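The `user` API calls above lend themselves to scripting. A minimal sketch that generates a strong password for a native user and prints the request that would apply it; the host, admin credentials, and the `user1` name are placeholders, so the request is echoed rather than executed:

```shell
# Generate a random 32-character password for the rotation.
NEW_PW=$(openssl rand -base64 24)
echo "generated a ${#NEW_PW}-character password"

# The request that would apply it (shown, not sent):
echo curl -u elastic:PASSWORD -X POST \
  "https://ES_HOST:9200/_security/user/user1/_password" \
  -H "Content-Type: application/json" \
  -d "{\"password\":\"${NEW_PW}\"}"
```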
diff --git a/deploy-manage/users-roles/cluster-or-deployment-auth/elasticsearch-privileges.md b/deploy-manage/users-roles/cluster-or-deployment-auth/elasticsearch-privileges.md index f433733eaa..6742daca1a 100644 --- a/deploy-manage/users-roles/cluster-or-deployment-auth/elasticsearch-privileges.md +++ b/deploy-manage/users-roles/cluster-or-deployment-auth/elasticsearch-privileges.md @@ -9,7 +9,7 @@ applies_to: self: --- -# Elasticsearch privileges [security-privileges] +# {{es}} privileges [security-privileges] This section lists the privileges that you can assign to a role. diff --git a/deploy-manage/users-roles/cluster-or-deployment-auth/kibana-authentication.md b/deploy-manage/users-roles/cluster-or-deployment-auth/kibana-authentication.md index 8b725c232b..c846945f75 100644 --- a/deploy-manage/users-roles/cluster-or-deployment-auth/kibana-authentication.md +++ b/deploy-manage/users-roles/cluster-or-deployment-auth/kibana-authentication.md @@ -1,5 +1,5 @@ --- -navigation_title: "Kibana authentication" +navigation_title: "{{kib}} authentication" applies_to: deployment: ess: @@ -158,7 +158,7 @@ Note that with `server.ssl.clientAuthentication` set to `required`, users are as ## SAML single sign-on [saml] -SAML authentication is part of single sign-on (SSO), a [subscription feature](https://www.elastic.co/subscriptions). This allows users to log in to {{kib}} with an external Identity Provider, such as Okta or Auth0. Make sure that SAML is enabled and configured in {{es}} before setting it up in {{kib}}. See [Configuring SAML single sign-on on the Elastic Stack](/deploy-manage/users-roles/cluster-or-deployment-auth/saml.md). +SAML authentication is part of single sign-on (SSO), a [subscription feature](https://www.elastic.co/subscriptions). This allows users to log in to {{kib}} with an external Identity Provider, such as Okta or Auth0. Make sure that SAML is enabled and configured in {{es}} before setting it up in {{kib}}. 
See [Configuring SAML single sign-on on the {{stack}}](/deploy-manage/users-roles/cluster-or-deployment-auth/saml.md). Enable SAML authentication by specifying which SAML realm in {{es}} should be used: diff --git a/deploy-manage/users-roles/cluster-or-deployment-auth/kibana-privileges.md b/deploy-manage/users-roles/cluster-or-deployment-auth/kibana-privileges.md index 1413f0ae2e..349f8e4286 100644 --- a/deploy-manage/users-roles/cluster-or-deployment-auth/kibana-privileges.md +++ b/deploy-manage/users-roles/cluster-or-deployment-auth/kibana-privileges.md @@ -9,7 +9,7 @@ applies_to: self: --- -# Kibana privileges [kibana-privileges] +# {{kib}} privileges [kibana-privileges] {{kib}} privileges grant users access to features within {{kib}}. Roles have privileges to determine whether users have write or read access. diff --git a/deploy-manage/users-roles/cluster-or-deployment-auth/ldap.md b/deploy-manage/users-roles/cluster-or-deployment-auth/ldap.md index 838cab6969..5300bcecbe 100644 --- a/deploy-manage/users-roles/cluster-or-deployment-auth/ldap.md +++ b/deploy-manage/users-roles/cluster-or-deployment-auth/ldap.md @@ -314,4 +314,4 @@ By default, when you configure {{es}} to connect to an LDAP server using SSL/TLS The LDAP security realm uses the {{kib}}-provided [basic authentication](/deploy-manage/users-roles/cluster-or-deployment-auth/kibana-authentication.md#basic-authentication) login form. Basic authentication is enabled by default. -You can also use LDAP with [token authentication](/deploy-manage/users-roles/cluster-or-deployment-auth/kibana-authentication.md#token-authentication) in Kibana. \ No newline at end of file +You can also use LDAP with [token authentication](/deploy-manage/users-roles/cluster-or-deployment-auth/kibana-authentication.md#token-authentication) in {{kib}}. 
\ No newline at end of file diff --git a/deploy-manage/users-roles/cluster-or-deployment-auth/manage-elastic-user-cloud.md b/deploy-manage/users-roles/cluster-or-deployment-auth/manage-elastic-user-cloud.md index f52cd9aab6..1154121d74 100644 --- a/deploy-manage/users-roles/cluster-or-deployment-auth/manage-elastic-user-cloud.md +++ b/deploy-manage/users-roles/cluster-or-deployment-auth/manage-elastic-user-cloud.md @@ -12,7 +12,7 @@ navigation_title: ECH and ECE # Reset the `elastic` user password in {{ech}} and {{ece}} [ec-password-reset] -You might need to reset the password for the `elastic` superuser if you can't authenticate with the `elastic` user ID and are effectively locked out from an Elasticsearch cluster or Kibana. +You might need to reset the password for the `elastic` superuser if you can't authenticate with the `elastic` user ID and are effectively locked out from an {{es}} cluster or {{kib}}. ::::{note} Elastic does not manage the `elastic` user and does not have access to the account or its credentials. If you lose the password, you have to reset it. diff --git a/deploy-manage/users-roles/cluster-or-deployment-auth/managed-credentials-eck.md b/deploy-manage/users-roles/cluster-or-deployment-auth/managed-credentials-eck.md index 5d99c1430e..c463ae7cfc 100644 --- a/deploy-manage/users-roles/cluster-or-deployment-auth/managed-credentials-eck.md +++ b/deploy-manage/users-roles/cluster-or-deployment-auth/managed-credentials-eck.md @@ -57,7 +57,7 @@ kubectl delete secret quickstart-es-elastic-user ``` ::::{warning} -If you are using the `elastic` user credentials in your own applications, they will fail to connect to {{es}} and Kibana after you run this command. It is not recommended to use `elastic` user credentials for production use cases. Always [create your own users with restricted roles](../../../deploy-manage/users-roles/cluster-or-deployment-auth/native.md) to access Elasticsearch. 
+If you are using the `elastic` user credentials in your own applications, they will fail to connect to {{es}} and {{kib}} after you run this command. It is not recommended to use `elastic` user credentials for production use cases. Always [create your own users with restricted roles](../../../deploy-manage/users-roles/cluster-or-deployment-auth/native.md) to access {{es}}.

::::


diff --git a/deploy-manage/users-roles/cluster-or-deployment-auth/native.md b/deploy-manage/users-roles/cluster-or-deployment-auth/native.md
index fa8c848f99..30f2bde639 100644
--- a/deploy-manage/users-roles/cluster-or-deployment-auth/native.md
+++ b/deploy-manage/users-roles/cluster-or-deployment-auth/native.md
@@ -15,7 +15,7 @@ navigation_title: "Native"

# Native user authentication [native-realm]

-The easiest way to manage and authenticate users is with the internal `native` realm. You can use [Elasticsearch REST APIs](#native-users-api) or [Kibana](#managing-native-users) to add and remove users, assign user roles, and manage user passwords.
+The easiest way to manage and authenticate users is with the internal `native` realm. You can use [{{es}} REST APIs](#native-users-api) or [{{kib}}](#managing-native-users) to add and remove users, assign user roles, and manage user passwords.

In self-managed {{es}} clusters, you can also reset passwords for users in the native realm [using the command line](#reset-pw-cmd-line).

@@ -80,7 +80,7 @@ Elastic enables you to easily manage users in {{kib}} on the **Stack Management

## Manage native users using the `user` API [native-users-api]

-You can manage users through the Elasticsearch `user` API.
For example, you can change a user's password: diff --git a/deploy-manage/users-roles/cluster-or-deployment-auth/oidc-examples.md b/deploy-manage/users-roles/cluster-or-deployment-auth/oidc-examples.md index b335e0c110..6549e45399 100644 --- a/deploy-manage/users-roles/cluster-or-deployment-auth/oidc-examples.md +++ b/deploy-manage/users-roles/cluster-or-deployment-auth/oidc-examples.md @@ -12,7 +12,7 @@ applies_to: # Set up OpenID Connect with Azure, Google, or Okta [ec-securing-clusters-oidc-op] -This page explains how to implement OIDC, from the OAuth client credentials generation to the realm configuration for Elasticsearch and Kibana, with the following OpenID Connect Providers (OPs): +This page explains how to implement OIDC, from the OAuth client credentials generation to the realm configuration for {{es}} and {{kib}}, with the following OpenID Connect Providers (OPs): * [Azure](#ec-securing-oidc-azure) * [Google](#ec-securing-oidc-google) @@ -62,7 +62,7 @@ For more information about OpenID connect in Azure, refer to [Azure OAuth 2.0 an For OIDC, the client secret setting name in the keystore should be in the form `xpack.security.authc.realms.oidc..rp.client_secret`. -3. Configure Elasticsearch with the OIDC realm. +3. Configure {{es}} with the OIDC realm. To learn more about the available endpoints provided by Microsoft Azure, refer to the **Endpoints** details in the application that you configured. 
@@ -70,7 +70,7 @@ For more information about OpenID connect in Azure, refer to [Azure OAuth 2.0 an :alt: A screenshot of the Azure Endpoints dialog with fields for Display Name ::: - To configure Elasticsearch for OIDC, [update your Elasticsearch user settings](../../../deploy-manage/deploy/elastic-cloud/edit-stack-settings.md) with the following configuration: + To configure {{es}} for OIDC, [update your {{es}} user settings](../../../deploy-manage/deploy/elastic-cloud/edit-stack-settings.md) with the following configuration: ```sh xpack: @@ -99,7 +99,7 @@ For more information about OpenID connect in Azure, refer to [Azure OAuth 2.0 an * `` is your Client ID, available in the application details on Azure. * `` is your Directory ID, available in the application details on Azure. - * `KIBANA_ENDPOINT_URL` is your Kibana endpoint. + * `KIBANA_ENDPOINT_URL` is your {{kib}} endpoint. * `YOUR_DOMAIN` and `TLD` in the `claim_patterns.principal` regular expression are your organization email domain and top level domain. @@ -138,7 +138,7 @@ For more information about OpenID connect in Azure, refer to [Azure OAuth 2.0 an If you use an email in the `claim_patterns.principal`, you won’t need to add the domain in the role_mapping (for example, `firstname.lastname@your_domain.tld` should be `firstname.lastname`). -5. Configure Kibana with the OIDC realm. [Update your Kibana user settings](../../../deploy-manage/deploy/elastic-cloud/edit-stack-settings.md) with the following configuration: +5. Configure {{kib}} with the OIDC realm. [Update your {{kib}} user settings](../../../deploy-manage/deploy/elastic-cloud/edit-stack-settings.md) with the following configuration: ```sh xpack.security.authc.providers: @@ -183,11 +183,11 @@ For more information about OpenID connect in Google, refer to [Google OpenID Con For OIDC, the client secret setting name in the keystore should be in the form `xpack.security.authc.realms.oidc..rp.client_secret`. -3. 
Configure Elasticsearch with the OIDC realm. +3. Configure {{es}} with the OIDC realm. To learn more about the endpoints provided by Google, refer to this [OpenID configuration](https://accounts.google.com/.well-known/openid-configuration). - To configure Elasticsearch for OIDC, [update your Elasticsearch user settings](../../../deploy-manage/deploy/elastic-cloud/edit-stack-settings.md) with the following configuration: + To configure {{es}} for OIDC, [update your {{es}} user settings](../../../deploy-manage/deploy/elastic-cloud/edit-stack-settings.md) with the following configuration: ```sh xpack: @@ -213,7 +213,7 @@ For more information about OpenID connect in Google, refer to [Google OpenID Con Where: * `YOUR_CLIENT_ID` is your Client ID. - * `/api/security/oidc/callback` is your Kibana endpoint. + * `/api/security/oidc/callback` is your {{kib}} endpoint. It will typically be `/api/security/oidc/callback`, where `` is the base URL for your {{kib}} instance. @@ -256,7 +256,7 @@ For more information about OpenID connect in Google, refer to [Google OpenID Con If you use an email in the `claim_patterns.principal`, you won’t need to add the domain in the role_mapping (for example, `firstname.lastname@your_domain.tld` should be `firstname.lastname`). -2. Configure Kibana with the OIDC realm. [Update your Kibana user settings](../../../deploy-manage/deploy/elastic-cloud/edit-stack-settings.md) with the following configuration: +2. Configure {{kib}} with the OIDC realm. [Update your {{kib}} user settings](../../../deploy-manage/deploy/elastic-cloud/edit-stack-settings.md) with the following configuration: ```sh xpack.security.authc.providers: @@ -285,7 +285,7 @@ For more information about OpenID connect in Okta, refer to [Okta OAuth 2.0 docu ::: 2. For the **Platform** page settings, select **Web** then **Next**. - 3. In the **Application settings** choose a **Name** for your application, for example `Kibana OIDC`. + 3. 
In the **Application settings** choose a **Name** for your application, for example `Kibana OIDC`. 4. Set the **Base URI** to `KIBANA_ENDPOINT_URL`. 5. Set the **Login redirect URI**. @@ -299,11 +299,11 @@ For more information about OpenID connect in Okta, refer to [Okta OAuth 2.0 docu For OIDC, the client secret setting name in the keystore should be in the form `xpack.security.authc.realms.oidc..rp.client_secret`. -3. Configure Elasticsearch with the OIDC realm. +3. Configure {{es}} with the OIDC realm. To learn more about the available endpoints provided by Okta, refer to the following OpenID configuration: `https://{{yourOktadomain}}/.well-known/openid-configuration` - To configure Elasticsearch for OIDC, [update your Elasticsearch user settings](../../../deploy-manage/deploy/elastic-cloud/edit-stack-settings.md) with the following configuration: + To configure {{es}} for OIDC, [update your {{es}} user settings](../../../deploy-manage/deploy/elastic-cloud/edit-stack-settings.md) with the following configuration: ```sh xpack: @@ -330,7 +330,7 @@ For more information about OpenID connect in Okta, refer to [Okta OAuth 2.0 docu Where: * `YOUR_CLIENT_ID` is the Client ID that you set up in the previous steps. - * `KIBANA_ENDPOINT_URL` is your Kibana endpoint, available from the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body). + * `KIBANA_ENDPOINT_URL` is your {{kib}} endpoint, available from the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body). * `YOUR_OKTA_DOMAIN` is the URL of your Okta domain shown on your Okta dashboard. * `YOUR_DOMAIN` and `TLD` in the `claim_patterns.principal` regular expression are your organization email domain and top level domain. 
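The `claim_patterns.principal` regular expression referenced in these hunks captures the local part of the user's email claim as the principal, which is why the role-mapping usernames drop the domain. As a rough, illustrative sketch (not Elastic code; `example.com` is a hypothetical stand-in for `YOUR_DOMAIN.TLD`):

```python
import re

# Illustrative only: a claim pattern such as
#   principal: "^([^@]+)@YOUR_DOMAIN\\.TLD$"
# keeps whatever precedes the "@" in the email claim as the principal.
claim_pattern = re.compile(r"^([^@]+)@example\.com$")

def extract_principal(email_claim):
    """Return the captured principal, or None when the claim does not match."""
    match = claim_pattern.match(email_claim)
    return match.group(1) if match else None

print(extract_principal("firstname.lastname@example.com"))  # firstname.lastname
print(extract_principal("someone@other.org"))                # None
```

This mirrors the note above that `firstname.lastname@your_domain.tld` maps to `firstname.lastname` in role mappings when an email is used in `claim_patterns.principal`.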
diff --git a/deploy-manage/users-roles/cluster-or-deployment-auth/openid-connect.md b/deploy-manage/users-roles/cluster-or-deployment-auth/openid-connect.md index b404ebeb40..8f7c37b776 100644 --- a/deploy-manage/users-roles/cluster-or-deployment-auth/openid-connect.md +++ b/deploy-manage/users-roles/cluster-or-deployment-auth/openid-connect.md @@ -350,7 +350,7 @@ You can't use [role mapping files](/deploy-manage/users-roles/cluster-or-deploym ### Example: using the role mapping API -If you want all your users authenticating with OpenID Connect to get access to Kibana, issue the following request to Elasticsearch: +If you want all your users authenticating with OpenID Connect to get access to {{kib}}, issue the following request to {{es}}: ```sh POST /_security/role_mapping/CLOUD_OIDC_TO_KIBANA_ADMIN <1> @@ -440,10 +440,10 @@ xpack.security.authc.providers: The configuration values used in the example above are: `xpack.security.authc.providers` -: Add an `oidc` provider to instruct {{kib}} to use OpenID Connect single sign-on as the authentication method. This instructs Kibana to attempt to initiate an SSO flow every time a user attempts to access a URL in {{kib}}, if the user is not already authenticated. +: Add an `oidc` provider to instruct {{kib}} to use OpenID Connect single sign-on as the authentication method. This instructs {{kib}} to attempt to initiate an SSO flow every time a user attempts to access a URL in {{kib}}, if the user is not already authenticated. `xpack.security.authc.providers.oidc..realm` -: The name of the OpenID Connect realm in {{es}} that should handle authentication for this Kibana instance. +: The name of the OpenID Connect realm in {{es}} that should handle authentication for this {{kib}} instance. 
### Supporting OIDC and basic authentication in {{kib}} @@ -529,7 +529,7 @@ On a high level, the custom web application would need to perform the following } ``` - Elasticsearch will validate this and if all is correct will respond with an access token that can be used as a `Bearer` token for subsequent requests and a refresh token that can be later used to refresh the given access token as described in [Get token](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-get-token). + {{es}} will validate this and if all is correct will respond with an access token that can be used as a `Bearer` token for subsequent requests and a refresh token that can be later used to refresh the given access token as described in [Get token](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-get-token). 4. At some point, if necessary, the custom web application can log the user out by using the [OIDC logout API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-oidc-logout) passing the access token and refresh token as parameters. For example: diff --git a/deploy-manage/users-roles/cluster-or-deployment-auth/quickstart.md b/deploy-manage/users-roles/cluster-or-deployment-auth/quickstart.md index 0ff51f7c19..2d16ea23a1 100644 --- a/deploy-manage/users-roles/cluster-or-deployment-auth/quickstart.md +++ b/deploy-manage/users-roles/cluster-or-deployment-auth/quickstart.md @@ -11,7 +11,7 @@ applies_to: # Quickstart [tutorial-secure-access-to-kibana] -If you plan to use native Elasticsearch user and role management, then you can manage your users and roles completely within your {{kib}} instance. +If you plan to use native {{es}} user and role management, then you can manage your users and roles completely within your {{kib}} instance. You can use native access management features to give your users access to only the surfaces and features they need. 
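The role-mapping API payloads touched in this change set combine `all`, `any`, and `field` clauses over user properties such as `realm.name` and `groups`. The matching behavior can be approximated by the following sketch (illustrative only, not Elastic's implementation; the real engine also supports `except` clauses, arrays, and wildcard values):

```python
def matches(rule, user):
    """Approximate evaluation of a role-mapping rule against user properties.

    Illustrative sketch: real field clauses also accept arrays and wildcards.
    """
    if "all" in rule:
        return all(matches(sub, user) for sub in rule["all"])
    if "any" in rule:
        return any(matches(sub, user) for sub in rule["any"])
    if "field" in rule:
        # A field clause matches when the user property equals the given value.
        (key, expected), = rule["field"].items()
        return user.get(key) == expected
    raise ValueError(f"unsupported rule: {rule}")

# Shaped like the SAML role-mapping example in this change set.
rule = {"all": [
    {"field": {"realm.name": "saml1"}},
    {"field": {"groups": "elasticadmins"}},
]}

print(matches(rule, {"realm.name": "saml1", "groups": "elasticadmins"}))  # True
print(matches(rule, {"realm.name": "native1"}))                           # False
```

A user must satisfy every clause under `all` (and at least one under `any`) for the mapped roles to be granted.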
For example, some users might only need to view your dashboards, while others might need to manage your fleet of Elastic agents and run machine learning jobs to detect anomalous behavior in your network. @@ -30,13 +30,13 @@ Refer to the [Spaces documentation](/deploy-manage/manage-spaces.md) for more in ## Roles [_roles] -After your spaces are set up, the next step to securing access is to provision your roles. Roles are a collection of privileges that allow you to perform actions in {{kib}} and Elasticsearch. Roles are assigned to users, and to [system accounts](built-in-users.md) that power the Elastic Stack. +After your spaces are set up, the next step to securing access is to provision your roles. Roles are a collection of privileges that allow you to perform actions in {{kib}} and {{es}}. Roles are assigned to users, and to [system accounts](built-in-users.md) that power the {{stack}}. -You can create your own roles, or use any of the [built-in roles](built-in-roles.md). Some built-in roles are intended for Elastic Stack components and should not be assigned to end users directly. +You can create your own roles, or use any of the [built-in roles](built-in-roles.md). Some built-in roles are intended for {{stack}} components and should not be assigned to end users directly. An example of a built-in role is `kibana_admin`. Assigning this role to your users will grant access to all of {{kib}}'s features. This includes the ability to manage spaces. -Built-in roles are great for getting started with the Elastic Stack, and for system administrators who do not need more restrictive access. However, if you need to control access with more precision, you can create [custom roles](/deploy-manage/users-roles/cluster-or-deployment-auth/defining-roles.md). +Built-in roles are great for getting started with the {{stack}}, and for system administrators who do not need more restrictive access. 
However, if you need to control access with more precision, you can create [custom roles](/deploy-manage/users-roles/cluster-or-deployment-auth/defining-roles.md). As an administrator, you have the ability to create your own roles to describe exactly the kind of access your users should have. For example, you might create a `marketing_user` role, which you then assign to all users in your marketing department. This role would grant access to all of the necessary data and features for this team to be successful, without granting them access they don’t require. @@ -99,7 +99,7 @@ To create the role: 1. From the **Spaces** dropdown, select the `Marketing` space. 2. Expand the **Analytics** section, and select the **Read** privilege for **Dashboard**. - 3. Click **Add Kibana privilege**. + 3. Click **Add {{kib}} privilege**. 6. Click **Create role**. diff --git a/deploy-manage/users-roles/cluster-or-deployment-auth/saml.md b/deploy-manage/users-roles/cluster-or-deployment-auth/saml.md index 223b4a0188..0fbf9a7e96 100644 --- a/deploy-manage/users-roles/cluster-or-deployment-auth/saml.md +++ b/deploy-manage/users-roles/cluster-or-deployment-auth/saml.md @@ -38,7 +38,7 @@ To configure SAML, you need to perform the following steps: 1. [Configure the prerequisites](#prerequisites) 2. [Create one or more SAML realms](#saml-create-realm) 3. [Configure role mappings](#saml-role-mapping) -4. [Configure Kibana to use SAML as the authentication provider](#saml-configure-kibana) +4. [Configure {{kib}} to use SAML as the authentication provider](#saml-configure-kibana) Additional steps outlined in this document are optional. @@ -591,7 +591,7 @@ PUT /_security/role_mapping/saml-example } ``` -1. The `example_role` role is **not** a builtin Elasticsearch role. 
This example assumes that you have created a custom role of your own, with appropriate access to your [data streams, indices,](/deploy-manage/users-roles/cluster-or-deployment-auth/role-structure.md#roles-indices-priv) and [Kibana features](/deploy-manage/users-roles/cluster-or-deployment-auth/kibana-privileges.md#kibana-feature-privileges). +1. The `example_role` role is **not** a built-in {{es}} role. This example assumes that you have created a custom role of your own, with appropriate access to your [data streams, indices,](/deploy-manage/users-roles/cluster-or-deployment-auth/role-structure.md#roles-indices-priv) and [{{kib}} features](/deploy-manage/users-roles/cluster-or-deployment-auth/kibana-privileges.md#kibana-feature-privileges). ### Example: Role mapping API, using SAML attributes @@ -668,7 +668,7 @@ The configuration values used in the example above are: : Add `saml` provider to instruct {{kib}} to use SAML SSO as the authentication method. `xpack.security.authc.providers.saml..realm` -: Set this to the name of the SAML realm that you have used in your [Elasticsearch realm configuration](/deploy-manage/users-roles/cluster-or-deployment-auth/saml.md#saml-create-realm), for instance: `saml1` +: Set this to the name of the SAML realm that you have used in your [{{es}} realm configuration](/deploy-manage/users-roles/cluster-or-deployment-auth/saml.md#saml-create-realm), for instance: `saml1` ### Supporting SAML and basic authentication in {{kib}} [saml-kibana-basic] @@ -795,7 +795,7 @@ On a high level, the custom web application would need to perform the following } ``` - Elasticsearch will validate this and if all is correct will respond with an access token that can be used as a `Bearer` token for subsequent requests. It also supplies a refresh token that can be later used to refresh the given access token as described in [get token API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-get-token). 
+ {{es}} will validate this and if all is correct will respond with an access token that can be used as a `Bearer` token for subsequent requests. It also supplies a refresh token that can be later used to refresh the given access token as described in [get token API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-get-token). 4. The response to calling `/_security/saml/authenticate` will contain only the username of the authenticated user. If you need to get the values for the SAML Attributes that were contained in the SAML Response for that user, you can call the Authenticate API `/_security/_authenticate/` using the access token as a `Bearer` token and the SAML attribute values will be contained in the response as part of the [User metadata](/deploy-manage/users-roles/cluster-or-deployment-auth/saml.md#saml-user-metadata). diff --git a/deploy-manage/users-roles/cluster-or-deployment-auth/user-authentication.md b/deploy-manage/users-roles/cluster-or-deployment-auth/user-authentication.md index bc928245db..1e00c038e7 100644 --- a/deploy-manage/users-roles/cluster-or-deployment-auth/user-authentication.md +++ b/deploy-manage/users-roles/cluster-or-deployment-auth/user-authentication.md @@ -53,8 +53,8 @@ If you use {{serverless-full}}, then you can only manage authentication at the [ ### Advanced topics -* Learn about [internal users](/deploy-manage/users-roles/cluster-or-deployment-auth/internal-users.md), which are responsible for the operations that take place inside an Elasticsearch cluster. -* Learn about [service accounts](/deploy-manage/users-roles/cluster-or-deployment-auth/service-accounts.md), which are used for integration with external services that connect to Elasticsearch. +* Learn about [internal users](/deploy-manage/users-roles/cluster-or-deployment-auth/internal-users.md), which are responsible for the operations that take place inside an {{es}} cluster. 
+* Learn about [service accounts](/deploy-manage/users-roles/cluster-or-deployment-auth/service-accounts.md), which are used for integration with external services that connect to {{es}}. * Learn about the [services used for token-based authentication](/deploy-manage/users-roles/cluster-or-deployment-auth/token-based-authentication-services.md). * Learn about the [services used by orchestrators](/deploy-manage/users-roles/cluster-or-deployment-auth/operator-privileges.md). * Manage [user profiles](/deploy-manage/users-roles/cluster-or-deployment-auth/user-profiles.md). diff --git a/deploy-manage/users-roles/cluster-or-deployment-auth/user-roles.md b/deploy-manage/users-roles/cluster-or-deployment-auth/user-roles.md index 3d2a7496e2..29a0c25512 100644 --- a/deploy-manage/users-roles/cluster-or-deployment-auth/user-roles.md +++ b/deploy-manage/users-roles/cluster-or-deployment-auth/user-roles.md @@ -37,7 +37,7 @@ The authorization process revolves around the following constructs: : A resource to which access is restricted. Indices, aliases, documents, fields, users, and the {{es}} cluster itself are all examples of secured objects. *Privilege* -: A named group of one or more actions that a user may execute against a secured resource. Each secured resource has its own sets of available privileges. For example, `read` is an index privilege that represents all actions that enable reading the indexed/stored data. For a complete list of available privileges, see [Elasticsearch privileges](/deploy-manage/users-roles/cluster-or-deployment-auth/elasticsearch-privileges.md). +: A named group of one or more actions that a user may execute against a secured resource. Each secured resource has its own sets of available privileges. For example, `read` is an index privilege that represents all actions that enable reading the indexed/stored data. 
For a complete list of available privileges, see [{{es}} privileges](/deploy-manage/users-roles/cluster-or-deployment-auth/elasticsearch-privileges.md). *Permissions* : A set of one or more privileges against a secured resource. Permissions can easily be described in words, here are few examples: diff --git a/deploy-manage/users-roles/serverless-custom-roles.md b/deploy-manage/users-roles/serverless-custom-roles.md index 7b164f9ce0..4b634a86c2 100644 --- a/deploy-manage/users-roles/serverless-custom-roles.md +++ b/deploy-manage/users-roles/serverless-custom-roles.md @@ -79,7 +79,7 @@ Document-level and field-level security affords you even more granularity when i ## {{kib}} privileges [custom-roles-kib-privileges] -When you create a custom role, click **Add Kibana privilege** to grant access to specific features. The features that are available vary depending on the project type. For example, in {{es-serverless}}: +When you create a custom role, click **Add {{kib}} privilege** to grant access to specific features. The features that are available vary depending on the project type. For example, in {{es-serverless}}: :::{image} /deploy-manage/images/serverless-custom-roles-kibana-privileges.png :alt: Create a custom role and define {{kib}} privileges