diff --git a/deploy-manage/autoscaling/ec-autoscaling.md b/deploy-manage/autoscaling/ec-autoscaling.md index 04c7d4e757..d89bdc716f 100644 --- a/deploy-manage/autoscaling/ec-autoscaling.md +++ b/deploy-manage/autoscaling/ec-autoscaling.md @@ -62,7 +62,7 @@ When past behavior on a hot tier indicates that the influx of data can increase * Through ILM policies. For example, if a deployment has only hot nodes and autoscaling is enabled, it automatically creates warm or cold nodes, if an ILM policy is trying to move data from hot to warm or cold nodes. -On machine learning nodes, scaling is determined by an estimate of the memory and CPU requirements for the currently configured jobs and trained models. When a new machine learning job tries to start, it looks for a node with adequate native memory and CPU capacity. If one cannot be found, it stays in an `opening` state. If this waiting job exceeds the queueing limit set in the machine learning decider, a scale up is requested. Conversely, as machine learning jobs run, their memory and CPU usage might decrease or other running jobs might finish or close. In this case, if the duration of decreased resource usage exceeds the set value for `down_scale_delay`, a scale down is requested. Check [Machine learning decider](autoscaling-deciders.md) for more detail. To learn more about machine learning jobs in general, check [Create anomaly detection jobs](https://www.elastic.co/guide/en/machine-learning/current/create-jobs.html). +On machine learning nodes, scaling is determined by an estimate of the memory and CPU requirements for the currently configured jobs and trained models. When a new machine learning job tries to start, it looks for a node with adequate native memory and CPU capacity. If one cannot be found, it stays in an `opening` state. If this waiting job exceeds the queueing limit set in the machine learning decider, a scale up is requested. 
Conversely, as machine learning jobs run, their memory and CPU usage might decrease or other running jobs might finish or close. In this case, if the duration of decreased resource usage exceeds the set value for `down_scale_delay`, a scale down is requested. Check [Machine learning decider](autoscaling-deciders.md) for more detail. To learn more about machine learning jobs in general, check [Run anomaly detection jobs](https://www.elastic.co/guide/en/machine-learning/current/ml-ad-run-jobs.html). On a highly available deployment, autoscaling events are always applied to instances in each availability zone simultaneously, to ensure consistency. diff --git a/deploy-manage/autoscaling/ece-autoscaling.md b/deploy-manage/autoscaling/ece-autoscaling.md index 043007e254..a398492397 100644 --- a/deploy-manage/autoscaling/ece-autoscaling.md +++ b/deploy-manage/autoscaling/ece-autoscaling.md @@ -62,7 +62,7 @@ When past behavior on a hot tier indicates that the influx of data can increase * Through ILM policies. For example, if a deployment has only hot nodes and autoscaling is enabled, it automatically creates warm or cold nodes, if an ILM policy is trying to move data from hot to warm or cold nodes. -On machine learning nodes, scaling is determined by an estimate of the memory and CPU requirements for the currently configured jobs and trained models. When a new machine learning job tries to start, it looks for a node with adequate native memory and CPU capacity. If one cannot be found, it stays in an `opening` state. If this waiting job exceeds the queueing limit set in the machine learning decider, a scale up is requested. Conversely, as machine learning jobs run, their memory and CPU usage might decrease or other running jobs might finish or close. In this case, if the duration of decreased resource usage exceeds the set value for `down_scale_delay`, a scale down is requested. Check [Machine learning decider](autoscaling-deciders.md) for more detail. 
To learn more about machine learning jobs in general, check [Create anomaly detection jobs](https://www.elastic.co/guide/en/machine-learning/current/create-jobs.html). +On machine learning nodes, scaling is determined by an estimate of the memory and CPU requirements for the currently configured jobs and trained models. When a new machine learning job tries to start, it looks for a node with adequate native memory and CPU capacity. If one cannot be found, it stays in an `opening` state. If this waiting job exceeds the queueing limit set in the machine learning decider, a scale up is requested. Conversely, as machine learning jobs run, their memory and CPU usage might decrease or other running jobs might finish or close. In this case, if the duration of decreased resource usage exceeds the set value for `down_scale_delay`, a scale down is requested. Check [Machine learning decider](autoscaling-deciders.md) for more detail. To learn more about machine learning jobs in general, check [Run anomaly detection jobs](https://www.elastic.co/guide/en/machine-learning/current/ml-ad-run-jobs.html). On a highly available deployment, autoscaling events are always applied to instances in each availability zone simultaneously, to ensure consistency. @@ -79,7 +79,7 @@ A warning is also issued in the ECE `service-constructor` logs with the field `l The following are known limitations and restrictions with autoscaling: * Autoscaling will not run if the cluster is unhealthy or if the last Elasticsearch plan failed. -* In the event that an override is set for the instance size or disk quota multiplier for an instance by means of the [Instance Overrides API](https://www.elastic.co/guide/en/cloud-enterprise/current/set-all-instances-settings-overrides.html), autoscaling will be effectively disabled. It’s recommended to avoid adjusting the instance size or disk quota multiplier for an instance that uses autoscaling, since the setting prevents autoscaling. 
+* If an override is set for the instance size or disk quota multiplier of an instance through the [Instance Overrides API](https://www.elastic.co/docs/api/doc/cloud-enterprise/operation/operation-set-all-instances-settings-overrides), autoscaling is effectively disabled. Avoid adjusting the instance size or disk quota multiplier for an instance that uses autoscaling, because the override prevents autoscaling. ## Enable or disable autoscaling [ece-autoscaling-enable] diff --git a/deploy-manage/autoscaling/ech-autoscaling.md b/deploy-manage/autoscaling/ech-autoscaling.md index 35f6f140f3..126aeaaa1f 100644 --- a/deploy-manage/autoscaling/ech-autoscaling.md +++ b/deploy-manage/autoscaling/ech-autoscaling.md @@ -62,7 +62,7 @@ When past behavior on a hot tier indicates that the influx of data can increase * Through ILM policies. For example, if a deployment has only hot nodes and autoscaling is enabled, it automatically creates warm or cold nodes, if an ILM policy is trying to move data from hot to warm or cold nodes. -On machine learning nodes, scaling is determined by an estimate of the memory and CPU requirements for the currently configured jobs and trained models. When a new machine learning job tries to start, it looks for a node with adequate native memory and CPU capacity. If one cannot be found, it stays in an `opening` state. If this waiting job exceeds the queueing limit set in the machine learning decider, a scale up is requested. Conversely, as machine learning jobs run, their memory and CPU usage might decrease or other running jobs might finish or close. In this case, if the duration of decreased resource usage exceeds the set value for `down_scale_delay`, a scale down is requested. Check [Machine learning decider](autoscaling-deciders.md) for more detail. 
To learn more about machine learning jobs in general, check [Create anomaly detection jobs](https://www.elastic.co/guide/en/machine-learning/current/create-jobs.html). +On machine learning nodes, scaling is determined by an estimate of the memory and CPU requirements for the currently configured jobs and trained models. When a new machine learning job tries to start, it looks for a node with adequate native memory and CPU capacity. If one cannot be found, it stays in an `opening` state. If this waiting job exceeds the queueing limit set in the machine learning decider, a scale up is requested. Conversely, as machine learning jobs run, their memory and CPU usage might decrease or other running jobs might finish or close. In this case, if the duration of decreased resource usage exceeds the set value for `down_scale_delay`, a scale down is requested. Check [Machine learning decider](autoscaling-deciders.md) for more detail. To learn more about machine learning jobs in general, check [Run anomaly detection jobs](https://www.elastic.co/guide/en/machine-learning/current/ml-ad-run-jobs.html). On a highly available deployment, autoscaling events are always applied to instances in each availability zone simultaneously, to ensure consistency. diff --git a/deploy-manage/cloud-organization/billing/cloud-hosted-deployment-billing-dimensions.md b/deploy-manage/cloud-organization/billing/cloud-hosted-deployment-billing-dimensions.md index 3147a38e51..385bfbdf20 100644 --- a/deploy-manage/cloud-organization/billing/cloud-hosted-deployment-billing-dimensions.md +++ b/deploy-manage/cloud-organization/billing/cloud-hosted-deployment-billing-dimensions.md @@ -57,7 +57,7 @@ Data inter-node charges are currently waived for Azure deployments. Data transfer out of deployments and between nodes of the cluster is hard to control, as it is a function of the use case employed for the cluster and cannot always be tuned. 
Use cases such as batch queries executed at a frequent interval may be revisited to help lower transfer costs, if applicable. Watcher email alerts also count towards data transfer out of the deployment, so you may want to reduce their frequency and size. -The largest contributor to inter-node data transfer is usually shard movement between nodes in a cluster. The only way to prevent shard movement is by having a single node in a single availability zone. This solution is only possible for clusters up to 64GB RAM and is not recommended as it creates a risk of data loss. [Oversharding](https://www.elastic.co/guide/en/elasticsearch/reference/current/avoid-oversharding.html) can cause excessive shard movement. Avoiding oversharding can also help control costs and improve performance. Note that creating snapshots generates inter-node data transfer. The *storage* cost of snapshots is detailed later in this document. +The largest contributor to inter-node data transfer is usually shard movement between nodes in a cluster. The only way to prevent shard movement is by having a single node in a single availability zone. This solution is only possible for clusters up to 64GB RAM and is not recommended as it creates a risk of data loss. [Oversharding](https://www.elastic.co/guide/en/elasticsearch/reference/current/size-your-shards.html) can cause excessive shard movement. Avoiding oversharding can also help control costs and improve performance. Note that creating snapshots generates inter-node data transfer. The *storage* cost of snapshots is detailed later in this document. The exact root cause of unusual data transfer is not always something we can identify as it can have many causes, some of which are out of our control and not associated with Cloud configuration changes. It may help to [enable monitoring](../../monitor/stack-monitoring/elastic-cloud-stack-monitoring.md) and examine index and shard activity on your cluster. 
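The oversharding guidance in the hunk above can be checked empirically. As a hedged illustration (the `ES_URL`, `ES_USER`, and `ES_PASS` variables are placeholders for your own deployment endpoint and credentials, not part of the changed docs), the `_cat/shards` API lists every shard with its on-disk size, which makes indices carrying many small shards easy to spot:

```sh
# List shards with index name, primary/replica flag, and store size,
# largest first. Many shards of only a few MB each on one index is a
# typical sign of oversharding and of avoidable shard movement.
curl -s -u "$ES_USER:$ES_PASS" \
  "$ES_URL/_cat/shards?v&h=index,shard,prirep,store&s=store:desc"
```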
diff --git a/deploy-manage/cloud-organization/tools-and-apis.md b/deploy-manage/cloud-organization/tools-and-apis.md index 55c27d8fd8..c243a4e086 100644 --- a/deploy-manage/cloud-organization/tools-and-apis.md +++ b/deploy-manage/cloud-organization/tools-and-apis.md @@ -10,7 +10,7 @@ Most Elastic resources can be accessed and managed through RESTful APIs. While t Elasticsearch Service API : You can use the Elasticsearch Service API to manage your deployments and all of the resources associated with them. This includes performing deployment CRUD operations, scaling or autoscaling resources, and managing traffic filters, deployment extensions, remote clusters, and Elastic Stack versions. You can also access cost data by deployment and by organization. - To learn more about the Elasticsearch Service API, read through the [API overview](https://www.elastic.co/guide/en/cloud/current/ec-restful-api.html), try out some [getting started examples](https://www.elastic.co/guide/en/cloud/current/ec-api-examples.html), and check our [API reference documentation](https://www.elastic.co/guide/en/cloud/current/ec-api-swagger.html). + To learn more about the Elasticsearch Service API, read through the [API overview](https://www.elastic.co/guide/en/cloud/current/ec-restful-api.html), try out some [getting started examples](https://www.elastic.co/guide/en/cloud/current/ec-api-examples.html), and check our [API reference documentation](https://www.elastic.co/docs/api/doc/cloud). Calls to the Elasticsearch Service API are subject to [Rate limiting](https://www.elastic.co/guide/en/cloud/current/ec-api-rate-limiting.html). 
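As a minimal, hedged sketch of what a call to the Elasticsearch Service API described above looks like (the `EC_API_KEY` variable is a placeholder for an API key generated in the Cloud UI):

```sh
# List the deployments in your organization. The response is JSON, and
# the call counts toward the documented API rate limits.
curl -s -H "Authorization: ApiKey $EC_API_KEY" \
  "https://api.elastic-cloud.com/api/v1/deployments"
```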
diff --git a/deploy-manage/deploy/cloud-enterprise/ce-add-support-for-node-roles-autoscaling.md b/deploy-manage/deploy/cloud-enterprise/ce-add-support-for-node-roles-autoscaling.md index 01ca591300..1bf5f4e294 100644 --- a/deploy-manage/deploy/cloud-enterprise/ce-add-support-for-node-roles-autoscaling.md +++ b/deploy-manage/deploy/cloud-enterprise/ce-add-support-for-node-roles-autoscaling.md @@ -1414,7 +1414,7 @@ Having added support for `node_roles` and autoscaling to your custom template, i curl -k -X GET -H "Authorization: ApiKey $ECE_API_KEY" https://COORDINATOR_HOST:12443/api/v1/deployments/templates?region=ece-region ``` -2. Send a `PUT` request with the updated template on the payload, in order to effectively replace the outdated template with the new one. Note that the following request is just an example, you have to replace `{{template_id}}` with the `id` you collected on step 1. and set the payload to the updated template JSON. Check [set deployment template API](https://www.elastic.co/guide/en/cloud-enterprise/current/set-deployment-template-v2.html) for more details. +2. Send a `PUT` request with the updated template in the payload to replace the outdated template with the new one. Note that the following request is just an example; you have to replace `{{template_id}}` with the `id` you collected in step 1, and set the payload to the updated template JSON. Check [set deployment template API](https://www.elastic.co/docs/api/doc/cloud-enterprise/operation/operation-set-deployment-template-v2) for more details. ::::{dropdown} Update template API request example ```sh @@ -1749,7 +1749,7 @@ If you do not intend to perform any of these actions, the migration can only be 1. Go to the deployment **Edit** page. 2. Get the deployment update payload by clicking **Equivalent API request** at the bottom of the page. 3. Update the payload by replacing `node_type` with `node_roles` in each Elasticsearch topology element. 
To know which `node_roles` to add to each topology element, refer to the [custom template example](#ece-ce-add-support-to-node-roles-example) where support for `node_roles` is added. -4. Send a `PUT` request with the updated deployment payload to conclude the migration. Check the [Update Deployment](https://www.elastic.co/guide/en/cloud-enterprise/current/update-deployment.html) API documentation for more details. +4. Send a `PUT` request with the updated deployment payload to conclude the migration. Check the [Update Deployment](https://www.elastic.co/docs/api/doc/cloud-enterprise/operation/operation-update-deployment) API documentation for more details. **Using the Advanced edit:** diff --git a/deploy-manage/deploy/cloud-enterprise/configure-host-rhel-cloud.md b/deploy-manage/deploy/cloud-enterprise/configure-host-rhel-cloud.md index e21ca07038..481e4aca90 100644 --- a/deploy-manage/deploy/cloud-enterprise/configure-host-rhel-cloud.md +++ b/deploy-manage/deploy/cloud-enterprise/configure-host-rhel-cloud.md @@ -20,7 +20,7 @@ Create a RHEL 8 (the version must be >= 8.5, but <9), RHEL 9, Rocky Linux 8, or * For RHEL 8, follow your internal guidelines to add a vanilla RHEL 8 VM to your environment. Note that the version must be >= 8.5, but <9. -Verify that required traffic is allowed. Check the [Networking prerequisites](ece-networking-prereq.md) and [Google Cloud Platform (GCP)](https://www.elastic.co/guide/en/cloud-enterprise/current/ece-configure-gcp.html) guidelines for a list of ports that need to be open. The technical configuration highly depends on the underlying infrastructure. +Verify that required traffic is allowed. Check the [Networking prerequisites](ece-networking-prereq.md) and [Google Cloud Platform (GCP)](https://www.elastic.co/guide/en/cloud-enterprise/current/ece-prereqs.html) guidelines for a list of ports that need to be open. The technical configuration highly depends on the underlying infrastructure. 
**Example:** For AWS, allowing traffic between hosts is implemented using security groups. diff --git a/deploy-manage/deploy/cloud-enterprise/ece-ce-add-support-for-integrations-server.md b/deploy-manage/deploy/cloud-enterprise/ece-ce-add-support-for-integrations-server.md index a58d20acdb..dbb9d75dcc 100644 --- a/deploy-manage/deploy/cloud-enterprise/ece-ce-add-support-for-integrations-server.md +++ b/deploy-manage/deploy/cloud-enterprise/ece-ce-add-support-for-integrations-server.md @@ -48,7 +48,7 @@ Send a `PUT` request with the updated template in the payload to replace the ori * The following request is just an example; other resources in the request payload should remain unchanged (they have been truncated in the example). * You need to replace `{{template_id}}` in the URL with the `id` that you collected in Step 1. -Refer to [set deployment template API](https://www.elastic.co/guide/en/cloud-enterprise/current/set-deployment-template-v2.html) for more details. +Refer to [set deployment template API](https://www.elastic.co/docs/api/doc/cloud-enterprise/operation/operation-set-deployment-template-v2) for more details. ::::{dropdown} Update template API request example ```sh diff --git a/deploy-manage/deploy/cloud-enterprise/ece-configuring-ece-configure-system-templates.md b/deploy-manage/deploy/cloud-enterprise/ece-configuring-ece-configure-system-templates.md index 1cc439b811..b9dd286c4c 100644 --- a/deploy-manage/deploy/cloud-enterprise/ece-configuring-ece-configure-system-templates.md +++ b/deploy-manage/deploy/cloud-enterprise/ece-configuring-ece-configure-system-templates.md @@ -26,7 +26,7 @@ The API user must have the `Platform admin` role in order to configure system te ``` 2. Edit the JSON of the system deployment template you wish to modify. -3. Make the API call to modify the deployment template. Note that the last path segment in the URL is the `id` of the system template you wish to modify. 
Check [set deployment template API](https://www.elastic.co/guide/en/cloud-enterprise/current/set-deployment-template-v2.html) for more detail. +3. Make the API call to modify the deployment template. Note that the last path segment in the URL is the `id` of the system template you wish to modify. Check [set deployment template API](https://www.elastic.co/docs/api/doc/cloud-enterprise/operation/operation-set-deployment-template-v2) for more detail. The following example modifies the Default system deployment template (that is, the system template with `id` value of `default`), setting the default value of `autoscaling_enabled` to `true` and the default autoscaling maximum size of the hot tier to 4,194,304MB (64GB * 64 nodes). diff --git a/deploy-manage/deploy/cloud-enterprise/ece-regional-deployment-aliases.md b/deploy-manage/deploy/cloud-enterprise/ece-regional-deployment-aliases.md index c62edb938f..835db0dda4 100644 --- a/deploy-manage/deploy/cloud-enterprise/ece-regional-deployment-aliases.md +++ b/deploy-manage/deploy/cloud-enterprise/ece-regional-deployment-aliases.md @@ -99,5 +99,5 @@ While the `TransportClient` is deprecated, your custom endpoint aliases still wo ``` -For more information on configuring the `TransportClient`, see [Configure the Java Transport Client](https://www.elastic.co/guide/en/cloud-enterprise/current/ece-security-transport.html). +For more information, see the [Elasticsearch Java API Client documentation](https://www.elastic.co/guide/en/elasticsearch/client/java-api-client/current/index.html). 
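Several of the links updated above point at the `set deployment template` endpoint. A hedged sketch of such a call, following the same `curl` conventions as the `GET` example earlier in this diff (`COORDINATOR_HOST`, `$ECE_API_KEY`, the `default` template `id`, and the payload file name are placeholders):

```sh
# Replace the deployment template whose id is the last path segment
# ("default" here) with the updated template JSON saved locally in
# updated-template.json.
curl -k -X PUT -H "Authorization: ApiKey $ECE_API_KEY" \
  -H "Content-Type: application/json" \
  --data-binary @updated-template.json \
  "https://COORDINATOR_HOST:12443/api/v1/deployments/templates/default?region=ece-region"
```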
diff --git a/deploy-manage/deploy/cloud-enterprise/migrate-ece-to-podman-hosts.md b/deploy-manage/deploy/cloud-enterprise/migrate-ece-to-podman-hosts.md index f24c2cb2c5..5efe1a8a73 100644 --- a/deploy-manage/deploy/cloud-enterprise/migrate-ece-to-podman-hosts.md +++ b/deploy-manage/deploy/cloud-enterprise/migrate-ece-to-podman-hosts.md @@ -42,7 +42,7 @@ Otherwise, when the file content changes, the corresponding user is mentioned as 1. Make sure you are running a healthy x-node ECE environment ready to be upgraded. All nodes use the Docker container runtime. 2. Upgrade to ECE 3.3.0+ following the [Upgrade your installation](../../upgrade/orchestrator/upgrade-cloud-enterprise.md) guideline. Skip this step if your existing ECE installation already runs ECE >= 3.3.0. 3. Follow your internal guidelines to add an additional vanilla RHEL (Note that the version must be >= 8.5, but <9), or Rocky Linux 8 or 9 VM to your environment. -4. Verify that required traffic from the host added in step 3 is allowed to the primary ECE VM(s). Check the [Networking prerequisites](ece-networking-prereq.md) and [Google Cloud Platform (GCP)](https://www.elastic.co/guide/en/cloud-enterprise/current/ece-configure-gcp.html) guidelines for a list of ports that need to be open. The technical configuration highly depends on the underlying infrastructure. +4. Verify that required traffic from the host added in step 3 is allowed to the primary ECE VM(s). Check the [Networking prerequisites](ece-networking-prereq.md) and [Google Cloud Platform (GCP)](https://www.elastic.co/guide/en/cloud-enterprise/current/ece-prereqs.html) guidelines for a list of ports that need to be open. The technical configuration highly depends on the underlying infrastructure. **Example** For AWS, allowing traffic between hosts is implemented using security groups. @@ -435,7 +435,7 @@ Otherwise, when the file content changes, the corresponding user is mentioned as 3. 
Put the docker-based allocator you want to replace with a podman allocator in maintenance mode by following the [Enable Maintenance Mode](../../maintenance/ece/enable-maintenance-mode.md) documentation. - As an alternative, use the [Start maintenance mode](https://www.elastic.co/guide/en/cloud-enterprise/current/start-allocator-maintenance-mode.html) API. + As an alternative, use the [Start maintenance mode](https://www.elastic.co/docs/api/doc/cloud-enterprise/operation/operation-start-allocator-maintenance-mode) API. 4. Move all instances from the Docker allocator to the podman allocator by following the [Move Nodes From Allocators](../../maintenance/ece/move-nodes-instances-from-allocators.md) documentation. @@ -453,12 +453,12 @@ Otherwise, when the file content changes, the corresponding user is mentioned as 3. :alt: Move instances ::: - As an alternative, use the [*Move clusters*](https://www.elastic.co/guide/en/cloud-enterprise/current/move-clusters.html) API. + As an alternative, use the [*Move clusters*](https://www.elastic.co/docs/api/doc/cloud-enterprise/operation/operation-move-clusters) API. To identify the correct target allocator, the following APIs might be helpful: - * [*Get allocators*](https://www.elastic.co/guide/en/cloud-enterprise/current/get-allocators.html) - * [*Get allocator metadata*](https://www.elastic.co/guide/en/cloud-enterprise/current/get-allocator-metadata.html) + * [*Get allocators*](https://www.elastic.co/docs/api/doc/cloud-enterprise/operation/operation-get-allocators) + * [*Get allocator metadata*](https://www.elastic.co/docs/api/doc/cloud-enterprise/operation/operation-get-allocator-metadata) ```json { @@ -483,11 +483,11 @@ Otherwise, when the file content changes, the corresponding user is mentioned as } ``` - 1. If allocators are tagged as mentioned in step 7, the metadata section of the [*Get allocators*](https://www.elastic.co/guide/en/cloud-enterprise/current/get-allocators.html) API should contain the tag. + 1. 
If allocators are tagged as mentioned in step 7, the metadata section of the [*Get allocators*](https://www.elastic.co/docs/api/doc/cloud-enterprise/operation/operation-get-allocators) API should contain the tag. This information allows you to determine which allocators are running on top of podman (in an automated way). 5. Remove the Docker allocator by following the [Delete Hosts](../../maintenance/ece/delete-ece-hosts.md) guidelines. - As an alternative, use the [Delete Runner](https://www.elastic.co/guide/en/cloud-enterprise/current/delete-runner.html) API. + As an alternative, use the [Delete Runner](https://www.elastic.co/docs/api/doc/cloud-enterprise/operation/operation-delete-runner) API. diff --git a/deploy-manage/deploy/cloud-enterprise/system-deployments-configuration.md b/deploy-manage/deploy/cloud-enterprise/system-deployments-configuration.md index d76a85d1ab..b622dd1b3b 100644 --- a/deploy-manage/deploy/cloud-enterprise/system-deployments-configuration.md +++ b/deploy-manage/deploy/cloud-enterprise/system-deployments-configuration.md @@ -24,7 +24,7 @@ Logging and metrics - `logging-and-metrics` : As part of an ECE environment, a Beats sidecar with Filebeat and Metricbeat is installed on each ECE host. The logs and metrics collected by those beats are indexed in the `logging-and-metrics` cluster. This includes ECE service logs, such as proxy logs, director logs, and more. It also includes hosted deployments logs, security cluster audit logs, and metrics, such as CPU and disk usage. Data is collected from all hosts. This information is critical in order to be able to monitor ECE and troubleshoot issues. You can also use this data to configure watches to alert you in case of an issue, or machine learning jobs that can provide alerts based on anomalies or forecasting. Security - `security` -: When you enable the user management feature, you trigger the creation of a third system deployment named `security`. 
This cluster stores all security-related configurations, such as native users and the related native realm, integration with SAML or LDAP as external authentication providers and their role mapping, and the realm ordering. The health of this cluster is critical to provide access to the ECE Cloud UI and REST API. To learn more, check [Configure role-based access control](../../users-roles/cloud-enterprise-orchestrator/manage-users-roles.md). Beginning with Elastic Cloud Enterprise 2.5.0 the `security` cluster is created automatically for you. It is recommended to use the [dedicated API](https://www.elastic.co/guide/en/cloud-enterprise/current/update-security-deployment.html) to manage the cluster. +: When you enable the user management feature, you trigger the creation of a third system deployment named `security`. This cluster stores all security-related configurations, such as native users and the related native realm, integration with SAML or LDAP as external authentication providers and their role mapping, and the realm ordering. The health of this cluster is critical to provide access to the ECE Cloud UI and REST API. To learn more, check [Configure role-based access control](../../users-roles/cloud-enterprise-orchestrator/manage-users-roles.md). Beginning with Elastic Cloud Enterprise 2.5.0, the `security` cluster is created automatically for you. It is recommended to use the [dedicated API](https://www.elastic.co/docs/api/doc/cloud-enterprise/operation/operation-update-security-deployment) to manage the cluster. 
## High availability [ece_high_availability] diff --git a/deploy-manage/deploy/cloud-on-k8s/advanced-configuration.md b/deploy-manage/deploy/cloud-on-k8s/advanced-configuration.md index 98f7b948e4..9485e32f5d 100644 --- a/deploy-manage/deploy/cloud-on-k8s/advanced-configuration.md +++ b/deploy-manage/deploy/cloud-on-k8s/advanced-configuration.md @@ -14,7 +14,7 @@ This section covers the following topics: ## Use APM Agent central configuration [k8s-apm-agent-central-configuration] -[APM Agent configuration management](https://www.elastic.co/guide/en/kibana/current/agent-configuration.html) [7.5.1] allows you to configure your APM Agents centrally from the Kibana APM app. To use this feature, the APM Server needs to be configured with connection details of the Kibana instance. If Kibana is managed by ECK, you can simply add a `kibanaRef` attribute to the APM Server specification: +[APM Agent configuration management](https://www.elastic.co/guide/en/observability/current/apm-agent-configuration.html) [7.5.1] allows you to configure your APM Agents centrally from the Kibana APM app. To use this feature, the APM Server needs to be configured with connection details of the Kibana instance. If Kibana is managed by ECK, you can simply add a `kibanaRef` attribute to the APM Server specification: ```yaml cat < - ::::{tip} + ::::{tip} In production environments, we strongly recommend using a separate cluster (referred to as the *monitoring cluster*) to store the data. Using a separate monitoring cluster prevents production cluster outages from impacting your ability to access your monitoring data. It also prevents monitoring activities from impacting the performance of your production cluster. If {{security-features}} are enabled on the production cluster, use an HTTPS URL such as `https://:9200` in this setting. @@ -74,7 +74,7 @@ To learn about monitoring in general, see [Monitor a cluster](../../monitor.md). 4. 
If {{security-features}} are enabled on the production cluster: 1. Verify that there is a valid user ID and password in the `elasticsearch.username` and `elasticsearch.password` settings in the `kibana.yml` file. These values are used when {{kib}} sends monitoring data to the production cluster. - 2. [Configure encryption for traffic between {{kib}} and {{es}}](https://www.elastic.co/guide/en/kibana/current/configuring-tls.html#configuring-tls-kib-es). + 2. [Configure encryption for traffic between {{kib}} and {{es}}](https://www.elastic.co/guide/en/elasticsearch/reference/current/security-basic-setup-https.html#encrypt-kibana-http). 5. [Start {{kib}}](../../maintenance/start-stop-services/start-stop-kibana.md). 6. [View the monitoring data in {{kib}}](kibana-monitoring-data.md). diff --git a/deploy-manage/production-guidance.md b/deploy-manage/production-guidance.md index 924050a764..cdee264506 100644 --- a/deploy-manage/production-guidance.md +++ b/deploy-manage/production-guidance.md @@ -8,19 +8,19 @@ mapped_pages: This section provides some best practices for managing your data to help you set up a production environment that matches your workloads, policies, and deployment needs. -## Plan your data structure, availability, and formatting [ec_plan_your_data_structure_availability_and_formatting] +## Plan your data structure, availability, and formatting [ec_plan_your_data_structure_availability_and_formatting] -* Build a [data architecture](https://www.elastic.co/guide/en/elasticsearch/reference/current/data-tiers.md) that best fits your needs. Your Elasticsearch Service deployment comes with default hot tier {{es}} nodes that store your most frequently accessed data. Based on your own access and retention policies, you can add warm, cold, frozen data tiers, and automated deletion of old data. 
-* Make your data [highly available](https://www.elastic.co/guide/en/elasticsearch/reference/current/high-availability.md) for production environments or otherwise critical data stores, and take regular [backup snapshots](tools/snapshot-and-restore.md). -* Normalize event data to better analyze, visualize, and correlate your events by adopting the [Elastic Common Schema](https://www.elastic.co/guide/en/ecs/{{ecs_version}}/ecs-getting-started.md) (ECS). Elastic integrations use ECS out-of-the-box. If you are writing your own integrations, ECS is recommended. +* Build a [data architecture](https://www.elastic.co/guide/en/elasticsearch/reference/current/data-tiers.html) that best fits your needs. Your Elasticsearch Service deployment comes with default hot tier {{es}} nodes that store your most frequently accessed data. Based on your own access and retention policies, you can add warm, cold, frozen data tiers, and automated deletion of old data. +* Make your data [highly available](https://www.elastic.co/guide/en/elasticsearch/reference/current/high-availability.html) for production environments or otherwise critical data stores, and take regular [backup snapshots](tools/snapshot-and-restore.md). +* Normalize event data to better analyze, visualize, and correlate your events by adopting the [Elastic Common Schema](https://www.elastic.co/guide/en/ecs/current/ecs-getting-started.html) (ECS). Elastic integrations use ECS out-of-the-box. If you are writing your own integrations, ECS is recommended. -## Optimize data storage and retention [ec_optimize_data_storage_and_retention] +## Optimize data storage and retention [ec_optimize_data_storage_and_retention] -Once you have your data tiers deployed and you have data flowing, you can [manage the index lifecycle](https://www.elastic.co/guide/en/elasticsearch/reference/current/index-lifecycle-management.md). 
+Once you have your data tiers deployed and you have data flowing, you can [manage the index lifecycle](https://www.elastic.co/guide/en/elasticsearch/reference/current/index-lifecycle-management.html). -::::{tip} -[Elastic integrations](https://www.elastic.co/integrations) provide default index lifecycle policies, and you can [build your own policies for your custom integrations](https://www.elastic.co/guide/en/elasticsearch/reference/current/getting-started-index-lifecycle-management.md). +::::{tip} +[Elastic integrations](https://www.elastic.co/integrations) provide default index lifecycle policies, and you can [build your own policies for your custom integrations](https://www.elastic.co/guide/en/elasticsearch/reference/current/getting-started-index-lifecycle-management.html). :::: diff --git a/deploy-manage/reference-architectures.md b/deploy-manage/reference-architectures.md index aa8bbcd920..47dcbac4b2 100644 --- a/deploy-manage/reference-architectures.md +++ b/deploy-manage/reference-architectures.md @@ -9,15 +9,15 @@ Elasticsearch reference architectures are blueprints for deploying Elasticsearch These architectures are designed by architects and engineers to provide standardized, proven solutions that help you to follow best practices when deploying {{es}}. -::::{tip} -These architectures are specific to running your deployment on-premises or on cloud. If you are using Elastic serverless your {{es}} clusters are autoscaled and fully managed by Elastic. For all the deployment options, refer to [Run Elasticsearch](https://www.elastic.co/guide/en/elasticsearch/reference/current/elasticsearch-intro-deploy.md). +::::{tip} +These architectures are specific to running your deployment on-premises or on cloud. If you are using Elastic serverless your {{es}} clusters are autoscaled and fully managed by Elastic. For all the deployment options, refer to [Run Elasticsearch](https://www.elastic.co/guide/en/elasticsearch/reference/current/elasticsearch-intro-deploy.html). 
::::

-These reference architectures are recommendations and should be adapted to fit your specific environment and needs. Each solution can vary based on the unique requirements and conditions of your deployment. In these architectures we discuss about how to deploy cluster components. For information about designing ingest architectures to feed content into your cluster, refer to [Ingest architectures](https://www.elastic.co/guide/en/ingest/current/use-case-arch.md)
+These reference architectures are recommendations and should be adapted to fit your specific environment and needs. Each solution can vary based on the unique requirements and conditions of your deployment. In these architectures, we discuss how to deploy cluster components. For information about designing ingest architectures to feed content into your cluster, refer to [Ingest architectures](https://www.elastic.co/guide/en/ingest/current/use-case-arch.html).

-## Architectures [reference-architectures-time-series-2]
+## Architectures [reference-architectures-time-series-2]

| | |
| --- | --- |
diff --git a/deploy-manage/remote-clusters/ec-enable-ccs-for-eck.md b/deploy-manage/remote-clusters/ec-enable-ccs-for-eck.md
index 4282153281..1aaa6a8eb4 100644
--- a/deploy-manage/remote-clusters/ec-enable-ccs-for-eck.md
+++ b/deploy-manage/remote-clusters/ec-enable-ccs-for-eck.md
@@ -5,7 +5,7 @@ mapped_pages:

# Enabling CCS/R between Elasticsearch Service and ECK [ec-enable-ccs-for-eck]

-These steps describe how to configure remote clusters between an {{es}} cluster in Elasticsearch Service and an {{es}} cluster running within [Elastic Cloud on Kubernetes (ECK)](https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-overview.html). Once that’s done, you’ll be able to [run CCS queries from {{es}}](https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-cross-cluster-search.html) or [set up CCR](https://www.elastic.co/guide/en/elasticsearch/reference/current/ccr-getting-started.html).
+These steps describe how to configure remote clusters between an {{es}} cluster in Elasticsearch Service and an {{es}} cluster running within [Elastic Cloud on Kubernetes (ECK)](https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-overview.html). Once that’s done, you’ll be able to [run CCS queries from {{es}}](https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-cross-cluster-search.html) or [set up CCR](https://www.elastic.co/guide/en/elasticsearch/reference/current/ccr-getting-started-tutorial.html). ## Establish trust between two clusters [ec_establish_trust_between_two_clusters] diff --git a/deploy-manage/remote-clusters/ece-enable-ccs-for-eck.md b/deploy-manage/remote-clusters/ece-enable-ccs-for-eck.md index 21806f3a41..23b05068ba 100644 --- a/deploy-manage/remote-clusters/ece-enable-ccs-for-eck.md +++ b/deploy-manage/remote-clusters/ece-enable-ccs-for-eck.md @@ -5,7 +5,7 @@ mapped_pages: # Enabling CCS/R between Elastic Cloud Enterprise and ECK [ece-enable-ccs-for-eck] -These steps describe how to configure remote clusters between an {{es}} cluster in Elastic Cloud Enterprise and an {{es}} cluster running within [Elastic Cloud on Kubernetes (ECK)](https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-overview.html). Once that’s done, you’ll be able to [run CCS queries from {{es}}](https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-cross-cluster-search.html) or [set up CCR](https://www.elastic.co/guide/en/elasticsearch/reference/current/ccr-getting-started.html). +These steps describe how to configure remote clusters between an {{es}} cluster in Elastic Cloud Enterprise and an {{es}} cluster running within [Elastic Cloud on Kubernetes (ECK)](https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-overview.html). 
Once that’s done, you’ll be able to [run CCS queries from {{es}}](https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-cross-cluster-search.html) or [set up CCR](https://www.elastic.co/guide/en/elasticsearch/reference/current/ccr-getting-started-tutorial.html). ## Establish trust between two clusters [ece_establish_trust_between_two_clusters] diff --git a/deploy-manage/remote-clusters/eck-remote-clusters.md b/deploy-manage/remote-clusters/eck-remote-clusters.md index f5c2f6c13a..0676ac4e8b 100644 --- a/deploy-manage/remote-clusters/eck-remote-clusters.md +++ b/deploy-manage/remote-clusters/eck-remote-clusters.md @@ -5,7 +5,7 @@ mapped_pages: # ECK remote clusters [k8s-remote-clusters] -The [remote clusters module](https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-remote-clusters.html) in Elasticsearch enables you to establish uni-directional connections to a remote cluster. This functionality is used in cross-cluster replication and cross-cluster search. +The [remote clusters module](https://www.elastic.co/guide/en/elasticsearch/reference/current/remote-clusters.html) in Elasticsearch enables you to establish uni-directional connections to a remote cluster. This functionality is used in cross-cluster replication and cross-cluster search. When using remote cluster connections with ECK, the setup process depends on where the remote cluster is deployed. 
diff --git a/deploy-manage/remote-clusters/remote-clusters-cert.md b/deploy-manage/remote-clusters/remote-clusters-cert.md index 66f9c5d0d4..0ff8ce2117 100644 --- a/deploy-manage/remote-clusters/remote-clusters-cert.md +++ b/deploy-manage/remote-clusters/remote-clusters-cert.md @@ -264,7 +264,7 @@ cluster: ## Configure roles and users for remote clusters [remote-clusters-privileges-cert] -After [connecting remote clusters](https://www.elastic.co/guide/en/elasticsearch/reference/current/remote-clusters-connect.html), you create a user role on both the local and remote clusters and assign necessary privileges. These roles are required to use {{ccr}} and {{ccs}}. +After [connecting remote clusters](https://www.elastic.co/guide/en/elasticsearch/reference/current/remote-clusters.html), you create a user role on both the local and remote clusters and assign necessary privileges. These roles are required to use {{ccr}} and {{ccs}}. ::::{important} You must use the same role names on both the local and remote clusters. For example, the following configuration for {{ccr}} uses the `remote-replication` role name on both the local and remote clusters. However, you can specify different role definitions on each cluster. diff --git a/deploy-manage/remote-clusters/remote-clusters-self-managed.md b/deploy-manage/remote-clusters/remote-clusters-self-managed.md index c1b6b6d9cf..26b293f965 100644 --- a/deploy-manage/remote-clusters/remote-clusters-self-managed.md +++ b/deploy-manage/remote-clusters/remote-clusters-self-managed.md @@ -21,7 +21,7 @@ With [{{ccr}}](https://www.elastic.co/guide/en/elasticsearch/reference/current/x ## Add remote clusters [add-remote-clusters] ::::{note} -The instructions that follow describe how to create a remote connection from a self-managed cluster. 
You can also set up {{ccs}} and {{ccr}} from an [{{ess}} deployment](https://www.elastic.co/guide/en/cloud/current/ec-enable-ccs.md) or from an [{{ece}} deployment](https://www.elastic.co/guide/en/cloud-enterprise/current/ece-enable-ccs.md). +The instructions that follow describe how to create a remote connection from a self-managed cluster. You can also set up {{ccs}} and {{ccr}} from an [{{ess}} deployment](https://www.elastic.co/guide/en/cloud/current/ec-enable-ccs.html) or from an [{{ece}} deployment](https://www.elastic.co/guide/en/cloud-enterprise/current/ece-enable-ccs.html). :::: diff --git a/deploy-manage/security/encrypt-deployment-with-customer-managed-encryption-key.md b/deploy-manage/security/encrypt-deployment-with-customer-managed-encryption-key.md index 57d5e5d2e2..f2be688c27 100644 --- a/deploy-manage/security/encrypt-deployment-with-customer-managed-encryption-key.md +++ b/deploy-manage/security/encrypt-deployment-with-customer-managed-encryption-key.md @@ -172,7 +172,7 @@ Provide your key identifier without the key version identifier so Elastic Cloud * Choose a **cloud region** and a **deployment template** (also called hardware profile) for your deployment from the [list of available regions, deployment templates, and instance configurations](https://www.elastic.co/guide/en/cloud/current/ec-regions-templates-instances.html). * [Get a valid Elastic Cloud API key](https://www.elastic.co/guide/en/cloud/current/ec-api-authentication.html) with the **Organization owner** role or the **Admin** role on deployments. These roles allow you to create new deployments. * Get the ARN of the symmetric AWS KMS key or of its alias. Use an alias if you are planning to do manual key rotations as specified in the [AWS documentation](https://docs.aws.amazon.com/kms/latest/developerguide/rotate-keys.md). - * Use these parameters to create a new deployment with the [Elastic Cloud API](https://www.elastic.co/guide/en/cloud/current/Deployment_-_CRUD.html). 
For example: + * Use these parameters to create a new deployment with the [Elastic Cloud API](https://www.elastic.co/docs/api/doc/cloud/group/endpoint-deployments). For example: ```curl curl -XPOST \ @@ -246,7 +246,7 @@ After you have created the service principal and granted it the necessary permis * Choose a **cloud region** and a **deployment template** (also called hardware profile) for your deployment from the [list of available regions, deployment templates, and instance configurations](https://www.elastic.co/guide/en/cloud/current/ec-regions-templates-instances.html). * [Get a valid Elastic Cloud API key](https://www.elastic.co/guide/en/cloud/current/ec-api-authentication.html) with the **Organization owner** role or the **Admin** role on deployments. These roles allow you to create new deployments. - * Use these parameters to create a new deployment with the [Elastic Cloud API](https://www.elastic.co/guide/en/cloud/current/Deployment_-_CRUD.html). For example: + * Use these parameters to create a new deployment with the [Elastic Cloud API](https://www.elastic.co/docs/api/doc/cloud/group/endpoint-deployments). For example: ```curl curl -XPOST \ @@ -325,7 +325,7 @@ After you have granted the Elastic principals the necessary roles, you can finis * Choose a **cloud region** and a **deployment template** (also called hardware profile) for your deployment from the [list of available regions, deployment templates, and instance configurations](https://www.elastic.co/guide/en/cloud/current/ec-regions-templates-instances.html). * [Get a valid Elastic Cloud API key](https://www.elastic.co/guide/en/cloud/current/ec-api-authentication.html) with the **Organization owner** role or the **Admin** role on deployments. These roles allow you to create new deployments. - * Use these parameters to create a new deployment with the [Elastic Cloud API](https://www.elastic.co/guide/en/cloud/current/Deployment_-_CRUD.html). 
For example: + * Use these parameters to create a new deployment with the [Elastic Cloud API](https://www.elastic.co/docs/api/doc/cloud/group/endpoint-deployments). For example: ```curl curl -XPOST \ diff --git a/deploy-manage/security/httprest-clients-security.md b/deploy-manage/security/httprest-clients-security.md index 7249f64bf2..c9376cd9a3 100644 --- a/deploy-manage/security/httprest-clients-security.md +++ b/deploy-manage/security/httprest-clients-security.md @@ -71,7 +71,7 @@ es-secondary-authorization: ApiKey <1> For more information about using {{security-features}} with the language specific clients, refer to: * [Java](https://www.elastic.co/guide/en/elasticsearch/client/java-api-client/current/_basic_authentication.html) -* [JavaScript](https://www.elastic.co/guide/en/elasticsearch/client/javascript-api/current/auth-reference.html) +* [JavaScript](https://www.elastic.co/guide/en/elasticsearch/client/javascript-api/current/client-connecting.html) * [.NET](https://www.elastic.co/guide/en/elasticsearch/client/net-api/current/configuration.html) * [Perl](https://metacpan.org/pod/Search::Elasticsearch::Cxn::HTTPTiny#CONFIGURATION) * [PHP](https://www.elastic.co/guide/en/elasticsearch/client/php-api/current/connecting.html) diff --git a/deploy-manage/tools/snapshot-and-restore/cloud-enterprise.md b/deploy-manage/tools/snapshot-and-restore/cloud-enterprise.md index cc300cee0d..ddc461598f 100644 --- a/deploy-manage/tools/snapshot-and-restore/cloud-enterprise.md +++ b/deploy-manage/tools/snapshot-and-restore/cloud-enterprise.md @@ -26,7 +26,7 @@ To configure Google Cloud Storage (GCS) as a snapshot repository, you must use [ To configure Microsoft Azure Storage as a snapshot repository, refer to [Snapshotting to Azure Storage](azure-storage-repository.md). -For more details about how snapshots are used with Elasticsearch, check [Snapshot and Restore](https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-snapshots.html). 
You can also review the official documentation for these storage repository options: +For more details about how snapshots are used with Elasticsearch, check [Snapshot and Restore](https://www.elastic.co/guide/en/elasticsearch/reference/current/snapshot-restore.html). You can also review the official documentation for these storage repository options: * [Amazon S3 documentation](http://docs.aws.amazon.com/AmazonS3/latest/dev/Introduction.md) * [Microsoft Azure storage documentation](https://docs.microsoft.com/en-us/azure/storage/common/storage-quickstart-create-account) diff --git a/deploy-manage/tools/snapshot-and-restore/cloud-on-k8s.md b/deploy-manage/tools/snapshot-and-restore/cloud-on-k8s.md index d31812bcc0..7f419e912a 100644 --- a/deploy-manage/tools/snapshot-and-restore/cloud-on-k8s.md +++ b/deploy-manage/tools/snapshot-and-restore/cloud-on-k8s.md @@ -13,7 +13,7 @@ Snapshots are essential for recovering Elasticsearch indices in case of accident To set up automated snapshots for Elasticsearch on Kubernetes you have to: 1. Register the snapshot repository with the Elasticsearch API. -2. Set up a Snapshot Lifecycle Management Policy through [API](https://www.elastic.co/guide/en/elasticsearch/reference/current/snapshot-lifecycle-management-api.html) or the [Kibana UI](https://www.elastic.co/guide/en/kibana/current/snapshot-repositories.html) +2. Set up a Snapshot Lifecycle Management Policy through [API](https://www.elastic.co/guide/en/elasticsearch/reference/current/snapshot-lifecycle-management-api.html) or the [Kibana UI](https://www.elastic.co/guide/en/elasticsearch/reference/current/snapshot-restore.html) ::::{note} Support for S3, GCS and Azure repositories is bundled in Elasticsearch by default from version 8.0. On older versions of Elasticsearch, or if another snapshot repository plugin should be used, you have to [Install a snapshot repository plugin](#k8s-install-plugin). 
@@ -97,7 +97,7 @@ GCS credentials are automatically propagated into each Elasticsearch node’s ke #### Register the repository in Elasticsearch [k8s-create-repository] -1. Create the GCS snapshot repository in Elasticsearch. You can either use the [Snapshot and Restore UI](https://www.elastic.co/guide/en/kibana/current/snapshot-repositories.html) in Kibana version 7.4.0 or higher, or follow the procedure described in [Snapshot and Restore](https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-snapshots.html): +1. Create the GCS snapshot repository in Elasticsearch. You can either use the [Snapshot and Restore UI](https://www.elastic.co/guide/en/elasticsearch/reference/current/snapshot-restore.html) in Kibana version 7.4.0 or higher, or follow the procedure described in [Snapshot and Restore](https://www.elastic.co/guide/en/elasticsearch/reference/current/snapshot-restore.html): ```sh PUT /_snapshot/my_gcs_repository diff --git a/deploy-manage/users-roles/cloud-enterprise-orchestrator/saml.md b/deploy-manage/users-roles/cloud-enterprise-orchestrator/saml.md index 55977e2a83..49a435a383 100644 --- a/deploy-manage/users-roles/cloud-enterprise-orchestrator/saml.md +++ b/deploy-manage/users-roles/cloud-enterprise-orchestrator/saml.md @@ -50,7 +50,7 @@ $$$ece-saml-general-settings$$$Begin the provider profile by adding the general ## Map SAML attributes to User Properties [ece-saml-attributes] -The SAML assertion about a user usually includes attribute names and values that can be used for role mapping. The configuration in this section allows to configure a mapping between these SAML attribute values and [Elasticsearch user properties](https://www.elastic.co/guide/en/elasticsearch/reference/current/saml-guide-authentication.html#saml-user-properties). When the attributes have been mapped to user properties such as `groups`, these can then be used to configure [role mappings](#ece-saml-role-mapping). 
Mapping the `principal` user property is required and the `groups` property is recommended for a minimum configuration.
+The SAML assertion about a user usually includes attribute names and values that can be used for role mapping. The configuration in this section allows you to configure a mapping between these SAML attribute values and [Elasticsearch user properties](https://www.elastic.co/guide/en/elasticsearch/reference/current/saml-guide-stack.html#saml-elasticsearch-authentication). When the attributes have been mapped to user properties such as `groups`, these can then be used to configure [role mappings](#ece-saml-role-mapping). Mapping the `principal` user property is required and the `groups` property is recommended for a minimum configuration.

Note that some additional attention must be paid to the `principal` user property. Although the SAML specification does not have many restrictions on the type of value that is mapped, ECE requires that the mapped value is also a valid Elasticsearch native realm identifier. Specifically, this means the mapped identifier should not contain any commas or slashes, and should be otherwise URL friendly.