diff --git a/deploy-manage/toc.yml b/deploy-manage/toc.yml index 8b8b5705ae..b5ef3dcb72 100644 --- a/deploy-manage/toc.yml +++ b/deploy-manage/toc.yml @@ -812,6 +812,8 @@ toc: children: - file: upgrade/deployment-or-cluster/elasticsearch.md - file: upgrade/deployment-or-cluster/kibana.md + children: + - file: upgrade/deployment-or-cluster/kibana-roll-back.md - file: upgrade/deployment-or-cluster/archived-settings.md children: - file: upgrade/deployment-or-cluster/reading-indices-from-older-elasticsearch-versions.md diff --git a/deploy-manage/upgrade.md b/deploy-manage/upgrade.md index fb72a19bd1..ecd16907c9 100644 --- a/deploy-manage/upgrade.md +++ b/deploy-manage/upgrade.md @@ -1,96 +1,10 @@ # Upgrade -⚠️ **This page is a work in progress.** ⚠️ +Upgrading to the latest version provides access to the newest Elastic features, enhancements, performance improvements, and bug fixes. These updates reduce costs, speed up threat response, and improve investigative and analytical data tools. -Upgrading to the latest version provides you access to Elastic latest features, enhancements, performance improvements, and bug fixes, many of which enable you to save your organization money, respond faster to potential threats, and improve the tools you use to investigate and analyze your data. As new versions are released, older versions reach their end of life at a regular cadence, so it’s important to ensure that your deployment is fully maintained and supported. For more information, refer to Elastic’s [Product End of Life Dates](https://www.elastic.co/support/eol). +When Elastic releases new versions, older versions reach their end of life on a set schedule. To keep your deployment supported, stay up to date. For more information, refer to [Product End of Life Dates](https://www.elastic.co/support/eol). :::{note} -Upgrading from a release candidate build, such as 9.0.0-rc1 or 9.0.0-rc2, is not supported. Pre-releases should only be used for testing in a temporary environment. 
+Upgrading from a release candidate build, such as 9.0.0-rc1, is unsupported. Use pre-releases only for testing in a temporary environment. ::: -## Plan your upgrade [plan-upgrade] - -There are a number of things you need to plan for before performing the actual upgrade, so create a test plan. Consider the following recommendations: - -* Plan for an appropriate amount of time to complete the upgrade. Depending on your configuration and the size of your cluster, the process can take up to a few weeks or more to complete. -* Consider opening a [support case](https://support.elastic.co/) with Elastic to alert our Elastic Support team of your system change. If you need additional assistance, [Elastic Consulting Services](https://www.elastic.co/consulting) provides the technical expertise and step-by-step approach for upgrading your Elastic deployment. -* Schedule a system maintenance window within your organization. - -**Check system requirements** - -Ensure the version you’re upgrading to for {{es}}, {{kib}}, and any ingest components supports your current operating system. Refer to the [Product and Operating System support matrix](https://www.elastic.co/support/matrix#matrix_os). - -**OpenJDK compatibility and FIPS compliance** - -By default, {{es}} is built using Java and includes a bundled version of [OpenJDK](https://openjdk.java.net/) within each distribution. While we strongly recommend using the bundled Java Virtual Machine (JVM) in all installations of {{es}}, if you choose to use your own JVM, ensure it’s compatible by reviewing the [Product and JVM support matrix](https://www.elastic.co/support/matrix#matrix_jvm). {{es}} 9.0 requires Java 21 and supports Java 24. - -If you’re running {{es}} in FIPS 140-2 mode, {{es}} 9.0 has been tested with [Bouncy Castle's](https://www.bouncycastle.org/java.html) FIPS implementation and is the recommended Java security provider when running {{es}}. 
- -**Conduct a component inventory** - -It is very important to map all the components that are being used on the {{stack}}. When you upgrade your deployment, you also may need to upgrade all the other components. You should record if each component is used, and if it is, also record the current version. While not comprehensive, here’s a list of components you should check: - -* {{es}} -* {{es}} Hadoop -* {{es}} plugins -* {{es}} clients -* {{kib}} -* {{ls}} -* {{ls}} plugins -* {{beats}} -* {{beats}} modules -* {{apm-agent}} -* APM server -* {{agent}} -* {{fleet}} -* Security -* Browsers -* External services (Kafka, etc.) - -:::{tip} -When you do your inventory, you can [enable audit logging](/deploy-manage/security/logging-configuration/enabling-audit-logs.md) to evaluate resources accessing your deployment. -::: - -**Test your development environment** - -We highly recommend testing and upgrading in your development environment before your production environment. Therefore, it is crucial to ensure that both your development and production environments have the same settings. Consider checking the following components beforehand: - -* Enrichment information -* Plugins -* Mapping -* Index lifecycle management (ILM) -* Snapshot lifecycle management (SLM) -* Index templates -* {{ml-cap}} jobs -* Inbound sample data -* Live data -* Performance -* Outbound integrations -* Dashboards -* Alerts -* Authentication - -## Choose your upgrade path [choose-upgrade-path] - -The procedures you follow to upgrade depend on your infrastructure and deployment method. You’ve installed Elastic components using either Elastic-managed infrastructure or self-managed infrastructure. - -### Elastic-managed infrastructure - -Elastic-managed infrastructure includes {{ecloud}} – the umbrella term for {{ech}} (ECH) and {{serverless-full}}. {{serverless-full}} (“Serverless”) is a fully managed cloud offering with three products: {{es-serverless}}, {{obs-serverless}}, and {{sec-serverless}}. 
All serverless products are built on top of the Search AI Lake. Customers on serverless receive the latest features automatically when updates are published and do not need to choose an upgrade path. - -{{ech}} is Elastic’s cloud offering for managing {{stack}} deployments, built on top of {{es}}. A single click in the {{ecloud}} console can upgrade a deployment to a newer version. - -### Self-managed infrastructure - -Self-managed infrastructure – either on-prem or on public cloud, includes: -* {{stack}} -* {{ece}} (ECE) -* {{eck}} (ECK) - -For ECE and ECK, you must ensure the operator is running a compatible version with the {{stack}} version you’re upgrading to. If not, you need to upgrade that before you can upgrade your cluster. - -If you’re running the {{stack}} on your own self-managed infrastructure, you must upgrade each component individually. - -% Refer to the diagram below for a visualization of the different deployment methods. - - diff --git a/deploy-manage/upgrade/deployment-or-cluster.md b/deploy-manage/upgrade/deployment-or-cluster.md index dadd22636f..84a54edecc 100644 --- a/deploy-manage/upgrade/deployment-or-cluster.md +++ b/deploy-manage/upgrade/deployment-or-cluster.md @@ -1,7 +1,6 @@ --- mapped_urls: - https://www.elastic.co/guide/en/kibana/current/upgrade.html - - https://www.elastic.co/guide/en/kibana/current/upgrade-migrations-rolling-back.html - https://www.elastic.co/guide/en/elastic-stack/current/upgrading-elastic-stack.html - https://www.elastic.co/guide/en/elastic-stack/current/upgrading-elasticsearch.html - https://www.elastic.co/guide/en/elastic-stack/current/upgrading-kibana.html @@ -13,11 +12,6 @@ mapped_urls: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-upgrading-stack.html --- -# Upgrade your deployment or cluster - -% What needs to be done: Refine - -% GitHub issue: https://github.com/elastic/docs-projects/issues/270 % Use migrated content from existing pages that map to this page: @@ -57,18 +51,365 @@ 
$$$rolling-upgrades$$$ $$$upgrading-reindex$$$ -**This page is a work in progress.** The documentation team is working to combine content pulled from the following pages: - -* [/raw-migrated-files/kibana/kibana/upgrade.md](/raw-migrated-files/kibana/kibana/upgrade.md) -* [/raw-migrated-files/kibana/kibana/upgrade-migrations-rolling-back.md](/raw-migrated-files/kibana/kibana/upgrade-migrations-rolling-back.md) -* [/raw-migrated-files/stack-docs/elastic-stack/upgrading-elastic-stack.md](/raw-migrated-files/stack-docs/elastic-stack/upgrading-elastic-stack.md) -* [/raw-migrated-files/stack-docs/elastic-stack/upgrading-elasticsearch.md](/raw-migrated-files/stack-docs/elastic-stack/upgrading-elasticsearch.md) -* [/raw-migrated-files/stack-docs/elastic-stack/upgrading-kibana.md](/raw-migrated-files/stack-docs/elastic-stack/upgrading-kibana.md) -* [/raw-migrated-files/cloud/cloud-enterprise/ece-upgrade-deployment.md](/raw-migrated-files/cloud/cloud-enterprise/ece-upgrade-deployment.md) -* [/raw-migrated-files/cloud/cloud-heroku/ech-upgrade-deployment.md](/raw-migrated-files/cloud/cloud-heroku/ech-upgrade-deployment.md) -* [/raw-migrated-files/cloud/cloud/ec-upgrade-deployment.md](/raw-migrated-files/cloud/cloud/ec-upgrade-deployment.md) -* [/raw-migrated-files/stack-docs/elastic-stack/upgrade-elastic-stack-for-elastic-cloud.md](/raw-migrated-files/stack-docs/elastic-stack/upgrade-elastic-stack-for-elastic-cloud.md) -* [/raw-migrated-files/stack-docs/elastic-stack/upgrading-elastic-stack-on-prem.md](/raw-migrated-files/stack-docs/elastic-stack/upgrading-elastic-stack-on-prem.md) -* [/raw-migrated-files/cloud-on-k8s/cloud-on-k8s/k8s-upgrading-stack.md](/raw-migrated-files/cloud-on-k8s/cloud-on-k8s/k8s-upgrading-stack.md) - -% EEDUGON note: when working on this document, or in the ECK upgrade documentation we should include a link to [nodes orchestration](../deploy/cloud-on-k8s/nodes-orchestration.md) as reference to learn the details on how ECK orchestates / manages the 
upgrade of the individual instances. \ No newline at end of file +% * [/raw-migrated-files/kibana/kibana/upgrade.md](/raw-migrated-files/kibana/kibana/upgrade.md) +% * [/raw-migrated-files/kibana/kibana/upgrade-migrations-rolling-back.md](/raw-migrated-files/kibana/kibana/upgrade-migrations-rolling-back.md) +% * [/raw-migrated-files/stack-docs/elastic-stack/upgrading-elastic-stack.md](/raw-migrated-files/stack-docs/elastic-stack/upgrading-elastic-stack.md) +% * [/raw-migrated-files/stack-docs/elastic-stack/upgrading-elasticsearch.md](/raw-migrated-files/stack-docs/elastic-stack/upgrading-elasticsearch.md) +% * [/raw-migrated-files/stack-docs/elastic-stack/upgrading-kibana.md](/raw-migrated-files/stack-docs/elastic-stack/upgrading-kibana.md) +% * [/raw-migrated-files/cloud/cloud-enterprise/ece-upgrade-deployment.md](/raw-migrated-files/cloud/cloud-enterprise/ece-upgrade-deployment.md) +% * [/raw-migrated-files/cloud/cloud-heroku/ech-upgrade-deployment.md](/raw-migrated-files/cloud/cloud-heroku/ech-upgrade-deployment.md) +% * [/raw-migrated-files/cloud/cloud/ec-upgrade-deployment.md](/raw-migrated-files/cloud/cloud/ec-upgrade-deployment.md) +% * [/raw-migrated-files/stack-docs/elastic-stack/upgrade-elastic-stack-for-elastic-cloud.md](/raw-migrated-files/stack-docs/elastic-stack/upgrade-elastic-stack-for-elastic-cloud.md) +% * [/raw-migrated-files/stack-docs/elastic-stack/upgrading-elastic-stack-on-prem.md](/raw-migrated-files/stack-docs/elastic-stack/upgrading-elastic-stack-on-prem.md) +% * [/raw-migrated-files/cloud-on-k8s/cloud-on-k8s/k8s-upgrading-stack.md](/raw-migrated-files/cloud-on-k8s/cloud-on-k8s/k8s-upgrading-stack.md) +% * [/raw-migrated-files/cloud-on-k8s/cloud-on-k8s/k8s-orchestration.md](/raw-migrated-files/cloud-on-k8s/cloud-on-k8s/k8s-orchestration.md) + +# Upgrade your deployment or cluster [upgrade-deployment-cluster] + +When upgrading an existing cluster, you perform a minor or major upgrade. 
For example, a minor upgrade takes you from version 9.0.0 to 9.1.0, while a major upgrade takes you from version 8.0.0 to 9.0.0.
+
+Upgrade procedures depend on whether you installed Elastic components using Elastic-managed or self-managed infrastructure.
+
+If you’re using Elastic-managed infrastructure, use one of the following options:
+
+* [Upgrade on {{ech}}](/deploy-manage/upgrade/deployment-or-cluster/upgrade-on-ech.md)
+* Upgrade on [{{serverless-full}}](/deploy-manage/deploy/elastic-cloud/serverless.md), which Elastic performs automatically and requires no user action
+
+If you’re using self-managed infrastructure, either on-premises or on public cloud, use one of the following options:
+
+* [Upgrade the {{stack}}](/deploy-manage/upgrade/deployment-or-cluster/self-managed.md)
+* [Upgrade on {{ece}} (ECE)](/deploy-manage/upgrade/deployment-or-cluster/upgrade-on-ece.md)
+* [Upgrade on {{eck}} (ECK)](/deploy-manage/upgrade/deployment-or-cluster/upgrade-on-eck.md)
+
+## Prepare to upgrade [prepare-to-upgrade]
+
+Before you upgrade, review and complete the necessary preparation steps, which vary by version.
+
+:::{important}
+Upgrading from a release candidate build, such as 9.0.0-rc1, is unsupported. Use pre-releases only for testing in a temporary environment.
+:::
+
+## Prepare to upgrade from 8.x [prepare-upgrade-from-8.x]
+
+To upgrade from 8.17.0 or earlier to 9.0.0, you must first upgrade to the latest 8.18 patch release. This enables you to use the [Upgrade Assistant](prepare-to-upgrade/upgrade-assistant.md) to identify and resolve issues, reindex indices created before 8.0.0, and perform a rolling upgrade. Upgrading to the latest 8.18 patch release is required even if you choose a full {{es}} cluster restart. If you're using 7.x or earlier, you may need to complete multiple upgrades or perform a full-cluster restart to reach the latest 8.18 patch release before upgrading to 9.0.0.
+
+Alternatively, you can create a 9.0 deployment and reindex from remote.
For more information, refer to [Reindex to upgrade](#reindex-to-upgrade).
+
+:::{note}
+For flexible upgrade scheduling, 8.18.0 {{beats}} and {{ls}} are compatible with 9.0.0 {{es}}.
+By default, 8.x {{es}} clients are compatible with 9.0.0 and use [REST API compatibility](elasticsearch://reference/elasticsearch/rest-apis/compatibility.md) to maintain compatibility with the 9.0.0 {{es}} server.
+:::
+
+Review the best practices to upgrade your deployments.
+
+1. Run the [Upgrade Assistant](prepare-to-upgrade/upgrade-assistant.md), which identifies deprecated settings, helps resolve issues, and reindexes data streams and indices created before 8.0.0.
+
+    :::{note}
+    Depending on your setup, reindexing can change your indices, and you may need to update alerts, transforms, or other code targeting the old index.
+    :::
+
+2. Before you change configurations or reindex, ensure you have a current [snapshot](/deploy-manage/tools/snapshot-and-restore/create-snapshots.md).
+
+    :::{tip}
+    In 8.3.0 and later, snapshots are generally available as simple archives. Use the [archive functionality](/deploy-manage/upgrade/deployment-or-cluster/reading-indices-from-older-elasticsearch-versions.md) to search snapshots from 5.0.0 and later, without needing an old {{es}} cluster. This ensures that your {{es}} data remains accessible after upgrades, without requiring a reindex process.
+    :::
+
+    To successfully upgrade, resolve all critical issues. If you make additional changes, create a snapshot to back up your data.
+
+3. To identify whether your applications use unsupported features or behave differently in 9.0.0, review the deprecation logs in the Upgrade Assistant.
+
+4. Major version upgrades can include breaking changes that require additional steps to ensure your applications function as expected. Review the breaking changes for each product you use to learn more about potential impacts on your application.
Ensure you test with the new version before upgrading existing deployments.
+
+5. To ensure your clients continue to operate as expected after the upgrade, make the recommended changes.
+
+    :::{note}
+    As a temporary solution, you can use REST API compatibility mode to submit requests to 9.0.0 using the 8.x syntax. While this allows you to submit requests using the old syntax, it doesn’t guarantee the same behavior. REST API compatibility should serve as a bridge during the upgrade, not a long-term solution. For more details on how to effectively use REST API compatibility during an upgrade, refer to [REST API compatibility](elasticsearch://reference/elasticsearch/rest-apis/compatibility.md).
+    :::
+
+6. If you use {{es}} plugins, ensure each plugin is compatible with the {{es}} version you're upgrading to.
+
+7. Before upgrading your production deployment, we recommend creating a 9.0.0 test deployment and testing the upgrade in an isolated environment. Ensure the test and production environments use the same settings.
+
+    :::{important}
+    After you upgrade, you cannot downgrade {{es}} nodes. If you can't complete the upgrade process, you must [restore from the snapshot](/deploy-manage/tools/snapshot-and-restore/restore-snapshot.md).
+    :::
+
+8. If you use a separate [monitoring cluster](/deploy-manage/monitor/stack-monitoring/elasticsearch-monitoring-self-managed.md), upgrade the monitoring cluster before the production cluster. The monitoring cluster and the clusters being monitored should run the same version of the {{stack}}. Monitoring clusters cannot monitor production clusters running newer versions of the {{stack}}. If necessary, the monitoring cluster can monitor production clusters running the latest release of the previous major version.
+
+    :::{note}
+    If you use {{ccs}}, 9.0.0 and later can search only remote clusters running the previous minor version, the same version, or a newer minor version in the same major version.
For more information, refer to [Cross-cluster search](../../solutions/search/cross-cluster-search.md).
+
+    If you use {{ccr}}, a cluster that contains follower indices must run the same or newer (compatible) version as the remote cluster. For more information and to view the version compatibility matrix, refer to [Cross-cluster replication](/deploy-manage/tools/cross-cluster-replication.md). To view your remote clusters in {{kib}}, go to **Stack Management > Remote Clusters**.
+    :::
+
+9. To reduce overhead on the cluster during the upgrade, close {{ml}} jobs. Although {{ml}} jobs can run during a rolling upgrade, doing so increases the cluster workload.
+
+10. If you have `.ml-anomalies-*` anomaly detection result indices created in {{es}} 7.x, reindex, mark as read-only, or delete them before you upgrade to 9.0.0. For more information, refer to [Migrate anomaly detection results](#anomaly-migration).
+
+11. If you have transform destination indices created in {{es}} 7.x, reset, reindex, or delete them before you upgrade to 9.0.0. For more information, refer to [Migrate transform destination indices](#transform-migration).
+
+
+## Reindex to upgrade [reindex-to-upgrade]
+
+Optionally, you can create a 9.0.0 deployment and reindex from remote:
+
+1. Provision an additional deployment running 9.0.0.
+2. To reindex your data into the new {{es}} cluster, use the [reindex documents API](https://www.elastic.co/docs/api/doc/elasticsearch/v8/operation/operation-reindex) and temporarily send new index requests to both clusters.
+3. Verify the new cluster performs as expected, fix any problems, and then permanently swap in the new cluster.
+4. Delete the old deployment. On {{ecloud}}, you are billed only for the time the new deployment runs in parallel with your old deployment. Usage is billed on an hourly basis.
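+
+As a sketch of step 2 above, you can pull data from the old cluster by running a reindex-from-remote request against the new 9.0.0 cluster. The host, credentials, and index name below are hypothetical placeholders, and this assumes the old cluster's address is listed in the `reindex.remote.whitelist` setting of the new cluster:
+
+```json
+POST _reindex
+{
+  "source": {
+    "remote": {
+      "host": "https://old-cluster.example.com:9200",
+      "username": "elastic",
+      "password": "<password>"
+    },
+    "index": "my-index"
+  },
+  "dest": {
+    "index": "my-index"
+  }
+}
+```
+
+Repeat the request for each index you want to move, or use an index pattern in `source.index`.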
+
+
+## Migrate anomaly detection results [anomaly-migration]
+
+Reindex, mark as read-only, or delete the `.ml-anomalies-*` {{anomaly-detect}} result indices created in {{es}} 7.x.
+
+**Reindex**: While {{anomaly-detect}} results are being reindexed, jobs continue to run and process new data. However, you cannot delete an {{anomaly-job}} that stores results in the index until the reindexing is complete.
+
+**Mark indices as read-only**: This is useful for large indices that contain the results of one or two {{anomaly-jobs}}. If you delete these jobs later, you cannot create a new job with the same name.
+
+**Delete**: Delete jobs that are no longer needed in the {{ml-app}} app in {{kib}}. The result index is deleted when all jobs that store results in it have been deleted.
+
+:::{dropdown} Which indices require attention?
+To identify indices that require action, use the [Deprecation info API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-migration-deprecations-1):
+
+```
+GET /.ml-anomalies-*/_migration/deprecations
+```
+
+The response contains the list of critical deprecation warnings in the `index_settings` section:
+
+```json
+  "index_settings": {
+    ".ml-anomalies-shared": [
+      {
+        "level": "critical",
+        "message": "Index created before 8.0",
+        "url": "https://ela.st/es-deprecation-8-reindex",
+        "details": "This index was created with version 7.8.23 and is not compatible with 9.0. Reindex or remove the index before upgrading.",
+        "resolve_during_rolling_upgrade": false
+      }
+    ]
+  }
+```
+:::
+
+:::{dropdown} Reindexing anomaly result indices
+For an index smaller than 10 GB that contains results from multiple jobs that are still required, we recommend reindexing into the new format using the UI.
You can use the [Get index information API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cat-indices-1) to obtain the size of an index:
+
+```
+GET _cat/indices/.ml-anomalies-custom-example?v&h=index,store.size
+```
+
+You can start the reindexing from the {{kib}} Upgrade Assistant.
+
+If an index is larger than 10 GB, we recommend using the Reindex API. Reindexing consists of the following steps:
+
+1. Set the original index to read-only.
+
+   ```
+   PUT .ml-anomalies-custom-example/_block/read_only
+   ```
+
+2. Create a new index from the legacy index.
+
+   ```
+   POST _create_from/.ml-anomalies-custom-example/.reindexed-v9-ml-anomalies-custom-example
+   ```
+
+3. Reindex documents. To speed up reindexing, we recommend setting the number of replicas to `0` before reindexing and then setting it back to the original number once reindexing is complete.
+
+    1. Get the number of replicas.
+
+       ```
+       GET /.reindexed-v9-ml-anomalies-custom-example/_settings
+       ```
+
+       Note the number of replicas in the response. For example:
+
+       ```json
+       {
+         ".reindexed-v9-ml-anomalies-custom-example": {
+           "settings": {
+             "index": {
+               "number_of_replicas": "1",
+               "number_of_shards": "1"
+             }
+           }
+         }
+       }
+       ```
+
+    2. Set the number of replicas to `0`.
+
+       ```json
+       PUT /.reindexed-v9-ml-anomalies-custom-example/_settings
+       {
+         "index": {
+           "number_of_replicas": 0
+         }
+       }
+       ```
+
+    3. Start the reindexing process in asynchronous mode.
+
+       ```json
+       POST _reindex?wait_for_completion=false
+       {
+         "source": {
+           "index": ".ml-anomalies-custom-example"
+         },
+         "dest": {
+           "index": ".reindexed-v9-ml-anomalies-custom-example"
+         }
+       }
+       ```
+
+       The response contains a `task_id`. You can check when the task is completed using the following command:
+
+       ```
+       GET _tasks/
+       ```
+
+    4. Set the number of replicas to the original number when the reindexing is finished.
+ + ```json + PUT /.reindexed-v9-ml-anomalies-custom-example/_settings + { + "index": { + "number_of_replicas": "" + } + } + ``` + +4. Get the aliases the original index is pointing to. + +``` +GET .ml-anomalies-custom-example/_alias +``` + +The response may contain multiple aliases if the results of multiple jobs are stored in the same index. + +```json +{ + ".ml-anomalies-custom-example": { + "aliases": { + ".ml-anomalies-example1": { + "filter": { + "term": { + "job_id": { + "value": "example1" + } + } + }, + "is_hidden": true + }, + ".ml-anomalies-example2": { + "filter": { + "term": { + "job_id": { + "value": "example2" + } + } + }, + "is_hidden": true + } + } + } +} +``` + +5. Now you can reassign the aliases to the new index and delete the original index in one step. Note that when adding the new index to the aliases, you must use the same `filter` and `is_hidden` parameters as for the original index. + +```json +POST _aliases +{ + "actions": [ + { + "add": { + "index": ".reindexed-v9-ml-anomalies-custom-example", + "alias": ".ml-anomalies-example1", + "filter": { + "term": { + "job_id": { + "value": "example1" + } + } + }, + "is_hidden": true + } + }, + { + "add": { + "index": ".reindexed-v9-ml-anomalies-custom-example", + "alias": ".ml-anomalies-example2", + "filter": { + "term": { + "job_id": { + "value": "example2" + } + } + }, + "is_hidden": true + } + }, + { + "remove": { + "index": ".ml-anomalies-custom-example", + "aliases": ".ml-anomalies-*" + } + }, + { + "remove_index": { + "index": ".ml-anomalies-custom-example" + } + }, + { + "add": { + "index": ".reindexed-v9-ml-anomalies-custom-example", + "alias": ".ml-anomalies-custom-example", + "is_hidden": true + } + } + ] +} +``` +::: + + +:::{dropdown} Marking anomaly result indices as read-only +Legacy indices created in {{es}} 7.x can be made read-only and supported in {{es}} 9.x. 
Making an index with a large amount of historical results read-only allows for a quick migration to the next major release, since you don’t have to wait for the data to be reindexed into the new format. However, it has the limitation that even after deleting an {{anomaly-job}}, the historical results associated with this job are not completely deleted. Therefore, the system will prevent you from creating a new job with the same name.
+
+To set the index as read-only, add the write block to the index:
+
+```
+PUT .ml-anomalies-custom-example/_block/write
+```
+
+Indices created in {{es}} 7.x that have a write block will not raise a critical deprecation warning.
+:::
+
+:::{dropdown} Deleting anomaly result indices
+You can delete an index if it only contains results of jobs that are no longer required. To list all jobs that stored results in an index, use the terms aggregation:
+
+```json
+GET .ml-anomalies-custom-example/_search
+{
+  "size": 0,
+  "aggs": {
+    "job_ids": {
+      "terms": {
+        "field": "job_id",
+        "size": 100
+      }
+    }
+  }
+}
+```
+
+The jobs can be deleted in the UI. After the last job is deleted, the index is deleted as well.
+:::
+
+## Migrate transform destination indices [transform-migration]
+
+
 diff --git a/deploy-manage/upgrade/deployment-or-cluster/elasticsearch.md b/deploy-manage/upgrade/deployment-or-cluster/elasticsearch.md
index 66884d9959..72ce8208fa 100644
--- a/deploy-manage/upgrade/deployment-or-cluster/elasticsearch.md
+++ b/deploy-manage/upgrade/deployment-or-cluster/elasticsearch.md
@@ -8,7 +8,7 @@ applies_to:
 
 An {{es}} cluster can be upgraded one node at a time so upgrading does not interrupt service. Running multiple versions of {{es}} in the same cluster beyond the duration of an upgrade is not supported, as shards cannot be replicated from upgraded nodes to nodes running the older version.
 
-Before you start, [take the upgrade preparation steps](../../../deploy-manage/upgrade/prepare-to-upgrade.md).
When performing a [rolling upgrade](../../../deploy-manage/upgrade/deployment-or-cluster.md#rolling-upgrades): +Before you start, [take the upgrade preparation steps](../../../deploy-manage/upgrade/prepare-to-upgrade.md). When performing a [rolling upgrade](#rolling-upgrades): 1. Upgrade the data nodes first, tier-by-tier, starting with the frozen tier, then the cold tier, then the warm tier, then the hot tier, and finally any other data nodes which are not in a tier. Complete the upgrade for all nodes in each data tier before moving to the next. This ensures {{ilm-init}} can continue to move data through the tiers during the upgrade. You can get the list of nodes in a specific tier with a `GET /_nodes` request, for example: `GET /_nodes/data_frozen:true/_none`. 2. Upgrade all remaining nodes that are neither master-eligible nor data nodes. This includes dedicated ML nodes, dedicated ingest nodes, and dedicated coordinating nodes. diff --git a/deploy-manage/upgrade/deployment-or-cluster/kibana-roll-back.md b/deploy-manage/upgrade/deployment-or-cluster/kibana-roll-back.md new file mode 100644 index 0000000000..9e68f618b1 --- /dev/null +++ b/deploy-manage/upgrade/deployment-or-cluster/kibana-roll-back.md @@ -0,0 +1,36 @@ +--- +navigation_title: "Roll back to a previous version" +mapped_urls: + - https://www.elastic.co/guide/en/kibana/current/upgrade-migrations-rolling-back.html +--- + +# Roll back to a previous version of {{kib}} [upgrade-migrations-rolling-back] + +If you’ve followed [preparing for migration](/deploy-manage/upgrade/deployment-or-cluster/kibana.md#preventing-migration-failures) and [resolving migration failures](../../../troubleshoot/kibana/migration-failures.md), and {{kib}} is still unable to successfully upgrade, roll back {{kib}} until you identify and fix the root cause. + +::::{warning} +Before you roll back {{kib}}, ensure that the version you want to roll back to is compatible with your {{es}} cluster. 
If the version you want to roll back to is not compatible, you must also roll back {{es}}. Any changes made after an upgrade are lost when you roll back to a previous version. +:::: + + +To roll back after a failed upgrade migration, you must also roll back the {{kib}} feature state to be compatible with the previous {{kib}} version. + + +## Roll back by restoring the {{kib}} feature state from a snapshot [_roll_back_by_restoring_the_kib_feature_state_from_a_snapshot] + +1. Before proceeding, [take a snapshot](../../tools/snapshot-and-restore/create-snapshots.md) that contains the `kibana` feature state. By default, snapshots include the `kibana` feature state. +2. To make sure no {{kib}} instances are performing an upgrade migration, shut down all {{kib}} instances. +3. [Restore](../../tools/snapshot-and-restore/restore-snapshot.md) the `kibana` feature state from a snapshot taken before the failed {{kib}} upgrade. The following {{es}} request will only restore the {{kib}} feature state: + + ```console + POST _snapshot/my_repository/my_snapshot_2099.05.06/_restore + { + "indices": "-*", <1> + "feature_states": ["kibana"] + } + ``` + + 1. Exclude all indices and data streams from the restore operation to ensure that only the {{kib}} system indices included in the {{kib}} feature state are restored. + +4. Start all {{kib}} instances on the older version you want to roll back to. + diff --git a/deploy-manage/upgrade/deployment-or-cluster/kibana.md b/deploy-manage/upgrade/deployment-or-cluster/kibana.md index 9b512caabe..a5c6342959 100644 --- a/deploy-manage/upgrade/deployment-or-cluster/kibana.md +++ b/deploy-manage/upgrade/deployment-or-cluster/kibana.md @@ -20,13 +20,13 @@ For more information, refer to [Migrate saved objects](../internal-upgrade-proce When upgrading several {{kib}} instances connected to the same {{es}} cluster, ensure that all outdated instances are shut down before starting the upgrade. -Rolling upgrades are unsupported in {{kib}}. 
However, when outdated instances are shut down, you can start all upgraded instances in parallel, which allows all instances to participate in the upgrade migration in parallel.
+Rolling upgrades are unsupported in {{kib}}. However, when outdated instances are shut down, you can start all upgraded instances in parallel, which allows all instances to participate in the upgrade at the same time.
 
-For large deployments with more than 10 {{kib}} instances, and more than 10,000 saved objects, you can reduce the upgrade downtime by bringing up a single {{kib}} instance and waiting for it to complete the upgrade migration before bringing up the remaining instances.
+For large deployments with more than 10 {{kib}} instances and more than 10,000 saved objects, you can reduce the upgrade downtime by bringing up a single {{kib}} instance and waiting for it to complete the upgrade before bringing up the remaining instances.
 
-## Preparing for migration [preventing-migration-failures]
+## Preparing to upgrade [preventing-migration-failures]
 
-Before you start, ensure you [take the upgrade preparation steps](../prepare-to-upgrade.md). Then, take these extra steps to ensure you are ready for migration.
+Before you start, ensure you [take the upgrade preparation steps](../prepare-to-upgrade.md). Then, take these extra steps to ensure you are ready to upgrade.
 
 ### Ensure your {{es}} cluster is healthy [_ensure_your_es_cluster_is_healthy]
 
 A healthy cluster has:
@@ -44,7 +44,7 @@
 
 ### Ensure that all {{kib}} instances are the same [_ensure_that_all_kib_instances_are_the_same]
 
-When you perform an upgrade migration of different {{kib}} versions, the migration can fail. Ensure that all {{kib}} instances are running the same version, configuration, and plugins.
+When you upgrade {{kib}} instances that run different versions, configurations, or plugins, the upgrade can fail. Ensure that all {{kib}} instances are running the same version, configuration, and plugins.
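+
+As a quick check before restarting all instances on the new version, you can compare the version each {{kib}} instance reports through its status API. This is a sketch; the hostnames below are hypothetical placeholders:
+
+```
+GET http://kibana-host-1:5601/api/status
+GET http://kibana-host-2:5601/api/status
+```
+
+The `version.number` field in each response should be identical before you proceed with the upgrade.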
## Perform the upgrade [perform-kibana-upgrade]
diff --git a/deploy-manage/upgrade/deployment-or-cluster/self-managed.md b/deploy-manage/upgrade/deployment-or-cluster/self-managed.md
index 9dbfc4f8c0..db953d0a76 100644
--- a/deploy-manage/upgrade/deployment-or-cluster/self-managed.md
+++ b/deploy-manage/upgrade/deployment-or-cluster/self-managed.md
@@ -1 +1,14 @@
 # Upgrade Elastic on self-managed infrastructure
+
+If you've installed the {{stack}} on your own self-managed infrastructure, once you're [prepared to upgrade](/deploy-manage/upgrade/deployment-or-cluster.md#prepare-to-upgrade), you'll need to upgrade each of your Elastic components individually.
+
+It's important that you upgrade your components in this order:
+* [{{es}}](/deploy-manage/upgrade/deployment-or-cluster/elasticsearch.md)
+* [{{kib}}](/deploy-manage/upgrade/deployment-or-cluster/kibana.md)
+* [Elastic APM](../../../solutions/observability/apps/upgrade.md)
+* [Ingest components](/deploy-manage/upgrade/ingest-components.md)
+
+:::{important}
+If you're using {{ls}} and the `logstash-filter-elastic_integration` plugin to extend Elastic integrations, upgrade {{ls}} (or the `logstash-filter-elastic_integration` plugin specifically) *before* you upgrade {{kib}}.
+
+The {{es}} → {{ls}} → {{kib}} upgrade order for this specific plugin ensures the best experience with {{agent}}-managed pipelines, and embeds functionality from a version of {{es}} Ingest Node that is compatible with the plugin version (`major.minor`).
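+After upgrading {{es}} and before moving on to {{kib}}, it can be useful to confirm that the cluster is on the new version. This is a suggested check, not part of the documented procedure:
+
+```console
+GET /
+```
+
+The `version.number` field in the response should match the version you upgraded to. For a multi-node cluster, `GET _cat/nodes?v&h=name,version` lists the version of each individual node.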
diff --git a/deploy-manage/upgrade/deployment-or-cluster/upgrade-on-ech.md b/deploy-manage/upgrade/deployment-or-cluster/upgrade-on-ech.md
index 0d7d0e1d95..c0ae27b768 100644
--- a/deploy-manage/upgrade/deployment-or-cluster/upgrade-on-ech.md
+++ b/deploy-manage/upgrade/deployment-or-cluster/upgrade-on-ech.md
@@ -3,3 +3,51 @@ navigation_title: "Upgrade on {{ech}}"
 ---
 
 # Upgrade on {{ech}} (ECH)
+
+Once you are [prepared to upgrade](../../../deploy-manage/upgrade/deployment-or-cluster.md), a single click in the {{ecloud}} console can upgrade a deployment to a newer version, add more processing capacity, change plugins, and enable or disable high availability, all at the same time. During the upgrade process, {{es}}, {{kib}}, and all of your deployment components are upgraded simultaneously.
+
+Minor version upgrades, upgrades from 8.18 to 9.0, and cluster configuration changes can be performed with no downtime. {{ecloud}} only supports upgrades to released versions. Release candidate builds and master snapshots are not supported.
+
+::::{important}
+Although it’s simple to upgrade an {{ecloud}} deployment, the new version might include breaking changes that affect your application. Ensure you review breaking changes and deprecation logs, make any necessary changes, and test against the new version before upgrading your production deployment.
+::::
+
+## Perform the upgrade [perform-cloud-upgrade]
+
+Log in to your {{ecloud}} environment and find your deployment:
+
+1. Log in to the [{{ech}} console](https://cloud.elastic.co/login).
+2. On the home page, select your deployment in the {{ech}} card, or go to the **Deployments** page.
+
+    To narrow the list of deployments, filter by name or ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list.
+
+
+To upgrade a deployment:
+
+1. In the **Deployment version** section, click **Upgrade**.
+2. Select the version you're upgrading to.
+3. 
Click **Upgrade**, then **Confirm upgrade**. The new configuration takes a few minutes to create.
+
+    ::::{note}
+    If any incompatibilities are detected when you attempt to upgrade, the UI provides a link to the Upgrade Assistant, which checks for deprecated settings in your cluster and indices and helps you resolve them. If any issues would prevent a successful upgrade, the upgrade is blocked. After resolving the issues, return to the **Deployments** page and restart the upgrade.
+    ::::
+
+
+Snapshots
+:   To keep your data safe during the upgrade process, a snapshot is taken automatically before any changes are made to your cluster. After a major version upgrade is complete and a snapshot of the upgraded cluster is available, all snapshots taken with the previous major version of {{es}} are stored in the snapshot repository.
+
+
+Security realm settings
+:   During the upgrade process, you are prompted to update the security realm settings if your user settings include an `xpack.security.authc.realms` value.
+
+    If the security realms are configured in `user_settings`, you’ll be prompted to modify the settings:
+
+    1. On the **Update security realm settings** window, edit the settings.
+    2. Click **Update settings**. If the security realm settings are located in `user_settings_override`, contact support to help you upgrade.
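+For reference, realm settings in `user_settings` that trigger this prompt look similar to the following sketch (the realm names `native1` and `file1` are illustrative, not required values):
+
+```yaml
+xpack.security.authc.realms:
+  native.native1:
+    order: 0
+  file.file1:
+    order: 1
+```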
+ +### Next steps [next-steps-ech] + +Once you've successfully upgraded on {{ech}}, do the following: + +* [Upgrade Elastic APM](../../../solutions/observability/apps/upgrade.md) +* [Upgrade your ingest components](/deploy-manage/upgrade/ingest-components.md) diff --git a/deploy-manage/upgrade/orchestrator.md b/deploy-manage/upgrade/orchestrator.md index 15ba07a036..084299ea92 100644 --- a/deploy-manage/upgrade/orchestrator.md +++ b/deploy-manage/upgrade/orchestrator.md @@ -1,7 +1,14 @@ # Upgrade your orchestrator -% What needs to be done: Write from scratch +:::{note} +The topics in this section apply to customers running the {{stack}} on {{ece}} (ECE) or {{eck}} (ECK). +::: -% GitHub issue: https://github.com/elastic/docs-projects/issues/355 +Elastic provides customers with two major self-managed orchestrators to manage the {{stack}}. Before upgrading the products in the stack, ensure your orchestrator is running a compatible version. If you’re running a version of your orchestrator that’s incompatible with the {{stack}} version you’re upgrading to, upgrade the orchestrator first. -⚠️ **This page is a work in progress.** ⚠️ \ No newline at end of file +Refer to one of these topics, depending on which orchestrator you have: + +* [Upgrade ECE](/deploy-manage/upgrade/orchestrator/upgrade-cloud-enterprise.md) +* [Upgrade ECK](/deploy-manage/upgrade/orchestrator/upgrade-cloud-on-k8s.md) + +If your orchestrator is up to date, you can skip this step and proceed with [Upgrading on ECE](/deploy-manage/upgrade/deployment-or-cluster/upgrade-on-ece.md) or [Upgrading on ECK](/deploy-manage/upgrade/deployment-or-cluster/upgrade-on-eck.md). 
diff --git a/deploy-manage/upgrade/prepare-to-upgrade.md b/deploy-manage/upgrade/prepare-to-upgrade.md
index b9438452f1..27bf9733f5 100644
--- a/deploy-manage/upgrade/prepare-to-upgrade.md
+++ b/deploy-manage/upgrade/prepare-to-upgrade.md
@@ -1,258 +1,86 @@
-# Prepare to upgrade
+# Prepare to upgrade [upgrade-prep]
 
-% What needs to be done: Write from scratch
+There are a number of things you need to plan for before performing the actual upgrade, so create a test plan. Consider the following recommendations:
 
-% Scope notes: Prerequisites and requirements
+* Plan for an appropriate amount of time to complete the upgrade. Depending on your configuration and the size of your cluster, the process can take a few weeks or more to complete.
+* Consider opening a [support case](https://support.elastic.co/) with Elastic to alert the Elastic Support team to your planned system change. If you need additional assistance, [Elastic Consulting Services](https://www.elastic.co/consulting) provides the technical expertise and step-by-step approach for upgrading your Elastic deployment.
+* Schedule a system maintenance window within your organization.
 
-⚠️ **This page is a work in progress.** ⚠️
+## Check system requirements [check-system-requirements]
 
+Ensure that the versions of {{es}}, {{kib}}, and any ingest components you’re upgrading to support your current operating system. Refer to the [Product and Operating System support matrix](https://www.elastic.co/support/matrix#matrix_os).
 
-## Anomaly detection results migration
+### OpenJDK compatibility and FIPS compliance
 
-The {{anomaly-detect}} result indices `.ml-anomalies-*` created in {{es}} 7.x must be either reindexed, marked read-only, or deleted before upgrading to 9.x.
+By default, {{es}} is built using Java and includes a bundled version of [OpenJDK](https://openjdk.java.net/) within each distribution.
While we strongly recommend using the bundled Java Virtual Machine (JVM) in all installations of {{es}}, if you choose to use your own JVM, ensure it’s compatible by reviewing the [Product and JVM support matrix](https://www.elastic.co/support/matrix#matrix_jvm). {{es}} 9.0 requires Java 21 and supports Java 24.
 
-**Reindexing**: While {{anomaly-detect}} results are being reindexed, jobs continue to run and process new data. However, you cannot completely delete an {{anomaly-job}} that stores results in this index until the reindexing is complete.
+If you’re running {{es}} in FIPS 140-2 mode, note that {{es}} 9.0 has been tested with [Bouncy Castle's](https://www.bouncycastle.org/java.html) FIPS implementation, which is the recommended Java security provider for {{es}}.
 
-**Marking indices as read-only**: This is useful for large indexes that contain the results of only one or a few {{anomaly-jobs}}. If you delete these jobs later, you will not be able to create a new job with the same name.
+## Conduct a component inventory
 
-**Deleting**: Delete jobs that are no longer needed in the {{ml-app}} app in {{kib}}. The result index is deleted when all jobs that store results in it have been deleted.
+Map all the components used with your {{stack}} deployment. When you upgrade the deployment, you may also need to upgrade each of these components, so record whether each component is used and, if it is, note its current version. While not comprehensive, here’s a list of components you should check:
 
-:::{dropdown} Which indices require attention?
-To identify indices that require action, use the [Deprecation info API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-migration-deprecations-1): +* {{es}} +* {{es}} Hadoop +* {{es}} plugins +* {{es}} clients +* {{kib}} +* {{ls}} +* {{ls}} plugins +* {{beats}} +* {{beats}} modules +* {{apm-agent}} +* APM server +* {{agent}} +* {{fleet}} +* Security +* Browsers +* External services (Kafka, etc.) -``` -GET /.ml-anomalies-*/_migration/deprecations -``` - -The response contains the list of critical deprecation warnings in the `index_settings` section: - -```json - "index_settings": { - ".ml-anomalies-shared": [ - { - "level": "critical", - "message": "Index created before 8.0", - "url": "https://ela.st/es-deprecation-8-reindex", - "details": "This index was created with version 7.8.23 and is not compatible with 9.0. Reindex or remove the index before upgrading.", - "resolve_during_rolling_upgrade": false - } - ] - } -``` +:::{tip} +When you do your inventory, you can [enable audit logging](/deploy-manage/security/logging-configuration/enabling-audit-logs.md) to evaluate resources accessing your deployment. ::: -:::{dropdown} Reindexing anomaly result indices -For an index with less than 10GB that contains results from multiple jobs that are still required, we recommend reindexing into a new format using UI. You can use the [Get index information API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cat-indices-1) to obtain the size of an index: - -``` -GET _cat/indices/.ml-anomalies-custom-example?v&h=index,store.size -``` - -The reindexing can be initiated in the {{kib}} Upgrade Assistant. - -If an index size is greater than 10 GB, it is recommended to use the Reindex API. Reindexing consists of the following steps: - -1. Set the original index to read-only. - -``` -PUT .ml-anomalies-custom-example/_block/read_only -``` - -2. Create a new index from the legacy index. 
- -``` -POST _create_from/.ml-anomalies-custom-example/.reindexed-v9-ml-anomalies-custom-example -``` - -3. Reindex documents. To accelerate the reindexing process, it is recommended that the number of replicas be set to `0` before the reindexing and then set back to the original number once it is completed. - - 1. Get the number of replicas. - - ``` - GET /.reindexed-v9-ml-anomalies-custom-example/_settings - ``` +## Test your development environment - Note the number of replicas in the response. For example: +We highly recommend testing and upgrading in your development environment before your production environment. Therefore, it is crucial to ensure that both your development and production environments have the same settings. Consider checking the following components beforehand: - ```json - { - ".reindexed-v9-ml-anomalies-custom-example": { - "settings": { - "index": { - "number_of_replicas": "1", - "number_of_shards": "1" - } - } - } - } - ``` +* Enrichment information +* Plugins +* Mapping +* Index lifecycle management (ILM) +* Snapshot lifecycle management (SLM) +* Index templates +* {{ml-cap}} jobs +* Inbound sample data +* Live data +* Performance +* Outbound integrations +* Dashboards +* Alerts +* Authentication - 2. Set the number of replicas to `0.` +## Choose your upgrade path [choose-upgrade-path] - ```json - PUT /.reindexed-v9-ml-anomalies-custom-example/_settings - { - "index": { - "number_of_replicas": 0 - } - } - ``` +The procedures you follow to upgrade depend on your infrastructure and deployment method. You’ve installed Elastic components using either Elastic-managed infrastructure or self-managed infrastructure. - 3. Start the reindexing process in asynchronous mode. 
+### Elastic-managed infrastructure - ```json - POST _reindex?wait_for_completion=false - { - "source": { - "index": ".ml-anomalies-custom-example" - }, - "dest": { - "index": ".reindexed-v9-ml-anomalies-custom-example" - } - } - ``` +Elastic-managed infrastructure includes {{ecloud}} – the umbrella term for {{ech}} (ECH) and {{serverless-full}}. {{serverless-full}} (“Serverless”) is a fully managed cloud offering with three products: {{es-serverless}}, {{obs-serverless}}, and {{sec-serverless}}. All serverless products are built on top of the Search AI Lake. Customers on serverless receive the latest features automatically when updates are published and do not need to choose an upgrade path. - The response will contain a `task_id`. You can check when the task is completed using the following command: +{{ech}} is Elastic’s cloud offering for managing {{stack}} deployments, built on top of {{es}}. A single click in the {{ecloud}} console can upgrade a deployment to a newer version. - ``` - GET _tasks/ - ``` +### Self-managed infrastructure - 4. Set the number of replicas to the original number when the reindexing is finished. +Self-managed infrastructure – either on-prem or on public cloud, includes: +* {{stack}} +* {{ece}} (ECE) +* {{eck}} (ECK) - ```json - PUT /.reindexed-v9-ml-anomalies-custom-example/_settings - { - "index": { - "number_of_replicas": "" - } - } - ``` +For ECE and ECK, ensure the operator is running a version compatible with the {{stack}} version you’re upgrading to. If not, you need to upgrade that before you can upgrade your cluster. -4. Get the aliases the original index is pointing to. - -``` -GET .ml-anomalies-custom-example/_alias -``` - -The response may contain multiple aliases if the results of multiple jobs are stored in the same index. 
- -```json -{ - ".ml-anomalies-custom-example": { - "aliases": { - ".ml-anomalies-example1": { - "filter": { - "term": { - "job_id": { - "value": "example1" - } - } - }, - "is_hidden": true - }, - ".ml-anomalies-example2": { - "filter": { - "term": { - "job_id": { - "value": "example2" - } - } - }, - "is_hidden": true - } - } - } -} -``` - -5. Now you can reassign the aliases to the new index and delete the original index in one step. Note that when adding the new index to the aliases, you must use the same `filter` and `is_hidden` parameters as for the original index. - -```json -POST _aliases -{ - "actions": [ - { - "add": { - "index": ".reindexed-v9-ml-anomalies-custom-example", - "alias": ".ml-anomalies-example1", - "filter": { - "term": { - "job_id": { - "value": "example1" - } - } - }, - "is_hidden": true - } - }, - { - "add": { - "index": ".reindexed-v9-ml-anomalies-custom-example", - "alias": ".ml-anomalies-example2", - "filter": { - "term": { - "job_id": { - "value": "example2" - } - } - }, - "is_hidden": true - } - }, - { - "remove": { - "index": ".ml-anomalies-custom-example", - "aliases": ".ml-anomalies-*" - } - }, - { - "remove_index": { - "index": ".ml-anomalies-custom-example" - } - }, - { - "add": { - "index": ".reindexed-v9-ml-anomalies-custom-example", - "alias": ".ml-anomalies-custom-example", - "is_hidden": true - } - } - ] -} -``` -::: - - -:::{dropdown} Marking anomaly result indices as read-only -Legacy indices created in {{es}} 7.x can be made read-only and supported in {{es}} 9.x. Making an index with a large amount of historical results read-only allows for a quick migration to the next major release, since you don’t have to wait for the data to be reindexed into the new format. However, it has the limitation that even after deleting an {{anomaly-job}}, the historical results associated with this job are not completely deleted. Therefore, the system will prevent you from creating a new job with the same name. 
- -To set the index as read-only, add the write block to the index: - -``` -PUT .ml-anomalies-custom-example/_block/write -``` - -Indices created in {{es}} 7.x that have a write block will not raise a critical deprecation warning. -::: +If you’re running the {{stack}} on your own self-managed infrastructure, you must upgrade each component individually. -:::{dropdown} Deleting anomaly result indices -If an index contains results of the jobs that are no longer required. To list all jobs that stored results in an index, use the terms aggregation: +% Refer to the diagram below for a visualization of the different deployment methods. -```json -GET .ml-anomalies-custom-example/_search -{ - "size": 0, - "aggs": { - "job_ids": { - "terms": { - "field": "job_id", - "size": 100 - } - } - } -} -``` -The jobs can be deleted in the UI. After the last job is deleted, the index will be deleted as well. -::: \ No newline at end of file