From 73b3c5830471e8762875c28e790555b74a87ef3f Mon Sep 17 00:00:00 2001 From: Janeen Roberts Date: Wed, 5 Mar 2025 23:31:20 -0500 Subject: [PATCH 01/11] Adds Kibana and Elasticsearch upgrade instructions. --- .../deployment-or-cluster/elasticsearch.md | 197 +++++++++++++++++- .../upgrade/deployment-or-cluster/kibana.md | 70 ++++++- 2 files changed, 265 insertions(+), 2 deletions(-) diff --git a/deploy-manage/upgrade/deployment-or-cluster/elasticsearch.md b/deploy-manage/upgrade/deployment-or-cluster/elasticsearch.md index c4b296173e..f46223da66 100644 --- a/deploy-manage/upgrade/deployment-or-cluster/elasticsearch.md +++ b/deploy-manage/upgrade/deployment-or-cluster/elasticsearch.md @@ -1 +1,196 @@ -# Upgrade {{es}} \ No newline at end of file +# Upgrade {{es}} [upgrading-elasticsearch] + +An {{es}} cluster can be upgraded one node at a time so upgrading does not interrupt service. Running multiple versions of {{es}} in the same cluster beyond the duration of an upgrade is not supported, as shards cannot be replicated from upgraded nodes to nodes running the older version. + +Before you start, [take the upgrade preparation steps](../../../deploy-manage/upgrade/prepare-to-upgrade.md). When performing a [rolling upgrade](../../../deploy-manage/upgrade/deployment-or-cluster.md#rolling-upgrades): + +1. Upgrade the data nodes first, tier-by-tier, starting with the frozen tier, then the cold tier, then the warm tier, then the hot tier, and finally any other data nodes which are not in a tier. Complete the upgrade for all nodes in each data tier before moving to the next. This ensures {{ilm-init}} can continue to move data through the tiers during the upgrade. You can get the list of nodes in a specific tier with a `GET /_nodes` request, for example: `GET /_nodes/data_frozen:true/_none`. +2. Upgrade all remaining nodes that are neither master-eligible nor data nodes. This includes dedicated ML nodes, dedicated ingest nodes, and dedicated coordinating nodes. +3. 
Upgrade the master-eligible nodes last. You can retrieve a list of these nodes with `GET /_nodes/master:true/_none`.

This order ensures that all nodes can join the cluster during the upgrade. Upgraded nodes can join a cluster with an older master, but older nodes cannot always join a cluster with an upgraded master.

To upgrade a cluster:

1. **Disable shard allocation**.

    When you shut down a data node, the allocation process waits for `index.unassigned.node_left.delayed_timeout` (by default, one minute) before starting to replicate the shards on that node to other nodes in the cluster, which can involve a lot of I/O. Since the node is shortly going to be restarted, this I/O is unnecessary. You can avoid racing the clock by [disabling allocation](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/configuration-reference/cluster-level-shard-allocation-routing-settings.md#cluster-routing-allocation-enable) of replicas before shutting down [data nodes](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/configuration-reference/node-settings.md#data-node):

    ```console
    PUT _cluster/settings
    {
      "persistent": {
        "cluster.routing.allocation.enable": "primaries"
      }
    }
    ```

2. **Stop non-essential indexing and perform a flush.** (Optional)

    While you can continue indexing during the upgrade, shard recovery is much faster if you temporarily stop non-essential indexing and perform a [flush](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-flush).

    ```console
    POST /_flush
    ```

3. **Temporarily stop the tasks associated with active {{ml}} jobs and {{dfeeds}}.** (Optional)

    It is possible to leave your {{ml}} jobs running during the upgrade, but it puts increased load on the cluster. When you shut down a {{ml}} node, its jobs automatically move to another node and restore the model states.
    ::::{note}
    Any {{ml}} indices created before 8.x must be reindexed before upgrading, which you can initiate from the **Upgrade Assistant** in 8.18.
    ::::


    * Temporarily halt the tasks associated with your {{ml}} jobs and {{dfeeds}} and prevent new jobs from opening by using the [set upgrade mode API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ml-set-upgrade-mode):

        ```console
        POST _ml/set_upgrade_mode?enabled=true
        ```

        When you disable upgrade mode, the jobs resume using the last model state that was automatically saved. This option avoids the overhead of managing active jobs during the upgrade and is faster than explicitly stopping {{dfeeds}} and closing jobs.

    * [Stop all {{dfeeds}} and close all jobs](../../../explore-analyze/machine-learning/anomaly-detection/ml-ad-run-jobs.md#ml-ad-close-job). This option saves the model state at the time of closure. When you reopen the jobs after the upgrade, they use the exact same model. However, saving the latest model state takes longer than using upgrade mode, especially if you have a lot of jobs or jobs with large model states.

4. $$$upgrade-node$$$ **Shut down a single node**.

    How you shut down a single node depends on how {{es}} is currently run. For example, if you are using `systemd` or SysV `init`, run one of the commands below.

    * If you are running {{es}} with `systemd`:

        ```sh
        sudo systemctl stop elasticsearch.service
        ```

    * If you are running {{es}} with SysV `init`:

        ```sh
        sudo -i service elasticsearch stop
        ```

5. **Upgrade the node you shut down.**

    To upgrade using a [Debian](../../../deploy-manage/deploy/self-managed/install-elasticsearch-with-debian-package.md) or [RPM](../../../deploy-manage/deploy/self-managed/install-elasticsearch-with-rpm.md) package:

    * Use `rpm` or `dpkg` to install the new package. All files are installed in the appropriate location for the operating system and {{es}} config files are not overwritten.
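    In practice, the package-based upgrade amounts to installing the new package over the old one. The commands below are a sketch; the package file names are hypothetical placeholders for the version you are upgrading to:

    ```sh
    # RPM-based distributions: upgrade the installed package in place
    sudo rpm --upgrade elasticsearch-9.0.0-x86_64.rpm

    # Debian-based distributions: install the new package over the old one
    sudo dpkg -i elasticsearch-9.0.0-amd64.deb
    ```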
+ + To upgrade using a zip or compressed tarball: + + 1. Extract the zip or tarball to a **new** directory. This is critical if you are not using external `config` and `data` directories. + 2. Set the `ES_PATH_CONF` environment variable to specify the location of your external `config` directory and `jvm.options` file. If you are not using an external `config` directory, copy your old configuration over to the new installation. + 3. Set `path.data` in `config/elasticsearch.yml` to point to your external data directory. If you are not using an external `data` directory, copy your old data directory over to the new installation.
+ + ::::{important} + If you use {{monitor-features}}, re-use the data directory when you upgrade {{es}}. Monitoring identifies unique {{es}} nodes by using the persistent UUID, which is stored in the data directory. + :::: + + 4. Set `path.logs` in `config/elasticsearch.yml` to point to the location where you want to store your logs. If you do not specify this setting, logs are stored in the directory you extracted the archive to. + + ::::{tip} + When you extract the zip or tarball packages, the `elasticsearch-{{bare_version}}` directory contains the {{es}} `config`, `data`, and `logs` directories. + + We recommend moving these directories out of the {{es}} directory so that there is no chance of deleting them when you upgrade {{es}}. To specify the new locations, use the `ES_PATH_CONF` environment variable and the `path.data` and `path.logs` settings. For more information, see [Important {{es}} configuration](../../../deploy-manage/deploy/self-managed/important-settings-configuration.md). + + The Debian and RPM packages place these directories in the appropriate place for each operating system. In production, we recommend using the deb or rpm package. + + :::: + + + $$$rolling-upgrades-bootstrapping$$$ + Leave `cluster.initial_master_nodes` unset when performing a rolling upgrade. Each upgraded node is joining an existing cluster so there is no need for [cluster bootstrapping](../../../deploy-manage/distributed-architecture/discovery-cluster-formation/modules-discovery-bootstrap-cluster.md). You must configure [either `discovery.seed_hosts` or `discovery.seed_providers`](../../../deploy-manage/deploy/self-managed/important-settings-configuration.md#discovery-settings) on every node. + +6. **Upgrade any plugins.** + + Use the `elasticsearch-plugin` script to install the upgraded version of each installed {{es}} plugin. All plugins must be upgraded when you upgrade a node. + +7. 
**Start the upgraded node.** + + Start the newly-upgraded node and confirm that it joins the cluster by checking the log file or by submitting a `_cat/nodes` request: + + ```console + GET _cat/nodes + ``` + +8. **Reenable shard allocation.** + + For data nodes, once the node has joined the cluster, remove the `cluster.routing.allocation.enable` setting to enable shard allocation and start using the node: + + ```console + PUT _cluster/settings + { + "persistent": { + "cluster.routing.allocation.enable": null + } + } + ``` + +9. **Wait for the node to recover.** + + Before upgrading the next node, wait for the cluster to finish shard allocation. You can check progress by submitting a `_cat/health` request: + + ```console + GET _cat/health?v=true + ``` + + Wait for the `status` column to switch to `green`. Once the node is `green`, all primary and replica shards have been allocated. + + ::::{important} + During a rolling upgrade, primary shards assigned to a node running the new version cannot have their replicas assigned to a node with the old version. The new version might have a different data format that is not understood by the old version. + + If it is not possible to assign the replica shards to another node (there is only one upgraded node in the cluster), the replica shards remain unassigned and status stays `yellow`. + + In this case, you can proceed once there are no initializing or relocating shards (check the `init` and `relo` columns). + + As soon as another node is upgraded, the replicas can be assigned and the status will change to `green`. + + :::: + + + Shards that were not flushed might take longer to recover. You can monitor the recovery status of individual shards by submitting a `_cat/recovery` request: + + ```console + GET _cat/recovery + ``` + + If you stopped indexing, it is safe to resume indexing as soon as recovery completes. + +10. **Repeat**. 
+ + When the node has recovered and the cluster is stable, repeat these steps for each node that needs to be updated. You can monitor the health of the cluster with a `_cat/health` request: + + ```console + GET /_cat/health?v=true + ``` + + And check which nodes have been upgraded with a `_cat/nodes` request: + + ```console + GET /_cat/nodes?h=ip,name,version&v=true + ``` + +11. **Restart machine learning jobs.** + + If you temporarily halted the tasks associated with your {{ml}} jobs, use the set upgrade mode API to return them to active states: + + ```console + POST _ml/set_upgrade_mode?enabled=false + ``` + + If you closed all {{ml}} jobs before the upgrade, open the jobs and start the datafeeds from {{kib}} or with the [open jobs](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ml-open-job) and [start datafeed](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ml-start-datafeed) APIs. + + + +## Rolling upgrades [rolling-upgrades] + +During a rolling upgrade, the cluster continues to operate normally. However, any new functionality is disabled or operates in a backward compatible mode until all nodes in the cluster are upgraded. New functionality becomes operational once the upgrade is complete and all nodes are running the new version. Once that has happened, there’s no way to return to operating in a backward compatible mode. Nodes running the previous version will not be allowed to join the fully-updated cluster. + +In the unlikely case of a network malfunction during the upgrade process that isolates all remaining old nodes from the cluster, you must take the old nodes offline and upgrade them to enable them to join the cluster. + +If you stop half or more of the master-eligible nodes all at once during the upgrade the cluster will become unavailable. You must upgrade and restart all of the stopped master-eligible nodes to allow the cluster to re-form. 
It might also be necessary to upgrade all other nodes running the old version to enable them to join the re-formed cluster.

Similarly, if you run a testing/development environment with a single master node, it should be upgraded last. Restarting a single master node forces the cluster to be re-formed. The new cluster will initially only have the upgraded master node and will thus reject the older nodes when they re-join the cluster. Nodes that have already been upgraded will successfully re-join the upgraded master.


## Archived settings [archived-settings]

If you upgrade an {{es}} cluster that uses deprecated cluster or index settings that are not used in the target version, they are archived. We recommend you remove any archived settings after upgrading. For more information, see [Archived settings](../../../deploy-manage/upgrade/deployment-or-cluster/archived-settings.md).
diff --git a/deploy-manage/upgrade/deployment-or-cluster/kibana.md b/deploy-manage/upgrade/deployment-or-cluster/kibana.md
index 94bffee473..d0e54b9ed0 100644
--- a/deploy-manage/upgrade/deployment-or-cluster/kibana.md
+++ b/deploy-manage/upgrade/deployment-or-cluster/kibana.md
-# Upgrade {{kib}}
\ No newline at end of file
+# Upgrade {{kib}} [upgrade-kibana]

When you upgrade {{kib}}, you also upgrade the {{observability}} and {{elastic-sec}} solutions, which use {{kib}} as their main interface.

::::{warning}
{{kib}} automatically runs saved object migrations when required. To roll back to an earlier version in case of an upgrade failure, you **must** have a [backup snapshot](../../tools/snapshot-and-restore.md) that includes the `kibana` feature state. Snapshots include this feature state by default.

For more information, refer to [Migrate saved objects](../internal-upgrade-processes/saved-object-migrations.md).
+ +:::: + + +## Upgrading multiple {{kib}} instances [_upgrading_multiple_kib_instances] + +When upgrading several {{kib}} instances connected to the same {{es}} cluster, ensure that all outdated instances are shut down before starting the upgrade. + +Rolling upgrades are unsupported in {{kib}}. However, when outdated instances are shut down, you can start all upgraded instances in parallel, which allows all instances to participate in the upgrade migration in parallel. + +For large deployments with more than 10 {{kib}} instances, and more than 10,000 saved objects, you can reduce the upgrade downtime by bringing up a single {{kib}} instance and waiting for it to complete the upgrade migration before bringing up the remaining instances. + +## Preparing for migration [preventing-migration-failures] + +Before you start, ensure you [take the upgrade preparation steps](../prepare-to-upgrade.md). Then, take these extra steps to ensure you are ready for migration. + + +### Ensure your {{es}} cluster is healthy [_ensure_your_es_cluster_is_healthy] + +Problems with your {{es}} cluster can prevent {{kib}} upgrades from succeeding. + +During the upgrade process, {{kib}} creates new indices into which updated documents are written. If a cluster is approaching the low watermark, there’s a high risk of {{kib}} not being able to create these. Reading, transforming and writing updated documents can be memory intensive, using more available heap than during routine operation. You must make sure that enough heap is available to prevent requests from timing out or throwing errors from circuit breaker exceptions. You should also ensure that all shards are replicated and assigned. 
+ +A healthy cluster has: + +* Enough free disk space, at least twice the amount of storage taken up by the `.kibana` and `.kibana_task_manager` indices +* Sufficient heap size +* A "green" cluster status + + +### Ensure that all {{kib}} instances are the same [_ensure_that_all_kib_instances_are_the_same] + +When you perform an upgrade migration of different {{kib}} versions, the migration can fail. Ensure that all {{kib}} instances are running the same version, configuration, and plugins. + +## Perform the upgrade [perform-kibana-upgrade] + +To upgrade {{kib}}: + +1. Shut down all {{kib}} instances. {{kib}} does not support rolling upgrades. **Upgrading while older {{kib}} instances are running can cause data loss or upgrade failures.** +2. To install the `deb` or `rpm` package: + + a. Use `rpm` or `dpkg`. This installs all files in their proper locations and will not overwrite the config files. + b. Upgrade any plugins by removing the existing plugin and reinstalling the appropriate version using the `kibana-plugin` script. For more information, see [{{kib}} plugins](asciidocalypse://docs/kibana/docs/reference/kibana-plugins.md). + +3. To install from a `zip` or `tar.gz` archive: + + a. **Extract the archive to a new directory** to be sure that you don’t overwrite the `config` or `data` directories. + b. Copy the files from the `config` directory from your old installation to your new installation. + c. Copy the files from the `data` directory from your old installation to your new installation. + + ::::{important} + If you use {{monitor-features}}, you must re-use the data directory when you upgrade {{kib}}. Otherwise, the {{kib}} instance is assigned a new persistent UUID and becomes a new instance in the monitoring data. + :::: + + d. Install the appropriate versions of all your plugins for your new installation using the `kibana-plugin` script. For more information, see [{{kib}} plugins](asciidocalypse://docs/kibana/docs/reference/kibana-plugins.md). + +4. 
Start {{kib}}. + +::::{note} +{{kib}} 8.0.0 and later uses a new logging system, so the log formats have changed. For additional information, see [Logging configuration changes](asciidocalypse://docs/kibana/docs/extend/logging-config-changes.md). +:::: From 38417fdc8d78783bc171bd81d3cf99a4e30f948d Mon Sep 17 00:00:00 2001 From: Janeen Roberts Date: Mon, 10 Mar 2025 22:53:38 -0400 Subject: [PATCH 02/11] Incorporates feedback --- .../upgrade/deployment-or-cluster/elasticsearch.md | 10 ++++++++-- .../upgrade/deployment-or-cluster/kibana.md | 12 +++++++++--- 2 files changed, 17 insertions(+), 5 deletions(-) diff --git a/deploy-manage/upgrade/deployment-or-cluster/elasticsearch.md b/deploy-manage/upgrade/deployment-or-cluster/elasticsearch.md index f46223da66..66884d9959 100644 --- a/deploy-manage/upgrade/deployment-or-cluster/elasticsearch.md +++ b/deploy-manage/upgrade/deployment-or-cluster/elasticsearch.md @@ -1,3 +1,9 @@ +--- +applies_to: + deployment: + self: all +--- + # Upgrade {{es}} [upgrading-elasticsearch] An {{es}} cluster can be upgraded one node at a time so upgrading does not interrupt service. Running multiple versions of {{es}} in the same cluster beyond the duration of an upgrade is not supported, as shards cannot be replicated from upgraded nodes to nodes running the older version. @@ -14,7 +20,7 @@ To upgrade a cluster: 1. **Disable shard allocation**. - When you shut down a data node, the allocation process waits for `index.unassigned.node_left.delayed_timeout` (by default, one minute) before starting to replicate the shards on that node to other nodes in the cluster, which can involve a lot of I/O. Since the node is shortly going to be restarted, this I/O is unnecessary. 
You can avoid racing the clock by [disabling allocation](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/configuration-reference/cluster-level-shard-allocation-routing-settings.md#cluster-routing-allocation-enable) of replicas before shutting down [data nodes](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/configuration-reference/node-settings.md#data-node): + When you shut down a data node, the allocation process waits for `index.unassigned.node_left.delayed_timeout` (by default, one minute) before starting to replicate the shards on that node to other nodes in the cluster, which can involve a lot of I/O. Since the node is shortly going to be restarted, this I/O is unnecessary. You can avoid racing the clock by [disabling allocation](elasticsearch://reference/elasticsearch/configuration-reference/cluster-level-shard-allocation-routing-settings.md#cluster-routing-allocation-enable) of replicas before shutting down [data nodes](elasticsearch://reference/elasticsearch/configuration-reference/node-settings.md#data-node): ```console PUT _cluster/settings @@ -38,7 +44,7 @@ To upgrade a cluster: It is possible to leave your {{ml}} jobs running during the upgrade, but it puts increased load on the cluster. When you shut down a {{ml}} node, its jobs automatically move to another node and restore the model states. ::::{note} - Any {{ml}} indices created before 8.x must be reindexed before upgrading, which you can initiate from the **Upgrade Assistant** in 8.18. + Any {{ml}} indices created before 8.x must be reindexed before upgrading, which you can initiate from the **Upgrade Assistant** in 8.18. 
For more information, refer to [Anomaly detection results migration] :::: diff --git a/deploy-manage/upgrade/deployment-or-cluster/kibana.md b/deploy-manage/upgrade/deployment-or-cluster/kibana.md index d0e54b9ed0..9b512caabe 100644 --- a/deploy-manage/upgrade/deployment-or-cluster/kibana.md +++ b/deploy-manage/upgrade/deployment-or-cluster/kibana.md @@ -1,3 +1,9 @@ +--- +applies_to: + deployment: + self: all +--- + # Upgrade {{kib}} [upgrade-kibana] When you upgrade {{kib}}, you also upgrade the {{observability}} and {{elastic-sec}} solutions, which use {{kib}} as their main interface. @@ -48,7 +54,7 @@ To upgrade {{kib}}: 2. To install the `deb` or `rpm` package: a. Use `rpm` or `dpkg`. This installs all files in their proper locations and will not overwrite the config files. - b. Upgrade any plugins by removing the existing plugin and reinstalling the appropriate version using the `kibana-plugin` script. For more information, see [{{kib}} plugins](asciidocalypse://docs/kibana/docs/reference/kibana-plugins.md). + b. Upgrade any plugins by removing the existing plugin and reinstalling the appropriate version using the `kibana-plugin` script. For more information, see [{{kib}} plugins](kibana://reference/kibana-plugins.md). 3. To install from a `zip` or `tar.gz` archive: @@ -60,10 +66,10 @@ To upgrade {{kib}}: If you use {{monitor-features}}, you must re-use the data directory when you upgrade {{kib}}. Otherwise, the {{kib}} instance is assigned a new persistent UUID and becomes a new instance in the monitoring data. :::: - d. Install the appropriate versions of all your plugins for your new installation using the `kibana-plugin` script. For more information, see [{{kib}} plugins](asciidocalypse://docs/kibana/docs/reference/kibana-plugins.md). + d. Install the appropriate versions of all your plugins for your new installation using the `kibana-plugin` script. For more information, see [{{kib}} plugins](kibana://reference/kibana-plugins.md). 4. Start {{kib}}. 
::::{note} -{{kib}} 8.0.0 and later uses a new logging system, so the log formats have changed. For additional information, see [Logging configuration changes](asciidocalypse://docs/kibana/docs/extend/logging-config-changes.md). +{{kib}} 8.0.0 and later uses a new logging system, so the log formats have changed. For additional information, see [Logging configuration changes](kibana://extend/logging-config-changes.md). :::: From f3f5292bbeb35bd39d4d480475f988103fa52f5a Mon Sep 17 00:00:00 2001 From: Janeen Roberts Date: Sat, 15 Mar 2025 23:04:59 -0400 Subject: [PATCH 03/11] Update prepare-to-upgrade.md --- deploy-manage/upgrade/prepare-to-upgrade.md | 70 +++++++++++++++++++-- 1 file changed, 64 insertions(+), 6 deletions(-) diff --git a/deploy-manage/upgrade/prepare-to-upgrade.md b/deploy-manage/upgrade/prepare-to-upgrade.md index b9438452f1..9212ac4fa6 100644 --- a/deploy-manage/upgrade/prepare-to-upgrade.md +++ b/deploy-manage/upgrade/prepare-to-upgrade.md @@ -1,13 +1,69 @@ -# Prepare to upgrade +# Prepare to upgrade [prepare-to-upgrade] -% What needs to be done: Write from scratch +⚠️ **This page is a work in progress.** ⚠️ -% Scope notes: Prerequisites and requirements +Before you upgrade Elastic, it's important to take some preparation steps. These steps vary based on your current version. -⚠️ **This page is a work in progress.** ⚠️ +## Prepare to upgrade from 8.x [prepare-upgrade-from-8.x] + +To upgrade to 9.0 from 8.17 or earlier, you must first upgrade to the latest patch version of 8.18. This enables you to use the [Upgrade Assistant](prepare-to-upgrade/upgrade-assistant.md) to identify and resolve issues, reindex indices created before 8.0, and then perform a rolling upgrade. Upgrading to 8.18 before upgrading to 9.x is required even if you opt to do a full-cluster restart of your {{es}} cluster. Alternatively, you can create a new 9.0 deployment and reindex from remote. For more information, refer to Reindex to upgrade. 
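You can confirm the exact version each node in your cluster is running before and during the upgrade, for example:

```console
GET _nodes?filter_path=nodes.*.name,nodes.*.version
```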
:::{note}
{{beats}} and {{ls}} 8.18 are compatible with {{es}} 9.x to give you flexibility in scheduling the upgrade. {{es}} 8.x clients are also compatible with 9.x and use [REST API compatibility](elasticsearch://reference/elasticsearch/rest-apis/compatibility.md) by default to help ensure compatibility between 8.x clients and the 9.x {{es}} server.
:::

With the exception of serverless, the following recommendations are best practices for all deployment methods.

1. Run the [Upgrade Assistant](prepare-to-upgrade/upgrade-assistant.md) to prepare for your upgrade from 8.18 to 9.0. The Upgrade Assistant identifies deprecated settings and guides you through resolving issues, including reindexing data streams and indices created before 8.0.

    :::{note}
    Depending on your setup, if your indices change due to reindexing, you might need to update alerts, transforms, or other code that targeted the old index.
    :::

2. Ensure you have a current [snapshot](/deploy-manage/tools/snapshot-and-restore/create-snapshots.md) before making configuration changes or reindexing.

    :::{tip}
    From version 8.3, snapshots are generally available as simple archives. Use the [archive functionality](/deploy-manage/upgrade/deployment-or-cluster/reading-indices-from-older-elasticsearch-versions.md) to search snapshots as old as version 5.0 without the need for an old {{es}} cluster. This ensures that data you store in {{es}} doesn't have an end of life and is still accessible when you upgrade, without requiring a reindex process.
    :::

    You must resolve all critical issues before proceeding with the upgrade. If you make any additional changes, take a new snapshot to back up your data.

3. Review the deprecation logs from the Upgrade Assistant to determine if your applications are using features that are not supported or behave differently in 9.x.

4.
Major version upgrades can include breaking changes that require you to take additional steps to ensure that your applications behave as expected after the upgrade. Review the breaking changes for each product you use to understand the changes that could affect your application. Make sure you test against the new version before upgrading existing deployments.

5. Make the recommended changes to ensure that your clients continue to operate as expected after the upgrade.

    :::{note}
    As a temporary solution, you can submit requests to 9.x using the 8.x syntax with the REST API compatibility mode. While this enables you to submit requests that use the old syntax, it does not guarantee the same behavior. REST API compatibility should be a bridge to smooth out the upgrade process, not a long-term strategy. For more information about how to best leverage REST API compatibility during an upgrade, refer to [REST API compatibility](elasticsearch://reference/elasticsearch/rest-apis/compatibility.md).
    :::

6. If you use any {{es}} plugins, make sure there is a version of each plugin that is compatible with the {{es}} version you're upgrading to.

7. We recommend creating a 9.0 test deployment and testing the upgrade in an isolated environment before upgrading your production deployment. Ensure that both your test and production environments have the same settings.

    :::{important}
    You cannot downgrade {{es}} nodes after upgrading. If you cannot complete the upgrade process, you will need to [restore from the snapshot](/deploy-manage/tools/snapshot-and-restore/restore-snapshot.md).
    :::

8. If you use a separate [monitoring cluster](/deploy-manage/monitor/stack-monitoring/elasticsearch-monitoring-self-managed.md), you should upgrade the monitoring cluster before the production cluster. In general, the monitoring cluster and the clusters being monitored should be running the same version of the {{stack}}.
A monitoring cluster cannot monitor production clusters running newer versions of the {{stack}}. If necessary, the monitoring cluster can monitor production clusters running the latest release of the previous major version. + + :::{note} + If you use {{ccs}}, note that 9.0+ can only search remote clusters running the previous minor version, the same version, or a newer minor version in the same major version. For more information, refer to [Cross-cluster search](../../solutions/search/cross-cluster-search.md). + + If you use {{ccr}}, a cluster that contains follower indices must run the same or newer (compatible) version as the remote cluster. For more information and to view the version compatibility matrix, refer to [Cross cluster replication](/deploy-manage/tools/cross-cluster-replication.md). You can view your remote clusters from **Stack Management > Remote Clusters**. + :::: + +9. If you have any anomaly detection result indices `.ml-anomalies-*` that were created in {{es}} 7.x, they must be reindexed, marked as read-only, or deleted before upgrading to 9.x. For instructions on how to do this, refer to [Anomaly detection results migration](#anomaly-detection-results-migration-anomaly-migration). + +10. If you have any transform destination indices that were created in {{es}} 7.x, they must be reset, reindexed, or deleted before upgrading to 9.x. For instructions on how to do this, refer to [Transform destination indices migration](#transform-destination-indices-migration-transform-migration). + + +## Reindex to upgrade + + +## Anomaly detection results migration [anomaly-migration] The {{anomaly-detect}} result indices `.ml-anomalies-*` created in {{es}} 7.x must be either reindexed, marked read-only, or deleted before upgrading to 9.x. @@ -255,4 +311,6 @@ GET .ml-anomalies-custom-example/_search ``` The jobs can be deleted in the UI. After the last job is deleted, the index will be deleted as well. 
-::: \ No newline at end of file +::: + +## Transform destination indices migration [transform-migration] \ No newline at end of file From 824b5201ffad6407426dd6b5989d744b680ee2ab Mon Sep 17 00:00:00 2001 From: Janeen Roberts Date: Sun, 16 Mar 2025 14:44:52 -0400 Subject: [PATCH 04/11] Saving changes. --- deploy-manage/upgrade.md | 8 ++--- .../deployment-or-cluster/kibana-roll-back.md | 30 +++++++++++++++++++ deploy-manage/upgrade/orchestrator.md | 13 ++++++-- deploy-manage/upgrade/prepare-to-upgrade.md | 22 +++++++++----- 4 files changed, 58 insertions(+), 15 deletions(-) create mode 100644 deploy-manage/upgrade/deployment-or-cluster/kibana-roll-back.md diff --git a/deploy-manage/upgrade.md b/deploy-manage/upgrade.md index c871b26ecd..54874131f5 100644 --- a/deploy-manage/upgrade.md +++ b/deploy-manage/upgrade.md @@ -1,7 +1,5 @@ # Upgrade -⚠️ **This page is a work in progress.** ⚠️ - Upgrading to the latest version provides you access to Elastic latest features, enhancements, performance improvements, and bug fixes, many of which enable you to save your organization money, respond faster to potential threats, and improve the tools you use to investigate and analyze your data. As new versions are released, older versions reach their end of life at a regular cadence, so it’s important to ensure that your deployment is fully maintained and supported. For more information, refer to Elastic’s [Product End of Life Dates](https://www.elastic.co/support/eol). :::{note} @@ -16,7 +14,7 @@ There are a number of things you need to plan for before performing the actual u * Consider opening a [support case](https://support.elastic.co/) with Elastic to alert our Elastic Support team of your system change. If you need additional assistance, [Elastic Consulting Services](https://www.elastic.co/consulting) provides the technical expertise and step-by-step approach for upgrading your Elastic deployment. * Schedule a system maintenance window within your organization. 
-**Check system requirements**
+## Check system requirements [check-system-requirements]

 Ensure the version you’re upgrading to for {{es}}, {{kib}}, and any ingest components supports your current operating system. Refer to the [Product and Operating System support matrix](https://www.elastic.co/support/matrix#matrix_os).

@@ -26,7 +24,7 @@ By default, {{es}} is built using Java and includes a bundled version of [OpenJD

 If you’re running {{es}} in FIPS 140-2 mode, {{es}} 9.0 has been tested with [Bouncy Castle's](https://www.bouncycastle.org/java.html) FIPS implementation, which is the recommended Java security provider when running {{es}}.

-**Conduct a component inventory**
+## Conduct a component inventory

 It is very important to map all the components being used in the {{stack}}. When you upgrade your deployment, you may also need to upgrade all the other components. You should record whether each component is used, and if it is, also record the current version. While not comprehensive, here’s a list of components you should check:

 * {{es}}
 * {{es}} Hadoop
 * {{es}} plugins
 * {{es}} clients
 * {{kib}}
 * {{ls}}
 * {{ls}} plugins
 * {{beats}}
 * {{beats}} modules
 * {{apm-agent}}
 * APM server
 * {{agent}}
 * {{fleet}}
 * Security
 * Browsers
 * External services (Kafka, etc.)

 :::{tip}
 When you do your inventory, you can [enable audit logging](/deploy-manage/monitor/logging-configuration/enabling-audit-logs.md) to evaluate resources accessing your deployment.
 :::

-**Test your development environment**
+## Test your development environment

 We highly recommend testing and upgrading in your development environment before your production environment. It is crucial to ensure that both your development and production environments have the same settings.
Consider checking the following components beforehand:

diff --git a/deploy-manage/upgrade/deployment-or-cluster/kibana-roll-back.md b/deploy-manage/upgrade/deployment-or-cluster/kibana-roll-back.md
new file mode 100644
index 0000000000..de6d616ef0
--- /dev/null
+++ b/deploy-manage/upgrade/deployment-or-cluster/kibana-roll-back.md
@@ -0,0 +1,30 @@
+# Roll back to a previous version of {{kib}} [upgrade-migrations-rolling-back]
+
+If you’ve followed [preparing for migration](/deploy-manage/upgrade/deployment-or-cluster/kibana.md#preventing-migration-failures) and [resolving migration failures](../../../troubleshoot/kibana/migration-failures.md), and {{kib}} is still unable to successfully upgrade, roll back {{kib}} until you’re able to identify and fix the root cause.
+
+::::{warning}
+Before you roll back {{kib}}, ensure that the version you want to roll back to is compatible with your {{es}} cluster. If the version you want to roll back to is not compatible, you must also roll back {{es}}. Any changes made after an upgrade are lost when you roll back to a previous version.
+::::
+
+
+To roll back after a failed upgrade migration, you must also roll back the {{kib}} feature state to be compatible with the previous {{kib}} version.
+
+
+## Roll back by restoring the {{kib}} feature state from a snapshot [_roll_back_by_restoring_the_kib_feature_state_from_a_snapshot]
+
+1. Before proceeding, [take a snapshot](../../tools/snapshot-and-restore/create-snapshots.md) that contains the `kibana` feature state. By default, snapshots include the `kibana` feature state.
+2. To make sure no {{kib}} instances are performing an upgrade migration, shut down all {{kib}} instances.
+3. [Restore](../../tools/snapshot-and-restore/restore-snapshot.md) the `kibana` feature state from a snapshot taken before the failed {{kib}} upgrade.
The following {{es}} request restores only the {{kib}} feature state:
+
+    ```console
+    POST _snapshot/my_repository/my_snapshot_2099.05.06/_restore
+    {
+      "indices": "-*", <1>
+      "feature_states": ["kibana"]
+    }
+    ```
+
+    1. Exclude all indices and data streams from the restore operation to ensure that only the {{kib}} system indices included in the {{kib}} feature state will be restored.
+
+4. Start all {{kib}} instances on the older version you want to roll back to.
+
diff --git a/deploy-manage/upgrade/orchestrator.md b/deploy-manage/upgrade/orchestrator.md
index 15ba07a036..4f64de500b 100644
--- a/deploy-manage/upgrade/orchestrator.md
+++ b/deploy-manage/upgrade/orchestrator.md
@@ -1,7 +1,14 @@
 # Upgrade your orchestrator

-% What needs to be done: Write from scratch
+:::{note}
+The topics in this section apply to customers running the {{stack}} on {{ece}} (ECE) or {{eck}} (ECK).
+:::

-% GitHub issue: https://github.com/elastic/docs-projects/issues/355
+Elastic provides customers with two major self-managed orchestrators to manage the Elastic Stack. Before you can upgrade the products in the stack, you need to ensure your orchestrator is running a compatible version. If you’re running a version of your orchestrator that’s incompatible with the Elastic Stack version you’re upgrading to, you’ll need to upgrade the orchestrator first.

-⚠️ **This page is a work in progress.** ⚠️ \ No newline at end of file
+Refer to one of these topics, depending on which orchestrator you have:
+
+* [Upgrade ECE](/deploy-manage/upgrade/orchestrator/upgrade-cloud-enterprise.md)
+* [Upgrade ECK](/deploy-manage/upgrade/orchestrator/upgrade-cloud-on-k8s.md)
+
+If your orchestrator is up to date, you can skip this step and proceed with [Upgrading on ECE](/deploy-manage/upgrade/deployment-or-cluster/upgrade-on-ece.md) or [Upgrading on ECK](/deploy-manage/upgrade/deployment-or-cluster/upgrade-on-eck.md).
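For reference, the {{kib}} rollback procedure above depends on having a snapshot that contains the `kibana` feature state. A sketch of the kind of request that creates one is shown below; the repository and snapshot names are illustrative and must match your own snapshot repository setup. By default a snapshot includes all feature states, so listing `feature_states` here only makes the `kibana` requirement explicit:

```console
PUT _snapshot/my_repository/pre_upgrade_snapshot?wait_for_completion=true
{
  "feature_states": ["kibana"] <1>
}
```

1. Explicitly include the `kibana` feature state so the system indices needed for a later rollback are captured.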
diff --git a/deploy-manage/upgrade/prepare-to-upgrade.md b/deploy-manage/upgrade/prepare-to-upgrade.md
index 9212ac4fa6..8cc343830e 100644
--- a/deploy-manage/upgrade/prepare-to-upgrade.md
+++ b/deploy-manage/upgrade/prepare-to-upgrade.md
@@ -1,13 +1,13 @@
 # Prepare to upgrade [prepare-to-upgrade]

-⚠️ **This page is a work in progress.** ⚠️
-
 Before you upgrade Elastic, it's important to take some preparation steps. These steps vary based on your current version.

 ## Prepare to upgrade from 8.x [prepare-upgrade-from-8.x]

-To upgrade to 9.0 from 8.17 or earlier, you must first upgrade to the latest patch version of 8.18. This enables you to use the [Upgrade Assistant](prepare-to-upgrade/upgrade-assistant.md) to identify and resolve issues, reindex indices created before 8.0, and then perform a rolling upgrade. Upgrading to 8.18 before upgrading to 9.x is required even if you opt to do a full-cluster restart of your {{es}} cluster. Alternatively, you can create a new 9.0 deployment and reindex from remote. For more information, refer to Reindex to upgrade.
+To upgrade to 9.0 from 8.17 or earlier, you must first upgrade to the latest patch version of 8.18. This enables you to use the [Upgrade Assistant](prepare-to-upgrade/upgrade-assistant.md) to identify and resolve issues, reindex indices created before 8.0, and then perform a rolling upgrade. Upgrading to 8.18 before upgrading to 9.x is required even if you opt to do a full-cluster restart of your {{es}} cluster. If you're running a pre-8.x version, you might need to perform multiple upgrades or a full-cluster restart to get to 8.18 to prepare to upgrade to 9.0.
+
+Alternatively, you can create a new 9.0 deployment and reindex from remote. For more information, refer to [Reindex to upgrade](#reindex-to-upgrade).

 :::{note}
 {{beats}} and {{ls}} 8.18 are compatible with {{es}} 9.x to give you flexibility in scheduling the upgrade.
{{es}} 8.x clients are also compatible with 9.x and use [REST API compatibility](elasticsearch://reference/elasticsearch/rest-apis/compatibility.md) by default to help ensure compatibility between 8.x clients and the 9.x {{es}} server.
@@ -18,7 +18,7 @@ With the exception of serverless, the following recommendations are best practic
 1. Run the [Upgrade Assistant](prepare-to-upgrade/upgrade-assistant.md) to prepare for your upgrade from 8.18 to 9.0. The Upgrade Assistant identifies deprecated settings and guides you through resolving issues and reindexing data streams and indices created before 8.0.

    :::{note}
-   Please be aware that depending on your setup, if your indices change due to reindexing, you might need to change alerts, transforms or other code that was targeting the old index.
+   Please be aware that depending on your setup, if your indices change due to reindexing, you might need to change alerts, transforms, or other code that was targeting the old index.
    :::

 2. Ensure you have a current [snapshot](/deploy-manage/tools/snapshot-and-restore/create-snapshots.md) before making configuration changes or reindexing.
@@ -55,12 +55,20 @@ With the exception of serverless, the following recommendations are best practic
    If you use {{ccr}}, a cluster that contains follower indices must run the same or newer (compatible) version as the remote cluster. For more information and to view the version compatibility matrix, refer to [Cross cluster replication](/deploy-manage/tools/cross-cluster-replication.md). You can view your remote clusters from **Stack Management > Remote Clusters**.
    :::

-9. If you have any anomaly detection result indices `.ml-anomalies-*` that were created in {{es}} 7.x, they must be reindexed, marked as read-only, or deleted before upgrading to 9.x. For instructions on how to do this, refer to [Anomaly detection results migration](#anomaly-detection-results-migration-anomaly-migration).
+9.
If you have any anomaly detection result indices `.ml-anomalies-*` that were created in {{es}} 7.x, they must be reindexed, marked as read-only, or deleted before upgrading to 9.x. For instructions on how to do this, refer to [Anomaly detection results migration](#anomaly-migration).
+
+
+10. If you have any transform destination indices that were created in {{es}} 7.x, they must be reset, reindexed, or deleted before upgrading to 9.x. For instructions on how to do this, refer to [Transform destination indices migration](#transform-migration).
+

-10. If you have any transform destination indices that were created in {{es}} 7.x, they must be reset, reindexed, or deleted before upgrading to 9.x. For instructions on how to do this, refer to [Transform destination indices migration](#transform-destination-indices-migration-transform-migration).

+## Reindex to upgrade [reindex-to-upgrade]

+To create a new 9.0 deployment and reindex from remote:

-## Reindex to upgrade
+1. Provision an additional deployment running 9.0.
+2. Reindex your data into the new {{es}} cluster using the [reindex documents API](https://www.elastic.co/docs/api/doc/elasticsearch/v8/operation/operation-reindex) and temporarily send new index requests to both clusters.
+3. Verify that the new cluster performs as expected, fix any problems, and then permanently swap in the new cluster.
+4. Delete the old deployment. On {{ecloud}}, you are billed only for the time that the new deployment runs in parallel with your old deployment. Usage is billed on an hourly basis.

 ## Anomaly detection results migration [anomaly-migration]

From accb7081732b01887752d5704bbcafbe094e547c Mon Sep 17 00:00:00 2001
From: Janeen Roberts
Date: Sun, 16 Mar 2025 19:13:02 -0400
Subject: [PATCH 05/11] Saving latest changes.
--- deploy-manage/upgrade.md | 86 ----- .../upgrade/deployment-or-cluster.md | 352 ++++++++++++++++- .../deployment-or-cluster/upgrade-on-ech.md | 48 +++ deploy-manage/upgrade/prepare-to-upgrade.md | 356 +++--------------- 4 files changed, 453 insertions(+), 389 deletions(-) diff --git a/deploy-manage/upgrade.md b/deploy-manage/upgrade.md index 54874131f5..71ab10997e 100644 --- a/deploy-manage/upgrade.md +++ b/deploy-manage/upgrade.md @@ -6,89 +6,3 @@ Upgrading to the latest version provides you access to Elastic latest features, Upgrading from a release candidate build, such as 9.0.0-rc1 or 9.0.0-rc2, is not supported. Pre-releases should only be used for testing in a temporary environment. ::: -## Plan your upgrade [plan-upgrade] - -There are a number of things you need to plan for before performing the actual upgrade, so create a test plan. Consider the following recommendations: - -* Plan for an appropriate amount of time to complete the upgrade. Depending on your configuration and the size of your cluster, the process can take up to a few weeks or more to complete. -* Consider opening a [support case](https://support.elastic.co/) with Elastic to alert our Elastic Support team of your system change. If you need additional assistance, [Elastic Consulting Services](https://www.elastic.co/consulting) provides the technical expertise and step-by-step approach for upgrading your Elastic deployment. -* Schedule a system maintenance window within your organization. - -## Check system requirements [check-system-requirements] - -Ensure the version you’re upgrading to for {{es}}, {{kib}}, and any ingest components supports your current operating system. Refer to the [Product and Operating System support matrix](https://www.elastic.co/support/matrix#matrix_os). - -**OpenJDK compatibility and FIPS compliance** - -By default, {{es}} is built using Java and includes a bundled version of [OpenJDK](https://openjdk.java.net/) within each distribution. 
While we strongly recommend using the bundled Java Virtual Machine (JVM) in all installations of {{es}}, if you choose to use your own JVM, ensure it’s compatible by reviewing the [Product and JVM support matrix](https://www.elastic.co/support/matrix#matrix_jvm). {{es}} 9.0 requires Java 21 and supports Java 24. - -If you’re running {{es}} in FIPS 140-2 mode, {{es}} 9.0 has been tested with [Bouncy Castle's](https://www.bouncycastle.org/java.html) FIPS implementation and is the recommended Java security provider when running {{es}}. - -## Conduct a component inventory - -It is very important to map all the components that are being used on the {{stack}}. When you upgrade your deployment, you also may need to upgrade all the other components. You should record if each component is used, and if it is, also record the current version. While not comprehensive, here’s a list of components you should check: - -* {{es}} -* {{es}} Hadoop -* {{es}} plugins -* {{es}} clients -* {{kib}} -* {{ls}} -* {{ls}} plugins -* {{beats}} -* {{beats}} modules -* {{apm-agent}} -* APM server -* {{agent}} -* {{fleet}} -* Security -* Browsers -* External services (Kafka, etc.) - -:::{tip} -When you do your inventory, you can [enable audit logging](/deploy-manage/monitor/logging-configuration/enabling-audit-logs.md) to evaluate resources accessing your deployment. -::: - -## Test your development environment - -We highly recommend testing and upgrading in your development environment before your production environment. Therefore, it is crucial to ensure that both your development and production environments have the same settings. 
Consider checking the following components beforehand: - -* Enrichment information -* Plugins -* Mapping -* Index lifecycle management (ILM) -* Snapshot lifecycle management (SLM) -* Index templates -* {{ml-cap}} jobs -* Inbound sample data -* Live data -* Performance -* Outbound integrations -* Dashboards -* Alerts -* Authentication - -## Choose your upgrade path [choose-upgrade-path] - -The procedures you follow to upgrade depend on your infrastructure and deployment method. You’ve installed Elastic components using either Elastic-managed infrastructure or self-managed infrastructure. - -### Elastic-managed infrastructure - -Elastic-managed infrastructure includes {{ecloud}} – the umbrella term for {{ech}} (ECH) and {{serverless-full}}. {{serverless-full}} (“Serverless”) is a fully managed cloud offering with three products: {{es-serverless}}, {{obs-serverless}}, and {{sec-serverless}}. All serverless products are built on top of the Search AI Lake. Customers on serverless receive the latest features automatically when updates are published and do not need to choose an upgrade path. - -{{ech}} is Elastic’s cloud offering for managing {{stack}} deployments, built on top of {{es}}. A single click in the {{ecloud}} console can upgrade a deployment to a newer version. - -### Self-managed infrastructure - -Self-managed infrastructure – either on-prem or on public cloud, includes: -* {{stack}} -* {{ece}} (ECE) -* {{eck}} (ECK) - -For ECE and ECK, you must ensure the operator is running a compatible version with the {{stack}} version you’re upgrading to. If not, you need to upgrade that before you can upgrade your cluster. - -If you’re running the {{stack}} on your own self-managed infrastructure, you must upgrade each component individually. - -% Refer to the diagram below for a visualization of the different deployment methods. 
- - diff --git a/deploy-manage/upgrade/deployment-or-cluster.md b/deploy-manage/upgrade/deployment-or-cluster.md index 15b90fd74a..8678555ebe 100644 --- a/deploy-manage/upgrade/deployment-or-cluster.md +++ b/deploy-manage/upgrade/deployment-or-cluster.md @@ -14,11 +14,6 @@ mapped_urls: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-orchestration.html --- -# Upgrade your deployment or cluster - -% What needs to be done: Refine - -% GitHub issue: https://github.com/elastic/docs-projects/issues/270 % Use migrated content from existing pages that map to this page: @@ -72,4 +67,349 @@ $$$upgrading-reindex$$$ * [/raw-migrated-files/stack-docs/elastic-stack/upgrade-elastic-stack-for-elastic-cloud.md](/raw-migrated-files/stack-docs/elastic-stack/upgrade-elastic-stack-for-elastic-cloud.md) * [/raw-migrated-files/stack-docs/elastic-stack/upgrading-elastic-stack-on-prem.md](/raw-migrated-files/stack-docs/elastic-stack/upgrading-elastic-stack-on-prem.md) * [/raw-migrated-files/cloud-on-k8s/cloud-on-k8s/k8s-upgrading-stack.md](/raw-migrated-files/cloud-on-k8s/cloud-on-k8s/k8s-upgrading-stack.md) -* [/raw-migrated-files/cloud-on-k8s/cloud-on-k8s/k8s-orchestration.md](/raw-migrated-files/cloud-on-k8s/cloud-on-k8s/k8s-orchestration.md) \ No newline at end of file +* [/raw-migrated-files/cloud-on-k8s/cloud-on-k8s/k8s-orchestration.md](/raw-migrated-files/cloud-on-k8s/cloud-on-k8s/k8s-orchestration.md) + +# Upgrade your deployment or cluster [upgrade-deployment-cluster] + +When upgrading the version of an existing cluster, you perform either a minor or major upgrade. The difference is that a minor upgrade takes you from version 9.0 to 9.1, for example, while a major upgrade takes you from version 8 to 9. + +The procedures you follow to upgrade depend on whether you’ve installed Elastic components using Elastic-managed infrastructure or self-managed infrastructure. 
+ +If you’re running Elastic-managed infrastructure, your options are to: + +* [Upgrade on {{ech}}](/deploy-manage/upgrade/deployment-or-cluster/upgrade-on-ech.md) +* Upgrade on {{serverless-full}} (updates are automatic and require no user management) + +If you’re running your own self-managed infrastructure — either on-prem or on public cloud — your options are to: + +* [Upgrade the {{stack}}](/deploy-manage/upgrade/deployment-or-cluster/self-managed.md) (upgrade each component individually) +* [Upgrade on {{ece}} (ECE)](/deploy-manage/upgrade/deployment-or-cluster/upgrade-on-ece.md) +* [Upgrade on {{eck}} (ECK)](/deploy-manage/upgrade/deployment-or-cluster/upgrade-on-eck.md) + +## Prepare to upgrade [prepare-to-upgrade] + +Before you upgrade Elastic, it's important to take some preparation steps. These steps vary based on your current version. + +:::{important} +Upgrading from a release candidate build, such as 9.0.0-rc1 or 9.0.0-rc2, is not supported. Pre-releases should only be used for testing in a temporary environment. +::: + +## Prepare to upgrade from 8.x [prepare-upgrade-from-8.x] + +To upgrade to 9.0 from 8.17 or earlier, you must first upgrade to the latest patch version of 8.18. This enables you to use the [Upgrade Assistant](prepare-to-upgrade/upgrade-assistant.md) to identify and resolve issues, reindex indices created before 8.0, and then perform a rolling upgrade. Upgrading to 8.18 before upgrading to 9.x is required even if you opt to do a full-cluster restart of your {{es}} cluster. If you're running a pre-8.x version, you might need to perform multiple upgrades or a full-cluster restart to get to 8.18 to prepare to upgrade to 9.0. + +Alternatively, you can create a new 9.0 deployment and reindex from remote. For more information, refer to [Reindex to upgrade](#reindex-to-upgrade). + +:::{note} +{{beats}} and {{ls}} 8.18 are compatible with {{es}} 9.x to give you flexibility in scheduling the upgrade. 
{{es}} 8.x clients are also compatible with 9.x and use [REST API compatibility](elasticsearch://reference/elasticsearch/rest-apis/compatibility.md) by default to help ensure compatibility between 8.x clients and the 9.x {{es}} server.
+:::
+
+With the exception of serverless, the following recommendations are best practices for all deployment methods.
+
+1. Run the [Upgrade Assistant](prepare-to-upgrade/upgrade-assistant.md) to prepare for your upgrade from 8.18 to 9.0. The Upgrade Assistant identifies deprecated settings and guides you through resolving issues and reindexing data streams and indices created before 8.0.
+
+   :::{note}
+   Please be aware that depending on your setup, if your indices change due to reindexing, you might need to change alerts, transforms, or other code that was targeting the old index.
+   :::
+
+2. Ensure you have a current [snapshot](/deploy-manage/tools/snapshot-and-restore/create-snapshots.md) before making configuration changes or reindexing.
+
+   :::{tip}
+   From version 8.3, snapshots are generally available as simple archives. Use the [archive functionality](/deploy-manage/upgrade/deployment-or-cluster/reading-indices-from-older-elasticsearch-versions.md) to search snapshots as old as version 5.0 without needing an old {{es}} cluster. This ensures that data you store in {{es}} doesn’t have an end of life and is still accessible when you upgrade, without requiring a reindex process.
+   :::
+
+   You must resolve all critical issues before proceeding with the upgrade. If you make any additional changes, take a new snapshot to back up your data.
+
+3. Review the deprecation logs from the Upgrade Assistant to determine if your applications are using features that are not supported or behave differently in 9.x.
+
+4. Major version upgrades can include breaking changes that require you to take additional steps to ensure that your applications behave as expected after the upgrade.
Review the breaking changes for each product you use to learn more about changes that could affect your application. Make sure you test against the new version before upgrading existing deployments.
+
+5. Make the recommended changes to ensure that your clients continue to operate as expected after the upgrade.
+
+   :::{note}
+   As a temporary solution, you can submit requests to 9.x using the 8.x syntax with the REST API compatibility mode. While this enables you to submit requests that use the old syntax, it does not guarantee the same behavior. REST API compatibility should be a bridge to smooth out the upgrade process, not a long term strategy. For more information about how to best leverage REST API compatibility during an upgrade, refer to [REST API compatibility](elasticsearch://reference/elasticsearch/rest-apis/compatibility.md).
+   :::
+
+6. If you use any {{es}} plugins, make sure there is a version of each plugin that is compatible with the {{es}} version you're upgrading to.
+
+7. We recommend creating a 9.0 test deployment and testing the upgrade in an isolated environment before upgrading your production deployment. Ensure that both your test and production environments have the same settings.
+
+   :::{important}
+   You cannot downgrade {{es}} nodes after upgrading. If you cannot complete the upgrade process, you will need to [restore from the snapshot](/deploy-manage/tools/snapshot-and-restore/restore-snapshot.md).
+   :::
+
+8. If you use a separate [monitoring cluster](/deploy-manage/monitor/stack-monitoring/elasticsearch-monitoring-self-managed.md), you should upgrade the monitoring cluster before the production cluster. In general, the monitoring cluster and the clusters being monitored should be running the same version of the {{stack}}. A monitoring cluster cannot monitor production clusters running newer versions of the {{stack}}.
If necessary, the monitoring cluster can monitor production clusters running the latest release of the previous major version.
+
+   :::{note}
+   If you use {{ccs}}, note that 9.0+ can only search remote clusters running the previous minor version, the same version, or a newer minor version in the same major version. For more information, refer to [Cross-cluster search](../../solutions/search/cross-cluster-search.md).
+
+   If you use {{ccr}}, a cluster that contains follower indices must run the same or newer (compatible) version as the remote cluster. For more information and to view the version compatibility matrix, refer to [Cross cluster replication](/deploy-manage/tools/cross-cluster-replication.md). You can view your remote clusters from **Stack Management > Remote Clusters**.
+   :::

+9. If you have any anomaly detection result indices `.ml-anomalies-*` that were created in {{es}} 7.x, they must be reindexed, marked as read-only, or deleted before upgrading to 9.x. To learn how to do this, refer to [Anomaly detection results migration](#anomaly-migration).
+
+
+10. If you have any transform destination indices that were created in {{es}} 7.x, they must be reset, reindexed, or deleted before upgrading to 9.x. To learn how to do this, refer to [Transform destination indices migration](#transform-migration).
+
+
+## Reindex to upgrade (optional) [reindex-to-upgrade]
+
+To create a new 9.0 deployment and reindex from remote:
+
+1. Provision an additional deployment running 9.0.
+2. Reindex your data into the new {{es}} cluster using the [reindex documents API](https://www.elastic.co/docs/api/doc/elasticsearch/v8/operation/operation-reindex) and temporarily send new index requests to both clusters.
+3. Verify that the new cluster performs as expected, fix any problems, and then permanently swap in the new cluster.
+4. Delete the old deployment.
On {{ecloud}}, you are billed only for the time that the new deployment runs in parallel with your old deployment. Usage is billed on an hourly basis.
+
+
+## Anomaly detection results migration [anomaly-migration]
+
+The {{anomaly-detect}} result indices `.ml-anomalies-*` created in {{es}} 7.x must be either reindexed, marked read-only, or deleted before upgrading to 9.x.
+
+**Reindexing**: While {{anomaly-detect}} results are being reindexed, jobs continue to run and process new data. However, you cannot completely delete an {{anomaly-job}} that stores results in this index until the reindexing is complete.
+
+**Marking indices as read-only**: This is useful for large indices that contain the results of only one or a few {{anomaly-jobs}}. If you delete these jobs later, you will not be able to create a new job with the same name.
+
+**Deleting**: Delete jobs that are no longer needed in the {{ml-app}} app in {{kib}}. The result index is deleted when all jobs that store results in it have been deleted.
+
+:::{dropdown} Which indices require attention?
+To identify indices that require action, use the [Deprecation info API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-migration-deprecations-1):
+
+```
+GET /.ml-anomalies-*/_migration/deprecations
+```
+
+The response contains the list of critical deprecation warnings in the `index_settings` section:
+
+```json
+  "index_settings": {
+    ".ml-anomalies-shared": [
+      {
+        "level": "critical",
+        "message": "Index created before 8.0",
+        "url": "https://ela.st/es-deprecation-8-reindex",
+        "details": "This index was created with version 7.8.23 and is not compatible with 9.0. Reindex or remove the index before upgrading.",
+        "resolve_during_rolling_upgrade": false
+      }
+    ]
+  }
+```
+:::
+
+:::{dropdown} Reindexing anomaly result indices
+For an index smaller than 10 GB that contains results from multiple jobs that are still required, we recommend reindexing into a new format using the UI.
You can use the [Get index information API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cat-indices-1) to obtain the size of an index:
+
+```
+GET _cat/indices/.ml-anomalies-custom-example?v&h=index,store.size
+```
+
+The reindexing can be initiated in the {{kib}} Upgrade Assistant.
+
+If an index is larger than 10 GB, we recommend using the reindex API. Reindexing consists of the following steps:
+
+1. Set the original index to read-only.
+
+```
+PUT .ml-anomalies-custom-example/_block/read_only
+```
+
+2. Create a new index from the legacy index.
+
+```
+POST _create_from/.ml-anomalies-custom-example/.reindexed-v9-ml-anomalies-custom-example
+```
+
+3. Reindex documents. To accelerate the reindexing process, we recommend setting the number of replicas to `0` before the reindexing and then setting it back to the original number once it is completed.
+
+    1. Get the number of replicas.
+
+       ```
+       GET /.reindexed-v9-ml-anomalies-custom-example/_settings
+       ```
+
+       Note the number of replicas in the response. For example:
+
+       ```json
+       {
+         ".reindexed-v9-ml-anomalies-custom-example": {
+           "settings": {
+             "index": {
+               "number_of_replicas": "1",
+               "number_of_shards": "1"
+             }
+           }
+         }
+       }
+       ```
+
+    2. Set the number of replicas to `0`.
+
+       ```json
+       PUT /.reindexed-v9-ml-anomalies-custom-example/_settings
+       {
+         "index": {
+           "number_of_replicas": 0
+         }
+       }
+       ```
+
+    3. Start the reindexing process in asynchronous mode.
+
+       ```json
+       POST _reindex?wait_for_completion=false
+       {
+         "source": {
+           "index": ".ml-anomalies-custom-example"
+         },
+         "dest": {
+           "index": ".reindexed-v9-ml-anomalies-custom-example"
+         }
+       }
+       ```
+
+       The response will contain a `task_id`. You can check when the task is completed using the following command:
+
+       ```
+       GET _tasks/
+       ```
+
+    4. Set the number of replicas to the original number when the reindexing is finished.
+ + ```json + PUT /.reindexed-v9-ml-anomalies-custom-example/_settings + { + "index": { + "number_of_replicas": "" + } + } + ``` + +4. Get the aliases the original index is pointing to. + +``` +GET .ml-anomalies-custom-example/_alias +``` + +The response may contain multiple aliases if the results of multiple jobs are stored in the same index. + +```json +{ + ".ml-anomalies-custom-example": { + "aliases": { + ".ml-anomalies-example1": { + "filter": { + "term": { + "job_id": { + "value": "example1" + } + } + }, + "is_hidden": true + }, + ".ml-anomalies-example2": { + "filter": { + "term": { + "job_id": { + "value": "example2" + } + } + }, + "is_hidden": true + } + } + } +} +``` + +5. Now you can reassign the aliases to the new index and delete the original index in one step. Note that when adding the new index to the aliases, you must use the same `filter` and `is_hidden` parameters as for the original index. + +```json +POST _aliases +{ + "actions": [ + { + "add": { + "index": ".reindexed-v9-ml-anomalies-custom-example", + "alias": ".ml-anomalies-example1", + "filter": { + "term": { + "job_id": { + "value": "example1" + } + } + }, + "is_hidden": true + } + }, + { + "add": { + "index": ".reindexed-v9-ml-anomalies-custom-example", + "alias": ".ml-anomalies-example2", + "filter": { + "term": { + "job_id": { + "value": "example2" + } + } + }, + "is_hidden": true + } + }, + { + "remove": { + "index": ".ml-anomalies-custom-example", + "aliases": ".ml-anomalies-*" + } + }, + { + "remove_index": { + "index": ".ml-anomalies-custom-example" + } + }, + { + "add": { + "index": ".reindexed-v9-ml-anomalies-custom-example", + "alias": ".ml-anomalies-custom-example", + "is_hidden": true + } + } + ] +} +``` +::: + + +:::{dropdown} Marking anomaly result indices as read-only +Legacy indices created in {{es}} 7.x can be made read-only and supported in {{es}} 9.x. 
Making an index with a large amount of historical results read-only allows for a quick migration to the next major release, since you don’t have to wait for the data to be reindexed into the new format. However, it has the limitation that even after deleting an {{anomaly-job}}, the historical results associated with this job are not completely deleted. Therefore, the system will prevent you from creating a new job with the same name.
+
+To set the index as read-only, add the write block to the index:
+
+```
+PUT .ml-anomalies-custom-example/_block/write
+```
+
+Indices created in {{es}} 7.x that have a write block will not raise a critical deprecation warning.
+:::
+
+:::{dropdown} Deleting anomaly result indices
+An index can be deleted if it only contains results of jobs that are no longer required. To list all jobs that stored results in an index, use the terms aggregation:
+
+```json
+GET .ml-anomalies-custom-example/_search
+{
+  "size": 0,
+  "aggs": {
+    "job_ids": {
+      "terms": {
+        "field": "job_id",
+        "size": 100
+      }
+    }
+  }
+}
+```
+
+The jobs can be deleted in the UI. After the last job is deleted, the index will be deleted as well.
+:::
+
+## Transform destination indices migration [transform-migration] \ No newline at end of file
diff --git a/deploy-manage/upgrade/deployment-or-cluster/upgrade-on-ech.md b/deploy-manage/upgrade/deployment-or-cluster/upgrade-on-ech.md
index 0d7d0e1d95..26fa68b891 100644
--- a/deploy-manage/upgrade/deployment-or-cluster/upgrade-on-ech.md
+++ b/deploy-manage/upgrade/deployment-or-cluster/upgrade-on-ech.md
@@ -3,3 +3,51 @@ navigation_title: "Upgrade on {{ech}}"
 ---

 # Upgrade on {{ech}} (ECH)
+
+Once you are [prepared to upgrade](../../../deploy-manage/upgrade/deployment-or-cluster.md), a single click in the {{ecloud}} console can upgrade a deployment to a newer version, add more processing capacity, change plugins, and enable or disable high availability, all at the same time.
During the upgrade process, {{es}}, {{kib}}, and all of your deployment components are upgraded simultaneously. + +Minor version upgrades, upgrades from 8.18 to 9.x, and cluster configuration changes can be performed with no downtime. {{ecloud}} only supports upgrades to released versions. Release candidate builds and master snapshots are not supported. + +::::{important} +Although it’s simple to upgrade an {{ecloud}} deployment, the new version might include breaking changes that affect your application. Ensure you review breaking changes and deprecation logs, make any necessary changes, and test against the new version before upgrading your production deployment. +:::: + +## Perform the upgrade [perform-cloud-upgrade] + +Log in to your {{ecloud}} environment: + +1. Log in to the [{{ech}} console](https://cloud.elastic.co/login). +2. Select your deployment on the home page in the {{ech}} card or go to the **Deployments** page. + + Narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. + + +To upgrade a deployment: + +1. In the **Deployment version** section, click **Upgrade**. +2. Select the appropriate version you're upgrading to. +3. Click **Upgrade**, then **Confirm upgrade**. The new configuration takes a few minutes to create. + + ::::{note} + If any incompatibilities are detected when you attempt to upgrade, the UI provides a link to the Upgrade Assistant, which checks for deprecated settings in your cluster and indices and helps you resolve them. If there are any issues that would prevent a successful upgrade, the upgrade is blocked. After resolving the issues, return to the **Deployments** page and restart the upgrade. + :::: + + +Snapshots +: To keep your data safe during the upgrade process, a snapshot is taken automatically before any changes are made to your cluster. 
After a major version upgrade is complete and a snapshot of the upgraded cluster is available, all snapshots taken with the previous major version of {{es}} are stored in the snapshot repository. + + +Security realm settings +: During the upgrade process, you are prompted to update the security realm settings if your user settings include a `xpack.security.authc.realms` value. + + If the security realms are configured in `user_settings`, you’ll be prompted to modify the settings: + + 1. On the **Update security realm settings** window, edit the settings. + 2. Click **Update settings**. If the security realm settings are located in `user_settings_override`, contact support to help you upgrade. + +### Next steps [next-steps-ech] + +Once you've successfully upgraded on {{ech}}, do the following: + +* [Upgrade Elastic APM](../../../solutions/observability/apps/upgrade.md) +* [Upgrade your ingest components](/deploy-manage/upgrade/ingest-components.md) diff --git a/deploy-manage/upgrade/prepare-to-upgrade.md b/deploy-manage/upgrade/prepare-to-upgrade.md index 8cc343830e..2a8272ef8c 100644 --- a/deploy-manage/upgrade/prepare-to-upgrade.md +++ b/deploy-manage/upgrade/prepare-to-upgrade.md @@ -1,324 +1,86 @@ -# Prepare to upgrade [prepare-to-upgrade] +# Prepare to upgrade [upgrade-prep] -Before you upgrade Elastic, it's important to take some preparation steps. These steps vary based on your current version. +There are a number of things you need to plan for before performing the actual upgrade, so create a test plan. Consider the following recommendations: +* Plan for an appropriate amount of time to complete the upgrade. Depending on your configuration and the size of your cluster, the process can take up to a few weeks or more to complete. +* Consider opening a [support case](https://support.elastic.co/) with Elastic to alert our Elastic Support team of your system change. 
If you need additional assistance, [Elastic Consulting Services](https://www.elastic.co/consulting) provides the technical expertise and step-by-step approach for upgrading your Elastic deployment. +* Schedule a system maintenance window within your organization. -## Prepare to upgrade from 8.x [prepare-upgrade-from-8.x] +## Check system requirements [check-system-requirements] -To upgrade to 9.0 from 8.17 or earlier, you must first upgrade to the latest patch version of 8.18. This enables you to use the [Upgrade Assistant](prepare-to-upgrade/upgrade-assistant.md) to identify and resolve issues, reindex indices created before 8.0, and then perform a rolling upgrade. Upgrading to 8.18 before upgrading to 9.x is required even if you opt to do a full-cluster restart of your {{es}} cluster. If you're running a pre-8.x version, you might need to perform multiple upgrades or a full-cluster restart to get to 8.18 to prepare to upgrade to 9.0. +Ensure the version you’re upgrading to for {{es}}, {{kib}}, and any ingest components supports your current operating system. Refer to the [Product and Operating System support matrix](https://www.elastic.co/support/matrix#matrix_os). -Alternatively, you can create a new 9.0 deployment and reindex from remote. For more information, refer to [Reindex to upgrade](#reindex-to-upgrade-reindex-to-upgrade). +**OpenJDK compatibility and FIPS compliance** -:::{note} -{{beats}} and {{ls}} 8.18 are compatible with {{es}} 9.x to give you flexibility in scheduling the upgrade. {{es}} 8.x clients are also compatible with 9.x and use [REST API compatibility](elasticsearch://reference/elasticsearch/rest-apis/compatibility.md) by default to help ensure compatibility between 8.x clients and the 9.x {{es}} server. -::: - -With the exception of serverless, the following recommendations are best practices for all deployment methods. - -1. Run the [Upgrade Assistant](prepare-to-upgrade/upgrade-assistant.md) to prepare for your upgrade from 8.18 to 9.0. 
The Upgrade Assistant identifies deprecated settings, and guides you through resolving issues, and reindexing data streams and indices created before 8.0. - - :::{note} - Please be aware that depending on your setup, if your indices change due to reindexing, you might need to change alerts, transforms, or other code that was targeting the old index. - ::: - -2. Ensure you have a current [snapshot](/deploy-manage/tools/snapshot-and-restore/create-snapshots.md) before making configuration changes or reindexing. - - :::{tip} - Tip: From version 8.3, snapshots are generally available as simple archives. Use the [archive functionality](/deploy-manage/upgrade/deployment-or-cluster/reading-indices-from-older-elasticsearch-versions.md) to search snapshots as old as version 5.0 without the need of an old {{es}} cluster. This ensures that data you store in {{es}} doesn’t have an end of life and is still accessible when you upgrade, without requiring a reindex process. - ::: - - You must resolve all critical issues before proceeding with the upgrade. If you make any additional changes, take a new snapshot to back up your data. - -3. Review the deprecation logs from the Upgrade Assistant to determine if your applications are using features that are not supported or behave differently in 9.x. - -4. Major version upgrades can include breaking changes that require you to take additional steps to ensure that your applications behave as expected after the upgrade. Review all breaking changes for each product you use to review more information about changes that could affect your application. Make sure you test against the new version before upgrading existing deployments. - -5. Make the recommended changes to ensure that your clients continue to operate as expected after the upgrade. - - :::{note} - As a temporary solution, you can submit requests to 9.x using the 8.x syntax with the REST API compatibility mode. 
While this enables you to submit requests that use the old syntax, it does not guarantee the same behavior. REST API compatibility should be a bridge to smooth out the upgrade process, not a long term strategy. For more information about how to best leverage REST API compatibility during an upgrade, refer to [REST API compatibility](elasticsearch://reference/elasticsearch/rest-apis/compatibility.md). - ::: - -6. If you use any {{es}} plugins, make sure there is a version of each plugin that is compatible with the {{es}} version you're upgrading to. - -7. We recommend creating a 9.0 test deployment and test the upgrade in an isolated environment before upgrading your production deployment. Ensure that both your test and production environments have the same settings. - - :::{important} - You cannot downgrade {{es}} nodes after upgrading. If you cannot complete the upgrade process, you will need to [restore from the snapshot](/deploy-manage/tools/snapshot-and-restore/restore-snapshot). - ::: - -8. If you use a separate [monitoring cluster](/deploy-manage/monitor/stack-monitoring/elasticsearch-monitoring-self-managed.md), you should upgrade the monitoring cluster before the production cluster. In general, the monitoring cluster and the clusters being monitored should be running the same version of the {{stack}}. A monitoring cluster cannot monitor production clusters running newer versions of the {{stack}}. If necessary, the monitoring cluster can monitor production clusters running the latest release of the previous major version. - - :::{note} - If you use {{ccs}}, note that 9.0+ can only search remote clusters running the previous minor version, the same version, or a newer minor version in the same major version. For more information, refer to [Cross-cluster search](../../solutions/search/cross-cluster-search.md). - - If you use {{ccr}}, a cluster that contains follower indices must run the same or newer (compatible) version as the remote cluster. 
For more information and to view the version compatibility matrix, refer to [Cross cluster replication](/deploy-manage/tools/cross-cluster-replication.md). You can view your remote clusters from **Stack Management > Remote Clusters**. - :::: - -9. If you have any anomaly detection result indices `.ml-anomalies-*` that were created in {{es}} 7.x, they must be reindexed, marked as read-only, or deleted before upgrading to 9.x. For instructions on how to do this, refer to [Anomaly detection results migration](#anomaly-migration). - - -10. If you have any transform destination indices that were created in {{es}} 7.x, they must be reset, reindexed, or deleted before upgrading to 9.x. For instructions on how to do this, refer to [Transform destination indices migration](#transform-migration). - - -## Reindex to upgrade [reindex-to-upgrade] - -To create a new 9.0 deployment and reindex from remote: - -1. Provision an additional deployment running 9.0. -2. Reindex your data into the new {{es}} cluster using the [reindex documents API](https://www.elastic.co/docs/api/doc/elasticsearch/v8/operation/operation-reindex) and temporarily send new index requests to both clusters. -3. Verify that the new cluster performs as expected, fix any problems, and then permanently swap in the new cluster. -4. Delete the old deployment. On {ecloud}, you are billed only for the time that the new deployment runs in parallel with your old deployment. Usage is billed on an hourly basis. - - -## Anomaly detection results migration [anomaly-migration] - -The {{anomaly-detect}} result indices `.ml-anomalies-*` created in {{es}} 7.x must be either reindexed, marked read-only, or deleted before upgrading to 9.x. +By default, {{es}} is built using Java and includes a bundled version of [OpenJDK](https://openjdk.java.net/) within each distribution. 
While we strongly recommend using the bundled Java Virtual Machine (JVM) in all installations of {{es}}, if you choose to use your own JVM, ensure it’s compatible by reviewing the [Product and JVM support matrix](https://www.elastic.co/support/matrix#matrix_jvm). {{es}} 9.0 requires Java 21 and supports Java 24.

-**Reindexing**: While {{anomaly-detect}} results are being reindexed, jobs continue to run and process new data. However, you cannot completely delete an {{anomaly-job}} that stores results in this index until the reindexing is complete.

+If you’re running {{es}} in FIPS 140-2 mode, {{es}} 9.0 has been tested with [Bouncy Castle's](https://www.bouncycastle.org/java.html) FIPS implementation, which is the recommended Java security provider when running {{es}}.

-**Marking indices as read-only**: This is useful for large indexes that contain the results of only one or a few {{anomaly-jobs}}. If you delete these jobs later, you will not be able to create a new job with the same name.

+## Conduct a component inventory

-**Deleting**: Delete jobs that are no longer needed in the {{ml-app}} app in {{kib}}. The result index is deleted when all jobs that store results in it have been deleted.

+It is very important to map all the components used with the {{stack}}. When you upgrade your deployment, you may also need to upgrade the other components. For each component, record whether it is in use and, if so, its current version. While not comprehensive, here’s a list of components you should check:

-:::{dropdown} Which indices require attention? 
-To identify indices that require action, use the [Deprecation info API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-migration-deprecations-1): +* {{es}} +* {{es}} Hadoop +* {{es}} plugins +* {{es}} clients +* {{kib}} +* {{ls}} +* {{ls}} plugins +* {{beats}} +* {{beats}} modules +* {{apm-agent}} +* APM server +* {{agent}} +* {{fleet}} +* Security +* Browsers +* External services (Kafka, etc.) -``` -GET /.ml-anomalies-*/_migration/deprecations -``` - -The response contains the list of critical deprecation warnings in the `index_settings` section: - -```json - "index_settings": { - ".ml-anomalies-shared": [ - { - "level": "critical", - "message": "Index created before 8.0", - "url": "https://ela.st/es-deprecation-8-reindex", - "details": "This index was created with version 7.8.23 and is not compatible with 9.0. Reindex or remove the index before upgrading.", - "resolve_during_rolling_upgrade": false - } - ] - } -``` +:::{tip} +When you do your inventory, you can [enable audit logging](/deploy-manage/monitor/logging-configuration/enabling-audit-logs.md) to evaluate resources accessing your deployment. ::: -:::{dropdown} Reindexing anomaly result indices -For an index with less than 10GB that contains results from multiple jobs that are still required, we recommend reindexing into a new format using UI. You can use the [Get index information API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cat-indices-1) to obtain the size of an index: - -``` -GET _cat/indices/.ml-anomalies-custom-example?v&h=index,store.size -``` - -The reindexing can be initiated in the {{kib}} Upgrade Assistant. - -If an index size is greater than 10 GB, it is recommended to use the Reindex API. Reindexing consists of the following steps: - -1. Set the original index to read-only. - -``` -PUT .ml-anomalies-custom-example/_block/read_only -``` - -2. Create a new index from the legacy index. 
- -``` -POST _create_from/.ml-anomalies-custom-example/.reindexed-v9-ml-anomalies-custom-example -``` - -3. Reindex documents. To accelerate the reindexing process, it is recommended that the number of replicas be set to `0` before the reindexing and then set back to the original number once it is completed. - - 1. Get the number of replicas. - - ``` - GET /.reindexed-v9-ml-anomalies-custom-example/_settings - ``` - - Note the number of replicas in the response. For example: +## Test your development environment - ```json - { - ".reindexed-v9-ml-anomalies-custom-example": { - "settings": { - "index": { - "number_of_replicas": "1", - "number_of_shards": "1" - } - } - } - } - ``` +We highly recommend testing and upgrading in your development environment before your production environment. Therefore, it is crucial to ensure that both your development and production environments have the same settings. Consider checking the following components beforehand: - 2. Set the number of replicas to `0.` +* Enrichment information +* Plugins +* Mapping +* Index lifecycle management (ILM) +* Snapshot lifecycle management (SLM) +* Index templates +* {{ml-cap}} jobs +* Inbound sample data +* Live data +* Performance +* Outbound integrations +* Dashboards +* Alerts +* Authentication - ```json - PUT /.reindexed-v9-ml-anomalies-custom-example/_settings - { - "index": { - "number_of_replicas": 0 - } - } - ``` +## Choose your upgrade path [choose-upgrade-path] - 3. Start the reindexing process in asynchronous mode. +The procedures you follow to upgrade depend on your infrastructure and deployment method. You’ve installed Elastic components using either Elastic-managed infrastructure or self-managed infrastructure. - ```json - POST _reindex?wait_for_completion=false - { - "source": { - "index": ".ml-anomalies-custom-example" - }, - "dest": { - "index": ".reindexed-v9-ml-anomalies-custom-example" - } - } - ``` +### Elastic-managed infrastructure - The response will contain a `task_id`. 
You can check when the task is completed using the following command: +Elastic-managed infrastructure includes {{ecloud}} – the umbrella term for {{ech}} (ECH) and {{serverless-full}}. {{serverless-full}} (“Serverless”) is a fully managed cloud offering with three products: {{es-serverless}}, {{obs-serverless}}, and {{sec-serverless}}. All serverless products are built on top of the Search AI Lake. Customers on serverless receive the latest features automatically when updates are published and do not need to choose an upgrade path. - ``` - GET _tasks/ - ``` +{{ech}} is Elastic’s cloud offering for managing {{stack}} deployments, built on top of {{es}}. A single click in the {{ecloud}} console can upgrade a deployment to a newer version. - 4. Set the number of replicas to the original number when the reindexing is finished. +### Self-managed infrastructure - ```json - PUT /.reindexed-v9-ml-anomalies-custom-example/_settings - { - "index": { - "number_of_replicas": "" - } - } - ``` +Self-managed infrastructure – either on-prem or on public cloud, includes: +* {{stack}} +* {{ece}} (ECE) +* {{eck}} (ECK) -4. Get the aliases the original index is pointing to. +For ECE and ECK, you must ensure the operator is running a compatible version with the {{stack}} version you’re upgrading to. If not, you need to upgrade that before you can upgrade your cluster. -``` -GET .ml-anomalies-custom-example/_alias -``` +If you’re running the {{stack}} on your own self-managed infrastructure, you must upgrade each component individually. -The response may contain multiple aliases if the results of multiple jobs are stored in the same index. +% Refer to the diagram below for a visualization of the different deployment methods. 
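+
+When you upgrade self-managed components individually, it helps to confirm the current version of each {{es}} node and plugin before and after each step. As a sketch (assuming you have access to the {{es}} HTTP API), the cluster can report this itself:
+
+```console
+GET /_cat/nodes?v&h=name,version
+GET /_cat/plugins?v
+```
+
+The first request lists every node with its {{es}} version, and the second lists the plugins installed on each node along with their versions.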
-```json -{ - ".ml-anomalies-custom-example": { - "aliases": { - ".ml-anomalies-example1": { - "filter": { - "term": { - "job_id": { - "value": "example1" - } - } - }, - "is_hidden": true - }, - ".ml-anomalies-example2": { - "filter": { - "term": { - "job_id": { - "value": "example2" - } - } - }, - "is_hidden": true - } - } - } -} -``` - -5. Now you can reassign the aliases to the new index and delete the original index in one step. Note that when adding the new index to the aliases, you must use the same `filter` and `is_hidden` parameters as for the original index. - -```json -POST _aliases -{ - "actions": [ - { - "add": { - "index": ".reindexed-v9-ml-anomalies-custom-example", - "alias": ".ml-anomalies-example1", - "filter": { - "term": { - "job_id": { - "value": "example1" - } - } - }, - "is_hidden": true - } - }, - { - "add": { - "index": ".reindexed-v9-ml-anomalies-custom-example", - "alias": ".ml-anomalies-example2", - "filter": { - "term": { - "job_id": { - "value": "example2" - } - } - }, - "is_hidden": true - } - }, - { - "remove": { - "index": ".ml-anomalies-custom-example", - "aliases": ".ml-anomalies-*" - } - }, - { - "remove_index": { - "index": ".ml-anomalies-custom-example" - } - }, - { - "add": { - "index": ".reindexed-v9-ml-anomalies-custom-example", - "alias": ".ml-anomalies-custom-example", - "is_hidden": true - } - } - ] -} -``` -::: - - -:::{dropdown} Marking anomaly result indices as read-only -Legacy indices created in {{es}} 7.x can be made read-only and supported in {{es}} 9.x. Making an index with a large amount of historical results read-only allows for a quick migration to the next major release, since you don’t have to wait for the data to be reindexed into the new format. However, it has the limitation that even after deleting an {{anomaly-job}}, the historical results associated with this job are not completely deleted. Therefore, the system will prevent you from creating a new job with the same name. 
- -To set the index as read-only, add the write block to the index: - -``` -PUT .ml-anomalies-custom-example/_block/write -``` - -Indices created in {{es}} 7.x that have a write block will not raise a critical deprecation warning. -::: - -:::{dropdown} Deleting anomaly result indices -If an index contains results of the jobs that are no longer required. To list all jobs that stored results in an index, use the terms aggregation: - -```json -GET .ml-anomalies-custom-example/_search -{ - "size": 0, - "aggs": { - "job_ids": { - "terms": { - "field": "job_id", - "size": 100 - } - } - } -} -``` - -The jobs can be deleted in the UI. After the last job is deleted, the index will be deleted as well. -::: -## Transform destination indices migration [transform-migration] \ No newline at end of file From 99df6124fb7b8bd9cc61a2d98138e86e5caf8154 Mon Sep 17 00:00:00 2001 From: Janeen Roberts Date: Sun, 16 Mar 2025 23:18:30 -0400 Subject: [PATCH 06/11] Saving... --- deploy-manage/upgrade/deployment-or-cluster.md | 6 ++++-- .../upgrade/deployment-or-cluster/self-managed.md | 13 +++++++++++++ 2 files changed, 17 insertions(+), 2 deletions(-) diff --git a/deploy-manage/upgrade/deployment-or-cluster.md b/deploy-manage/upgrade/deployment-or-cluster.md index 8678555ebe..c2fc1db736 100644 --- a/deploy-manage/upgrade/deployment-or-cluster.md +++ b/deploy-manage/upgrade/deployment-or-cluster.md @@ -146,10 +146,12 @@ With the exception of serverless, the following recommendations are best practic If you use {{ccr}}, a cluster that contains follower indices must run the same or newer (compatible) version as the remote cluster. For more information and to view the version compatibility matrix, refer to [Cross cluster replication](/deploy-manage/tools/cross-cluster-replication.md). You can view your remote clusters from **Stack Management > Remote Clusters**. :::: -9. 
If you have any anomaly detection result indices `.ml-anomalies-*` that were created in {{es}} 7.x, they must be reindexed, marked as read-only, or deleted before upgrading to 9.x. To learn how to do this, refer to [Anomaly detection results migration](prepare-to-upgrade.md#anomaly-migration). +9. Consider closing {{ml}} jobs before you start the upgrade process. While {{ml}} jobs can continue to run during a rolling upgrade, it increases the overhead on the cluster during the upgrade process. +10. If you have any anomaly detection result indices `.ml-anomalies-*` that were created in {{es}} 7.x, they must be reindexed, marked as read-only, or deleted before upgrading to 9.x. To learn how to do this, refer to [Anomaly detection results migration](prepare-to-upgrade.md#anomaly-migration). -10. If you have any transform destination indices that were created in {{es}} 7.x, they must be reset, reindexed, or deleted before upgrading to 9.x. To learn how to do this, refer to [Transform destination indices migration](prepare-to-upgrade.md#transform-migration). + +11. If you have any transform destination indices that were created in {{es}} 7.x, they must be reset, reindexed, or deleted before upgrading to 9.x. To learn how to do this, refer to [Transform destination indices migration](prepare-to-upgrade.md#transform-migration). 
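+
+Before you start the upgrade, you can confirm whether any such 7.x-era indices remain by querying the [Deprecation info API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-migration-deprecations-1). For example, to check the {{anomaly-detect}} result indices (the index pattern is an example; adjust it to match your own indices):
+
+```console
+GET /.ml-anomalies-*/_migration/deprecations
+```
+
+Indices that still require action are reported as `critical` warnings in the `index_settings` section of the response.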
## Reindex to upgrade (optional) [reindex-to-upgrade]

diff --git a/deploy-manage/upgrade/deployment-or-cluster/self-managed.md b/deploy-manage/upgrade/deployment-or-cluster/self-managed.md
index 9dbfc4f8c0..34323a8772 100644
--- a/deploy-manage/upgrade/deployment-or-cluster/self-managed.md
+++ b/deploy-manage/upgrade/deployment-or-cluster/self-managed.md
@@ -1 +1,14 @@
 # Upgrade Elastic on self-managed infrastructure
+
+If you've installed the {{stack}} on your own self-managed infrastructure, once you've [prepared to upgrade](/deploy-manage/upgrade/deployment-or-cluster.md#prepare-to-upgrade), you'll need to upgrade each of your Elastic components individually.
+
+It's important that you upgrade your components in this order:
+* [{{es}}](/deploy-manage/upgrade/deployment-or-cluster/elasticsearch.md)
+* [{{kib}}](/deploy-manage/upgrade/deployment-or-cluster/kibana.md)
+* [Elastic APM](../../../solutions/observability/apps/upgrade.md)
+* [Ingest components](/deploy-manage/upgrade/ingest-components.md)
+
+:::{important}
+If you are using {{ls}} and the `logstash-filter-elastic_integration` plugin to extend Elastic integrations, upgrade {{ls}} (or the `logstash-filter-elastic_integration` plugin specifically) *before* you upgrade {{kib}}.
+
+The {{es}}-{{ls}}-{{kib}} installation order for this specific plugin ensures the best experience with {{agent}}-managed pipelines, and embeds functionality from a version of {{es}} Ingest Node that is compatible with the plugin version (`major.minor`).

From 49f811c3d04d48fb7799db20123733d86e948d9d Mon Sep 17 00:00:00 2001
From: Janeen Roberts
Date: Mon, 17 Mar 2025 00:12:31 -0400
Subject: [PATCH 07/11] Fixing an error. 
--- .../upgrade/deployment-or-cluster.md | 30 +++++++++---------- 1 file changed, 15 insertions(+), 15 deletions(-) diff --git a/deploy-manage/upgrade/deployment-or-cluster.md b/deploy-manage/upgrade/deployment-or-cluster.md index c2fc1db736..be11c5ed30 100644 --- a/deploy-manage/upgrade/deployment-or-cluster.md +++ b/deploy-manage/upgrade/deployment-or-cluster.md @@ -56,18 +56,18 @@ $$$upgrading-reindex$$$ **This page is a work in progress.** The documentation team is working to combine content pulled from the following pages: -* [/raw-migrated-files/kibana/kibana/upgrade.md](/raw-migrated-files/kibana/kibana/upgrade.md) -* [/raw-migrated-files/kibana/kibana/upgrade-migrations-rolling-back.md](/raw-migrated-files/kibana/kibana/upgrade-migrations-rolling-back.md) -* [/raw-migrated-files/stack-docs/elastic-stack/upgrading-elastic-stack.md](/raw-migrated-files/stack-docs/elastic-stack/upgrading-elastic-stack.md) -* [/raw-migrated-files/stack-docs/elastic-stack/upgrading-elasticsearch.md](/raw-migrated-files/stack-docs/elastic-stack/upgrading-elasticsearch.md) -* [/raw-migrated-files/stack-docs/elastic-stack/upgrading-kibana.md](/raw-migrated-files/stack-docs/elastic-stack/upgrading-kibana.md) -* [/raw-migrated-files/cloud/cloud-enterprise/ece-upgrade-deployment.md](/raw-migrated-files/cloud/cloud-enterprise/ece-upgrade-deployment.md) -* [/raw-migrated-files/cloud/cloud-heroku/ech-upgrade-deployment.md](/raw-migrated-files/cloud/cloud-heroku/ech-upgrade-deployment.md) -* [/raw-migrated-files/cloud/cloud/ec-upgrade-deployment.md](/raw-migrated-files/cloud/cloud/ec-upgrade-deployment.md) -* [/raw-migrated-files/stack-docs/elastic-stack/upgrade-elastic-stack-for-elastic-cloud.md](/raw-migrated-files/stack-docs/elastic-stack/upgrade-elastic-stack-for-elastic-cloud.md) -* [/raw-migrated-files/stack-docs/elastic-stack/upgrading-elastic-stack-on-prem.md](/raw-migrated-files/stack-docs/elastic-stack/upgrading-elastic-stack-on-prem.md) -* 
[/raw-migrated-files/cloud-on-k8s/cloud-on-k8s/k8s-upgrading-stack.md](/raw-migrated-files/cloud-on-k8s/cloud-on-k8s/k8s-upgrading-stack.md) -* [/raw-migrated-files/cloud-on-k8s/cloud-on-k8s/k8s-orchestration.md](/raw-migrated-files/cloud-on-k8s/cloud-on-k8s/k8s-orchestration.md) +% * [/raw-migrated-files/kibana/kibana/upgrade.md](/raw-migrated-files/kibana/kibana/upgrade.md) +% * [/raw-migrated-files/kibana/kibana/upgrade-migrations-rolling-back.md](/raw-migrated-files/kibana/kibana/upgrade-migrations-rolling-back.md) +% * [/raw-migrated-files/stack-docs/elastic-stack/upgrading-elastic-stack.md](/raw-migrated-files/stack-docs/elastic-stack/upgrading-elastic-stack.md) +% * [/raw-migrated-files/stack-docs/elastic-stack/upgrading-elasticsearch.md](/raw-migrated-files/stack-docs/elastic-stack/upgrading-elasticsearch.md) +% * [/raw-migrated-files/stack-docs/elastic-stack/upgrading-kibana.md](/raw-migrated-files/stack-docs/elastic-stack/upgrading-kibana.md) +% * [/raw-migrated-files/cloud/cloud-enterprise/ece-upgrade-deployment.md](/raw-migrated-files/cloud/cloud-enterprise/ece-upgrade-deployment.md) +% * [/raw-migrated-files/cloud/cloud-heroku/ech-upgrade-deployment.md](/raw-migrated-files/cloud/cloud-heroku/ech-upgrade-deployment.md) +% * [/raw-migrated-files/cloud/cloud/ec-upgrade-deployment.md](/raw-migrated-files/cloud/cloud/ec-upgrade-deployment.md) +% * [/raw-migrated-files/stack-docs/elastic-stack/upgrade-elastic-stack-for-elastic-cloud.md](/raw-migrated-files/stack-docs/elastic-stack/upgrade-elastic-stack-for-elastic-cloud.md) +% * [/raw-migrated-files/stack-docs/elastic-stack/upgrading-elastic-stack-on-prem.md](/raw-migrated-files/stack-docs/elastic-stack/upgrading-elastic-stack-on-prem.md) +% * [/raw-migrated-files/cloud-on-k8s/cloud-on-k8s/k8s-upgrading-stack.md](/raw-migrated-files/cloud-on-k8s/cloud-on-k8s/k8s-upgrading-stack.md) +% * 
[/raw-migrated-files/cloud-on-k8s/cloud-on-k8s/k8s-orchestration.md](/raw-migrated-files/cloud-on-k8s/cloud-on-k8s/k8s-orchestration.md) # Upgrade your deployment or cluster [upgrade-deployment-cluster] @@ -135,7 +135,7 @@ With the exception of serverless, the following recommendations are best practic 7. We recommend creating a 9.0 test deployment and test the upgrade in an isolated environment before upgrading your production deployment. Ensure that both your test and production environments have the same settings. :::{important} - You cannot downgrade {{es}} nodes after upgrading. If you cannot complete the upgrade process, you will need to [restore from the snapshot](/deploy-manage/tools/snapshot-and-restore/restore-snapshot). + You cannot downgrade {{es}} nodes after upgrading. If you cannot complete the upgrade process, you will need to [restore from the snapshot](/deploy-manage/tools/snapshot-and-restore/restore-snapshot.md). ::: 8. If you use a separate [monitoring cluster](/deploy-manage/monitor/stack-monitoring/elasticsearch-monitoring-self-managed.md), you should upgrade the monitoring cluster before the production cluster. In general, the monitoring cluster and the clusters being monitored should be running the same version of the {{stack}}. A monitoring cluster cannot monitor production clusters running newer versions of the {{stack}}. If necessary, the monitoring cluster can monitor production clusters running the latest release of the previous major version. @@ -148,10 +148,10 @@ With the exception of serverless, the following recommendations are best practic 9. Consider closing {{ml}} jobs before you start the upgrade process. While {{ml}} jobs can continue to run during a rolling upgrade, it increases the overhead on the cluster during the upgrade process. -10. If you have any anomaly detection result indices `.ml-anomalies-*` that were created in {{es}} 7.x, they must be reindexed, marked as read-only, or deleted before upgrading to 9.x. 
To learn how to do this, refer to [Anomaly detection results migration](prepare-to-upgrade.md#anomaly-migration). +10. If you have any anomaly detection result indices `.ml-anomalies-*` that were created in {{es}} 7.x, they must be reindexed, marked as read-only, or deleted before upgrading to 9.x. To learn how to do this, refer to [Anomaly detection results migration](#anomaly-migration). -11. If you have any transform destination indices that were created in {{es}} 7.x, they must be reset, reindexed, or deleted before upgrading to 9.x. To learn how to do this, refer to [Transform destination indices migration](prepare-to-upgrade.md#transform-migration). +11. If you have any transform destination indices that were created in {{es}} 7.x, they must be reset, reindexed, or deleted before upgrading to 9.x. To learn how to do this, refer to [Transform destination indices migration](#transform-migration). ## Reindex to upgrade (optional) [reindex-to-upgrade] From d59517feca40d46dfe1fcb5a8ae5710bc3902de2 Mon Sep 17 00:00:00 2001 From: Janeen Roberts Date: Mon, 17 Mar 2025 00:30:18 -0400 Subject: [PATCH 08/11] Fixing errors part 2... 
--- deploy-manage/toc.yml | 2 ++ deploy-manage/upgrade/prepare-to-upgrade.md | 2 +- 2 files changed, 3 insertions(+), 1 deletion(-) diff --git a/deploy-manage/toc.yml b/deploy-manage/toc.yml index 8b8b5705ae..b5ef3dcb72 100644 --- a/deploy-manage/toc.yml +++ b/deploy-manage/toc.yml @@ -812,6 +812,8 @@ toc: children: - file: upgrade/deployment-or-cluster/elasticsearch.md - file: upgrade/deployment-or-cluster/kibana.md + children: + - file: upgrade/deployment-or-cluster/kibana-roll-back.md - file: upgrade/deployment-or-cluster/archived-settings.md children: - file: upgrade/deployment-or-cluster/reading-indices-from-older-elasticsearch-versions.md diff --git a/deploy-manage/upgrade/prepare-to-upgrade.md b/deploy-manage/upgrade/prepare-to-upgrade.md index 2a8272ef8c..a45bf11608 100644 --- a/deploy-manage/upgrade/prepare-to-upgrade.md +++ b/deploy-manage/upgrade/prepare-to-upgrade.md @@ -38,7 +38,7 @@ It is very important to map all the components that are being used on the {{stac * External services (Kafka, etc.) :::{tip} -When you do your inventory, you can [enable audit logging](/deploy-manage/monitor/logging-configuration/enabling-audit-logs.md) to evaluate resources accessing your deployment. +When you do your inventory, you can [enable audit logging](/deploy-manage/security/logging-configuration/enabling-audit-logs.md) to evaluate resources accessing your deployment. 
::: ## Test your development environment From c35483df3540689627fffd4d138145d8fe3f23d0 Mon Sep 17 00:00:00 2001 From: Janeen Roberts Date: Mon, 17 Mar 2025 00:39:17 -0400 Subject: [PATCH 09/11] Update kibana-roll-back.md --- deploy-manage/upgrade/deployment-or-cluster/kibana-roll-back.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/deploy-manage/upgrade/deployment-or-cluster/kibana-roll-back.md b/deploy-manage/upgrade/deployment-or-cluster/kibana-roll-back.md index de6d616ef0..ae6313f6a3 100644 --- a/deploy-manage/upgrade/deployment-or-cluster/kibana-roll-back.md +++ b/deploy-manage/upgrade/deployment-or-cluster/kibana-roll-back.md @@ -1,6 +1,6 @@ # Roll back to a previous version of {{kib}} [upgrade-migrations-rolling-back] -If you’ve followed [preparing for migration](/deploy-manage/upgrade/deployment-or-cluster/kibana#preventing-migration-failures) and [resolving migration failures](../../../troubleshoot/kibana/migration-failures.md), and {{kib}} is still unable to successfully upgrade, rollback {{kib}} until you’re able to identify and fix the root cause. +If you’ve followed [preparing for migration](/deploy-manage/upgrade/deployment-or-cluster/kibana#preventing-migration-failures.md) and [resolving migration failures](../../../troubleshoot/kibana/migration-failures.md), and {{kib}} is still unable to successfully upgrade, rollback {{kib}} until you’re able to identify and fix the root cause. ::::{warning} Before you roll back {{kib}}, ensure that the version you want to roll back to is compatible with your {{es}} cluster. If the version you want to roll back to is not compatible, you must also rollback {{es}}. Any changes made after an upgrade are lost when you roll back to a previous version. 
From 1329acfad48c7e3e07a98d98a760e7c97fa52c77 Mon Sep 17 00:00:00 2001 From: Janeen Roberts Date: Mon, 17 Mar 2025 00:46:05 -0400 Subject: [PATCH 10/11] Fixing link --- deploy-manage/upgrade/deployment-or-cluster/kibana-roll-back.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/deploy-manage/upgrade/deployment-or-cluster/kibana-roll-back.md b/deploy-manage/upgrade/deployment-or-cluster/kibana-roll-back.md index ae6313f6a3..072aeca468 100644 --- a/deploy-manage/upgrade/deployment-or-cluster/kibana-roll-back.md +++ b/deploy-manage/upgrade/deployment-or-cluster/kibana-roll-back.md @@ -1,6 +1,6 @@ # Roll back to a previous version of {{kib}} [upgrade-migrations-rolling-back] -If you’ve followed [preparing for migration](/deploy-manage/upgrade/deployment-or-cluster/kibana#preventing-migration-failures.md) and [resolving migration failures](../../../troubleshoot/kibana/migration-failures.md), and {{kib}} is still unable to successfully upgrade, rollback {{kib}} until you’re able to identify and fix the root cause. +If you’ve followed [preparing for migration](/deploy-manage/upgrade/deployment-or-cluster/kibana.md#preventing-migration-failures) and [resolving migration failures](../../../troubleshoot/kibana/migration-failures.md), and {{kib}} is still unable to successfully upgrade, rollback {{kib}} until you’re able to identify and fix the root cause. ::::{warning} Before you roll back {{kib}}, ensure that the version you want to roll back to is compatible with your {{es}} cluster. If the version you want to roll back to is not compatible, you must also rollback {{es}}. Any changes made after an upgrade are lost when you roll back to a previous version. 
From be8af63b748d96b4ded950273a358497344c713e Mon Sep 17 00:00:00 2001 From: Janeen Roberts Date: Mon, 17 Mar 2025 21:16:35 -0400 Subject: [PATCH 11/11] Merging feedback and updating a few links --- deploy-manage/upgrade.md | 6 +- .../upgrade/deployment-or-cluster.md | 90 +++++++++---------- .../deployment-or-cluster/elasticsearch.md | 2 +- .../deployment-or-cluster/kibana-roll-back.md | 16 ++-- .../upgrade/deployment-or-cluster/kibana.md | 10 +-- .../deployment-or-cluster/self-managed.md | 6 +- .../deployment-or-cluster/upgrade-on-ech.md | 2 +- deploy-manage/upgrade/orchestrator.md | 2 +- deploy-manage/upgrade/prepare-to-upgrade.md | 6 +- 9 files changed, 72 insertions(+), 68 deletions(-) diff --git a/deploy-manage/upgrade.md b/deploy-manage/upgrade.md index 71ab10997e..ecd16907c9 100644 --- a/deploy-manage/upgrade.md +++ b/deploy-manage/upgrade.md @@ -1,8 +1,10 @@ # Upgrade -Upgrading to the latest version provides you access to Elastic latest features, enhancements, performance improvements, and bug fixes, many of which enable you to save your organization money, respond faster to potential threats, and improve the tools you use to investigate and analyze your data. As new versions are released, older versions reach their end of life at a regular cadence, so it’s important to ensure that your deployment is fully maintained and supported. For more information, refer to Elastic’s [Product End of Life Dates](https://www.elastic.co/support/eol). +Upgrading to the latest version provides access to the newest Elastic features, enhancements, performance improvements, and bug fixes. These updates reduce costs, speed up threat response, and improve investigative and analytical data tools. + +When Elastic releases new versions, older versions reach their end of life on a set schedule. To keep your deployment supported, stay up to date. For more information, refer to [Product End of Life Dates](https://www.elastic.co/support/eol). 
:::{note} -Upgrading from a release candidate build, such as 9.0.0-rc1 or 9.0.0-rc2, is not supported. Pre-releases should only be used for testing in a temporary environment. +Upgrading from a release candidate build, such as 9.0.0-rc1, is unsupported. Use pre-releases only for testing in a temporary environment. ::: diff --git a/deploy-manage/upgrade/deployment-or-cluster.md b/deploy-manage/upgrade/deployment-or-cluster.md index 9f834076bd..84a54edecc 100644 --- a/deploy-manage/upgrade/deployment-or-cluster.md +++ b/deploy-manage/upgrade/deployment-or-cluster.md @@ -1,7 +1,6 @@ --- mapped_urls: - https://www.elastic.co/guide/en/kibana/current/upgrade.html - - https://www.elastic.co/guide/en/kibana/current/upgrade-migrations-rolling-back.html - https://www.elastic.co/guide/en/elastic-stack/current/upgrading-elastic-stack.html - https://www.elastic.co/guide/en/elastic-stack/current/upgrading-elasticsearch.html - https://www.elastic.co/guide/en/elastic-stack/current/upgrading-kibana.html @@ -52,8 +51,6 @@ $$$rolling-upgrades$$$ $$$upgrading-reindex$$$ -**This page is a work in progress.** The documentation team is working to combine content pulled from the following pages: - % * [/raw-migrated-files/kibana/kibana/upgrade.md](/raw-migrated-files/kibana/kibana/upgrade.md) % * [/raw-migrated-files/kibana/kibana/upgrade-migrations-rolling-back.md](/raw-migrated-files/kibana/kibana/upgrade-migrations-rolling-back.md) % * [/raw-migrated-files/stack-docs/elastic-stack/upgrading-elastic-stack.md](/raw-migrated-files/stack-docs/elastic-stack/upgrading-elastic-stack.md) @@ -69,108 +66,108 @@ $$$upgrading-reindex$$$ # Upgrade your deployment or cluster [upgrade-deployment-cluster] -When upgrading the version of an existing cluster, you perform either a minor or major upgrade. The difference is that a minor upgrade takes you from version 9.0 to 9.1, for example, while a major upgrade takes you from version 8 to 9. 
+When upgrading an existing cluster, you perform a minor or major upgrade. For example, a minor upgrade takes you from version 9.0.0 to 9.1.0, while a major upgrade takes you from version 8.0.0 to 9.0.0. -The procedures you follow to upgrade depend on whether you’ve installed Elastic components using Elastic-managed infrastructure or self-managed infrastructure. +Upgrade procedures depend on whether you installed Elastic components using Elastic-managed or self-managed infrastructure. -If you’re running Elastic-managed infrastructure, your options are to: +If you’re using Elastic-managed infrastructure, use the following options: * [Upgrade on {{ech}}](/deploy-manage/upgrade/deployment-or-cluster/upgrade-on-ech.md) -* Upgrade on {{serverless-full}} (updates are automatic and require no user management) +* Upgrade on [{{serverless-full}}](/deploy-manage/deploy/elastic-cloud/serverless.md), which is automatically performed by Elastic and requires no user management -If you’re running your own self-managed infrastructure — either on-prem or on public cloud — your options are to: +If you’re using self-managed infrastructure - either on-prem or public cloud - use the following options: -* [Upgrade the {{stack}}](/deploy-manage/upgrade/deployment-or-cluster/self-managed.md) (upgrade each component individually) +* [Upgrade the {{stack}}](/deploy-manage/upgrade/deployment-or-cluster/self-managed.md) * [Upgrade on {{ece}} (ECE)](/deploy-manage/upgrade/deployment-or-cluster/upgrade-on-ece.md) * [Upgrade on {{eck}} (ECK)](/deploy-manage/upgrade/deployment-or-cluster/upgrade-on-eck.md) ## Prepare to upgrade [prepare-to-upgrade] -Before you upgrade Elastic, it's important to take some preparation steps. These steps vary based on your current version. +Before you upgrade, review and complete the necessary preparation steps, which vary by version. :::{important} -Upgrading from a release candidate build, such as 9.0.0-rc1 or 9.0.0-rc2, is not supported. 
Pre-releases should only be used for testing in a temporary environment. +Upgrading from a release candidate build, such as 9.0.0-rc1, is unsupported. Use pre-releases only for testing in a temporary environment. ::: ## Prepare to upgrade from 8.x [prepare-upgrade-from-8.x] -To upgrade to 9.0 from 8.17 or earlier, you must first upgrade to the latest patch version of 8.18. This enables you to use the [Upgrade Assistant](prepare-to-upgrade/upgrade-assistant.md) to identify and resolve issues, reindex indices created before 8.0, and then perform a rolling upgrade. Upgrading to 8.18 before upgrading to 9.x is required even if you opt to do a full-cluster restart of your {{es}} cluster. If you're running a pre-8.x version, you might need to perform multiple upgrades or a full-cluster restart to get to 8.18 to prepare to upgrade to 9.0. +To upgrade from 8.17.0 or earlier to 9.0.0, you must first upgrade to the latest 8.18 patch release. This enables you to use the [Upgrade Assistant](prepare-to-upgrade/upgrade-assistant.md) to identify and resolve issues, reindex indices created before 8.0.0, and perform a rolling upgrade. Upgrading to the latest 8.18 patch release is required even if you choose a full {{es}} cluster restart. If you're using 7.x and earlier, you may need to complete multiple upgrades or perform a full-cluster restart to reach the latest 8.18 patch release before upgrading to 9.0.0. -Alternatively, you can create a new 9.0 deployment and reindex from remote. For more information, refer to [Reindex to upgrade](#reindex-to-upgrade). +Alternatively, you can create a 9.0 deployment and reindex from remote. For more information, refer to [Reindex to upgrade](#reindex-to-upgrade). :::{note} -{{beats}} and {{ls}} 8.18 are compatible with {{es}} 9.x to give you flexibility in scheduling the upgrade. 
{{es}} 8.x clients are also compatible with 9.x and use [REST API compatibility](elasticsearch://reference/elasticsearch/rest-apis/compatibility.md) by default to help ensure compatibility between 8.x clients and the 9.x {{es}} server. +For flexible upgrade scheduling, 8.18.0 {{beats}} and {{ls}} are compatible with 9.0.0 {{es}}. +By default, 8.x {{es}} clients are compatible with 9.0.0 and use [REST API compatibility](elasticsearch://reference/elasticsearch/rest-apis/compatibility.md) to maintain compatibility with the 9.0.0 {{es}} server. ::: -With the exception of serverless, the following recommendations are best practices for all deployment methods. +Review the best practices to upgrade your deployments. -1. Run the [Upgrade Assistant](prepare-to-upgrade/upgrade-assistant.md) to prepare for your upgrade from 8.18 to 9.0. The Upgrade Assistant identifies deprecated settings, and guides you through resolving issues, and reindexing data streams and indices created before 8.0. +1. Run the [Upgrade Assistant](prepare-to-upgrade/upgrade-assistant.md), which identifies deprecated settings, helps resolve issues, and reindexes data streams and indices created before 8.0.0. :::{note} - Please be aware that depending on your setup, if your indices change due to reindexing, you might need to change alerts, transforms, or other code that was targeting the old index. + Depending on your setup, reindexing can change your indices, and you may need to update alerts, transforms, or other code targeting the old index. ::: -2. Ensure you have a current [snapshot](/deploy-manage/tools/snapshot-and-restore/create-snapshots.md) before making configuration changes or reindexing. +2. Before you change configurations or reindex, ensure you have a current [snapshot](/deploy-manage/tools/snapshot-and-restore/create-snapshots.md). :::{tip} - Tip: From version 8.3, snapshots are generally available as simple archives.
Use the [archive functionality](/deploy-manage/upgrade/deployment-or-cluster/reading-indices-from-older-elasticsearch-versions.md) to search snapshots as old as version 5.0 without the need of an old {{es}} cluster. This ensures that data you store in {{es}} doesn’t have an end of life and is still accessible when you upgrade, without requiring a reindex process. + Tip: In 8.3.0 and later, snapshots are generally available as simple archives. Use the [archive functionality](/deploy-manage/upgrade/deployment-or-cluster/reading-indices-from-older-elasticsearch-versions.md) to search snapshots from 5.0.0 and later, without needing an old {{es}} cluster. This ensures that your {{es}} data remains accessible after upgrades, without requiring a reindex process. ::: - You must resolve all critical issues before proceeding with the upgrade. If you make any additional changes, take a new snapshot to back up your data. + To successfully upgrade, resolve all critical issues. If you make additional changes, create a snapshot to back up your data. -3. Review the deprecation logs from the Upgrade Assistant to determine if your applications are using features that are not supported or behave differently in 9.x. +3. To identify if your applications use unsupported features or behave differently in 9.0.0, review the deprecation logs in the Upgrade Assistant. -4. Major version upgrades can include breaking changes that require you to take additional steps to ensure that your applications behave as expected after the upgrade. Review all breaking changes for each product you use to review more information about changes that could affect your application. Make sure you test against the new version before upgrading existing deployments. +4. Major version upgrades can include breaking changes that require additional steps to ensure your applications function as expected. Review the breaking changes for each product you use to learn more about potential impacts on your application. 
Ensure you test with the new version before upgrading existing deployments. -5. Make the recommended changes to ensure that your clients continue to operate as expected after the upgrade. +5. To ensure your clients continue to operate as expected after the upgrade, make the recommended changes. :::{note} - As a temporary solution, you can submit requests to 9.x using the 8.x syntax with the REST API compatibility mode. While this enables you to submit requests that use the old syntax, it does not guarantee the same behavior. REST API compatibility should be a bridge to smooth out the upgrade process, not a long term strategy. For more information about how to best leverage REST API compatibility during an upgrade, refer to [REST API compatibility](elasticsearch://reference/elasticsearch/rest-apis/compatibility.md). + As a temporary solution, use the 8.x syntax to submit requests to 9.0.0 with REST API compatibility mode. While this allows you to submit requests using the old syntax, it doesn’t guarantee the same behavior. REST API compatibility should serve as a bridge during the upgrade, not a long-term solution. For more details on how to effectively use REST API compatibility during an upgrade, refer to [REST API compatibility](elasticsearch://reference/elasticsearch/rest-apis/compatibility.md). ::: -6. If you use any {{es}} plugins, make sure there is a version of each plugin that is compatible with the {{es}} version you're upgrading to. +6. If you use {{es}} plugins, ensure each plugin is compatible with the {{es}} version you're upgrading to. -7. We recommend creating a 9.0 test deployment and test the upgrade in an isolated environment before upgrading your production deployment. +7. Before upgrading your production deployment, we recommend creating a 9.0.0 test deployment and testing the upgrade in an isolated environment.
Ensure the test and production environments use the same settings. :::{important} - You cannot downgrade {{es}} nodes after upgrading. If you cannot complete the upgrade process, you will need to [restore from the snapshot](/deploy-manage/tools/snapshot-and-restore/restore-snapshot.md). + After you upgrade, you cannot downgrade {{es}} nodes. If you can't complete the upgrade process, you must [restore from the snapshot](/deploy-manage/tools/snapshot-and-restore/restore-snapshot.md). ::: -8. If you use a separate [monitoring cluster](/deploy-manage/monitor/stack-monitoring/elasticsearch-monitoring-self-managed.md), you should upgrade the monitoring cluster before the production cluster. In general, the monitoring cluster and the clusters being monitored should be running the same version of the {{stack}}. A monitoring cluster cannot monitor production clusters running newer versions of the {{stack}}. If necessary, the monitoring cluster can monitor production clusters running the latest release of the previous major version. +8. If you use a separate [monitoring cluster](/deploy-manage/monitor/stack-monitoring/elasticsearch-monitoring-self-managed.md), upgrade the monitoring cluster before the production cluster. The monitoring cluster and the clusters being monitored should be running the same version of the {{stack}}. Monitoring clusters are unable to monitor production clusters running newer versions of the {{stack}}. If necessary, the monitoring cluster can monitor production clusters running the latest release of the previous major version. :::{note} - If you use {{ccs}}, note that 9.0+ can only search remote clusters running the previous minor version, the same version, or a newer minor version in the same major version. For more information, refer to [Cross-cluster search](../../solutions/search/cross-cluster-search.md). 
+ If you use {{ccs}}, 9.0.0 and later can search only remote clusters running the previous minor version, the same version, or a newer minor version in the same major version. For more information, refer to [Cross-cluster search](../../solutions/search/cross-cluster-search.md). - If you use {{ccr}}, a cluster that contains follower indices must run the same or newer (compatible) version as the remote cluster. For more information and to view the version compatibility matrix, refer to [Cross cluster replication](/deploy-manage/tools/cross-cluster-replication.md). You can view your remote clusters from **Stack Management > Remote Clusters**. + If you use {{ccr}}, a cluster that contains follower indices must run the same or newer (compatible) version as the remote cluster. For more information and to view the version compatibility matrix, refer to [Cross cluster replication](/deploy-manage/tools/cross-cluster-replication.md). To view your remote clusters in {{kib}}, go to **Stack Management > Remote Clusters**. :::: -9. Consider closing {{ml}} jobs before you start the upgrade process. While {{ml}} jobs can continue to run during a rolling upgrade, it increases the overhead on the cluster during the upgrade process. - -10. If you have any anomaly detection result indices `.ml-anomalies-*` that were created in {{es}} 7.x, they must be reindexed, marked as read-only, or deleted before upgrading to 9.x. To learn how to do this, refer to [Anomaly detection results migration](#anomaly-migration). +9. To reduce overhead on the cluster during the upgrade, close {{ml}} jobs. Although {{ml}} jobs can run during a rolling upgrade, doing so increases the cluster workload. +10. If you have `.ml-anomalies-*` anomaly detection result indices created in {{es}} 7.x, reindex, mark as read-only, or delete them before you upgrade to 9.0.0. For more information, refer to [Migrate anomaly detection results](#anomaly-migration). -11.
If you have any transform destination indices that were created in {{es}} 7.x, they must be reset, reindexed, or deleted before upgrading to 9.x. To learn how to do this, refer to [Transform destination indices migration](#transform-migration). +11. If you have transform destination indices created in {{es}} 7.x, reset, reindex, or delete them before you upgrade to 9.0.0. For more information, refer to [Migrate transform destination indices](#transform-migration). -## Reindex to upgrade (optional) [reindex-to-upgrade] +## Reindex to upgrade [reindex-to-upgrade] -To create a new 9.0 deployment and reindex from remote: +Optionally create a 9.0.0 deployment and reindex from remote: -1. Provision an additional deployment running 9.0. -2. Reindex your data into the new {{es}} cluster using the [reindex documents API](https://www.elastic.co/docs/api/doc/elasticsearch/v8/operation/operation-reindex) and temporarily send new index requests to both clusters. -3. Verify that the new cluster performs as expected, fix any problems, and then permanently swap in the new cluster. -4. Delete the old deployment. On {ecloud}, you are billed only for the time that the new deployment runs in parallel with your old deployment. Usage is billed on an hourly basis. +1. Provision an additional deployment running 9.0.0. +2. To reindex your data into the new {{es}} cluster, use the [reindex documents API](https://www.elastic.co/docs/api/doc/elasticsearch/v8/operation/operation-reindex) and temporarily send new index requests to both clusters. +3. Verify the new cluster performs as expected, fix any problems, and then permanently swap in the new cluster. +4. Delete the old deployment. On {{ecloud}}, you are billed only for the time the new deployment runs in parallel with your old deployment. Usage is billed on an hourly basis.
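The reindex-from-remote flow in the steps above can be sketched as a single request per index, run against the new 9.0.0 cluster. This is a minimal sketch: the host `https://old-cluster.example.com:9200`, the credentials, and the index name `my-index-000001` are placeholders, and the source cluster must also be listed in the new cluster's `reindex.remote.whitelist` setting for the request to be accepted:

```console
POST _reindex
{
  "source": {
    "remote": {
      "host": "https://old-cluster.example.com:9200",
      "username": "elastic",
      "password": "<password>"
    },
    "index": "my-index-000001"
  },
  "dest": {
    "index": "my-index-000001"
  }
}
```

Repeat this for each index you want to carry over while you continue to dual-write new documents to both clusters.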
-## Anomaly detection results migration [anomaly-migration] +## Migrate anomaly detection results [anomaly-migration] -The {{anomaly-detect}} result indices `.ml-anomalies-*` created in {{es}} 7.x must be either reindexed, marked read-only, or deleted before upgrading to 9.x. +Reindex, mark as read-only, or delete the `.ml-anomalies-*` {{anomaly-detect}} result indices created in {{es}} 7.x. -**Reindexing**: While {{anomaly-detect}} results are being reindexed, jobs continue to run and process new data. However, you cannot completely delete an {{anomaly-job}} that stores results in this index until the reindexing is complete. +**Reindex**: While {{anomaly-detect}} results are being reindexed, jobs continue to run and process new data. You are unable to delete an {{anomaly-job}} that stores results in the index until the reindexing is complete. -**Marking indices as read-only**: This is useful for large indexes that contain the results of only one or a few {{anomaly-jobs}}. If you delete these jobs later, you will not be able to create a new job with the same name. +**Mark indices as read-only**: This is useful for large indices that contain the results of only one or a few {{anomaly-jobs}}. If you delete these jobs later, you cannot create a new job with the same name. -**Deleting**: Delete jobs that are no longer needed in the {{ml-app}} app in {{kib}}. The result index is deleted when all jobs that store results in it have been deleted. +**Delete**: Delete jobs that are no longer needed in the {{ml-app}} app in {{kib}}. The result index is deleted when all jobs that store results in it have been deleted. :::{dropdown} Which indices require attention? To identify indices that require action, use the [Deprecation info API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-migration-deprecations-1): @@ -412,8 +409,7 @@ GET .ml-anomalies-custom-example/_search The jobs can be deleted in the UI.
After the last job is deleted, the index will be deleted as well. ::: -## Transform destination indices migration [transform-migration] +## Migrate transform destination indices [transform-migration] -% EEDUGON note: when working on this document, or in the ECK upgrade documentation we should include a link to [nodes orchestration](../deploy/cloud-on-k8s/nodes-orchestration.md) as reference to learn the details on how ECK orchestates / manages the upgrade of the individual instances. diff --git a/deploy-manage/upgrade/deployment-or-cluster/elasticsearch.md index 66884d9959..72ce8208fa 100644 --- a/deploy-manage/upgrade/deployment-or-cluster/elasticsearch.md +++ b/deploy-manage/upgrade/deployment-or-cluster/elasticsearch.md @@ -8,7 +8,7 @@ applies_to: An {{es}} cluster can be upgraded one node at a time so upgrading does not interrupt service. Running multiple versions of {{es}} in the same cluster beyond the duration of an upgrade is not supported, as shards cannot be replicated from upgraded nodes to nodes running the older version. -Before you start, [take the upgrade preparation steps](../../../deploy-manage/upgrade/prepare-to-upgrade.md). When performing a [rolling upgrade](../../../deploy-manage/upgrade/deployment-or-cluster.md#rolling-upgrades): +Before you start, [take the upgrade preparation steps](../../../deploy-manage/upgrade/prepare-to-upgrade.md). When performing a [rolling upgrade](#rolling-upgrades): 1. Upgrade the data nodes first, tier-by-tier, starting with the frozen tier, then the cold tier, then the warm tier, then the hot tier, and finally any other data nodes which are not in a tier. Complete the upgrade for all nodes in each data tier before moving to the next. This ensures {{ilm-init}} can continue to move data through the tiers during the upgrade.
You can get the list of nodes in a specific tier with a `GET /_nodes` request, for example: `GET /_nodes/data_frozen:true/_none`. 2. Upgrade all remaining nodes that are neither master-eligible nor data nodes. This includes dedicated ML nodes, dedicated ingest nodes, and dedicated coordinating nodes. diff --git a/deploy-manage/upgrade/deployment-or-cluster/kibana-roll-back.md b/deploy-manage/upgrade/deployment-or-cluster/kibana-roll-back.md index 072aeca468..9e68f618b1 100644 --- a/deploy-manage/upgrade/deployment-or-cluster/kibana-roll-back.md +++ b/deploy-manage/upgrade/deployment-or-cluster/kibana-roll-back.md @@ -1,9 +1,15 @@ +--- +navigation_title: "Roll back to a previous version" +mapped_urls: + - https://www.elastic.co/guide/en/kibana/current/upgrade-migrations-rolling-back.html +--- + # Roll back to a previous version of {{kib}} [upgrade-migrations-rolling-back] -If you’ve followed [preparing for migration](/deploy-manage/upgrade/deployment-or-cluster/kibana.md#preventing-migration-failures) and [resolving migration failures](../../../troubleshoot/kibana/migration-failures.md), and {{kib}} is still unable to successfully upgrade, rollback {{kib}} until you’re able to identify and fix the root cause. +If you’ve followed [preparing for migration](/deploy-manage/upgrade/deployment-or-cluster/kibana.md#preventing-migration-failures) and [resolving migration failures](../../../troubleshoot/kibana/migration-failures.md), and {{kib}} is still unable to successfully upgrade, roll back {{kib}} until you identify and fix the root cause. ::::{warning} -Before you roll back {{kib}}, ensure that the version you want to roll back to is compatible with your {{es}} cluster. If the version you want to roll back to is not compatible, you must also rollback {{es}}. Any changes made after an upgrade are lost when you roll back to a previous version. +Before you roll back {{kib}}, ensure that the version you want to roll back to is compatible with your {{es}} cluster. 
If the version you want to roll back to is not compatible, you must also roll back {{es}}. Any changes made after an upgrade are lost when you roll back to a previous version. :::: @@ -14,7 +20,7 @@ To roll back after a failed upgrade migration, you must also roll back the {{kib 1. Before proceeding, [take a snapshot](../../tools/snapshot-and-restore/create-snapshots.md) that contains the `kibana` feature state. By default, snapshots include the `kibana` feature state. 2. To make sure no {{kib}} instances are performing an upgrade migration, shut down all {{kib}} instances. -3. [Restore](../../tools/snapshot-and-restore/restore-snapshot.md) the `kibana` feature state from a snapshot taken before the failed {{kib}} upgrade. The following {{es}} request will only restore the {{kib}} feature state +3. [Restore](../../tools/snapshot-and-restore/restore-snapshot.md) the `kibana` feature state from a snapshot taken before the failed {{kib}} upgrade. The following {{es}} request will only restore the {{kib}} feature state: ```console POST _snapshot/my_repository/my_snapshot_2099.05.06/_restore @@ -24,7 +30,7 @@ To roll back after a failed upgrade migration, you must also roll back the {{kib } ``` - 1. Exclude all indices and data streams from the restore operation to ensure that only the {{kib}} system indices included in the {{kib}} feature state will be restored. + 1. Exclude all indices and data streams from the restore operation to ensure that only the {{kib}} system indices included in the {{kib}} feature state are restored. -4. Start all {{kib}} instances on the older version you want to rollback to. +4. Start all {{kib}} instances on the older version you want to roll back to. 
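The snapshot in step 1 can also be taken explicitly rather than relying on a scheduled snapshot. This is a minimal sketch that reuses the repository and snapshot names from the restore example above (both are placeholders for your own names); by default a snapshot already includes the `kibana` feature state, so naming it in `feature_states` simply makes the intent explicit:

```console
PUT _snapshot/my_repository/my_snapshot_2099.05.06?wait_for_completion=true
{
  "feature_states": ["kibana"]
}
```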
diff --git a/deploy-manage/upgrade/deployment-or-cluster/kibana.md b/deploy-manage/upgrade/deployment-or-cluster/kibana.md index 9b512caabe..a5c6342959 100644 --- a/deploy-manage/upgrade/deployment-or-cluster/kibana.md +++ b/deploy-manage/upgrade/deployment-or-cluster/kibana.md @@ -20,13 +20,13 @@ For more information, refer to [Migrate saved objects](../internal-upgrade-proce When upgrading several {{kib}} instances connected to the same {{es}} cluster, ensure that all outdated instances are shut down before starting the upgrade. -Rolling upgrades are unsupported in {{kib}}. However, when outdated instances are shut down, you can start all upgraded instances in parallel, which allows all instances to participate in the upgrade migration in parallel. +Rolling upgrades are unsupported in {{kib}}. However, when outdated instances are shut down, you can start all upgraded instances in parallel, which allows all instances to participate in the upgrade in parallel. -For large deployments with more than 10 {{kib}} instances, and more than 10,000 saved objects, you can reduce the upgrade downtime by bringing up a single {{kib}} instance and waiting for it to complete the upgrade migration before bringing up the remaining instances. +For large deployments with more than 10 {{kib}} instances, and more than 10,000 saved objects, you can reduce the upgrade downtime by bringing up a single {{kib}} instance and waiting for it to complete the upgrade before bringing up the remaining instances. -## Preparing for migration [preventing-migration-failures] +## Preparing for upgrading [preventing-migration-failures] -Before you start, ensure you [take the upgrade preparation steps](../prepare-to-upgrade.md). Then, take these extra steps to ensure you are ready for migration. +Before you start, ensure you [take the upgrade preparation steps](../prepare-to-upgrade.md). Then, take these extra steps to ensure you are ready to upgrade. 
### Ensure your {{es}} cluster is healthy [_ensure_your_es_cluster_is_healthy] @@ -44,7 +44,7 @@ A healthy cluster has: ### Ensure that all {{kib}} instances are the same [_ensure_that_all_kib_instances_are_the_same] -When you perform an upgrade migration of different {{kib}} versions, the migration can fail. Ensure that all {{kib}} instances are running the same version, configuration, and plugins. +An upgrade performed while {{kib}} instances are running different versions can fail. Ensure that all {{kib}} instances are running the same version, configuration, and plugins. ## Perform the upgrade [perform-kibana-upgrade] diff --git a/deploy-manage/upgrade/deployment-or-cluster/self-managed.md b/deploy-manage/upgrade/deployment-or-cluster/self-managed.md index 34323a8772..db953d0a76 100644 --- a/deploy-manage/upgrade/deployment-or-cluster/self-managed.md +++ b/deploy-manage/upgrade/deployment-or-cluster/self-managed.md @@ -1,6 +1,6 @@ # Upgrade Elastic on self-managed infrastructure -If you've installed the {{stack}} on your own self-managed infrastructure, once you're [prepare to upgrade](/deploy-manage/upgrade/deployment-or-cluster.md#prepare-to-upgrade), you'll need to upgrade each of your Elastic components individually. +If you've installed the {{stack}} on your own self-managed infrastructure, once you're [prepared to upgrade](/deploy-manage/upgrade/deployment-or-cluster.md#prepare-to-upgrade), you'll need to upgrade each of your Elastic components individually.
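Before upgrading any of these components, you can confirm the cluster is in a healthy state with the cluster health API. A minimal check:

```console
GET _cluster/health?filter_path=status,unassigned_shards
```

A `green` status with no unassigned shards indicates the cluster is ready for the upgrade to begin.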
It's important that you upgrade your components in this order: * [{{es}}](/deploy-manage/upgrade/deployment-or-cluster/elasticsearch.md) @@ -9,6 +9,6 @@ It's important that you upgrade your components in this order: * [Ingest components](/deploy-manage/upgrade/ingest-components.md) :::{important} -If you are using {{ls}} and the `logstash-filter-elastic_integration plugin` to extend Elastic integrations, upgrade Logstash (or the `logstash-filter-elastic_integration` plugin specifically) *before* you upgrade Kibana. +If you're using {{ls}} and the `logstash-filter-elastic_integration` plugin to extend Elastic integrations, upgrade {{ls}} (or the `logstash-filter-elastic_integration` plugin specifically) *before* you upgrade {{kib}}. -The Elasticsearch-Logstash-Kibana installation order for this specific plugin ensures the best experience with Elastic Agent-managed pipelines, and embeds functionality from a version of {{es}} Ingest Node that is compatible with the plugin version (`major.minor`). +The {{es}} → {{ls}} → {{kib}} installation order for this specific plugin ensures the best experience with {{agent}}-managed pipelines, and embeds functionality from a version of {{es}} Ingest Node that is compatible with the plugin version (`major.minor`). diff --git a/deploy-manage/upgrade/deployment-or-cluster/upgrade-on-ech.md b/deploy-manage/upgrade/deployment-or-cluster/upgrade-on-ech.md index 26fa68b891..c0ae27b768 100644 --- a/deploy-manage/upgrade/deployment-or-cluster/upgrade-on-ech.md +++ b/deploy-manage/upgrade/deployment-or-cluster/upgrade-on-ech.md @@ -6,7 +6,7 @@ navigation_title: "Upgrade on {{ech}}" Once you are [prepared to upgrade](../../../deploy-manage/upgrade/deployment-or-cluster.md), a single click in the {{ecloud}} console can upgrade a deployment to a newer version, add more processing capacity, change plugins, and enable or disable high availability, all at the same time.
During the upgrade process, {{es}}, {{kib}}, and all of your deployment components are upgraded simultaneously. -Minor version upgrades, upgrades from 8.18 to 9.x, and cluster configuration changes can be performed with no downtime. {{ecloud}} only supports upgrades to released versions. Release candidate builds and master snapshots are not supported. +Minor version upgrades, upgrades from 8.18 to 9.0, and cluster configuration changes can be performed with no downtime. {{ecloud}} only supports upgrades to released versions. Release candidate builds and master snapshots are not supported. ::::{important} Although it’s simple to upgrade an {{ecloud}} deployment, the new version might include breaking changes that affect your application. Ensure you review breaking changes and deprecation logs, make any necessary changes, and test against the new version before upgrading your production deployment. diff --git a/deploy-manage/upgrade/orchestrator.md b/deploy-manage/upgrade/orchestrator.md index 4f64de500b..084299ea92 100644 --- a/deploy-manage/upgrade/orchestrator.md +++ b/deploy-manage/upgrade/orchestrator.md @@ -4,7 +4,7 @@ The topics in this section apply to customers running the {{stack}} on {{ece}} (ECE) or {{eck}} (ECK). ::: -Elastic provides customers with two major self-managed orchestrators to manage the Elastic Stack. Before you can upgrade the products in the stack, you need to ensure your orchestrator is running a compatible version. If you’re running a version of your orchestrator that’s incompatible with the Elastic Stack version you’re upgrading to, you’ll need to upgrade the orchestrator first. +Elastic provides customers with two major self-managed orchestrators to manage the {{stack}}. Before upgrading the products in the stack, ensure your orchestrator is running a compatible version. If you’re running a version of your orchestrator that’s incompatible with the {{stack}} version you’re upgrading to, upgrade the orchestrator first. 
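To confirm the {{es}} version you're currently running before checking compatibility, you can query the {{es}} root endpoint:

```console
GET /
```

The response includes `version.number` for the node that handled the request.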
Refer to one of these topics, depending on which orchestrator you have: diff --git a/deploy-manage/upgrade/prepare-to-upgrade.md b/deploy-manage/upgrade/prepare-to-upgrade.md index a45bf11608..27bf9733f5 100644 --- a/deploy-manage/upgrade/prepare-to-upgrade.md +++ b/deploy-manage/upgrade/prepare-to-upgrade.md @@ -10,7 +10,7 @@ There are a number of things you need to plan for before performing the actual u Ensure the version you’re upgrading to for {{es}}, {{kib}}, and any ingest components supports your current operating system. Refer to the [Product and Operating System support matrix](https://www.elastic.co/support/matrix#matrix_os). -**OpenJDK compatibility and FIPS compliance** +### OpenJDK compatibility and FIPS compliance By default, {{es}} is built using Java and includes a bundled version of [OpenJDK](https://openjdk.java.net/) within each distribution. While we strongly recommend using the bundled Java Virtual Machine (JVM) in all installations of {{es}}, if you choose to use your own JVM, ensure it’s compatible by reviewing the [Product and JVM support matrix](https://www.elastic.co/support/matrix#matrix_jvm). {{es}} 9.0 requires Java 21 and supports Java 24. @@ -18,7 +18,7 @@ If you’re running {{es}} in FIPS 140-2 mode, {{es}} 9.0 has been tested with [ ## Conduct a component inventory -It is very important to map all the components that are being used on the {{stack}}. When you upgrade your deployment, you also may need to upgrade all the other components. You should record if each component is used, and if it is, also record the current version. While not comprehensive, here’s a list of components you should check: +It is very important to map all the components used with the {{stack}}. When you upgrade your deployment, you may also need to upgrade all the other components. You should record whether each component is used, and if it is, also record the current version.
While not comprehensive, here’s a list of components you should check: * {{es}} * {{es}} Hadoop @@ -77,7 +77,7 @@ Self-managed infrastructure – either on-prem or on public cloud, includes: * {{ece}} (ECE) * {{eck}} (ECK) -For ECE and ECK, you must ensure the operator is running a compatible version with the {{stack}} version you’re upgrading to. If not, you need to upgrade that before you can upgrade your cluster. +For ECE and ECK, ensure the operator is running a version compatible with the {{stack}} version you’re upgrading to. If not, you need to upgrade that before you can upgrade your cluster. If you’re running the {{stack}} on your own self-managed infrastructure, you must upgrade each component individually.
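One way to capture the versions currently running in the cluster as part of this component inventory is the `_cat` APIs, for example:

```console
GET _cat/nodes?v&h=name,version
GET _cat/plugins?v
```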