diff --git a/deploy-manage/upgrade/deployment-or-cluster.md b/deploy-manage/upgrade/deployment-or-cluster.md index 61c3ba7f84..cf5a29b5aa 100644 --- a/deploy-manage/upgrade/deployment-or-cluster.md +++ b/deploy-manage/upgrade/deployment-or-cluster.md @@ -13,58 +13,6 @@ mapped_urls: - https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-upgrade.html --- - -% Use migrated content from existing pages that map to this page: - -% - [ ] ./raw-migrated-files/kibana/kibana/upgrade.md -% - [ ] ./raw-migrated-files/kibana/kibana/upgrade-migrations-rolling-back.md -% - [ ] ./raw-migrated-files/stack-docs/elastic-stack/upgrading-elastic-stack.md -% - [ ] ./raw-migrated-files/stack-docs/elastic-stack/upgrading-elasticsearch.md -% - [ ] ./raw-migrated-files/stack-docs/elastic-stack/upgrading-kibana.md -% - [ ] ./raw-migrated-files/cloud/cloud-enterprise/ece-upgrade-deployment.md -% - [ ] ./raw-migrated-files/cloud/cloud-heroku/ech-upgrade-deployment.md -% Notes: redirect only -% - [ ] ./raw-migrated-files/cloud/cloud/ec-upgrade-deployment.md -% - [ ] ./raw-migrated-files/stack-docs/elastic-stack/upgrade-elastic-stack-for-elastic-cloud.md -% - [ ] ./raw-migrated-files/stack-docs/elastic-stack/upgrading-elastic-stack-on-prem.md -% - [ ] ./raw-migrated-files/cloud-on-k8s/cloud-on-k8s/k8s-upgrading-stack.md -% Notes: upgrade explanations - -% Internal links rely on the following IDs being on this page (e.g. as a heading ID, paragraph ID, etc): - -$$$preventing-migration-failures$$$ - -$$$prepare-to-upgrade$$$ - -$$$k8s-nodesets$$$ - -$$$k8s-orchestration-limitations$$$ - -$$$k8s-statefulsets$$$ - -$$$k8s-upgrade-patterns$$$ - -$$$k8s-upgrading$$$ - -$$$prepare-to-upgrade-8x$$$ - -$$$rolling-upgrades$$$ - -$$$upgrading-reindex$$$ - -% * [/raw-migrated-files/kibana/kibana/upgrade.md](/raw-migrated-files/kibana/kibana/upgrade.md) -% * [/raw-migrated-files/kibana/kibana/upgrade-migrations-rolling-back.md](/raw-migrated-files/kibana/kibana/upgrade-migrations-rolling-back.md) -% * [/raw-migrated-files/stack-docs/elastic-stack/upgrading-elastic-stack.md](/raw-migrated-files/stack-docs/elastic-stack/upgrading-elastic-stack.md) -% * [/raw-migrated-files/stack-docs/elastic-stack/upgrading-elasticsearch.md](/raw-migrated-files/stack-docs/elastic-stack/upgrading-elasticsearch.md) -% * [/raw-migrated-files/stack-docs/elastic-stack/upgrading-kibana.md](/raw-migrated-files/stack-docs/elastic-stack/upgrading-kibana.md) -% * [/raw-migrated-files/cloud/cloud-enterprise/ece-upgrade-deployment.md](/raw-migrated-files/cloud/cloud-enterprise/ece-upgrade-deployment.md) -% * [/raw-migrated-files/cloud/cloud-heroku/ech-upgrade-deployment.md](/raw-migrated-files/cloud/cloud-heroku/ech-upgrade-deployment.md) -% * [/raw-migrated-files/cloud/cloud/ec-upgrade-deployment.md](/raw-migrated-files/cloud/cloud/ec-upgrade-deployment.md) -% * [/raw-migrated-files/stack-docs/elastic-stack/upgrade-elastic-stack-for-elastic-cloud.md](/raw-migrated-files/stack-docs/elastic-stack/upgrade-elastic-stack-for-elastic-cloud.md) -% * [/raw-migrated-files/stack-docs/elastic-stack/upgrading-elastic-stack-on-prem.md](/raw-migrated-files/stack-docs/elastic-stack/upgrading-elastic-stack-on-prem.md) -% * [/raw-migrated-files/cloud-on-k8s/cloud-on-k8s/k8s-upgrading-stack.md](/raw-migrated-files/cloud-on-k8s/cloud-on-k8s/k8s-upgrading-stack.md) -% * [/raw-migrated-files/cloud-on-k8s/cloud-on-k8s/k8s-orchestration.md](/raw-migrated-files/cloud-on-k8s/cloud-on-k8s/k8s-orchestration.md) - # Upgrade your deployment or cluster 
[upgrade-deployment-cluster] When upgrading an existing cluster, you perform a minor or major upgrade. For example, a minor upgrade takes you from version 9.0.0 to 9.1.0, while a major upgrade takes you from version 8.0.0 to 9.0.0. diff --git a/raw-migrated-files/cloud-on-k8s/cloud-on-k8s/k8s-upgrading-stack.md b/raw-migrated-files/cloud-on-k8s/cloud-on-k8s/k8s-upgrading-stack.md deleted file mode 100644 index 4be7c0a878..0000000000 --- a/raw-migrated-files/cloud-on-k8s/cloud-on-k8s/k8s-upgrading-stack.md +++ /dev/null @@ -1,14 +0,0 @@ -# Upgrade the Elastic Stack version [k8s-upgrading-stack] - -::::{warning} -We have identified an issue with Elasticsearch 8.15.1 and 8.15.2 that prevents security role mappings configured via Stack configuration policies to work correctly. Avoid these versions and upgrade to 8.16.0 to remedy this issue if you are affected. -:::: - - -The operator can safely perform upgrades to newer versions of the various Elastic Stack resources. - -Follow the instructions in the [Elasticsearch documentation](/deploy-manage/upgrade/deployment-or-cluster.md). Make sure that your cluster is compatible with the target version, take backups, and follow the specific upgrade instructions for each resource type. When you are ready, modify the `version` field in the resource spec to the desired stack version and the operator will start the upgrade process automatically. - -ECK will make sure that Elastic Stack resources are upgraded in the correct order. Upgrades to dependent stack resources are delayed until the dependency is upgraded. For example, the Kibana upgrade will be rolled out only when the associated Elasticsearch cluster has been upgraded. - -Check [Nodes orchestration](../../../deploy-manage/upgrade/deployment-or-cluster.md) for more information on how the operator performs upgrades and how to tune its behavior. diff --git a/raw-migrated-files/cloud/cloud-enterprise/ece-upgrade-deployment.md b/raw-migrated-files/cloud/cloud-enterprise/ece-upgrade-deployment.md deleted file mode 100644 index b3ec18b839..0000000000 --- a/raw-migrated-files/cloud/cloud-enterprise/ece-upgrade-deployment.md +++ /dev/null @@ -1,49 +0,0 @@ -# Upgrade versions [ece-upgrade-deployment] - -::::{important} -Beginning with Elastic Stack version 8.0, instructions for upgrading your Elastic Cloud Enterprise stack version can be found in [Upgrading on Elastic Cloud](../../../deploy-manage/upgrade/deployment-or-cluster.md). The following instructions apply for upgrading to Elastic Stack versions 7.x and previous. -:::: - - -::::{note} -You should upgrade the monitoring cluster before the production cluster. In general, the monitoring cluster and the clusters being monitored should run the same version of the stack. A monitoring cluster cannot monitor production clusters that run newer versions of the stack. If necessary, the monitoring cluster can monitor production clusters running the latest release of the previous major version. -:::: - - -When upgrading the version of an existing cluster, either a minor or major upgrade is performed. The difference is that a minor upgrade takes you from 6.2 to 6.3, while a major upgrade takes you from 6 to 7. - -If you are upgrading to version 6.7 and later, minor or major upgrades to highly available deployments require little to no downtime as a rolling upgrade is performed. - -Major version upgrades sometimes require other changes due to breaking changes or discontinued features. 
For example, some special considerations apply when [upgrading to Elasticsearch 5.0](https://www.elastic.co/guide/en/cloud-enterprise/current/ece-upgrading-v5.html). Our recommended approach for major version upgrades is to create a new deployment with the latest major version you want to upgrade to, reindex everything and make sure index requests are temporarily sent to both clusters. With the new cluster ready, tested, and working, you can then remove the old deployment. - -If you are upgrading to version 6.6 and earlier, major upgrades require a full cluster restart to complete the upgrade process. - -Patch releases also require no downtime when upgrading highly available deployments. A patch fix release takes you from 5.5.1 to 5.5.3, for example. - - -## Before you begin [ece_before_you_begin_13] - -When upgrading from one recent major Elasticsearch version to the next, we recommend that you prepare ahead of time to make the process go smoothly. To learn more, see: - -* [Upgrade to Elasticsearch 7.x](https://www.elastic.co/guide/en/cloud-enterprise/current/ece-upgrading-v7.html) -* [Upgrade to Elasticsearch 6.x](https://www.elastic.co/guide/en/cloud-enterprise/current/ece-upgrading-v6.html) -* [Upgrade to Elasticsearch 5.x](https://www.elastic.co/guide/en/cloud-enterprise/current/ece-upgrading-v5.html) - - -## Perform the upgrade [ece_perform_the_upgrade] - -To upgrade a cluster in Elastic Cloud Enterprise: - -1. [Log into the Cloud UI](../../../deploy-manage/deploy/cloud-enterprise/log-into-cloud-ui.md). -2. On the **Deployments** page, select your deployment. - - Narrow the list by name, ID, or choose from several other filters. To further define the list, use a combination of filters. - -3. Select **Upgrade**. -4. Select one of the available software versions. Let the user interface guide you through the steps for upgrading a deployment. When you save your changes, your deployment configuration is updated to the new version. - - ::::{tip} - You cannot downgrade after upgrading, so plan ahead to make sure that your applications still work after upgrading. For more information on changes that might affect your applications, check [Breaking changes](elasticsearch://release-notes/breaking-changes.md). - :::: - -5. If you are upgrading to version 6.6 and earlier, major upgrades require a full cluster restart to complete the upgrade process. diff --git a/raw-migrated-files/cloud/cloud-heroku/ech-upgrade-deployment.md b/raw-migrated-files/cloud/cloud-heroku/ech-upgrade-deployment.md deleted file mode 100644 index d2c939215a..0000000000 --- a/raw-migrated-files/cloud/cloud-heroku/ech-upgrade-deployment.md +++ /dev/null @@ -1,67 +0,0 @@ -# Upgrade versions [ech-upgrade-deployment] - -::::{important} -Beginning with Elastic Stack version 8.0, instructions for upgrading your Elasticsearch Add-On for Heroku stack version can be found in [Upgrading on Elastic Cloud](../../../deploy-manage/upgrade/deployment-or-cluster.md). The following instructions apply for upgrading to Elastic Stack versions 7.x and previous. -:::: - - -::::{note} -You should upgrade the monitoring cluster before the production cluster. In general, the monitoring cluster and the clusters being monitored should run the same version of the stack. A monitoring cluster cannot monitor production clusters that run newer versions of the stack. If necessary, the monitoring cluster can monitor production clusters running the latest release of the previous major version. 
-:::: - - -When upgrading the version of an existing cluster, either a minor or major upgrade is performed. The difference is that a minor upgrade takes you from 6.8.2 to 6.8.3, while a major upgrade takes you from 6.8 to 7.17. - -If you are upgrading to version 6.7 and later, minor or major upgrades to highly available deployments require little to no downtime as a rolling upgrade is performed. - -Major version upgrades sometimes require other changes due to breaking changes or discontinued features. For example, some special considerations apply when [upgrading to Elasticsearch 5.0](https://www.elastic.co/guide/en/cloud-heroku/current/ech-upgrading-v5.html). Our recommended approach for major version upgrades is to create a new deployment with the latest major version you want to upgrade to, reindex everything and make sure index requests are temporarily sent to both clusters. With the new cluster ready, tested, and working, you can then remove the old deployment. - -If you are upgrading to version 6.6 and earlier, major upgrades require a full cluster restart to complete the upgrade process. - -Patch releases also require no downtime when upgrading highly available deployments. A patch fix release takes you from 5.5.1 to 5.5.3, for example. - - -## Before you begin [echbefore_you_begin_6] - -When upgrading from one recent major Elasticsearch version to the next, we recommend that you prepare ahead of time to make the process go smoothly. To learn more, see: - -* [Upgrade to Elasticsearch 7.x](https://www.elastic.co/guide/en/cloud-heroku/current/ech-upgrading-v7.html) -* [Upgrade to Elasticsearch 6.x](https://www.elastic.co/guide/en/cloud-heroku/current/ech-upgrading-v6.html) -* [Upgrade to Elasticsearch 5.x](https://www.elastic.co/guide/en/cloud-heroku/current/ech-upgrading-v5.html) - -::::{warning} -If you have a custom plugin installed, you must [update the plugin](../../../deploy-manage/deploy/elastic-cloud/upload-custom-plugins-bundles.md) so that it matches the Elasticsearch version that you are upgrading to. When the custom plugin does not match the Elasticsearch version, the upgrade fails. -:::: - - -To successfully replace and override a plugin which is being upgraded, the `name` attribute contained in the `plugin-descriptor.properties` file must be the exact same as the currently installed plugin’s `name` attribute. If the attributes do not match, the new plugin bundle will be added to the cluster as a completely new and separate plugin. - - -## Perform the upgrade [echperform_the_upgrade] - -To upgrade a cluster in Elasticsearch Add-On for Heroku: - -1. Log in to the [Elasticsearch Add-On for Heroku console](https://cloud.elastic.co?page=docs&placement=docs-body). -2. On the **Deployments** page, select your deployment. - - Narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. - -3. In the **Deployment version** section, select **Upgrade**. -4. Select a new version. - - If you perform a major version upgrade, the UI provides a link to our migration helper tool that helps you to determine if a direct upgrade is feasible. You can also check our [Elastic Stack upgrade guide](https://www.elastic.co/products/upgrade_guide). - -5. Optional: Make any other changes that are needed, such as increasing the capacity or adding plugins. -6. Select **Upgrade** and then **Confirm upgrade**. The new configuration takes a few minutes to create. -7. 
If you are upgrading to version 6.6 and earlier, major upgrades require a full cluster restart to complete the upgrade process. -8. If you had Kibana enabled, the UI will prompt you to also upgrade Kibana. The Kibana upgrade takes place separately from the Elasticsearch version upgrade and needs to be triggered manually: - - 1. On the **Deployments** page, select your deployment. - 2. From your deployment menu, select **Kibana**. - 3. If the button is available, select **Upgrade Kibana**. If the button is not available, Kibana does not need to be upgraded further. - 4. Confirm the upgrade. - - - - - diff --git a/raw-migrated-files/cloud/cloud/ec-upgrade-deployment.md b/raw-migrated-files/cloud/cloud/ec-upgrade-deployment.md deleted file mode 100644 index 7c6102fa06..0000000000 --- a/raw-migrated-files/cloud/cloud/ec-upgrade-deployment.md +++ /dev/null @@ -1,67 +0,0 @@ -# Upgrade versions [ec-upgrade-deployment] - -::::{important} -Beginning with Elastic Stack version 8.0, instructions for upgrading {{ech}} deployments can be found in [Upgrading on Elastic Cloud](../../../deploy-manage/upgrade/deployment-or-cluster.md). The following instructions apply for upgrading to Elastic Stack versions 7.x and previous. -:::: - - -::::{note} -You should upgrade the monitoring cluster before the production cluster. In general, the monitoring cluster and the clusters being monitored should run the same version of the stack. A monitoring cluster cannot monitor production clusters that run newer versions of the stack. If necessary, the monitoring cluster can monitor production clusters running the latest release of the previous major version. -:::: - - -When upgrading the version of an existing cluster, either a minor or major upgrade is performed. The difference is that a minor upgrade takes you from 6.8.2 to 6.8.3, while a major upgrade takes you from 6.8 to 7.17. - -If you are upgrading to version 6.7 and later, minor or major upgrades to highly available deployments require little to no downtime as a rolling upgrade is performed. - -Major version upgrades sometimes require other changes due to breaking changes or discontinued features. For example, some special considerations apply when [upgrading to Elasticsearch 5.0](https://www.elastic.co/guide/en/cloud/current/ec-upgrading-v5.html). Our recommended approach for major version upgrades is to create a new deployment with the latest major version you want to upgrade to, reindex everything and make sure index requests are temporarily sent to both clusters. With the new cluster ready, tested, and working, you can then remove the old deployment. - -If you are upgrading to version 6.6 and earlier, major upgrades require a full cluster restart to complete the upgrade process. - -Patch releases also require no downtime when upgrading highly available deployments. A patch fix release takes you from 5.5.1 to 5.5.3, for example. - - -## Before you begin [ec_before_you_begin_9] - -When upgrading from one recent major Elasticsearch version to the next, we recommend that you prepare ahead of time to make the process go smoothly. 
To learn more, see: - -* [Upgrade to Elasticsearch 7.x](https://www.elastic.co/guide/en/cloud/current/ec-upgrading-v7.html) -* [Upgrade to Elasticsearch 6.x](https://www.elastic.co/guide/en/cloud/current/ec-upgrading-v6.html) -* [Upgrade to Elasticsearch 5.x](https://www.elastic.co/guide/en/cloud/current/ec-upgrading-v5.html) - -::::{warning} -If you have a custom plugin installed, you must [update the plugin](../../../deploy-manage/deploy/elastic-cloud/upload-custom-plugins-bundles.md#ec-update-bundles-and-plugins) so that it matches the Elasticsearch version that you are upgrading to. When the custom plugin does not match the Elasticsearch version, the upgrade fails. -:::: - - -To successfully replace and override a plugin which is being upgraded, the `name` attribute contained in the `plugin-descriptor.properties` file must be the exact same as the currently installed plugin’s `name` attribute. If the attributes do not match, the new plugin bundle will be added to the cluster as a completely new and separate plugin. - - -## Perform the upgrade [ec_perform_the_upgrade] - -To upgrade a cluster in {{ech}}: - -1. Log in to the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body). -2. Find your deployment on the home page in the **Hosted deployments** card and select **Manage** to access it directly. Or, select **Hosted deployments** to go to the **Deployments** page to view all of your deployments. - - On the **Deployments** page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. - -3. In the **Deployment version** section, select **Upgrade**. -4. Select a new version. - - If you perform a major version upgrade, the UI provides a link to our migration helper tool that helps you to determine if a direct upgrade is feasible. You can also check our [Elastic Stack upgrade guide](https://www.elastic.co/products/upgrade_guide). - -5. Optional: Make any other changes that are needed, such as increasing the capacity or adding plugins. -6. Select **Upgrade** and then **Confirm upgrade**. The new configuration takes a few minutes to create. -7. If you are upgrading to version 6.6 and earlier, major upgrades require a full cluster restart to complete the upgrade process. -8. If you had Kibana enabled, the UI will prompt you to also upgrade Kibana. The Kibana upgrade takes place separately from the Elasticsearch version upgrade and needs to be triggered manually: - - 1. Find your deployment on the home page in the **Hosted deployments** card and select **Manage** to access it directly. Or, select **Hosted deployments** to go to the **Deployments** page to view all of your deployments. - 2. From your deployment menu, select **Kibana**. - 3. If the button is available, select **Upgrade Kibana**. If the button is not available, Kibana does not need to be upgraded further. - 4. Confirm the upgrade. 
- - - - - diff --git a/raw-migrated-files/kibana/kibana/upgrade-migrations-rolling-back.md b/raw-migrated-files/kibana/kibana/upgrade-migrations-rolling-back.md deleted file mode 100644 index 4bfe80b784..0000000000 --- a/raw-migrated-files/kibana/kibana/upgrade-migrations-rolling-back.md +++ /dev/null @@ -1,30 +0,0 @@ -# Roll back to a previous version of {{kib}} [upgrade-migrations-rolling-back] - -If you’ve followed [preparing for migration](../../../deploy-manage/upgrade/deployment-or-cluster.md#preventing-migration-failures) and [resolving migration failures](../../../troubleshoot/kibana/migration-failures.md), and {{kib}} is still unable to successfully upgrade, rollback {{kib}} until you’re able to identify and fix the root cause. - -::::{warning} -Before you roll back {{kib}}, ensure that the version you want to roll back to is compatible with your {{es}} cluster. If the version you want to roll back to is not compatible, you must also rollback {{es}}. Any changes made after an upgrade are lost when you roll back to a previous version. -:::: - - -To roll back after a failed upgrade migration, you must also roll back the {{kib}} feature state to be compatible with the previous {{kib}} version. - - -## Roll back by restoring the {{kib}} feature state from a snapshot [_roll_back_by_restoring_the_kib_feature_state_from_a_snapshot] - -1. Before proceeding, [take a snapshot](../../../deploy-manage/tools/snapshot-and-restore/create-snapshots.md) that contains the `kibana` feature state. By default, snapshots include the `kibana` feature state. -2. To make sure no {{kib}} instances are performing an upgrade migration, shut down all {{kib}} instances. -3. [Restore](../../../deploy-manage/tools/snapshot-and-restore/restore-snapshot.md) the `kibana` feature state from a snapshot taken before the failed {{kib}} upgrade. The following {{es}} request will only restore the {{kib}} feature state - - ```console - POST _snapshot/my_repository/my_snapshot_2099.05.06/_restore - { - "indices": "-*", <1> - "feature_states": ["kibana"] - } - ``` - - 1. Exclude all indices and data streams from the restore operation to ensure that only the {{kib}} system indices included in the {{kib}} feature state will be restored. - -4. Start all {{kib}} instances on the older version you want to rollback to. - diff --git a/raw-migrated-files/kibana/kibana/upgrade.md b/raw-migrated-files/kibana/kibana/upgrade.md deleted file mode 100644 index f284a6f33b..0000000000 --- a/raw-migrated-files/kibana/kibana/upgrade.md +++ /dev/null @@ -1,59 +0,0 @@ -# Upgrade {{kib}} [upgrade] - -To upgrade from 7.16.0 or earlier to 9.0.0-beta1, **you must first upgrade to 8.17**, which enables you to use the **Upgrade Assistant** to [prepare for the upgrade](../../../deploy-manage/upgrade/deployment-or-cluster.md#prepare-to-upgrade). Before you upgrade, you must resolve all critical issues identified by the **Upgrade Assistant**. - -In addition, we recommend to carefully review the [list of breaking changes and deprecations](kibana://release-notes/breaking-changes.md) and to take any necessary actions to mitigate their impact on the upgrade. You can enable the [deprecated APIs debug logs config](kibana://reference/configuration-reference/logging-settings.md#enable-http-debug-logs) to get information about calls to deprecated APIs. - -Rolling upgrades are unsupported in {{kib}}. To upgrade, you must shut down all {{kib}} instances, install the new software, and restart {{kib}}. 
Upgrading while older {{kib}} instances are running can cause data loss or upgrade failures. - -::::{warning} -When required, {{kib}} automatically migrates [saved objects](../../../deploy-manage/upgrade/internal-upgrade-processes/saved-object-migrations.md). In case of an upgrade failure, you can roll back to an earlier version of {{kib}}. To roll back, you **must** have a [backup snapshot](../../../deploy-manage/tools/snapshot-and-restore.md) that includes the `kibana` feature state. By default, snapshots include the `kibana` feature state. - -:::: - - -For more information about upgrading, refer to [Upgrading to Elastic 9.0.0-beta1.](../../../deploy-manage/upgrade/deployment-or-cluster.md) - -::::{important} -You can upgrade to pre-release versions for testing, but upgrading from a pre-release to the General Available version is unsupported. You should use pre-release versions only for testing in a temporary environment. -:::: - - - -## Upgrading multiple {{kib}} instances [_upgrading_multiple_kib_instances] - -When upgrading several {{kib}} instances connected to the same {{es}} cluster, ensure that all outdated instances are shut down before starting the upgrade. - -Rolling upgrades are unsupported in {{kib}}. However, when outdated instances are shut down, you can start all upgraded instances in parallel, which allows all instances to participate in the upgrade migration in parallel. - -For large deployments with more than 10 {{kib}} instances, and more than 10,000 saved objects, you can reduce the upgrade downtime by bringing up a single {{kib}} instance and waiting for it to complete the upgrade migration before bringing up the remaining instances. - - -## Preparing for migration [preventing-migration-failures] - -Take these extra steps to ensure you are ready for migration. - - -### Ensure your {{es}} cluster is healthy [_ensure_your_es_cluster_is_healthy] - -Problems with your {{es}} cluster can prevent {{kib}} upgrades from succeeding. - -During the upgrade process, {{kib}} creates new indices into which updated documents are written. If a cluster is approaching the low watermark, there’s a high risk of {{kib}} not being able to create these. Reading, transforming and writing updated documents can be memory intensive, using more available heap than during routine operation. You must make sure that enough heap is available to prevent requests from timing out or throwing errors from circuit breaker exceptions. You should also ensure that all shards are replicated and assigned. - -A healthy cluster has: - -* Enough free disk space, at least twice the amount of storage taken up by the `.kibana` and `.kibana_task_manager` indices -* Sufficient heap size -* A "green" cluster status - - -### Ensure that all {{kib}} instances are the same [_ensure_that_all_kib_instances_are_the_same] - -When you perform an upgrade migration of different {{kib}} versions, the migration can fail. Ensure that all {{kib}} instances are running the same version, configuration, and plugins. - - -### Back up your data [_back_up_your_data] - -Be sure to have a [snapshot](../../../deploy-manage/tools/snapshot-and-restore.md) of all your data before attempting a migration. If something goes wrong during migration, you can restore from the snapshot and try again. - -Review the [common causes of {{kib}} upgrade failures](../../../troubleshoot/kibana/migration-failures.md) and how to prevent them. 
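-
-As an illustrative pre-flight check, and not a substitute for the guidance above, requests along these lines confirm the cluster status and disk headroom and then take a snapshot before shutting {{kib}} down. The repository name `my_repository` and the snapshot name are placeholders for your own snapshot repository and naming scheme:
-
-```console
-# Cluster status and unassigned shard count at a glance
-GET _cluster/health?filter_path=status,unassigned_shards
-
-# Per-node disk usage, to check headroom against the low disk watermark
-GET _cat/allocation?v=true&h=node,disk.percent,disk.avail
-
-# Snapshot all indices and feature states (the `kibana` feature state is included by default)
-PUT _snapshot/my_repository/pre_upgrade_snapshot?wait_for_completion=true
-```
-
-A `green` status and comfortable free disk space on every node are what you want to see before starting the migration.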
diff --git a/raw-migrated-files/stack-docs/elastic-stack/upgrade-elastic-stack-for-elastic-cloud.md b/raw-migrated-files/stack-docs/elastic-stack/upgrade-elastic-stack-for-elastic-cloud.md deleted file mode 100644 index 76db772193..0000000000 --- a/raw-migrated-files/stack-docs/elastic-stack/upgrade-elastic-stack-for-elastic-cloud.md +++ /dev/null @@ -1,259 +0,0 @@ -# Upgrade on Elastic Cloud [upgrade-elastic-stack-for-elastic-cloud] - -Once you are [prepared to upgrade](../../../deploy-manage/upgrade/deployment-or-cluster.md), a single click in the Elastic Cloud console can upgrade a deployment to a newer version, add more processing capacity, change plugins, and enable or disable high availability, all at the same time. During the upgrade process, {{es}}, {{kib}}, and all of your deployment components are upgraded simultaneously. - -Minor version upgrades, upgrades from 8.17 to 9.0.0-beta1, and cluster configuration changes can be performed with no downtime. Elastic Cloud only supports upgrades to released versions. Preview releases and master snapshots are not supported. - -{{ech}} and {{ece}} do not support the ability to upgrade to or from release candidate builds, such as 8.0.0-rc1. - -If you use a separate [monitoring deployment](../../../deploy-manage/monitor/stack-monitoring/ece-ech-stack-monitoring.md), you should upgrade the monitoring deployment before the production deployment. In general, the monitoring deployment and the deployments being monitored should be running the same version of the Elastic Stack. A monitoring deployment cannot monitor production deployments running newer versions of the stack. If necessary, the monitoring deployment can monitor production deployments running the latest release of the previous major version. - -::::{important} -Although it’s simple to upgrade an Elastic Cloud deployment, the new version might include breaking changes that affect your application. Make sure you review the deprecation logs, make any necessary changes, and test against the new version before upgrading your production deployment. -:::: - - -Upgrade Assistant -: Prior to upgrading, Elastic Cloud checks the deprecation API to retrieve information about the cluster, node, and index-level settings that need to be removed or changed. If there are any issues that would prevent a successful upgrade, the upgrade is blocked. Use the [Upgrade Assistant](https://www.elastic.co/guide/en/kibana/8.17/upgrade-assistant.html) in 8.17 to identify and resolve issues and reindex any indices created before 7.0. - -Snapshots -: To keep your data safe during the upgrade process, a snapshot is taken automatically before any changes are made to your cluster. After a major version upgrade is complete and a snapshot of the upgraded cluster is available, all snapshots taken with the previous major version of {{es}} are stored in the snapshot repository. - - From version 8.3, snapshots are generally available as simple archives. Use the [archive functionality](../../../deploy-manage/upgrade/deployment-or-cluster/reading-indices-from-older-elasticsearch-versions.md) to search snapshots as old as version 5.0 without the need of an old {{es}} cluster. This ensures that data you store in {{es}} doesn’t have an end of life and is still accessible when you upgrade, without requiring a reindex process. - - On {{ece}}, you need to [configure a snapshot repository](/deploy-manage/tools/snapshot-and-restore/cloud-enterprise.md) to enable snapshots. 
- - -Breaking changes -: Major version upgrades can include breaking changes that require you to take additional steps to ensure that your applications behave as expected after the upgrade. Make sure you test against the new version before upgrading existing deployments. - - Review the [*Breaking changes*](https://www.elastic.co/guide/en/elastic-stack/current/elastic-stack-breaking-changes.html) and upgrade your code to work with 9.0.0-beta1. - - -Known issues -: In {{es}} 7.8 and later, {{ilm}} ({{ilm-init}}) is always enabled, even if the cluster is still using deprecated index curation methods. If index curation and {{ilm-init}} are both configured to manage the same indices, the outcome can be unpredictable. Elastic solutions default to using {{ilm-init}}, and we strongly encourage you to [migrate all of your indices to {{ilm-init}}](../../../manage-data/lifecycle/index-lifecycle-management.md). - -Security realm settings -: During the upgrade process, you are prompted to update the security realm settings if your user settings include a `xpack.security.authc.realms` value. - - If the security realms are configured in `user_settings`, you’ll be prompted to modify the settings: - - 1. On the **Update security realm settings** window, edit the settings. - 2. Click **Update settings**. If the security realm settings are located in `user_settings_override`, contact support to help you upgrade. - - - -## Perform the upgrade [perform-cloud-upgrade] - -Log in to your Elastic Cloud environment: - - - -:::::::{tab-set} - -::::::{tab-item} {{ech}} -1. Log in to the [{{ecloud}} Console](https://cloud.elastic.co/?page=docs&placement=docs-body). -2. Select your deployment on the home page in the **Hosted deployments** card or go to the **Deployments** page. - - Narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. -:::::: - -::::::{tab-item} Elastic Cloud Enterprise -1. [Log into the Cloud UI](/deploy-manage/deploy/cloud-enterprise/log-into-cloud-ui.md) -2. On the **Deployments** page, select your deployment. - - Narrow the list by name, ID, or choose from several other filters. To further define the list, use a combination of filters. -:::::: - -::::::: -To upgrade a deployment: - -1. In the **Deployment version** section, click **Upgrade**. -2. Select version 9.0.0-beta1. -3. Click **Upgrade** and then **Confirm upgrade**. The new configuration takes a few minutes to create. - - ::::{note} - If any incompatibilities are detected when you attempt to upgrade to 9.0.0-beta1, the UI provides a link to the Upgrade Assistant, which checks for deprecated settings in your cluster and indices and helps you resolve them. After resolving the issues, return to the **Deployments** page and restart the upgrade. - :::: - - - -## Upgrading {{es}} clients and ingest components [upgrading-clients-ingest] - -Once you have upgraded from 8.17, you need to update your {{es}} clients and ingest components in the following order: - -1. Java API Client: [dependency configuration](elasticsearch-java://reference/installation.md#maven) -2. Logstash: [upgrade instructions](logstash://reference/upgrading-logstash.md) -3. Beats: [upgrade instructions](beats://reference/libbeat/upgrading.md) -4. 
{{agent}}: [upgrade instructions](/reference/ingestion-tools/fleet/upgrade-elastic-agent.md) - - -## Reindex to upgrade [upgrading-reindex] - -If you are running a pre-8.x version, you might need to perform multiple upgrades or a full-cluster restart to get to 8.17 to prepare to upgrade to 9.0.0-beta1. - -Alternatively, you can create a new 9.0.0-beta1 deployment and reindex from remote: - -1. Provision an additional deployment running 9.0.0-beta1. -2. Reindex your data into the new {{es}} cluster using [reindex from remote](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-reindex) and temporarily send new index requests to both clusters. -3. Verify that the new cluster performs as expected, fix any problems, and then permanently swap in the new cluster. -4. Delete the old deployment. On Elastic Cloud, you are billed only for the time that the new deployment runs in parallel with your old deployment. Usage is billed on an hourly basis. - diff --git a/raw-migrated-files/stack-docs/elastic-stack/upgrading-elastic-stack-on-prem.md b/raw-migrated-files/stack-docs/elastic-stack/upgrading-elastic-stack-on-prem.md deleted file mode 100644 index bfd2fbd615..0000000000 --- a/raw-migrated-files/stack-docs/elastic-stack/upgrading-elastic-stack-on-prem.md +++ /dev/null @@ -1,16 +0,0 @@ -# Upgrade Elastic on-prem [upgrading-elastic-stack-on-prem] - -Once you are [prepared to upgrade](../../../deploy-manage/upgrade/deployment-or-cluster.md), you will need to upgrade each of your Elastic components individually. - -1. Consider closing {{ml}} jobs before you start the upgrade process. While {{ml}} jobs can continue to run during a rolling upgrade, it increases the overhead on the cluster during the upgrade process. -2. Upgrade the components of your Elastic Stack in the following order: - - 1. {{es}} Hadoop: [install instructions](elasticsearch-hadoop://reference/installation.md) - 2. {{es}}: [upgrade instructions](../../../deploy-manage/upgrade/deployment-or-cluster.md) - 3. Kibana: [upgrade instructions](../../../deploy-manage/upgrade/deployment-or-cluster.md) - 4. Java API Client: [dependency configuration](elasticsearch-java://reference/installation.md#maven) - 5. Logstash: [upgrade instructions](logstash://reference/upgrading-logstash.md) - 6. Beats: [upgrade instructions](beats://reference/libbeat/upgrading.md) - 7. {{agent}}: [upgrade instructions](/reference/ingestion-tools/fleet/upgrade-elastic-agent.md) - - diff --git a/raw-migrated-files/stack-docs/elastic-stack/upgrading-elastic-stack.md b/raw-migrated-files/stack-docs/elastic-stack/upgrading-elastic-stack.md deleted file mode 100644 index e12aafd984..0000000000 --- a/raw-migrated-files/stack-docs/elastic-stack/upgrading-elastic-stack.md +++ /dev/null @@ -1,88 +0,0 @@ -# Upgrade to Elastic 9.0.0-beta1 [upgrading-elastic-stack] - -Before you upgrade to 9.0.0-beta1, it’s important to take some preparation steps. These steps vary based on your current version: - -* [Upgrade from an earlier 8.x version](../../../deploy-manage/upgrade/deployment-or-cluster.md#prepare-to-upgrade-8x) -* [Upgrade from 7.x](../../../deploy-manage/upgrade/deployment-or-cluster.md#prepare-to-upgrade) - -::::{important} -Upgrading from a release candidate build, such as 8.0.0-rc1 or 8.0.0-rc2, is not supported. Pre-releases should only be used for testing in a temporary environment. -:::: - - - -## Prepare to upgrade from an earlier 8.x version [prepare-to-upgrade-8x] - -1. 
Review the breaking changes for each product you use and make the necessary changes so your code is compatible with 9.0.0-beta1: - - * [APM breaking changes](https://www.elastic.co/guide/en/observability/current/apm-breaking.html) - * [{{beats}} breaking changes](asciidocalypse://docs/beats/docs/release-notes/breaking-changes.md) - * [{{es}} migration guide](elasticsearch://release-notes/breaking-changes.md) - * [{{elastic-sec}} release notes](https://www.elastic.co/guide/en/security/current/release-notes.html) - * [{{ents}} release notes](https://www.elastic.co/guide/en/enterprise-search/current/changelog.html) - * [{{fleet}} and {{agent}} release notes](/release-notes/fleet-elastic-agent/release-notes.md) - * [{{kib}} release notes](kibana://release-notes/index.md) - * [{{ls}} breaking changes](logstash://release-notes/breaking-changes.md) - - ::::{important} - * Make sure you check the breaking changes for each minor release up to 9.0.0-beta1. - * If you are using {{ml}} {{dfeeds}} that contain discontinued search or query domain specific language (DSL), the upgrade will fail. In 5.6.5 and later, the Upgrade Assistant provides information about which {{dfeeds}} need to be updated. - - :::: - -2. If you use any {{es}} plugins, make sure there is a version of each plugin that is compatible with {{es}} version 9.0.0-beta1. -3. Test the upgrade in an isolated environment before upgrading your production cluster. -4. Make sure you have a current snapshot before you start the upgrade. - - ::::{important} - You cannot downgrade {{es}} nodes after upgrading. If you cannot complete the upgrade process, you will need to restore from the snapshot. - :::: - -5. If you use a separate [monitoring cluster](../../../deploy-manage/monitor/stack-monitoring/elasticsearch-monitoring-self-managed.md), you should upgrade the monitoring cluster before the production cluster. In general, the monitoring cluster and the clusters being monitored should be running the same version of the stack. A monitoring cluster cannot monitor production clusters running newer versions of the stack. If necessary, the monitoring cluster can monitor production clusters running the latest release of the previous major version. - - -## Prepare to upgrade from 7.x [prepare-to-upgrade] - -To upgrade to 9.0.0-beta1 from 7.16 or earlier, **you must first upgrade to 8.17**. This enables you to use the **Upgrade Assistant** to identify and resolve issues, reindex indices created before 7.0, and then perform a rolling upgrade. - -**Upgrading to 8.17 before upgrading to 9.0.0-beta1 is required even if you opt to do a full-cluster restart of your {{es}} cluster.** Alternatively, you can create a new 9.0.0-beta1 deployment and reindex from remote. For more information, see [Reindex to upgrade](../../../deploy-manage/upgrade/deployment-or-cluster.md#upgrading-reindex). - -{{beats}} and {{ls}} 8.17 are compatible with {{es}} 9.0.0-beta1 to give you flexibility in scheduling the upgrade. - -::::{admonition} Remote cluster compatibility -:class: note - -If you use {{ccs}}, note that 9.0.0-beta1 can only search remote clusters running the previous minor version or later. For more information, see [Searching across clusters](../../../solutions/search/cross-cluster-search.md). - -If you use {{ccr}}, a cluster that contains follower indices must run the same or newer version as the remote cluster. For more information, see [Cross cluster replication](../../../deploy-manage/tools/cross-cluster-replication.md) for version compatibility matrix. 
- -You can view your remote clusters from **Stack Management > Remote Clusters**. - -:::: - - -1. Use the [Upgrade Assistant](https://www.elastic.co/guide/en/kibana/8.17/upgrade-assistant.html) to prepare for your upgrade from 8.17 to 9.0.0-beta1. The **Upgrade Assistant** identifies deprecated settings and guides you through resolving issues and reindexing indices created before 7.0. Make sure you have a current snapshot before making configuration changes or reindexing. - - **You must resolve all critical issues before proceeding with the upgrade.** - -2. Review the deprecation logs from the **Upgrade Assistant** to determine if your applications are using features that are not supported or behave differently in 8.x. See the [breaking changes](https://www.elastic.co/guide/en/elastic-stack/current/elastic-stack-breaking-changes.html) for more information about changes in 9.0.0-beta1 that could affect your application. - - ::::{important} - Make sure you check the breaking changes for each minor 8.x release up to 9.0.0-beta1. - :::: - -3. Make the recommended changes to ensure that your applications continue to operate as expected after the upgrade. - - ::::{note} - As a temporary solution, you can submit requests to 9.x using the 8.x syntax with the REST API compatibility mode. While this enables you to submit requests that use the old syntax, it does not guarantee the same behavior. REST API compatibility should be a bridge to smooth out the upgrade process, not a long term strategy. For more information, see [REST API compatibility](elasticsearch://reference/elasticsearch/rest-apis/compatibility.md). - :::: - -4. If you use any {{es}} plugins, make sure there is a version of each plugin that is compatible with {{es}} version 9.0.0-beta1. -5. Test the upgrade in an isolated environment before upgrading your production cluster. -6. Make sure you have a current snapshot before you start the upgrade. - - ::::{important} - You cannot downgrade {{es}} nodes after upgrading. If you cannot complete the upgrade process, you will need to restore from the snapshot. - :::: - -7. If you use a separate [monitoring cluster](../../../deploy-manage/monitor/stack-monitoring/elasticsearch-monitoring-self-managed.md), you should upgrade the monitoring cluster before the production cluster. In general, the monitoring cluster and the clusters being monitored should be running the same version of the stack. A monitoring cluster cannot monitor production clusters running newer versions of the stack. If necessary, the monitoring cluster can monitor production clusters running the latest release of the previous major version. diff --git a/raw-migrated-files/stack-docs/elastic-stack/upgrading-elasticsearch.md b/raw-migrated-files/stack-docs/elastic-stack/upgrading-elasticsearch.md deleted file mode 100644 index 638c157f9b..0000000000 --- a/raw-migrated-files/stack-docs/elastic-stack/upgrading-elasticsearch.md +++ /dev/null @@ -1,196 +0,0 @@ -# Upgrade {{es}} [upgrading-elasticsearch] - -An {{es}} cluster can be upgraded one node at a time so upgrading does not interrupt service. Running multiple versions of {{es}} in the same cluster beyond the duration of an upgrade is not supported, as shards cannot be replicated from upgraded nodes to nodes running the older version. - -Before you start, [take the upgrade preparation steps](../../../deploy-manage/upgrade/deployment-or-cluster.md). When performing a [rolling upgrade](../../../deploy-manage/upgrade/deployment-or-cluster.md#rolling-upgrades): - -1. 
Upgrade the data nodes first, tier-by-tier, starting with the frozen tier, then the cold tier, then the warm tier, then the hot tier, and finally any other data nodes which are not in a tier. Complete the upgrade for all nodes in each data tier before moving to the next. This ensures {{ilm-init}} can continue to move data through the tiers during the upgrade. You can get the list of nodes in a specific tier with a `GET /_nodes` request, for example: `GET /_nodes/data_frozen:true/_none`. -2. Upgrade all remaining nodes that are neither master-eligible nor data nodes. This includes dedicated ML nodes, dedicated ingest nodes, and dedicated coordinating nodes. -3. Upgrade the master-eligible nodes last. You can retrieve a list of these nodes with `GET /_nodes/master:true/_none`. - -This order ensures that all nodes can join the cluster during the upgrade. Upgraded nodes can join a cluster with an older master, but older nodes cannot always join a cluster with a upgraded master. - -To upgrade a cluster: - -1. **Disable shard allocation**. - - When you shut down a data node, the allocation process waits for `index.unassigned.node_left.delayed_timeout` (by default, one minute) before starting to replicate the shards on that node to other nodes in the cluster, which can involve a lot of I/O. Since the node is shortly going to be restarted, this I/O is unnecessary. You can avoid racing the clock by [disabling allocation](elasticsearch://reference/elasticsearch/configuration-reference/cluster-level-shard-allocation-routing-settings.md#cluster-routing-allocation-enable) of replicas before shutting down [data nodes](elasticsearch://reference/elasticsearch/configuration-reference/node-settings.md#data-node): - - ```console - PUT _cluster/settings - { - "persistent": { - "cluster.routing.allocation.enable": "primaries" - } - } - ``` - -2. **Stop non-essential indexing and perform a flush.** (Optional) - - While you can continue indexing during the upgrade, shard recovery is much faster if you temporarily stop non-essential indexing and perform a [flush](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-flush). - - ```console - POST /_flush - ``` - -3. **Temporarily stop the tasks associated with active {{ml}} jobs and {{dfeeds}}.** (Optional) - - It is possible to leave your {{ml}} jobs running during the upgrade, but it puts increased load on the cluster. When you shut down a {{ml}} node, its jobs automatically move to another node and restore the model states. - - ::::{note} - Any {{ml}} indices created before 8.x must be reindexed before upgrading, which you can initiate from the **Upgrade Assistant** in 8.17. - :::: - - - * Temporarily halt the tasks associated with your {{ml}} jobs and {{dfeeds}} and prevent new jobs from opening by using the [set upgrade mode API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ml-set-upgrade-mode): - - ```console - POST _ml/set_upgrade_mode?enabled=true - ``` - - When you disable upgrade mode, the jobs resume using the last model state that was automatically saved. This option avoids the overhead of managing active jobs during the upgrade and is faster than explicitly stopping {{dfeeds}} and closing jobs. - - * [Stop all {{dfeeds}} and close all jobs](../../../explore-analyze/machine-learning/anomaly-detection/ml-ad-run-jobs.md#ml-ad-close-job). This option saves the model state at the time of closure. When you reopen the jobs after the upgrade, they use the exact same model. 
However, saving the latest model state takes longer than using upgrade mode, especially if you have a lot of jobs or jobs with large model states. - -4. $$$upgrade-node$$$ **Shut down a single node**. - - To shut down a single node depends on what is currently used to run {{es}}. For example, if using `systemd` or SysV `init` run the commands below. - - * If you are running {{es}} with `systemd`: - - ```sh - sudo systemctl stop elasticsearch.service - ``` - - * If you are running {{es}} with SysV `init`: - - ```sh - sudo -i service elasticsearch stop - ``` - -5. **Upgrade the node you shut down.** - - To upgrade using a [Debian](../../../deploy-manage/deploy/self-managed/install-elasticsearch-with-debian-package.md) or [RPM](../../../deploy-manage/deploy/self-managed/install-elasticsearch-with-rpm.md) package: - - * Use `rpm` or `dpkg` to install the new package. All files are installed in the appropriate location for the operating system and {{es}} config files are not overwritten. - - To upgrade using a zip or compressed tarball: - - 1. Extract the zip or tarball to a **new** directory. This is critical if you are not using external `config` and `data` directories. - 2. Set the `ES_PATH_CONF` environment variable to specify the location of your external `config` directory and `jvm.options` file. If you are not using an external `config` directory, copy your old configuration over to the new installation. - 3. Set `path.data` in `config/elasticsearch.yml` to point to your external data directory. If you are not using an external `data` directory, copy your old data directory over to the new installation.
- - ::::{important} - If you use {{monitor-features}}, re-use the data directory when you upgrade {{es}}. Monitoring identifies unique {{es}} nodes by using the persistent UUID, which is stored in the data directory. - :::: - - 4. Set `path.logs` in `config/elasticsearch.yml` to point to the location where you want to store your logs. If you do not specify this setting, logs are stored in the directory you extracted the archive to. - - ::::{tip} - When you extract the zip or tarball packages, the `elasticsearch-{{bare_version}}` directory contains the {{es}} `config`, `data`, and `logs` directories. - - We recommend moving these directories out of the {{es}} directory so that there is no chance of deleting them when you upgrade {{es}}. To specify the new locations, use the `ES_PATH_CONF` environment variable and the `path.data` and `path.logs` settings. For more information, see [Important {{es}} configuration](../../../deploy-manage/deploy/self-managed/important-settings-configuration.md). - - The Debian and RPM packages place these directories in the appropriate place for each operating system. In production, we recommend using the deb or rpm package. - - :::: - - - $$$rolling-upgrades-bootstrapping$$$ - Leave `cluster.initial_master_nodes` unset when performing a rolling upgrade. Each upgraded node is joining an existing cluster so there is no need for [cluster bootstrapping](../../../deploy-manage/distributed-architecture/discovery-cluster-formation/modules-discovery-bootstrap-cluster.md). You must configure [either `discovery.seed_hosts` or `discovery.seed_providers`](../../../deploy-manage/deploy/self-managed/important-settings-configuration.md#discovery-settings) on every node. - -6. **Upgrade any plugins.** - - Use the `elasticsearch-plugin` script to install the upgraded version of each installed {{es}} plugin. All plugins must be upgraded when you upgrade a node. - -7. **Start the upgraded node.** - - Start the newly-upgraded node and confirm that it joins the cluster by checking the log file or by submitting a `_cat/nodes` request: - - ```console - GET _cat/nodes - ``` - -8. **Reenable shard allocation.** - - For data nodes, once the node has joined the cluster, remove the `cluster.routing.allocation.enable` setting to enable shard allocation and start using the node: - - ```console - PUT _cluster/settings - { - "persistent": { - "cluster.routing.allocation.enable": null - } - } - ``` - -9. **Wait for the node to recover.** - - Before upgrading the next node, wait for the cluster to finish shard allocation. You can check progress by submitting a `_cat/health` request: - - ```console - GET _cat/health?v=true - ``` - - Wait for the `status` column to switch to `green`. Once the node is `green`, all primary and replica shards have been allocated. - - ::::{important} - During a rolling upgrade, primary shards assigned to a node running the new version cannot have their replicas assigned to a node with the old version. The new version might have a different data format that is not understood by the old version. - - If it is not possible to assign the replica shards to another node (there is only one upgraded node in the cluster), the replica shards remain unassigned and status stays `yellow`. - - In this case, you can proceed once there are no initializing or relocating shards (check the `init` and `relo` columns). - - As soon as another node is upgraded, the replicas can be assigned and the status will change to `green`. 
- - :::: - - - Shards that were not flushed might take longer to recover. You can monitor the recovery status of individual shards by submitting a `_cat/recovery` request: - - ```console - GET _cat/recovery - ``` - - If you stopped indexing, it is safe to resume indexing as soon as recovery completes. - -10. **Repeat**. - - When the node has recovered and the cluster is stable, repeat these steps for each node that needs to be updated. You can monitor the health of the cluster with a `_cat/health` request: - - ```console - GET /_cat/health?v=true - ``` - - And check which nodes have been upgraded with a `_cat/nodes` request: - - ```console - GET /_cat/nodes?h=ip,name,version&v=true - ``` - -11. **Restart machine learning jobs.** - - If you temporarily halted the tasks associated with your {{ml}} jobs, use the set upgrade mode API to return them to active states: - - ```console - POST _ml/set_upgrade_mode?enabled=false - ``` - - If you closed all {{ml}} jobs before the upgrade, open the jobs and start the datafeeds from {{kib}} or with the [open jobs](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ml-open-job) and [start datafeed](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ml-start-datafeed) APIs. - - - -## Rolling upgrades [rolling-upgrades] - -During a rolling upgrade, the cluster continues to operate normally. However, any new functionality is disabled or operates in a backward compatible mode until all nodes in the cluster are upgraded. New functionality becomes operational once the upgrade is complete and all nodes are running the new version. Once that has happened, there’s no way to return to operating in a backward compatible mode. Nodes running the previous version will not be allowed to join the fully-updated cluster. - -In the unlikely case of a network malfunction during the upgrade process that isolates all remaining old nodes from the cluster, you must take the old nodes offline and upgrade them to enable them to join the cluster. - -If you stop half or more of the master-eligible nodes all at once during the upgrade the cluster will become unavailable. You must upgrade and restart all of the stopped master-eligible nodes to allow the cluster to re-form. It might also be necessary to upgrade all other nodes running the old version to enable them to join the re-formed cluster. - -Similarly, if you run a testing/development environment with a single master node it should be upgraded last. Restarting a single master node forces the cluster to be reformed. The new cluster will initially only have the upgraded master node and will thus reject the older nodes when they re-join the cluster. Nodes that have already been upgraded will successfully re-join the upgraded master. - - -## Archived settings [archived-settings] - -If you upgrade an {{es}} cluster that uses deprecated cluster or index settings that are not used in the target version, they are archived. We recommend you remove any archived settings after upgrading. For more information, see [Archived settings](../../../deploy-manage/upgrade/deployment-or-cluster/archived-settings.md). 
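-
-For illustration, a pair of requests along these lines lists any archived cluster settings left over after the upgrade and then clears them all at once. This is a sketch that assumes you want to drop every archived setting; review the output of the first request before running the second:
-
-```console
-# List archived cluster settings, if any remain
-GET _cluster/settings?flat_settings=true&filter_path=persistent.archived*
-
-# Remove all archived cluster settings
-PUT _cluster/settings
-{
-  "persistent": {
-    "archived.*": null
-  }
-}
-```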
diff --git a/raw-migrated-files/stack-docs/elastic-stack/upgrading-kibana.md b/raw-migrated-files/stack-docs/elastic-stack/upgrading-kibana.md deleted file mode 100644 index 2c5eb70080..0000000000 --- a/raw-migrated-files/stack-docs/elastic-stack/upgrading-kibana.md +++ /dev/null @@ -1,35 +0,0 @@ -# Upgrade {{kib}} [upgrading-kibana] - -::::{warning} -{{kib}} automatically runs saved object migrations when required. To roll back to an earlier version in case of an upgrade failure, you **must** have a [backup snapshot](../../../deploy-manage/tools/snapshot-and-restore.md) that includes the `kibana` feature state. Snapshots include this feature state by default. - -For more information, check [Migrate saved objects](../../../deploy-manage/upgrade/internal-upgrade-processes/saved-object-migrations.md). - -:::: - - -Before you start, [take the upgrade preparation steps](../../../deploy-manage/upgrade/deployment-or-cluster.md). To upgrade {{kib}}: - -1. Shut down all {{kib}} instances. {{kib}} does not support rolling upgrades. **Upgrading while older {{kib}} instances are running can cause data loss or upgrade failures.** -2. To install the `deb` or `rpm` package: - - 1. Use `rpm` or `dpkg`. This installs all files in their proper locations and will not overwrite the config files. - 2. Upgrade any plugins by removing the existing plugin and reinstalling the appropriate version using the `kibana-plugin` script. For more information, see [{{kib}} plugins](kibana://reference/kibana-plugins.md). - -3. To install from a `zip` or `tar.gz` archive: - - 1. **Extract the archive to a new directory** to be sure that you don’t overwrite the `config` or `data` directories. - 2. Copy the files from the `config` directory from your old installation to your new installation. - 3. Copy the files from the `data` directory from your old installation to your new installation. - - ::::{important} - If you use {{monitor-features}}, you must re-use the data directory when you upgrade {{kib}}. Otherwise, the {{kib}} instance is assigned a new persistent UUID and becomes a new instance in the monitoring data. - :::: - - 4. Install the appropriate versions of all your plugins for your new installation using the `kibana-plugin` script. For more information, see [{{kib}} plugins](kibana://reference/kibana-plugins.md). - -4. Start {{kib}}. - -::::{important} -{{kib}} has a new logging system in 8.0 and the log formats have changed. For additional information, see [Logging configuration changes](kibana://extend/logging-config-changes.md). 
-:::: diff --git a/raw-migrated-files/toc.yml b/raw-migrated-files/toc.yml index e02dd8d1c6..bd2b768565 100644 --- a/raw-migrated-files/toc.yml +++ b/raw-migrated-files/toc.yml @@ -14,7 +14,6 @@ toc: - file: cloud-on-k8s/cloud-on-k8s/index.md children: - file: cloud-on-k8s/cloud-on-k8s/k8s-securing-stack.md - - file: cloud-on-k8s/cloud-on-k8s/k8s-upgrading-stack.md - file: cloud/cloud-enterprise/index.md children: - file: cloud/cloud-enterprise/ece_re_running_the_ece_upgrade.md @@ -25,7 +24,6 @@ toc: - file: cloud/cloud-enterprise/ece-securing-ece.md - file: cloud/cloud-enterprise/ece-snapshots.md - file: cloud/cloud-enterprise/ece-terminate-deployment.md - - file: cloud/cloud-enterprise/ece-upgrade-deployment.md - file: cloud/cloud-enterprise/ece-upgrade.md - file: cloud/cloud-heroku/index.md children: @@ -45,7 +43,6 @@ toc: - file: cloud/cloud-heroku/ech-restoring-snapshots.md - file: cloud/cloud-heroku/ech-security.md - file: cloud/cloud-heroku/ech-snapshot-restore.md - - file: cloud/cloud-heroku/ech-upgrade-deployment.md - file: cloud/cloud/index.md children: - file: cloud/cloud/ec_service_status_api.md @@ -78,7 +75,6 @@ toc: - file: cloud/cloud/ec-select-subscription-level.md - file: cloud/cloud/ec-service-status.md - file: cloud/cloud/ec-snapshot-restore.md - - file: cloud/cloud/ec-upgrade-deployment.md - file: docs-content/serverless/index.md children: - file: docs-content/serverless/intro.md @@ -119,9 +115,7 @@ toc: children: - file: kibana/kibana/apm-settings-kb.md - file: kibana/kibana/logging-settings.md - - file: kibana/kibana/reporting-production-considerations.md - - file: kibana/kibana/upgrade-migrations-rolling-back.md - - file: kibana/kibana/upgrade.md + - file: kibana/kibana/reporting-production-considerations.md - file: kibana/kibana/xpack-security.md - file: observability-docs/observability/index.md children: @@ -130,8 +124,3 @@ toc: - file: stack-docs/elastic-stack/index.md children: - file: stack-docs/elastic-stack/installing-stack-demo-self.md - - file: stack-docs/elastic-stack/upgrade-elastic-stack-for-elastic-cloud.md - - file: stack-docs/elastic-stack/upgrading-elastic-stack-on-prem.md - - file: stack-docs/elastic-stack/upgrading-elastic-stack.md - - file: stack-docs/elastic-stack/upgrading-elasticsearch.md - - file: stack-docs/elastic-stack/upgrading-kibana.md diff --git a/troubleshoot/kibana/migration-failures.md b/troubleshoot/kibana/migration-failures.md index 07a89ae6e8..a82ba4c17d 100644 --- a/troubleshoot/kibana/migration-failures.md +++ b/troubleshoot/kibana/migration-failures.md @@ -13,7 +13,7 @@ Migrating {{kib}} primarily involves migrating saved object documents to be comp If {{kib}} unexpectedly terminates while migrating a saved object index, {{kib}} automatically attempts to perform the migration again when the process restarts. Do not delete any saved objects indices to fix a failed migration. Unlike previous versions, {{kib}} 7.12.0 and later does not require deleting indices to release a failed migration lock. -If upgrade migrations fail repeatedly, refer to [preparing for migration](../../deploy-manage/upgrade/deployment-or-cluster.md#preventing-migration-failures). When you address the root cause for the migration failure, {{kib}} automatically retries the migration. If you’re unable to resolve a failed migration, contact Support. +If upgrade migrations fail repeatedly, refer to [preparing for migration](../../deploy-manage/upgrade/prepare-to-upgrade.md). 
When you address the root cause for the migration failure, {{kib}} automatically retries the migration. If you’re unable to resolve a failed migration, contact Support. ## Corrupt saved objects [_corrupt_saved_objects]