28 changes: 22 additions & 6 deletions reference/fleet/migrate-elastic-agent.md
@@ -179,7 +179,6 @@ After the restart, {{integrations-server}} will enroll a new {{agent}} for the {
::::



### Confirm your policy settings [migrate-elastic-agent-confirm-policy]

Now that the {{fleet}} settings are correctly set up, it pays to ensure that the {{agent}} policy also points to the correct entities.
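
If you'd rather verify this from the command line, you can inspect the agent policies through the Fleet API. The following is a minimal sketch, assuming Kibana is reachable at `$KIBANA_URL` with credentials that have Fleet read privileges; field names such as `fleet_server_host_id` and `data_output_id` may vary by version:

```shell
# List agent policies and the entities each one points to (sketch only;
# verify field names against the Fleet API reference for your version).
curl -s -u "$KIBANA_USER:$KIBANA_PASS" "$KIBANA_URL/api/fleet/agent_policies" |
  jq '.items[] | {name, fleet_server_host_id, data_output_id, monitoring_output_id}'
```
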
@@ -200,7 +199,6 @@ If you modified the {{fleet-server}} and the output in place these would have be
::::



## Agent policies in the new target cluster [migrate-elastic-agent-migrated-policies]

Because you created the new target cluster from a snapshot, all of your policies should have been created, along with all of the agents. These agents will appear offline because the actual agents are not yet checking in with the new target cluster and are still communicating with the source cluster.
@@ -210,7 +208,11 @@ The agents can now be re-enrolled into these policies and migrated over to the n

## Migrate {{agent}}s to the new target cluster [migrate-elastic-agent-migrated-agents]

::::{note}
Agents to be migrated cannot be tamper-protected or running as a {{fleet}} server.
::::

> Reviewer note: Not implemented yet, but we will fail migration for agents that are in the process of an upgrade, including the upgrade watch period. This applies only to migration, not to calling `enroll` manually.

In order to ensure that all required API keys are correctly created, the agents in your current cluster need to be re-enrolled into the new target cluster.

This is best performed one policy at a time. For a given policy, you need to capture the enrollment token and the URL for the agent to connect to. You can find these by running the in-product steps to add a new agent.
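
If you prefer to script this step, the enrollment tokens can also be retrieved through the Fleet API. The following is a minimal sketch, assuming `$KIBANA_URL` and credentials with Fleet privileges; the exact response fields may differ across versions:

```shell
# List enrollment API keys; each entry includes the policy it belongs to
# and the token value to pass to `elastic-agent enroll`.
curl -s -u "$KIBANA_USER:$KIBANA_PASS" "$KIBANA_URL/api/fleet/enrollment_api_keys" |
  jq '.items[] | {policy_id, api_key}'
```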

@@ -224,13 +226,26 @@ This is best performed one policy at a time. For a given policy, you need to cap
:screenshot:
:::

::::{tab-set}
:::{tab-item} 9.2.0

**For 9.2.0 and later, you can migrate remote agents directly from the {{fleet}} UI, eliminating the need to run commands on each individual host.**

5. In the source cluster, select the agents you want to migrate. Click the three dots next to the agents, and select **Migrate agents**.
6. In the migration dialog, provide the URI and enrollment token you obtained from the target cluster.
7. (Optional) Use the `replace_token` field: when you are migrating a single agent, you can use `replace_token` to preserve the agent's original ID from the source cluster. This helps with event matching, but it will cause the migration to fail if the target cluster already has an agent with the same ID.
:::

:::{tab-item} 9.1.0
**On 9.1.0 and earlier, you need to run the `elastic-agent enroll` command on each individual host.**

> Reviewer note: Probably worth specifying that it's the `enroll` command that needs to be run.

> Reviewer comment on lines +229 to +240:
>
> Using 9.2.0 and 9.1.0 as the tab titles might be confusing because:
>
> - What about versions before 9.1.0, or 9.1.x versions after 9.1.0? Without interacting with the tab, it's not clear to the reader that this is possible at all in 9.0.x or 9.1.1 - 9.1.5.
> - For the most part, we have been using version tabs to specify the version in which a feature was added or made available, rather than the last version in which it was available.
> - Is it still possible to run commands on each individual host in 9.2.0, or does the user have to use the Fleet UI starting in 9.2.0? (Note: the recommended approach below assumes it's possible to use the command OR the UI in 9.2.0+.)
> - Hard-coding 9.2.0 in the tab title and in the text means that you need to hold off on merging this PR until the 9.2.0 release date, or we will be making promises about future releases.
>
> I wonder if it would be clearer to do something like this:
>
> [Screenshot: proposed rendering of the Fleet UI tab]
>
> And the Command line tab:
>
> [Screenshot: proposed rendering of the Command line tab]
>
> Proposed markdown:
5. Choose an approach:

    ::::{tab-set}
    :::{tab-item} Fleet UI

    {applies_to}`stack: ga 9.2` Migrate remote agents directly from the {{fleet}} UI:

    1. In the source cluster, select the agents you want to migrate. Click the three dots next to the agents, and select **Migrate agents**.
    2. In the migration dialog, provide the URI and enrollment token you obtained from the target cluster.
    3. Use replace_token (Optional): When you are migrating a single agent, you can use the `replace_token` field to preserve the agent's original ID from the source cluster. This step helps with event matching, but will cause the migration to fail if the target cluster already has an agent with the same ID.
    :::

    :::{tab-item} Command line
    Run commands on each individual host:

    1. On the host machines where the current agents are installed, enroll the agents again using this copied URL and the enrollment token:

        ```shell
        sudo elastic-agent enroll --url=<fleet server url> --enrollment-token=<token for the new policy>
        ```

        The command output should resemble this:

        :::{image} images/migrate-agent-install-command-output.png
        :alt: Install command output
        :screenshot:
        :::

    2. The agent on each host will now check into the new {{fleet-server}} and appear in the new target cluster. In the source cluster, the agents will go offline as they won’t be sending any check-ins.

        :::{image} images/migrate-agent-newly-enrolled-agents.png
        :alt: Newly enrolled agents in the target cluster
        :screenshot:
        :::

    3. Repeat this procedure for each {{agent}} policy.
    :::
    ::::

> Author reply: @colleenmcginnis, I agree that there is room for confusion. I was following the previously set guidelines we used here:
>
> [Screenshot: previous version-tab guidelines]
>
> Is this approach still appropriate for smaller sets of content, such as in the previous example?
>
> Thanks for the new proposal. I'll take a look.


5. On the host machines where the current agents are installed, enroll the agents again using the URL and enrollment token that you copied:

```shell
sudo elastic-agent enroll --url=<fleet server url> --enrollment-token=<token for the new policy>
```
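
For example, with purely illustrative values (the Fleet Server host and token below are hypothetical; substitute your own):

```shell
sudo elastic-agent enroll \
  --url=https://fleet.example.com:8220 \
  --enrollment-token=TU9DS19UT0tFTjpleGFtcGxlLW9ubHk=
# Add --insecure only in test setups where Fleet Server presents a
# self-signed certificate that the agent does not trust.
```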

The command output should resemble this:

:::{image} images/migrate-agent-install-command-output.png
:alt: Install command output
@@ -245,6 +260,7 @@ This is best performed one policy at a time. For a given policy, you need to cap
:::

7. Repeat this procedure for each {{agent}} policy.
:::
::::
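
To confirm the migration from the command line, you can list the agents that have checked into the target cluster through the Fleet API. The following is a minimal sketch, assuming `$KIBANA_URL` points at the target cluster's Kibana; response field names may vary by version:

```shell
# List online agents in the target cluster and show the host and policy
# each one belongs to.
curl -s -u "$KIBANA_USER:$KIBANA_PASS" "$KIBANA_URL/api/fleet/agents?kuery=status:online" |
  jq '.items[] | {hostname: .local_metadata.host.hostname, policy_id}'
```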

If all has gone well, you’ve successfully migrated your {{fleet}}-managed {{agent}}s to a new cluster.