Commit 9c9edf5

Merge branch 'elastic:main' into add-semantic-text-index-options-examples
2 parents: dd7b107 + 2191afd

15 files changed: +118 -167 lines changed

deploy-manage/deploy/cloud-enterprise/configure-host-rhel.md

Lines changed: 7 additions & 7 deletions

@@ -99,26 +99,26 @@ Verify that required traffic is allowed. Check the [Networking prerequisites](ec
 
 * For Podman 5
 
-* Install the latest available version of Podman `5.2.2` using dnf.
+* Install the latest available version `5.*` using dnf.
 
 :::{note}
-Podman versions `5.2.2-11` and `5.2.2-13` are affected by a known [memory leak issue](https://github.com/containers/podman/issues/25473). To avoid this bug, use a later build of `5.2.2`, such as `5.2.2-16` or newer. Refer to the official [Support matrix](https://www.elastic.co/support/matrix#elastic-cloud-enterprise) for more information.
+Podman versions `5.2.2-11` and `5.2.2-13` are affected by a known [memory leak issue](https://github.com/containers/podman/issues/25473). To avoid this bug, use a later version. Refer to the official [Support matrix](https://www.elastic.co/support/matrix#elastic-cloud-enterprise) for more information.
 :::
 
 ```sh
-sudo dnf install podman-5.2.2 podman-remote-5.2.2
+sudo dnf install podman-5.* podman-remote-5.*
 ```
-* To prevent automatic Podman updates to unsupported versions, configure the Podman version to be locked at version `5.2.2`.
+* To prevent automatic Podman major version updates, configure the Podman version to be locked at version `5.*` while still allowing minor and patch updates.
 
 ```sh
 ## Install versionlock
 sudo dnf install 'dnf-command(versionlock)'
 
 ## Lock major version
-sudo dnf versionlock add --raw 'podman-5.2.2'
-sudo dnf versionlock add --raw 'podman-remote-5.2.2'
+sudo dnf versionlock add --raw 'podman-5.*'
+sudo dnf versionlock add --raw 'podman-remote-5.*'
 
-## Verify that podman-5.2.2 and podman-remote-5.2.2 appear in the output
+## Verify that podman-5.* and podman-remote-5.* appear in the output
 sudo dnf versionlock list
 ```
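As a sanity check after the wildcard-based install and lock shown above (not part of this commit; package names follow the snippet), the resolved Podman build and the active locks can be confirmed with:

```sh
## Show which build the podman-5.* wildcard resolved to
podman --version
rpm -q podman podman-remote

## Show the active version locks
sudo dnf versionlock list | grep -i podman
```

If the resolved build is one of the affected `5.2.2-11` or `5.2.2-13` releases, update to a newer build before relying on the lock, per the note in the diff.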

deploy-manage/deploy/cloud-enterprise/migrate-ece-to-podman-hosts.md

Lines changed: 7 additions & 7 deletions

@@ -129,27 +129,27 @@ Using Docker or Podman as container runtime is a configuration local to the host
 
 * For Podman 5
 
-* Install the latest available version of Podman `5.2.2` using dnf.
+* Install the latest available version `5.*` using dnf.
 
 :::{note}
-Podman versions `5.2.2-11` and `5.2.2-13` are affected by a known [memory leak issue](https://github.com/containers/podman/issues/25473). To avoid this bug, use a later build of `5.2.2`, such as `5.2.2-16` or newer. Refer to the official [Support matrix](https://www.elastic.co/support/matrix#elastic-cloud-enterprise) for more information.
+Podman versions `5.2.2-11` and `5.2.2-13` are affected by a known [memory leak issue](https://github.com/containers/podman/issues/25473). To avoid this bug, use a later version. Refer to the official [Support matrix](https://www.elastic.co/support/matrix#elastic-cloud-enterprise) for more information.
 :::
 
 ```sh
-sudo dnf install podman-5.2.2 podman-remote-5.2.2
+sudo dnf install podman-5.* podman-remote-5.*
 ```
 
-* To prevent automatic Podman updates to unsupported versions, configure the Podman version to be locked at version `5.2.2`.
+* To prevent automatic Podman major version updates, configure the Podman version to be locked at version `5.*` while still allowing minor and patch updates.
 
 ```sh
 ## Install versionlock
 sudo dnf install 'dnf-command(versionlock)'
 
 ## Lock major version
-sudo dnf versionlock add --raw 'podman-5.2.2'
-sudo dnf versionlock add --raw 'podman-remote-5.2.2'
+sudo dnf versionlock add --raw 'podman-5.*'
+sudo dnf versionlock add --raw 'podman-remote-5.*'
 
-## Verify that podman-5.2.2 and podman-remote-5.2.2 appear in the output
+## Verify that podman-5.* and podman-remote-5.* appear in the output
 sudo dnf versionlock list
 ```

deploy-manage/deploy/cloud-enterprise/migrate-to-podman-5.md

Lines changed: 10 additions & 10 deletions

@@ -13,21 +13,21 @@ This guide describes the supported ways to upgrade or migrate your {{ece}} (ECE)
 
 * **Grow-and-shrink upgrade**: [Add new hosts](./install-ece-on-additional-hosts.md) running the desired Podman version to your ECE installation, then [remove the old ones](/deploy-manage/uninstall/uninstall-elastic-cloud-enterprise.md). This method is safer and preferred, as it avoids potential risks associated with upgrading the container engine or the operating system in place.
 
-ECE only supports Podman 5 in version `5.2.2`, regardless of your upgrade method. Later versions such as `5.2.3` and above are not supported. Refer always to the official [Support matrix](https://www.elastic.co/support/matrix#elastic-cloud-enterprise) for details on supported versions.
+ECE supports Podman 5, regardless of your upgrade method. Refer to the official [Support matrix](https://www.elastic.co/support/matrix#elastic-cloud-enterprise) for details on supported versions.
 
 :::{important}
-Podman versions `5.2.2-11` and `5.2.2-13` are affected by a known [memory leak issue](https://github.com/containers/podman/issues/25473). To avoid this issue, use a later build such as `5.2.2-16` or newer.
+Podman versions `5.2.2-11` and `5.2.2-13` are affected by a known [memory leak issue](https://github.com/containers/podman/issues/25473). To avoid this issue, use a later Podman version.
 :::
 
 The following table summarizes the supported upgrade paths to Podman 5 in ECE.
 
-| **From ↓** ... **To →** | Podman 5.2.2-latest | Podman 5.2.3 |
-|-----------------------------------------|-----------------|--------------|
-| **<vanilla Linux installation> (grow)** || X |
-| **Docker (grow-and-shrink)** || X |
-| **Podman 4.9.4 (grow-and-shrink)** || X |
-| **Podman 4.9.4 (in-place)** || X |
-| **Podman 5.2.2 (grow-and-shrink)** || X |
-| **Podman 5.2.2 (in-place)** || X |
+| **From ↓** ... **To →** | Podman 5 |
+|-----------------------------------------|-----------------|
+| **<vanilla Linux installation> (grow)** ||
+| **Docker (grow-and-shrink)** ||
+| **Podman 4.9.4 (grow-and-shrink)** ||
+| **Podman 4.9.4 (in-place)** ||
+| **Podman 5.2.2 (grow-and-shrink)** ||
+| **Podman 5.2.2 (in-place)** ||
 
 As shown in the table above, [migrations from Docker](./migrate-ece-to-podman-hosts.md) are only supported using the grow-and-shrink method.
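For the in-place rows in the table, a plausible dnf sequence on a host currently locked to Podman `4.9.4` (a sketch based on the versionlock commands used elsewhere in this commit, not an official procedure) would be:

```sh
## Drop the old lock, move to the supported Podman 5 packages, then re-lock
sudo dnf versionlock delete 'podman*'
sudo dnf install podman-5.* podman-remote-5.*
sudo dnf versionlock add --raw 'podman-5.*'
sudo dnf versionlock add --raw 'podman-remote-5.*'
```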

deploy-manage/deploy/cloud-on-k8s/elasticsearch-deployment-quickstart.md

Lines changed: 1 addition & 1 deletion

@@ -148,5 +148,5 @@ In order to make requests to the [{{es}} API](elasticsearch://reference/elastics
 This completes the quickstart of deploying an {{es}} cluster. We recommend continuing to:
 
 * [Deploy a {{kib}} instance](kibana-instance-quickstart.md)
-* For information about how to apply changes to your deployments, refer to [aplying updates](./update-deployments.md).
+* For information about how to apply changes to your deployments, refer to [applying updates](./update-deployments.md).
 * To explore other configuration options for your {{es}} cluster, see [](./elasticsearch-configuration.md) and [](./configure-deployments.md).

deploy-manage/deploy/cloud-on-k8s/managing-deployments-using-helm-chart.md

Lines changed: 2 additions & 8 deletions

@@ -144,10 +144,7 @@ kind: Ingress
 metadata:
   name: elasticsearch
   labels:
-    helm.sh/chart: eck-elasticsearch-0.14.1
-    app.kubernetes.io/name: eck-elasticsearch
-    app.kubernetes.io/instance: es-kb-quickstart
-    app.kubernetes.io/managed-by: Helm
+    ...
 spec:
   rules:
   - host: "elasticsearch.example.com"

@@ -167,10 +164,7 @@ kind: Ingress
 metadata:
   name: es-kb-quickstart-eck-kibana
   labels:
-    helm.sh/chart: eck-kibana-0.14.1
-    app.kubernetes.io/name: eck-kibana
-    app.kubernetes.io/instance: es-kb-quickstart
-    app.kubernetes.io/managed-by: Helm
+    ...
 spec:
   rules:
   - host: "kibana.example.com"

deploy-manage/tools/cross-cluster-replication.md

Lines changed: 2 additions & 5 deletions

@@ -2,11 +2,8 @@
 mapped_pages:
   - https://www.elastic.co/guide/en/elasticsearch/reference/current/xpack-ccr.html
 applies_to:
-  deployment:
-    eck:
-    ess:
-    ece:
-    self:
+  stack: ga
+  serverless: unavailable
 products:
   - id: elasticsearch
 ---
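To apply the same `applies_to` simplification elsewhere, pages still using the nested `deployment:` form can be listed with a grep such as the following (a sketch; the indentation pattern is an assumption about the front matter layout):

```sh
## Find markdown files whose front matter still nests deployment: under applies_to:
grep -rl --include='*.md' '^  deployment:' deploy-manage/
```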

docset.yml

Lines changed: 1 addition & 1 deletion

@@ -276,7 +276,7 @@ subs:
   agent-pull: "https://github.com/elastic/elastic-agent/pull/"
   es-pull: "https://github.com/elastic/elasticsearch/pull/"
   kib-pull: "https://github.com/elastic/kibana/pull/"
-  eck_helm_minimum_version: "3.2.0"
+  eck_helm_minimum_version: 3.2.0
   eck_resources_list: "Elasticsearch, Kibana, APM Server, Beats, Elastic Agent, Elastic Maps Server, and Logstash"
   eck_resources_list_short: "APM Server, Beats, Elastic Agent, Elastic Maps Server, and Logstash"
   heroku: "Elasticsearch Add-on for Heroku"

redirects.yml

Lines changed: 5 additions & 2 deletions

@@ -206,7 +206,7 @@ redirects:
   'troubleshoot/elasticsearch/elasticsearch-client-net-api/logging.md': 'troubleshoot/elasticsearch/clients.md'
   'troubleshoot/elasticsearch/elasticsearch-client-net-api/net.md': 'troubleshoot/elasticsearch/clients.md'
   'troubleshoot/elasticsearch/elasticsearch-client-ruby-api/ruby.md': 'troubleshoot/elasticsearch/clients.md'
-  'solutions/observability/get-started/add-data-from-splunk.md': 'solutions/observability/get-started/other-tutorials/add-data-from-splunk.md'
+  'solutions/observability/get-started/add-data-from-splunk.md': 'solutions/observability/get-started.md'
   'solutions/observability/get-started/create-an-observability-project.md': 'solutions/observability/get-started.md'
   'solutions/observability/get-started/get-started-with-dashboards.md': 'solutions/observability/get-started.md'
   # Related to https://github.com/elastic/docs-content/pull/1329

@@ -586,4 +586,7 @@ redirects:
   'deploy-manage/monitor/autoops/cc-cloud-connect-autoops-faq.md': 'deploy-manage/monitor/autoops/ec-autoops-faq.md'
 
   # Related to https://github.com/elastic/docs-team/issues/104
-  'solutions/observability/get-started/what-is-elastic-observability': 'solutions/observability.md'
+  'solutions/observability/get-started/what-is-elastic-observability': 'solutions/observability.md'
+
+  # Related to https://github.com/elastic/docs-content/pull/3808
+  'solutions/observability/get-started/other-tutorials/add-data-from-splunk.md': 'solutions/observability/get-started.md'
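A quick check that the retired tutorial path now appears only as a redirect source (a sketch, run from the repository root):

```sh
## The old Splunk tutorial path should only show up in redirects.yml
grep -rn 'add-data-from-splunk' redirects.yml
```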

reference/fleet/migrate-elastic-agent.md

Lines changed: 36 additions & 18 deletions

@@ -179,7 +179,6 @@ After the restart, {{integrations-server}} will enroll a new {{agent}} for the {
 ::::
 
 
-
 ### Confirm your policy settings [migrate-elastic-agent-confirm-policy]
 
 Now that the {{fleet}} settings are correctly set up, it pays to ensure that the {{agent}} policy is also correctly pointing to the correct entities.

@@ -200,7 +199,6 @@ If you modified the {{fleet-server}} and the output in place these would have be
 ::::
 
 
-
 ## Agent policies in the new target cluster [migrate-elastic-agent-migrated-policies]
 
 By creating the new target cluster from a snapshot, all of your policies should have been created along with all of the agents. These agents will be offline due to the fact that the actual agents are not checking in with the new, target cluster (yet) and are still communicating with the source cluster.

@@ -210,7 +208,11 @@ The agents can now be re-enrolled into these policies and migrated over to the n
 
 ## Migrate {{agent}}s to the new target cluster [migrate-elastic-agent-migrated-agents]
 
-In order to ensure that all required API keys are correctly created, the agents in your current cluster need to be re-enrolled into the new, target cluster.
+::::{note}
+Agents to be migrated cannot be tamper-protected or running as a {{fleet-server}}.
+::::
+
+In order to ensure that all required API keys are correctly created, the agents in your current cluster need to be re-enrolled into the new target cluster.
 
 This is best performed one policy at a time. For a given policy, you need to capture the enrollment token and the URL for the agent to connect to. You can find these by running the in-product steps to add a new agent.

@@ -224,27 +226,43 @@ This is best performed one policy at a time. For a given policy, you need to cap
 :screenshot:
 :::
 
-5. On the host machines where the current agents are installed, enroll the agents again using this copied URL and the enrollment token:
+5. Choose an approach:
 
-```shell
-sudo elastic-agent enroll --url=<fleet server url> --enrollment-token=<token for the new policy>
-```
+::::{tab-set}
+:::{tab-item} Fleet UI
 
-The command output should be like the following:
+{applies_to}`stack: ga 9.2` Migrate remote agents directly from the {{fleet}} UI:
 
-:::{image} images/migrate-agent-install-command-output.png
-:alt: Install command output
-:screenshot:
+1. In the source cluster, select the agents you want to migrate. Click the three dots next to the agents, and select **Migrate agents**.
+2. In the migration dialog, provide the URI and enrollment token you obtained from the target cluster.
+3. Use `replace_token` (Optional): When you are migrating a single agent, you can use the `replace_token` field to preserve the agent's original ID from the source cluster. This step helps with event matching, but will cause the migration to fail if the target cluster already has an agent with the same ID.
 :::
 
-6. The agent on each host will now check into the new {{fleet-server}} and appear in the new target cluster. In the source cluster, the agents will go offline as they won’t be sending any check-ins.
+:::{tab-item} Command line
 
-:::{image} images/migrate-agent-newly-enrolled-agents.png
-:alt: Newly enrolled agents in the target cluster
-:screenshot:
-:::
+Run the `enroll` command on each individual host:
+
+1. On the host machines where the current agents are installed, enroll the agents again using the URL and enrollment token you obtained from the target cluster:
+
+```shell
+sudo elastic-agent enroll --url=<fleet server url> --enrollment-token=<token for the new policy>
+```
+
+The command output should resemble this:
+
+:::{image} images/migrate-agent-install-command-output.png
+:alt: Install command output
+:screenshot:
+:::
+
+2. The agent on each host will now check into the new {{fleet-server}} and appear in the new target cluster. In the source cluster, the agents will go offline as they won’t be sending any check-ins.
 
-7. Repeat this procedure for each {{agent}} policy.
+:::{image} images/migrate-agent-newly-enrolled-agents.png
+:alt: Newly enrolled agents in the target cluster
+:screenshot:
+:::
 
-If all has gone well, you’ve successfully migrated your {{fleet}}-managed {{agent}}s to a new cluster.
+3. Repeat this procedure for each {{agent}} policy.
+:::
+::::
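After re-enrolling an agent by either route, its health and check-in state can be confirmed on the host (a sketch, not part of this change):

```shell
## Verify the re-enrolled agent is healthy and reporting to the new Fleet Server
sudo elastic-agent status
```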
