Commit e008b61

Update doc as per SDH finding (#101285) (#108972)
(cherry picked from commit 66f7298)
Co-authored-by: Volodymyr Krasnikov <[email protected]>
1 parent 36303b8 commit e008b61

1 file changed: +21 −21 lines

docs/reference/ccr/bi-directional-disaster-recovery.asciidoc

Lines changed: 21 additions & 21 deletions. Most changed line pairs below are identical in visible content and presumably differ only in trailing whitespace; the substantive change swaps `{{leader_index}}` for `*` in the two `leader_index_exclusion_patterns` values.
@@ -10,7 +10,7 @@
 ----
 PUT _data_stream/logs-generic-default
 ----
-// TESTSETUP
+// TESTSETUP
 
 [source,console]
 ----
@@ -20,12 +20,12 @@ DELETE /_data_stream/*
 ////
 
 Learn how to set up disaster recovery between two clusters based on
-bi-directional {ccr}. The following tutorial is designed for data streams which support
-<<update-docs-in-a-data-stream-by-query,update by query>> and <<delete-docs-in-a-data-stream-by-query,delete by query>>. You can only perform these actions on the leader index.
+bi-directional {ccr}. The following tutorial is designed for data streams which support
+<<update-docs-in-a-data-stream-by-query,update by query>> and <<delete-docs-in-a-data-stream-by-query,delete by query>>. You can only perform these actions on the leader index.
 
-This tutorial works with {ls} as the source of ingestion. It takes advantage of a {ls} feature where {logstash-ref}/plugins-outputs-elasticsearch.html[the {ls} output to {es}] can be load balanced across an array of hosts specified. {beats} and {agents} currently do not
-support multiple outputs. It should also be possible to set up a proxy
-(load balancer) to redirect traffic without {ls} in this tutorial.
+This tutorial works with {ls} as the source of ingestion. It takes advantage of a {ls} feature where {logstash-ref}/plugins-outputs-elasticsearch.html[the {ls} output to {es}] can be load balanced across an array of hosts specified. {beats} and {agents} currently do not
+support multiple outputs. It should also be possible to set up a proxy
+(load balancer) to redirect traffic without {ls} in this tutorial.
 
 * Setting up a remote cluster on `clusterA` and `clusterB`.
 * Setting up bi-directional cross-cluster replication with exclusion patterns.
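The failover model this tutorial relies on can be illustrated abstractly. This is not {ls}'s actual load-balancing algorithm, just a hedged sketch of the idea the hunk above describes: an {es} output configured with an array of hosts keeps ingesting as long as at least one host is reachable.

```python
def pick_host(hosts, is_up):
    """Return the first reachable host from a configured array,
    mimicking output failover (illustrative only, not Logstash code)."""
    for host in hosts:
        if is_up(host):
            return host
    raise ConnectionError("no Elasticsearch host reachable")

hosts = ["clusterA:9200", "clusterB:9200"]

# While both clusters are up, traffic can flow to clusterA...
assert pick_host(hosts, lambda h: True) == "clusterA:9200"

# ...and when clusterA is down, everything redirects to clusterB,
# which is the behavior the failover steps below depend on.
assert pick_host(hosts, lambda h: not h.startswith("clusterA")) == "clusterB:9200"
```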
@@ -92,7 +92,7 @@ PUT /_ccr/auto_follow/logs-generic-default
   "leader_index_patterns": [
     ".ds-logs-generic-default-20*"
   ],
-  "leader_index_exclusion_patterns":"{{leader_index}}-replicated_from_clustera",
+  "leader_index_exclusion_patterns":"*-replicated_from_clustera",
   "follow_index_pattern": "{{leader_index}}-replicated_from_clusterb"
 }
 
@@ -103,7 +103,7 @@ PUT /_ccr/auto_follow/logs-generic-default
   "leader_index_patterns": [
     ".ds-logs-generic-default-20*"
   ],
-  "leader_index_exclusion_patterns":"{{leader_index}}-replicated_from_clusterb",
+  "leader_index_exclusion_patterns":"*-replicated_from_clusterb",
   "follow_index_pattern": "{{leader_index}}-replicated_from_clustera"
 }
 ----
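The substantive fix in the two hunks above swaps the `{{leader_index}}` placeholder for a literal `*` wildcard in `leader_index_exclusion_patterns`. A plausible reading (not stated in the commit message itself) is that the placeholder is only expanded in `follow_index_pattern`; used as an exclusion it is matched as literal text, excludes nothing, and the `-replicated_from_*` follower indices get re-followed. A minimal sketch of the two patterns' matching behavior, using Python's `fnmatch` as a stand-in for {es}'s wildcard matching:

```python
from fnmatch import fnmatch

# A concrete follower index name of the kind the auto-follow pattern
# would see on clusterB (hypothetical example name):
index = ".ds-logs-generic-default-2024.01.01-000001-replicated_from_clustera"

# Old (broken) exclusion: "{{leader_index}}" is treated as literal text
# here, so the pattern matches no real index name and excludes nothing.
old_pattern = "{{leader_index}}-replicated_from_clustera"
assert fnmatch(index, old_pattern) is False

# Fixed exclusion: a plain wildcard matches every replicated index.
new_pattern = "*-replicated_from_clustera"
assert fnmatch(index, new_pattern) is True
```

Note that the example index name above also matches the leader pattern `.ds-logs-generic-default-20*`, which is presumably why a working exclusion is needed to stop replicated indices from being replicated back.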
@@ -126,7 +126,7 @@ pattern in the UI. Use the API in this step.
 +
 This example uses the input generator to demonstrate the document
 count in the clusters. Reconfigure this section
-to suit your own use case.
+to suit your own use case.
 +
 [source,logstash]
 ----
@@ -171,15 +171,15 @@ Bi-directional {ccr} will create one more data stream on each of the clusters
 with the `-replication_from_cluster{a|b}` suffix. At the end of this step:
 +
 * data streams on cluster A contain:
-** 50 documents in `logs-generic-default-replicated_from_clusterb`
+** 50 documents in `logs-generic-default-replicated_from_clusterb`
 ** 50 documents in `logs-generic-default`
 * data streams on cluster B contain:
 ** 50 documents in `logs-generic-default-replicated_from_clustera`
 ** 50 documents in `logs-generic-default`
 
 . Queries should be set up to search across both data streams.
 A query on `logs*`, on either of the clusters, returns 100
-hits in total.
+hits in total.
 +
 [source,console]
 ----
@@ -199,27 +199,27 @@ use cases where {ls} ingests continuously.)
 bin/logstash -f multiple_hosts.conf
 ----
 
-. Observe all {ls} traffic will be redirected to `cluster B` automatically.
+. Observe all {ls} traffic will be redirected to `cluster B` automatically.
 +
-TIP: You should also redirect all search traffic to the `clusterB` cluster during this time.
+TIP: You should also redirect all search traffic to the `clusterB` cluster during this time.
 
-. The two data streams on `cluster B` now contain a different number of documents.
+. The two data streams on `cluster B` now contain a different number of documents.
 +
-* data streams on cluster A (down)
-** 50 documents in `logs-generic-default-replicated_from_clusterb`
+* data streams on cluster A (down)
+** 50 documents in `logs-generic-default-replicated_from_clusterb`
 ** 50 documents in `logs-generic-default`
-* data streams On cluster B (up)
+* data streams On cluster B (up)
 ** 50 documents in `logs-generic-default-replicated_from_clustera`
 ** 150 documents in `logs-generic-default`
 
 
 ==== Failback when `clusterA` comes back
-. You can simulate this by turning `cluster A` back on.
+. You can simulate this by turning `cluster A` back on.
 . Data ingested to `cluster B` during `cluster A` 's downtime will be
-automatically replicated.
+automatically replicated.
 +
 * data streams on cluster A
-** 150 documents in `logs-generic-default-replicated_from_clusterb`
+** 150 documents in `logs-generic-default-replicated_from_clusterb`
 ** 50 documents in `logs-generic-default`
 * data streams on cluster B
 ** 50 documents in `logs-generic-default-replicated_from_clustera`
@@ -271,5 +271,5 @@ POST logs-generic-default/_update_by_query
 }
 }
 ----
-+
++
 TIP: If a soft delete is merged away before it can be replicated to a follower the following process will fail due to incomplete history on the leader, see <<ccr-index-soft-deletes-retention-period, index.soft_deletes.retention_lease.period>> for more details.
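The document counts quoted in the failover and failback hunks above are internally consistent: a search on `logs*` always returns the local data stream plus its replicated counterpart. A small arithmetic sketch (stream names are from the doc; the 50/100-document batches are the tutorial's own numbers):

```python
# Steady state: 50 docs ingested on each cluster, each batch
# replicated to the other cluster's *-replicated_from_* stream.
cluster_a = {"logs-generic-default": 50,
             "logs-generic-default-replicated_from_clusterb": 50}
cluster_b = {"logs-generic-default": 50,
             "logs-generic-default-replicated_from_clustera": 50}
assert sum(cluster_a.values()) == sum(cluster_b.values()) == 100  # logs* query

# clusterA goes down; Logstash redirects ingestion, so clusterB's
# local stream grows to the 150 documents shown in the failover hunk.
cluster_b["logs-generic-default"] += 100
assert cluster_b["logs-generic-default"] == 150
assert sum(cluster_b.values()) == 200  # logs* on clusterB during the outage

# Failback: clusterA catches up, matching the 150-document
# replicated_from_clusterb count shown in the failback hunk.
cluster_a["logs-generic-default-replicated_from_clusterb"] += 100
assert sum(cluster_a.values()) == 200
```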
