Changed file: docs/reference/ccr/bi-directional-disaster-recovery.asciidoc (21 additions, 21 deletions)
@@ -10,7 +10,7 @@
 ----
 PUT _data_stream/logs-generic-default
 ----
-// TESTSETUP
+// TESTSETUP

 [source,console]
 ----
@@ -20,12 +20,12 @@ DELETE /_data_stream/*
 ////

 Learn how to set up disaster recovery between two clusters based on
-bi-directional {ccr}. The following tutorial is designed for data streams which support
-<<update-docs-in-a-data-stream-by-query,update by query>> and <<delete-docs-in-a-data-stream-by-query,delete by query>>. You can only perform these actions on the leader index.
+bi-directional {ccr}. The following tutorial is designed for data streams which support
+<<update-docs-in-a-data-stream-by-query,update by query>> and <<delete-docs-in-a-data-stream-by-query,delete by query>>. You can only perform these actions on the leader index.

-This tutorial works with {ls} as the source of ingestion. It takes advantage of a {ls} feature where {logstash-ref}/plugins-outputs-elasticsearch.html[the {ls} output to {es}] can be load balanced across an array of hosts specified. {beats} and {agents} currently do not
-support multiple outputs. It should also be possible to set up a proxy
-(load balancer) to redirect traffic without {ls} in this tutorial.
+This tutorial works with {ls} as the source of ingestion. It takes advantage of a {ls} feature where {logstash-ref}/plugins-outputs-elasticsearch.html[the {ls} output to {es}] can be load balanced across an array of hosts specified. {beats} and {agents} currently do not
+support multiple outputs. It should also be possible to set up a proxy
+(load balancer) to redirect traffic without {ls} in this tutorial.

 * Setting up a remote cluster on `clusterA` and `clusterB`.
 * Setting up bi-directional cross-cluster replication with exclusion patterns.
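Reviewer note: for context on the first step above, a remote cluster is typically registered through a cluster settings update. A minimal sketch, assuming proxy mode and a placeholder address (not taken from this file):

[source,console]
----
### On cluster A ###
PUT _cluster/settings
{
  "persistent": {
    "cluster": {
      "remote": {
        "clusterb": {
          "mode": "proxy",
          "skip_unavailable": true,
          "proxy_address": "clusterb.example.com:9400"
        }
      }
    }
  }
}
----

The same call runs on `clusterB` with the names and address swapped.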
@@ -92,7 +92,7 @@ PUT /_ccr/auto_follow/logs-generic-default
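Reviewer note: the hunk above is collapsed, so the auto-follow body is not visible here. For context, a bi-directional auto-follow pattern with an exclusion pattern has roughly this shape; treat the exact index patterns as an illustrative assumption rather than this file's content:

[source,console]
----
### On cluster A ###
PUT /_ccr/auto_follow/logs-generic-default
{
  "remote_cluster": "clusterb",
  "leader_index_patterns": [".ds-logs-generic-default-20*"],
  "leader_index_exclusion_patterns": "*-replicated_from_clusterb",
  "follow_index_pattern": "{{leader_index}}-replicated_from_clusterb"
}
----

The exclusion pattern is what prevents each cluster from re-following the indices it replicated from the other, which would otherwise create a replication loop.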
@@ -126,7 +126,7 @@ pattern in the UI. Use the API in this step.

 This example uses the input generator to demonstrate the document
 count in the clusters. Reconfigure this section
-to suit your own use case.
+to suit your own use case.

 [source,logstash]
 ----
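Reviewer note: the {ls} pipeline body is cut off at the end of this hunk. A minimal sketch of a generator input whose output is load balanced across both clusters might look like the following; the message, count, and host addresses are illustrative assumptions, not this file's snippet:

[source,logstash]
----
input {
  generator {
    # Illustrative payload and count; the tutorial's actual values may differ.
    message => 'Hello World'
    count => 100
  }
}
output {
  elasticsearch {
    # Placeholder addresses for clusterA and clusterB; the plugin
    # load balances requests across the hosts array.
    hosts => ["https://clustera.example.com:9200", "https://clusterb.example.com:9200"]
    data_stream => "true"
  }
}
----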
@@ -171,15 +171,15 @@ Bi-directional {ccr} will create one more data stream on each of the clusters
 with the `-replication_from_cluster{a|b}` suffix. At the end of this step:

 * data streams on cluster A contain:
-** 50 documents in `logs-generic-default-replicated_from_clusterb`
+** 50 documents in `logs-generic-default-replicated_from_clusterb`
 ** 50 documents in `logs-generic-default`
 * data streams on cluster B contain:
 ** 50 documents in `logs-generic-default-replicated_from_clustera`
 ** 50 documents in `logs-generic-default`

 . Queries should be set up to search across both data streams.
 A query on `logs*`, on either of the clusters, returns 100
-hits in total.
+hits in total.

 [source,console]
 ----
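Reviewer note: the search request itself is cut off at the end of this hunk. A count-style request across both data streams could be sketched as follows (an assumed request shape, not this file's exact snippet):

[source,console]
----
GET logs*/_search?size=0
----

With the counts described above, `hits.total.value` should come back as 100 on either cluster, since the wildcard matches both the local data stream and its replicated counterpart.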
@@ -199,27 +199,27 @@ use cases where {ls} ingests continuously.)
 bin/logstash -f multiple_hosts.conf
 ----

-. Observe all {ls} traffic will be redirected to `cluster B` automatically.
+. Observe that all {ls} traffic is redirected to `cluster B` automatically.

-TIP: You should also redirect all search traffic to the `clusterB` cluster during this time.
+TIP: You should also redirect all search traffic to the `clusterB` cluster during this time.

-. The two data streams on `cluster B` now contain a different number of documents.
+. The two data streams on `cluster B` now contain a different number of documents.

-* data streams on cluster A (down)
-** 50 documents in `logs-generic-default-replicated_from_clusterb`
+* data streams on cluster A (down)
+** 50 documents in `logs-generic-default-replicated_from_clusterb`
 ** 50 documents in `logs-generic-default`
-* data streams On cluster B (up)
+* data streams on cluster B (up)
 ** 50 documents in `logs-generic-default-replicated_from_clustera`
 ** 150 documents in `logs-generic-default`

 ==== Failback when `clusterA` comes back
-. You can simulate this by turning `cluster A` back on.
+. You can simulate this by turning `cluster A` back on.
 . Data ingested to `cluster B` during `cluster A`'s downtime will be
-automatically replicated.
+automatically replicated.

 * data streams on cluster A
-** 150 documents in `logs-generic-default-replicated_from_clusterb`
+** 150 documents in `logs-generic-default-replicated_from_clusterb`
 ** 50 documents in `logs-generic-default`
 * data streams on cluster B
 ** 50 documents in `logs-generic-default-replicated_from_clustera`
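Reviewer note: to verify that replication has caught up after failback, the {ccr} stats API can be consulted; a minimal sketch:

[source,console]
----
GET /_ccr/stats
----

The response reports, per follower shard, how far reads from the leader have progressed, which is one way to confirm the document counts listed above have converged.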
@@ -271,5 +271,5 @@ POST logs-generic-default/_update_by_query
 }
 }
 ----
-
+

 TIP: If a soft delete is merged away before it can be replicated to a follower, the following process will fail due to incomplete history on the leader; see <<ccr-index-soft-deletes-retention-period, index.soft_deletes.retention_lease.period>> for more details.
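Reviewer note: the retention period the TIP refers to is an index setting. A sketch of raising it, assuming the setting is dynamically updatable on the data stream's backing indices and using an arbitrary `30d` value:

[source,console]
----
PUT logs-generic-default/_settings
{
  "index.soft_deletes.retention_lease.period": "30d"
}
----

A longer retention lease period keeps soft-deleted operations available to followers for longer, at the cost of extra disk usage on the leader.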