Commit 6fae5d6

Update geo-redundant-deployment.adoc
1 parent d99ac13 commit 6fae5d6

modules/ROOT/pages/clustering/multi-region-deployment/geo-redundant-deployment.adoc

Lines changed: 21 additions & 19 deletions
@@ -26,21 +26,21 @@ If primaries are far apart, network latency adds to commit time.
 
 image::secondaries-for-read-resilience.svg[width="400", title="Cluster design with database secondaries for better read performance", role=popup]
 
-You can locate all the database primaries in one data center (DC) and database secondaries in another DC for better read performance.
-This provides fast writes, because they will be performed within the DC.
+For better read performance, you can locate all database primaries in one data center (DC) and database secondaries in another DC.
+This also provides fast writes, because they will be performed within the DC.
 
 However, if the DC with primaries goes down, your cluster loses write availability.
 Though read availability may remain via the secondaries.
 
-==== How to recover from loss of a data center?
+==== Recovering from the loss of a data center
 
 You can restore the cluster write availability without the failed DC:
 
-* If you have enough secondary members of the database in another data center, you can switch their mode to primary and not have to store copy or wait a long time for primary copies to restore.
-* Use secondaries to re-seed databases if needed.
-Run xref:database-administration/standard-databases/recreate-database.adoc[the `dbms.recreateDatabase()` procedure].
+* If you have enough secondary members of the database in another data center, you can switch their mode to primary and not have to store a copy or wait a long time for primary copies to restore.
+* You can use secondaries to re-seed databases if needed.
+See xref:database-administration/standard-databases/recreate-database.adoc[the `dbms.recreateDatabase()` procedure] for more details.
 
-Example steps::
+Example recovery steps::
 
 . Promote secondary copies of the `system` database to primaries to make the `system` database write-available.
 This requires restarting processes.
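
Concretely, the cordon-and-recreate part of these recovery steps maps to procedure calls along the lines of the sketch below. The server ID is the placeholder this page already uses, the database name `neo4j` is hypothetical, and passing an empty `seedingServers` list (to let the cluster pick the existing servers as seeders) is an assumption based on the linked recreate-database page. Promoting `system` secondaries itself is a configuration change plus restart, not a Cypher call.

[source, cypher]
----
// After the system database is write-available again, mark each server
// lost with the failed DC so no databases are allocated to it.
CALL dbms.cluster.cordonServer("unavailable-server-id");

// Recreate a user database from the copies on the surviving servers.
// An empty seedingServers list is assumed to let the cluster choose
// the existing servers as seeders (see the recreate-database page).
CALL dbms.recreateDatabase("neo4j", {seedingServers: []});
----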
@@ -50,36 +50,37 @@ For other scenarios, see xref:clustering/multi-region-deployment/disaster-recove
 For each `Unavailable` server, run `CALL dbms.cluster.cordonServer("unavailable-server-id")` on the remaining cluster.
 
 . Recreate each user database, letting it choose the existing xref:database-administration/standard-databases/recreate-database.adoc#seed-servers[servers as seeders].
-You will need to accept a smaller topology that will fit in the remaining data center/cloud region.
+You need to accept a smaller topology that will fit in the remaining DC.
 
 For detailed scenarios, see the xref:clustering/multi-region-deployment/disaster-recovery.adoc[Disaster recovery guide].
 
 
 [[geo-distributed-dc]]
 === Geo-distribution of user database primaries
 
-image::geo-distributed-primaries.svg[width="400", title="Cluster design with primaries distributed across three data centers", role=popup]
+image::geo-distributed-primaries.svg[width="400", title="Cluster design with database primaries distributed across three data centers", role=popup]
 
-You can place each primary copy in a different data center using a minimum of three data centers.
+You can place each primary copy in a different data center (DC) using a minimum of three data centers.
 
-Therefore, if one data center fails, only one primary member is lost and the cluster can continue without data loss.
+Therefore, if one DC fails, only one primary member is lost and the cluster can continue without data loss.
 
 However, you always pay cross-data center latency times for every write operation.
 
-==== How to recover from loss of a data center?
+==== Recovering from the loss of a data center
 
 This setup has no loss of quorum, so the cluster keeps running -- only with reduced fault tolerance (with no room for extra failures).
 
 To restore fault tolerance, you can either wait until the affected DC is back online or start a new primary member somewhere else that will provide resilience and re-establish three-DC fault tolerance.
 
-Example steps::
+Example recovery steps::
 
 . Start and enable a new server.
 See xref:clustering/servers.adoc#cluster-add-server[How to add a server to the cluster] for details.
 
 . Remove the unavailable server from the cluster:
 .. First, xref:clustering/servers.adoc#_deallocating_databases_from_a_server[deallocate databases] from it.
-.. Then xref:clustering/servers.adoc#_dropping_a_server[drop the server].
+.. Then xref:clustering/servers.adoc#_dropping_a_server[drop the server].
++
 For more information, visit the xref:clustering/servers.adoc[].
 
 For detailed scenarios, see the xref:clustering/multi-region-deployment/disaster-recovery.adoc[Disaster recovery guide].
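
For concreteness, the server replacement above corresponds to Cypher administration commands along these lines, with `"new-server-id"` and `"unavailable-server-id"` as hypothetical placeholders:

[source, cypher]
----
// Enable the newly started server so it can begin hosting databases.
ENABLE SERVER "new-server-id";

// Move database allocations off the lost server, then remove it.
DEALLOCATE DATABASES FROM SERVERS "unavailable-server-id";
DROP SERVER "unavailable-server-id";
----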
@@ -90,7 +91,7 @@ For detailed scenarios, see the xref:clustering/multi-region-deployment/disaster
 
 image::geo-distribution-system-db.svg[width="400", title="Primaries for the `system` database distributed across three data centers", role=popup]
 
-You can place all primaries for user databases in one data center, with secondaries in another.
+You can place all primaries for user databases in one data center (DC), with secondaries in another.
 
 In a third DC, deploy a server that only hosts a primary member of the `system` database (in addition to those in the first two data centers).
 
@@ -105,14 +106,14 @@ If a DC goes down, you retain write availability for the `system` database, whic
 However, if the DC with primaries goes down, you lose write availability for the user databases.
 Though read availability may remain via the secondaries.
 
-==== How to recover from loss of a data center?
+==== Recovering from the loss of a data center
 
 If you lose the DC with primaries in, the user databases will go write-unavailable, though the secondaries should continue to provide read availability.
-Because of the third DC, the `system` database will remain write available, so you will be able to get the user databases back to write available without process downtime.
+Because of the third DC, the `system` database remains write-available, so you will be able to get the user databases back to write-available without process downtime.
 
 However, if you need to use the `recreateDatabase()` procedure, it will involve downtime for the user database.
 
-Example steps::
+Example recovery steps::
 
 . Mark missing servers as not present by cordoning them.
 For each `Unavailable` server, run `CALL dbms.cluster.cordonServer("unavailable-server-id")` on one of the available servers.
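
A sketch of what accepting a smaller topology might look like once the missing servers are cordoned, assuming the remaining DC can host one primary and two secondaries of a hypothetical user database `neo4j`; shrinking the topology with `ALTER DATABASE` before recreating is an assumption here, and the linked recreate-database page describes the supported options:

[source, cypher]
----
// Shrink the topology so it fits on the servers that are left.
ALTER DATABASE neo4j SET TOPOLOGY 1 PRIMARY 2 SECONDARIES;

// Recreate the database, seeding from the remaining copies.
CALL dbms.recreateDatabase("neo4j");
----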
@@ -130,9 +131,10 @@ You need to accept a smaller topology that fits in the remaining data center.
 
 image::2dc-unbalanced-membership.svg[width="400", title="Unbalanced data center primary distribution", role=popup]
 
-Suppose you decide to set up just two data centers, placing two primaries in data center 1 (DC1) and one primary in the data center 2 (DC2).
+Suppose, you decide to set up just two data centers, placing two primaries in data center 1 (DC1) and one primary in the data center 2 (DC2).
 
 If the writer primary is located in DC1, then writes can be fast because a local quorum can be reached.
+
 This setup can tolerate the loss of one data center — but only if the failure is in DC2.
 If DC1 fails, you lose two primary members, which means the quorum is lost and the cluster becomes unavailable for writes.
 
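To make the quorum arithmetic of this last hunk concrete: with three primaries, a write needs acknowledgement from a majority, that is, two of three. A hypothetical sketch (the database name `sales` is invented, and which DC each primary lands in is decided by the allocator, not by this statement):

[source, cypher]
----
// 3 primaries in total, so the write quorum is 2.
// DC1 holds 2 primaries, DC2 holds 1:
//   DC2 fails -> 2 of 3 primaries remain, writes continue.
//   DC1 fails -> 1 of 3 primaries remains, quorum lost, writes stop.
CREATE DATABASE sales TOPOLOGY 3 PRIMARIES;
----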