modules/ROOT/pages/clustering/multi-region-deployment/geo-redundant-deployment.adoc
7 additions & 7 deletions
@@ -36,18 +36,18 @@ Though read availability may remain via the secondaries.
 
 You can restore the cluster write availability without the failed DC:
 
-* If you have enough secondary servers in another data center, you can switch their mode to primary and not have to store copy or wait a long time for primary servers to restore.
+* If you have enough secondary members of the database in another data center, you can switch their mode to primary and avoid a store copy or a long wait for the primary copies to restore.
 * Use secondaries to re-seed databases if needed.
 Run xref:database-administration/standard-databases/recreate-database.adoc[the `dbms.recreateDatabase()` procedure].
 
 Example steps::
 
-. Promote secondary servers to primaries to make the `system` database write-available.
+. Promote secondary copies of the `system` database to primaries to make the `system` database write-available.
 This requires restarting processes.
 For other scenarios, see xref:clustering/multi-region-deployment/disaster-recovery.adoc#make-the-system-database-write-available[the steps] in the Disaster recovery guide on how to make the `system` database write-available again.
 
 . Mark missing servers as not available by cordoning them.
-For each `Unavailable` server, run `CALL dbms.cluster.cordonServer("unavailable-server-id")` on one of the available servers.
+For each `Unavailable` server, run `CALL dbms.cluster.cordonServer("unavailable-server-id")` on one of the servers in the remaining cluster.
 
 . Recreate each user database, letting it choose the existing xref:database-administration/standard-databases/recreate-database.adoc#seed-servers[servers as seeders].
 You will need to accept a smaller topology that will fit in the remaining data center/cloud region.
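
A minimal Cypher sketch of these example steps, assuming a user database named `neo4j`, placeholder server IDs, and that `dbms.recreateDatabase()` accepts a `seedingServers` option whose empty list means "use the existing servers as seeders" (verify both against the recreate-database page for your version):

[source,cypher]
----
// Step 1 happens outside Cypher: promote enough secondary copies of the
// `system` database to primaries (a configuration change plus a process
// restart on those servers), then check what the cluster can still see.
SHOW SERVERS;

// Step 2: cordon every server reported as `Unavailable`
// (the server ID below is a placeholder).
CALL dbms.cluster.cordonServer("25a7efc7-d063-44b8-bdee-f23357f89f01");

// Step 3: accept a topology that fits the remaining data center,
// then recreate the user database from the surviving copies.
ALTER DATABASE neo4j SET TOPOLOGY 1 PRIMARY 2 SECONDARIES;
CALL dbms.recreateDatabase("neo4j", {seedingServers: []});
----

The exact order and options depend on the deployment; the Disaster recovery guide linked in the hunk above remains the authoritative sequence.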
@@ -60,7 +60,7 @@ For detailed scenarios, see the xref:clustering/multi-region-deployment/disaster
 
 image::geo-distributed-primaries.svg[width="400", title="Cluster design with primaries distributed across three data centers", role=popup]
 
-You can place each primary server in a different data center using a minimum of three data centers.
+You can place each primary copy in a different data center using a minimum of three data centers.
 
 Therefore, if one data center fails, only one primary member is lost and the cluster can continue without data loss.
 
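
As a rough illustration of this pattern (assuming a user database named `neo4j` and one enabled server per data center), the requested topology is simply three primaries; which data center each primary lands in follows from which servers are enabled there:

[source,cypher]
----
// Request one primary copy per data center by asking for three primaries.
// With one enabled server in each of the three data centers, every DC
// ends up hosting exactly one primary copy.
ALTER DATABASE neo4j SET TOPOLOGY 3 PRIMARIES;
SHOW DATABASE neo4j;  // verify how the copies are allocated across servers
----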
@@ -92,7 +92,7 @@ image::geo-distribution-system-db.svg[width="400", title="Primaries for the `sys
 
 You can place all primaries for user databases in one data center, with secondaries in another.
 
-In a third DC, deploy a primary server only for the `system` database (in addition to those in the first two data centers).
+In a third DC, deploy a server that only hosts a primary member of the `system` database (in addition to those in the first two data centers).
 
 * This server can be a small machine, since the `system` database has minimal resource requirements.
 
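
A sketch of how the small third-DC server could be kept to the `system` database only, assuming Neo4j 5-style settings; the setting name `server.cluster.system_database_mode`, the option key `deniedDatabases`, and the server and database names below are assumptions to check against the configuration and Cypher references:

[source,cypher]
----
// On the third-DC machine, neo4j.conf is assumed to contain
//   server.cluster.system_database_mode=PRIMARY
// so that it holds a primary copy of the `system` database.
// To keep user databases off this server, deny them by name
// (placeholder server name and database names):
ALTER SERVER "third-dc-system-only" SET OPTIONS {deniedDatabases: ["neo4j", "sales"]};
SHOW SERVERS YIELD name, hosting;  // check which databases each server hosts
----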
@@ -178,7 +178,8 @@ This design pattern is strongly recommended to avoid.
 | * Fast writes (local quorum). +
 * Local reads in remote data centers.
 | * Loss of write availability if DC with primaries fails. +
-* Recovery requires reseeding
+* Recovery requires reseeding. +
+* Process restarts required if DC with primaries fails.
 | Applications needing fast writes.
 The cluster can tolerate downtime during recovery.
 
@@ -187,7 +188,6 @@ The cluster can tolerate downtime during recovery.
 | * Survives loss of one DC without data loss. +
 * Quorum remains intact.
 | * Higher write latency (cross-data center). +
-* Requires more complex networking.
 | Critical systems needing continuous availability even if a full data center fails.
 
 | Full geo-distribution for the `system` database only (3DC)