
Commit d77cf01

Apply suggestions from code review
Co-authored-by: Nick Giles <[email protected]>
1 parent 06703a3 commit d77cf01

File tree

1 file changed

+7
-7
lines changed


modules/ROOT/pages/clustering/multi-region-deployment/geo-redundant-deployment.adoc

Lines changed: 7 additions & 7 deletions
@@ -36,18 +36,18 @@ Though read availability may remain via the secondaries.
 
 You can restore the cluster write availability without the failed DC:
 
-* If you have enough secondary servers in another data center, you can switch their mode to primary and not have to store copy or wait a long time for primary servers to restore.
+* If you have enough secondary members of the database in another data center, you can switch their mode to primary and not have to store copy or wait a long time for primary copies to restore.
 * Use secondaries to re-seed databases if needed.
 Run xref:database-administration/standard-databases/recreate-database.adoc[the `dbms.recreateDatabase()` procedure].
 
 Example steps::
 
-. Promote secondary servers to primaries to make the `system` database write-available.
+. Promote secondary copies of the `system` database to primaries to make the `system` database write-available.
 This requires restarting processes.
 For other scenarios, see xref:clustering/multi-region-deployment/disaster-recovery.adoc#make-the-system-database-write-available[the steps] in the Disaster recovery guide on how to make the `system` database write-available again.
 
 . Mark missing servers as not available by cordoning them.
-For each `Unavailable` server, run `CALL dbms.cluster.cordonServer("unavailable-server-id")` on one of the available servers.
+For each `Unavailable` server, run `CALL dbms.cluster.cordonServer("unavailable-server-id")` on the remaining cluster.
 
 . Recreate each user database, letting it choose the existing xref:database-administration/standard-databases/recreate-database.adoc#seed-servers[servers as seeders].
 You will need to accept a smaller topology that will fit in the remaining data center/cloud region.
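The cordon-then-recreate sequence described in these steps can be sketched in Cypher. This is a minimal illustration, not part of the commit: the server ID is a placeholder, and the exact shape of the options map passed to `dbms.recreateDatabase()` (the reduced topology and the empty `seedingServers` list, which lets the cluster choose existing copies as seeders) is an assumption to be checked against the linked recreate-database page.

```cypher
// Identify unavailable servers (check the health/status columns).
SHOW SERVERS;

// Cordon each Unavailable server; "unavailable-server-id" is a placeholder.
CALL dbms.cluster.cordonServer("unavailable-server-id");

// Recreate a user database with a smaller topology that fits the surviving
// data center. Options map is illustrative: an empty seedingServers list is
// assumed to let the cluster pick the existing copies as seeders.
CALL dbms.recreateDatabase("neo4j", {primaries: 3, secondaries: 0, seedingServers: []});
```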
@@ -60,7 +60,7 @@ For detailed scenarios, see the xref:clustering/multi-region-deployment/disaster
 
 image::geo-distributed-primaries.svg[width="400", title="Cluster design with primaries distributed across three data centers", role=popup]
 
-You can place each primary server in a different data center using a minimum of three data centers.
+You can place each primary copy in a different data center using a minimum of three data centers.
 
 Therefore, if one data center fails, only one primary member is lost and the cluster can continue without data loss.
 
@@ -92,7 +92,7 @@ image::geo-distribution-system-db.svg[width="400", title="Primaries for the `sys
 
 You can place all primaries for user databases in one data center, with secondaries in another.
 
-In a third DC, deploy a primary server only for the `system` database (in addition to those in the first two data centers).
+In a third DC, deploy a server that only hosts a primary member of the `system` database (in addition to those in the first two data centers).
 
 * This server can be a small machine, since the `system` database has minimal resource requirements.
 
@@ -178,7 +178,8 @@ This design pattern is strongly recommended to avoid.
 | * Fast writes (local quorum). +
 * Local reads in remote data centers.
 | * Loss of write availability if DC with primaries fails. +
-* Recovery requires reseeding
+* Recovery requires reseeding. +
+* Process restarts required if DC with primaries fails.
 | Applications needing fast writes.
 The cluster can tolerate downtime during recovery.
 
@@ -187,7 +188,6 @@ The cluster can tolerate downtime during recovery.
 | * Survives loss of one DC without data loss. +
 * Quorum remains intact.
 | * Higher write latency (cross-data center). +
-* Requires more complex networking.
 | Critical systems needing continuous availability even if a full data center fails.
 
 | Full geo-distribution for the `system` database only (3DC)
