@@ -24,6 +24,8 @@ If primaries are far apart, network latency adds to commit time.
2424[[secondaries-for-read-resilience]]
2525=== Use database secondaries for read resilience
2626
27+ image::secondaries-for-read-resilience.svg[width="400", title="Cluster design with database secondaries for better read performance", role=popup]
28+ 
2729You can locate all the database primaries in one data center (DC) and database secondaries in another DC for better read performance.
2830This provides fast writes, because they will be performed within the DC.
2931
@@ -54,7 +56,9 @@ For detailed scenarios, see the xref:clustering/multi-region-deployment/disaster
5456
5557
5658[[geo-distributed-dc]]
57- === Use geo-distributed data centers (3DC)
59+ === Use geo-distributed data centers
60+ 
61+ image::geo-distributed-primaries.svg[width="400", title="Cluster design with primaries distributed across three data centers", role=popup]
5862
5963You can place each primary server in a different data center using a minimum of three data centers.
6064
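The quorum arithmetic behind this layout can be sketched as follows. This is a minimal model, assuming a Raft-style strict-majority write quorum and that losing a data center removes every primary hosted there; the function and layout list are illustrative, not Neo4j API:

```python
# Sketch, assuming a Raft-style strict-majority write quorum and that
# losing a data center removes every primary hosted there.
def survives_dc_loss(primaries_per_dc, lost_dc):
    total = sum(primaries_per_dc)
    remaining = total - primaries_per_dc[lost_dc]
    return remaining > total // 2  # strict majority still reachable?

# One primary in each of three DCs: any single DC can fail safely.
layout = [1, 1, 1]
print([survives_dc_loss(layout, dc) for dc in range(3)])  # [True, True, True]
```

With one primary per DC, the two surviving primaries still form a 2-of-3 majority after any single DC outage.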
@@ -82,7 +86,7 @@ For detailed scenarios, see the xref:clustering/multi-region-deployment/disaster
8286
8387
8488[[geo-distribution-system-database]]
85- === Use full geo-distribution for the `system` database only (3DC) 
89+ === Use full geo-distribution for the `system` database only
8690
8791image::geo-distribution-system-db.svg[width="400", title="Primaries for the `system` database distributed across three data centers", role=popup]
8892
@@ -124,6 +128,8 @@ You need to accept a smaller topology that fits in the remaining data center.
124128[[two-dc-unbalanced-membership]]
125129=== Two data centers with unbalanced membership
126130
131+ image::2dc-unbalanced-membership.svg[width="400", title="Unbalanced data center primary distribution", role=popup]
132+ 
127133Suppose you decide to set up just two data centers, placing two primaries in data center 1 (DC1) and one primary in data center 2 (DC2).
128134
129135If the writer primary is located in DC1, then writes can be fast because a local quorum can be reached.
@@ -136,14 +142,16 @@ In that case a failure of a member in DC1 means the database is write-unavailabl
136142
137143If leadership shifts to DC2, this makes all writes slow.
138144
139- Finally, there is no guarantee against data lost  if DC1 goes down.
145+ Finally, there is no guarantee against data loss if DC1 goes down.
140146This is because the primary member in DC2 may not be up to date with writes, even in append.
141147
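The latency effect of where the writer leads can be sketched with illustrative numbers. A minimal model assuming a 2-of-3 majority quorum and round-trip times of roughly 1 ms intra-DC and 50 ms cross-DC (assumptions, not measurements):

```python
# Sketch: commit latency under a 2-of-3 majority quorum with an
# assumed 2-1 primary split across two DCs (illustrative RTTs).
INTRA_DC_MS, CROSS_DC_MS = 1, 50

def commit_latency_ms(member_dcs, leader_idx):
    """member_dcs: data center of each primary. The leader acks itself,
    so commit completes on the fastest ack from any other member."""
    leader_dc = member_dcs[leader_idx]
    rtts = [INTRA_DC_MS if dc == leader_dc else CROSS_DC_MS
            for i, dc in enumerate(member_dcs) if i != leader_idx]
    return min(rtts)

members = ["DC1", "DC1", "DC2"]
print(commit_latency_ms(members, 0))  # leader in DC1: local quorum, fast
print(commit_latency_ms(members, 2))  # leader in DC2: needs a cross-DC ack
```

While the leader sits with its co-located follower in DC1, commits complete at intra-DC speed; once leadership shifts to the lone DC2 member, every commit must wait on a cross-DC round trip.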
142148
143149
144150[[two-dc-balanced-membership]]
145151=== Two data centers with balanced membership
146152
153+ image::2dc-balanced-membership.svg[width="400", title="Symmetric primaries across two data centers", role=popup]
154+ 
147155The worst scenario is to operate with just two data centers and place two or three primaries in each of them.
148156
149157This means the failure of either data center leads to loss of quorum and, therefore, to loss of the cluster write-availability.
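Why balanced layouts fail either way can be sketched in a few lines, again assuming a strict-majority write quorum (illustrative code, not Neo4j API):

```python
# Sketch, assuming a strict-majority write quorum: with primaries split
# evenly over two DCs, neither half can form a majority on its own.
def quorum_after_dc_loss(primaries_per_dc, lost_dc):
    total = sum(primaries_per_dc)
    return (total - primaries_per_dc[lost_dc]) > total // 2

for layout in ([2, 2], [3, 3]):
    print(layout, [quorum_after_dc_loss(layout, dc) for dc in (0, 1)])
```

Losing either DC leaves exactly half the primaries, which is never a strict majority, so write availability is lost whichever data center goes down.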