`versioned_docs/version-1.25/architecture.md` (26 additions, 27 deletions)
```diff
@@ -1,13 +1,13 @@
 ---
 id: architecture
-sidebar_position: 5
+sidebar_position: 40
 title: Architecture
 ---

 # Architecture
 <!-- SPDX-License-Identifier: CC-BY-4.0 -->

-:::tip
+:::tip[Hint]
 For a deeper understanding, we recommend reading our article on the CNCF
 blog post titled ["Recommended Architectures for PostgreSQL in Kubernetes"](https://www.cncf.io/blog/2023/09/29/recommended-architectures-for-postgresql-in-kubernetes/),
 which provides valuable insights into best practices and design
```
```diff
@@ -57,7 +57,7 @@ used as a fallback option, for example, to store WAL files in an object store).
 Replicas are usually called *standby servers* and can also be used for
 read-only workloads, thanks to the *Hot Standby* feature.

-:::important
+:::info[Important]
 **We recommend against storage-level replication with PostgreSQL**, although
 CloudNativePG allows you to adopt that strategy. For more information, please refer
 to the talk given by Chris Milsted and Gabriele Bartolini at KubeCon NA 2022 entitled
```
```diff
@@ -91,7 +91,7 @@ The multi-availability zone Kubernetes architecture with three (3) or more
 zones is the one that we recommend for PostgreSQL usage.
 This scenario is typical of Kubernetes services managed by Cloud Providers.

-![…](…)
+![…](…)

 Such an architecture enables the CloudNativePG operator to control the full
 lifecycle of a `Cluster` resource across the zones within a single Kubernetes
```
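As an aside on this multi-availability-zone layout: a minimal `Cluster` sketch that spreads three instances across zones might look like the following. The resource name and storage size are placeholders, and the `affinity` fields (`topologyKey`, `podAntiAffinityType`) are used here as we understand the CloudNativePG API, so treat this as an illustration rather than the documented example.

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cluster-example          # placeholder name
spec:
  instances: 3                   # one primary and two standbys
  storage:
    size: 10Gi                   # placeholder size
  affinity:
    # spread instances across availability zones instead of just across nodes
    topologyKey: topology.kubernetes.io/zone
    # turn the anti-affinity rule into a hard scheduling requirement
    podAntiAffinityType: required
```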
```diff
@@ -113,15 +113,15 @@
 managing them via declarative configuration. This setup is ideal for disaster
 recovery (DR), read-only operations, or cross-region availability.

-:::important
+:::info[Important]
 Each operator deployment can only manage operations within its local
 Kubernetes cluster. For operations across Kubernetes clusters, such as
 controlled switchover or unexpected failover, coordination must be handled
 manually (through GitOps, for example) or by using a higher-level cluster
 management tool.
 :::

-![…](…)
+![…](…)

 ### Single availability zone Kubernetes clusters

```
```diff
@@ -143,9 +143,9 @@ Kubernetes clusters in an active/passive configuration, with the second cluster
 primarily used for Disaster Recovery (see
 the [replica cluster feature](replica_cluster.md)).

-![…](…)
+![…](…)

-:::tip
+:::tip[Hint]
 If you are at an early stage of your Kubernetes journey, please share this
 document with your infrastructure team. The two data centers setup might
 be simply the result of a "lift-and-shift" transition to Kubernetes
```
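Since this hunk points to the replica cluster feature, a rough sketch of the passive side of such an active/passive pair is shown below. Every name, the object store path, and the credentials Secret are hypothetical, and the authoritative bootstrap options live in the replica cluster documentation.

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cluster-dr                      # passive cluster in the second Kubernetes cluster
spec:
  instances: 3
  storage:
    size: 10Gi
  bootstrap:
    recovery:
      source: cluster-primary           # bootstrap from the external cluster defined below
  replica:
    enabled: true                       # stay in continuous recovery (replica cluster)
    source: cluster-primary
  externalClusters:
    - name: cluster-primary
      barmanObjectStore:                # WAL and base backups shipped through an object store
        destinationPath: s3://backups/cluster-primary   # hypothetical bucket
        s3Credentials:
          accessKeyId:
            name: backup-creds          # hypothetical Secret
            key: ACCESS_KEY_ID
          secretAccessKey:
            name: backup-creds
            key: ACCESS_SECRET_KEY
```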
```diff
@@ -179,7 +179,7 @@ is now fully declarative, automated failover across Kubernetes clusters is not
 within CloudNativePG's scope, as the operator can only function within a single
 Kubernetes cluster.

-:::important
+:::info[Important]
 CloudNativePG provides all the necessary primitives and probes to
 coordinate PostgreSQL active/passive topologies across different Kubernetes
 clusters through a higher-level operator or management tool.
```
```diff
@@ -195,7 +195,7 @@ PostgreSQL workloads is referred to as a **Postgres node** or `postgres` node.
 This approach ensures optimal performance and resource allocation for your
 database operations.

-:::hint
+:::tip[Hint]
 As a general rule of thumb, deploy Postgres nodes in multiples of
 three—ideally with one node per availability zone. Three nodes is
 an optimal number because it ensures that a PostgreSQL cluster with three
```
```diff
@@ -209,7 +209,7 @@ labels ensure that a node is capable of running `postgres` workloads, while
 taints help prevent any non-`postgres` workloads from being scheduled on that
 node.

-:::important
+:::info[Important]
 This methodology is the most straightforward way to ensure that PostgreSQL
 workloads are isolated from other workloads in terms of both computing
 resources and, when using locally attached disks, storage. While different
```
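To make the label-and-taint approach in this hunk concrete, the sketch below pins a `Cluster` to dedicated `postgres` nodes. The `node-role.kubernetes.io/postgres` key is only an example convention, not something mandated by CloudNativePG, and the node preparation commands in the comments are illustrative.

```yaml
# Assumes the dedicated nodes were prepared beforehand, for example with:
#   kubectl label node <node-name> node-role.kubernetes.io/postgres=true
#   kubectl taint node <node-name> node-role.kubernetes.io/postgres=true:NoSchedule
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cluster-example
spec:
  instances: 3
  storage:
    size: 10Gi
  affinity:
    nodeSelector:
      node-role.kubernetes.io/postgres: "true"   # run only on the labelled postgres nodes
    tolerations:
      - key: node-role.kubernetes.io/postgres
        operator: Exists
        effect: NoSchedule                       # tolerate the taint that repels other workloads
```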
```diff
@@ -289,7 +289,7 @@ Kubernetes cluster, with the following specifications:
 * PostgreSQL instances should reside in different availability zones
   within the same Kubernetes cluster / region

-:::important
+:::info[Important]
 You can configure the above services through the `managed.services` section
 in the `Cluster` configuration. This can be done by reducing the number of
 services and selecting the type (default is `ClusterIP`). For more details,
```
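A rough illustration of the `managed.services` stanza referenced in this admonition follows. The field names reflect the CloudNativePG `Cluster` API as we understand it, while the extra service name and its type are illustrative.

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cluster-example
spec:
  instances: 3
  storage:
    size: 10Gi
  managed:
    services:
      # keep only the read-write service: drop the default -ro and -r services
      disabledDefaultServices: ["ro", "r"]
      additional:
        - selectorType: rw
          serviceTemplate:
            metadata:
              name: cluster-example-rw-lb   # hypothetical extra service
            spec:
              type: LoadBalancer            # instead of the default ClusterIP
```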
```diff
@@ -302,26 +302,26 @@ architecture for a PostgreSQL cluster spanning across 3 different availability
 zones, running on separate nodes, each with dedicated local storage for
 PostgreSQL data.

-![…](…)
+![…](…)

 CloudNativePG automatically takes care of updating the above services if
 the topology of the cluster changes. For example, in case of failover, it
 automatically updates the `-rw` service to point to the promoted primary,
 making sure that traffic from the applications is seamlessly redirected.

-:::info Replication
+:::note[Replication]
 Please refer to the ["Replication" section](replication.md) for more
 information about how CloudNativePG relies on PostgreSQL replication,
 including synchronous settings.
 :::

-:::info Connecting from an application
+:::note[Connecting from an application]
 Please refer to the ["Connecting from an application" section](applications.md) for
 information about how to connect to CloudNativePG from a stateless
 application within the same Kubernetes cluster.
 :::

-:::info Connection Pooling
+:::note[Connection Pooling]
 Please refer to the ["Connection Pooling" section](connection_pooling.md) for
 information about how to take advantage of PgBouncer as a connection pooler,
 and create an access layer between your applications and the PostgreSQL clusters.
```
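For the PgBouncer access layer mentioned in the last note of this hunk, CloudNativePG provides a dedicated `Pooler` resource. A minimal sketch, with illustrative names and parameter values, could be:

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Pooler
metadata:
  name: cluster-example-pooler-rw
spec:
  cluster:
    name: cluster-example      # the Cluster to put PgBouncer in front of
  instances: 3                 # for example, one pooler pod per availability zone
  type: rw                     # route pooled connections to the primary
  pgbouncer:
    poolMode: session
    parameters:
      max_client_conn: "1000"  # illustrative PgBouncer settings
      default_pool_size: "10"
```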
```diff
@@ -333,7 +333,7 @@ Applications can decide to connect to the PostgreSQL instance elected as
 *current primary* by the Kubernetes operator, as depicted in the following
 diagram:

-![…](…)
+![…](…)

 Applications can use the `-rw` suffix service.

```
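To make the `-rw` service naming concrete: assuming a `Cluster` named `cluster-example` (a placeholder), a stateless application in the same namespace could point at the read-write service as sketched below. The `cluster-example-app` Secret follows the operator's convention for the auto-generated application user, but treat all names here as assumptions.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                       # hypothetical application
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: app
          image: my-app:latest       # placeholder image
          env:
            # read/write traffic always follows the current primary
            - name: PGHOST
              value: cluster-example-rw
            - name: PGUSER
              valueFrom:
                secretKeyRef:
                  name: cluster-example-app
                  key: username
            - name: PGPASSWORD
              valueFrom:
                secretKeyRef:
                  name: cluster-example-app
                  key: password
```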
```diff
@@ -343,7 +343,7 @@ service to another instance of the cluster.

 ### Read-only workloads

-:::important
+:::info[Important]
 Applications must be aware of the limitations that
```
0 commit comments