versioned_docs/version-1.27/appendixes/backup_volumesnapshot.md (6 additions, 6 deletions)
@@ -1,12 +1,12 @@
 ---
 id: backup_volumesnapshot
-title: backup_volumesnapshot
+title: Appendix A - Backup on volume snapshots
 ---

 # Appendix A - Backup on volume snapshots
 <!-- SPDX-License-Identifier: CC-BY-4.0 -->

-:::important
+:::info[Important]
 Please refer to the official Kubernetes documentation for a list of all
 the supported [Container Storage Interface (CSI) drivers](https://kubernetes-csi.github.io/docs/drivers.html)
 that provide snapshotting capabilities.

@@ -54,7 +54,7 @@ that is responsible to ensure that snapshots can be taken from persistent
 volumes of a given storage class, and managed as `VolumeSnapshot` and
 `VolumeSnapshotContent` resources.

-:::important
+:::info[Important]
 It is your responsibility to verify with the third party vendor
 that volume snapshots are supported. CloudNativePG only interacts
 with the Kubernetes API on this matter, and we cannot support issues

@@ -67,7 +67,7 @@ CloudNativePG allows you to configure a given Postgres cluster for Volume
 Snapshot backups through the `backup.volumeSnapshot` stanza.

 :::info
-Please refer to [`VolumeSnapshotConfiguration`](../cloudnative-pg.v1.md#postgresql-cnpg-io-v1-VolumeSnapshotConfiguration)
+Please refer to [`VolumeSnapshotConfiguration`](../cloudnative-pg.v1.md#volumesnapshotconfiguration)
 in the API reference for a full list of options.
 :::

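For orientation, a minimal `Cluster` manifest using the `backup.volumeSnapshot` stanza mentioned in the hunk above could look like the following sketch. The resource names and sizes are placeholders, and the `className` field follows my reading of the `VolumeSnapshotConfiguration` API rather than anything changed by this commit:

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cluster-example         # placeholder name
spec:
  instances: 3
  storage:
    size: 10Gi                  # placeholder size
  backup:
    volumeSnapshot:
      # VolumeSnapshotClass used to take snapshots of the PGDATA volume
      className: csi-snapclass  # placeholder: use a class provided by your CSI driver
```
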
@@ -142,7 +142,7 @@ By default, CloudNativePG requests an online/hot backup on volume snapshots, usi
 - it waits for the WAL archiver to archive the last segment of the backup when
   terminating the backup procedure

-:::important
+:::info[Important]
 The default values are suitable for most production environments. Hot
 backups are consistent and can be used to perform snapshot recovery, as we
 ensure WAL retention from the start of the backup through a temporary
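The online/hot defaults described above map to the `online` and `onlineConfiguration` options of the same stanza. The fragment below is a sketch based on my understanding of those fields, not part of this commit, and simply spells out the default behaviour:

```yaml
spec:
  backup:
    volumeSnapshot:
      className: csi-snapclass      # placeholder
      online: true                  # hot backup, the default
      onlineConfiguration:
        immediateCheckpoint: false  # let the checkpoint be spread over time
        waitForArchive: true        # wait for the last WAL segment to be archived
```
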
@@ -358,7 +358,7 @@ The following example shows how to configure volume snapshot base backups on an
 EKS cluster on AWS using the `ebs-sc` storage class and the `csi-aws-vsc`
 volume snapshot class.

-:::important
+:::info[Important]
 If you are interested in testing the example, please read
 ["Volume Snapshots" for the Amazon Elastic Block Store (EBS) CSI driver](https://github.com/kubernetes-sigs/aws-ebs-csi-driver/tree/master/examples/kubernetes/snapshot) <!-- wokeignore:rule=master -->
 for detailed instructions on the installation process for the storage class and the snapshot class.
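As a rough sketch of what that EKS setup could involve, the snapshot class below targets the AWS EBS CSI driver and is then referenced from the cluster together with the `ebs-sc` storage class. The `deletionPolicy` and the storage size are assumptions, not values taken from this commit:

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-aws-vsc
driver: ebs.csi.aws.com   # AWS EBS CSI driver
deletionPolicy: Delete    # assumption; Retain is also possible
---
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cluster-example   # placeholder name
spec:
  instances: 3
  storage:
    storageClass: ebs-sc  # EBS-backed storage class from the example
    size: 10Gi            # placeholder size
  backup:
    volumeSnapshot:
      className: csi-aws-vsc
```
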
versioned_docs/version-1.27/architecture.md (23 additions, 23 deletions)
@@ -1,13 +1,13 @@
 ---
 id: architecture
-sidebar_position: 5
+sidebar_position: 40
 title: Architecture
 ---

 # Architecture
 <!-- SPDX-License-Identifier: CC-BY-4.0 -->

-:::tip
+:::tip[Hint]
 For a deeper understanding, we recommend reading our article on the CNCF
 blog post titled ["Recommended Architectures for PostgreSQL in Kubernetes"](https://www.cncf.io/blog/2023/09/29/recommended-architectures-for-postgresql-in-kubernetes/),
 which provides valuable insights into best practices and design

@@ -57,7 +57,7 @@ used as a fallback option, for example, to store WAL files in an object store).
 Replicas are usually called *standby servers* and can also be used for
 read-only workloads, thanks to the *Hot Standby* feature.

-:::important
+:::info[Important]
 **We recommend against storage-level replication with PostgreSQL**, although
 CloudNativePG allows you to adopt that strategy. For more information, please refer
 to the talk given by Chris Milsted and Gabriele Bartolini at KubeCon NA 2022 entitled
@@ -91,7 +91,7 @@ The multi-availability zone Kubernetes architecture with three (3) or more
 zones is the one that we recommend for PostgreSQL usage.
 This scenario is typical of Kubernetes services managed by Cloud Providers.

-![…](…)
+![…](…)

 Such an architecture enables the CloudNativePG operator to control the full
 lifecycle of a `Cluster` resource across the zones within a single Kubernetes
@@ … @@ managing them via declarative configuration. This setup is ideal for disaster
 recovery (DR), read-only operations, or cross-region availability.

-:::important
+:::info[Important]
 Each operator deployment can only manage operations within its local
 Kubernetes cluster. For operations across Kubernetes clusters, such as
 controlled switchover or unexpected failover, coordination must be handled
 manually (through GitOps, for example) or by using a higher-level cluster
 management tool.
 :::

-![…](…)
+![…](…)

 ### Single availability zone Kubernetes clusters

@@ -143,9 +143,9 @@ Kubernetes clusters in an active/passive configuration, with the second cluster
 primarily used for Disaster Recovery (see
 the [replica cluster feature](replica_cluster.md)).

-![…](…)
+![…](…)

-:::tip
+:::tip[Hint]
 If you are at an early stage of your Kubernetes journey, please share this
 document with your infrastructure team. The two data centers setup might
 be simply the result of a "lift-and-shift" transition to Kubernetes
@@ -179,7 +179,7 @@ is now fully declarative, automated failover across Kubernetes clusters is not
 within CloudNativePG's scope, as the operator can only function within a single
 Kubernetes cluster.

-:::important
+:::info[Important]
 CloudNativePG provides all the necessary primitives and probes to
 coordinate PostgreSQL active/passive topologies across different Kubernetes
 clusters through a higher-level operator or management tool.

@@ -195,7 +195,7 @@ PostgreSQL workloads is referred to as a **Postgres node** or `postgres` node.
 This approach ensures optimal performance and resource allocation for your
 database operations.

-:::hint
+:::tip[Hint]
 As a general rule of thumb, deploy Postgres nodes in multiples of
 three—ideally with one node per availability zone. Three nodes is
 an optimal number because it ensures that a PostgreSQL cluster with three

@@ -209,7 +209,7 @@ labels ensure that a node is capable of running `postgres` workloads, while
 taints help prevent any non-`postgres` workloads from being scheduled on that
 node.

-:::important
+:::info[Important]
 This methodology is the most straightforward way to ensure that PostgreSQL
 workloads are isolated from other workloads in terms of both computing
 resources and, when using locally attached disks, storage. While different
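To make the label-and-taint approach from the hunk above concrete, a `Cluster` can opt in to such dedicated nodes through its `affinity` stanza. The label and taint keys below are hypothetical examples, not values from this commit:

```yaml
spec:
  affinity:
    # Only schedule instances on nodes labelled for Postgres workloads
    nodeSelector:
      workload.example.com/postgres: ""       # hypothetical label
    # Tolerate the taint that keeps non-postgres workloads off those nodes
    tolerations:
      - key: workload.example.com/postgres    # hypothetical taint key
        operator: Exists
        effect: NoSchedule
```
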
@@ -289,7 +289,7 @@ Kubernetes cluster, with the following specifications:
 * PostgreSQL instances should reside in different availability zones
   within the same Kubernetes cluster / region

-:::important
+:::info[Important]
 You can configure the above services through the `managed.services` section
 in the `Cluster` configuration. This can be done by reducing the number of
 services and selecting the type (default is `ClusterIP`). For more details,
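As a sketch of the `managed.services` tuning mentioned above, assuming the current API exposes a `disabledDefaultServices` list (please verify against the API reference linked from the docs), a cluster could keep only the read-write service:

```yaml
spec:
  managed:
    services:
      # Drop the read-only and replica services, keeping only <cluster>-rw;
      # field name assumed from my reading of the API, double-check it
      disabledDefaultServices:
        - ro
        - r
```
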
@@ -302,26 +302,26 @@ architecture for a PostgreSQL cluster spanning across 3 different availability
 zones, running on separate nodes, each with dedicated local storage for
 PostgreSQL data.

-![…](…)
+![…](…)

 CloudNativePG automatically takes care of updating the above services if
 the topology of the cluster changes. For example, in case of failover, it
 automatically updates the `-rw` service to point to the promoted primary,
 making sure that traffic from the applications is seamlessly redirected.

-:::info Replication
+:::note[Replication]
 Please refer to the ["Replication" section](replication.md) for more
 information about how CloudNativePG relies on PostgreSQL replication,
 including synchronous settings.
 :::

-:::info Connecting from an application
+:::note[Connecting from an application]
 Please refer to the ["Connecting from an application" section](applications.md) for
 information about how to connect to CloudNativePG from a stateless
 application within the same Kubernetes cluster.
 :::

-:::info Connection Pooling
+:::note[Connection Pooling]
 Please refer to the ["Connection Pooling" section](connection_pooling.md) for
 information about how to take advantage of PgBouncer as a connection pooler,
 and create an access layer between your applications and the PostgreSQL clusters.
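For readers unfamiliar with the connection pooling layer referenced above, a PgBouncer pooler in front of the read-write service is declared through the `Pooler` resource; the following is only a sketch with placeholder names and sizing:

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Pooler
metadata:
  name: pooler-example-rw      # placeholder name
spec:
  cluster:
    name: cluster-example      # placeholder: the Cluster to pool connections for
  instances: 3
  type: rw                     # pool connections towards the primary
  pgbouncer:
    poolMode: session
    parameters:
      max_client_conn: "1000"  # illustrative values
      default_pool_size: "10"
```
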
@@ -333,7 +333,7 @@ Applications can decide to connect to the PostgreSQL instance elected as
 *current primary* by the Kubernetes operator, as depicted in the following
 diagram:

-![…](…)
+![…](…)

 Applications can use the `-rw` suffix service.

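Since the docs point applications at the `-rw` suffix service, a stateless client deployment would typically just reference that service's DNS name. Everything below except the `<cluster-name>-rw` naming convention is a placeholder:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp                       # hypothetical application
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
        - name: webapp
          image: registry.example.com/webapp:latest  # placeholder image
          env:
            # Read-write traffic goes through the <cluster-name>-rw service
            - name: PGHOST
              value: cluster-example-rw
            - name: PGPORT
              value: "5432"
```
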
@@ -343,7 +343,7 @@ service to another instance of the cluster.

 ### Read-only workloads

-:::important
+:::info[Important]
 Applications must be aware of the limitations that