versioned_docs/version-1.28/appendixes/backup_barmanobjectstore.md (+16 -8)
@@ -7,13 +7,14 @@ title: Appendix B - Backup on object stores
<!-- SPDX-License-Identifier: CC-BY-4.0 -->

-!!! Warning
+:::warning
As of CloudNativePG 1.26, **native Barman Cloud support is deprecated** in
favor of the **Barman Cloud Plugin**. This page has been moved to the appendix
for reference purposes. While the native integration remains functional for
now, we strongly recommend beginning a gradual migration to the plugin-based
interface after appropriate testing. For guidance, see
[Migrating from Built-in CloudNativePG Backup](https://cloudnative-pg.io/plugin-barman-cloud/docs/migration/).
+:::

CloudNativePG natively supports **online/hot backup** of PostgreSQL
clusters through continuous physical backup and WAL archiving on an object
@@ -35,19 +36,21 @@ You can use the image `ghcr.io/cloudnative-pg/postgresql` for this scope,
as it is composed of a community PostgreSQL image and the latest
`barman-cli-cloud` package.

-!!! Important
+:::info[Important]
Always ensure that you are running the latest version of the operands
in your system to take advantage of the improvements introduced in
Barman cloud (as well as improve the security aspects of your cluster).
+:::

-!!! Warning "Changes in Barman Cloud 3.16+ and Bucket Creation"
+:::warning[Changes in Barman Cloud 3.16+ and Bucket Creation]
Starting with Barman Cloud 3.16, most Barman Cloud commands no longer
automatically create the target bucket, assuming it already exists. Only the
`barman-cloud-check-wal-archive` command creates the bucket now. Whenever this
is not the first operation run on an empty bucket, CloudNativePG will throw an
error. As a result, to ensure reliable, future-proof operations and avoid
potential issues, we strongly recommend that you create and configure your
object store bucket *before* creating a `Cluster` resource that references it.
+:::

A backup is performed from a primary or a designated primary instance in a
`Cluster` (please refer to
@@ -71,9 +74,10 @@ in CloudNativePG.
The WAL archive is defined in the `.spec.backup.barmanObjectStore` stanza of
a `Cluster` resource.

-!!! Info
+:::info
Please refer to [`BarmanObjectStoreConfiguration`](https://pkg.go.dev/github.com/cloudnative-pg/barman-cloud/pkg/api#BarmanObjectStoreConfiguration)
in the barman-cloud API for a full list of options.
+:::

If required, you can choose to compress WAL files as soon as they
are uploaded and/or encrypt them:
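The full example is elided from this diff; as a hedged sketch of the stanza just described, a `barmanObjectStore` configuration with WAL compression and encryption might look as follows (cluster name, bucket path, and Secret name are hypothetical, and this uses the deprecated native interface discussed above):

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: pg-backup-example          # hypothetical cluster name
spec:
  instances: 3
  backup:
    barmanObjectStore:
      destinationPath: "s3://my-bucket/backups"   # hypothetical bucket
      s3Credentials:
        accessKeyId:
          name: aws-creds          # hypothetical Secret
          key: ACCESS_KEY_ID
        secretAccessKey:
          name: aws-creds
          key: ACCESS_SECRET_KEY
      wal:
        compression: gzip          # compress each WAL file on upload
        encryption: AES256         # request server-side encryption
```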
@@ -98,14 +102,15 @@ PostgreSQL implements a sequential archiving scheme, where the
`archive_command` will be executed sequentially for every WAL
segment to be archived.

-!!! Important
+:::info[Important]
By default, CloudNativePG sets `archive_timeout` to `5min`, ensuring
that WAL files, even in case of low workloads, are closed and archived
at least every 5 minutes, providing a deterministic time-based value for
your Recovery Point Objective ([RPO](../before_you_start.md#rpo)). Even though you can change the value
of the [`archive_timeout` setting in the PostgreSQL configuration](https://www.postgresql.org/docs/current/runtime-config-wal.html#GUC-ARCHIVE-TIMEOUT),
our experience suggests that the default value set by the operator is
suitable for most use cases.
+:::

When the bandwidth between the PostgreSQL instance and the object
store allows archiving more than one WAL file in parallel, you
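The parallel archiving just introduced is controlled, in the native interface, by the `wal.maxParallel` option. A minimal sketch (the value is illustrative and should be tuned to your bandwidth):

```yaml
  backup:
    barmanObjectStore:
      # ... destinationPath and credentials as above ...
      wal:
        compression: gzip
        maxParallel: 8   # archive up to 8 WAL files concurrently
```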
@@ -164,7 +169,7 @@ spec:
    retentionPolicy: "30d"
```

-!!! Note "There's more ..."
+:::note[There's more ...]
The **recovery window retention policy** is focused on the concept of
*Point of Recoverability* (`PoR`), a moving point in time determined by
`current time - recovery window`. The *first valid backup* is the first
@@ -174,6 +179,7 @@ spec:
file, starting from the first valid backup. Base backups that are older
than the first valid backup will be marked as *obsolete* and permanently
removed after the next backup is completed.
+:::

## Compression algorithms
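The compression section whose heading appears above is elided from this diff; as a hedged illustration, the native interface lets you pick separate algorithms for base backup data and WAL files (field values shown are ones the barman-cloud API is known to accept, but verify against the API reference):

```yaml
  backup:
    barmanObjectStore:
      # ... destinationPath and credentials ...
      data:
        compression: bzip2   # higher ratio for base backup data files
      wal:
        compression: snappy  # faster, lighter compression for WAL
```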
@@ -338,16 +344,18 @@ are named `app` by default. If the PostgreSQL cluster being restored uses
different names, you must specify these names before exiting the recovery phase,
as documented in ["Configure the application database"](../recovery.md#configure-the-application-database).

-!!! Important
+:::info[Important]
By default, the `recovery` method strictly uses the `name` of the
cluster in the `externalClusters` section as the name of the main folder
of the backup data within the object store. This name is normally reserved
for the name of the server. You can specify a different folder name
using the `barmanObjectStore.serverName` property.
+:::

-!!! Note
+:::note
This example takes advantage of the parallel WAL restore feature,
dedicating up to 8 jobs to concurrently fetch the required WAL files from the
archive. This feature can appreciably reduce the recovery time. Make sure that
you plan ahead for this scenario and correctly tune the value of this parameter
for your environment. It will make a difference when you need it, and you will.
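The recovery example the two admonitions above refer to is elided from this diff; a hedged sketch of the relevant `externalClusters` fragment, combining `serverName` and parallel WAL restore (bucket path and names are hypothetical):

```yaml
  externalClusters:
    - name: origin
      barmanObjectStore:
        destinationPath: "s3://my-bucket/backups"  # hypothetical bucket
        serverName: legacy-cluster   # folder name differing from `name`
        wal:
          maxParallel: 8             # fetch up to 8 WAL files concurrently
```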
versioned_docs/version-1.28/appendixes/backup_volumesnapshot.md (+19 -11)
@@ -6,10 +6,11 @@ title: Appendix A - Backup on volume snapshots
# Appendix A - Backup on volume snapshots
<!-- SPDX-License-Identifier: CC-BY-4.0 -->

-!!! Important
+:::info[Important]
Please refer to the official Kubernetes documentation for a list of all
the supported [Container Storage Interface (CSI) drivers](https://kubernetes-csi.github.io/docs/drivers.html)
that provide snapshotting capabilities.
+:::

CloudNativePG is one of the first known cases of database operators that
directly leverages the Kubernetes native Volume Snapshot API for both
@@ -53,20 +54,22 @@ that is responsible to ensure that snapshots can be taken from persistent
volumes of a given storage class, and managed as `VolumeSnapshot` and
`VolumeSnapshotContent` resources.

-!!! Important
+:::info[Important]
It is your responsibility to verify with the third party vendor
that volume snapshots are supported. CloudNativePG only interacts
with the Kubernetes API on this matter, and we cannot support issues
at the storage level for each specific CSI driver.
+:::

## How to configure Volume Snapshot backups

CloudNativePG allows you to configure a given Postgres cluster for Volume
Snapshot backups through the `backup.volumeSnapshot` stanza.

-!!! Info
-Please refer to [`VolumeSnapshotConfiguration`](../cloudnative-pg.v1.md#postgresql-cnpg-io-v1-VolumeSnapshotConfiguration)
+:::info
+Please refer to [`VolumeSnapshotConfiguration`](../cloudnative-pg.v1.md#volumesnapshotconfiguration)
in the API reference for a full list of options.
+:::

A generic example with volume snapshots (assuming that PGDATA and WALs share
the same storage class) is the following:
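That generic example is elided from this diff; a hedged sketch of what such a manifest might look like (cluster name, storage class, and snapshot class are hypothetical):

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: snapshot-example        # hypothetical cluster name
spec:
  instances: 3
  storage:
    storageClass: standard      # hypothetical CSI-backed storage class
    size: 10Gi
  backup:
    volumeSnapshot:
      className: csi-snapclass  # hypothetical VolumeSnapshotClass
```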
@@ -102,32 +105,35 @@ As you can see, the `backup` section contains both the `volumeSnapshot` stanza
(controlling physical base backups on volume snapshots) and the
`plugins` one (controlling the [WAL archive](../wal_archiving.md)).

-!!! Info
+:::info
Once you have defined the `plugin`, you can decide to use
both volume snapshot and plugin backup strategies simultaneously
to take physical backups.
+:::

The `volumeSnapshot.className` option allows you to reference the default
`VolumeSnapshotClass` object used for all the storage volumes you have
defined in your PostgreSQL cluster.

-!!! Info
+:::info
In case you are using a different storage class for `PGDATA` and
WAL files, you can specify a separate `VolumeSnapshotClass` for
that volume through the `walClassName` option (which defaults to
the same value as `className`).
+:::
Once a cluster is defined for volume snapshot backups, you need to define
a `ScheduledBackup` resource that requests such backups on a periodic basis.
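A hedged sketch of such a `ScheduledBackup` (resource name and schedule are illustrative; note that CloudNativePG schedules use a six-field cron expression with a leading seconds field):

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: ScheduledBackup
metadata:
  name: nightly-snapshots       # hypothetical resource name
spec:
  schedule: "0 0 2 * * *"       # every day at 02:00:00
  method: volumeSnapshot        # take backups as volume snapshots
  cluster:
    name: snapshot-example      # hypothetical Cluster name
```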
## Hot and cold backups

-!!! Warning
+:::warning
As noted in the [backup document](../backup.md), a cold snapshot explicitly
set to target the primary will result in the primary being fenced for
the duration of the backup, making the cluster read-only during this
period. For safety, in a cluster already containing fenced instances, a cold
snapshot is rejected.
+:::

By default, CloudNativePG requests an online/hot backup on volume snapshots, using the
[PostgreSQL defaults of the low-level API for base backups](https://www.postgresql.org/docs/current/continuous-archiving.html#BACKUP-LOWLEVEL-BASE-BACKUP):
@@ -136,12 +142,13 @@ By default, CloudNativePG requests an online/hot backup on volume snapshots, usi
- it waits for the WAL archiver to archive the last segment of the backup when
terminating the backup procedure

-!!! Important
+:::info[Important]
The default values are suitable for most production environments. Hot
backups are consistent and can be used to perform snapshot recovery, as we
ensure WAL retention from the start of the backup through a temporary
replication slot. However, our recommendation is to rely on cold backups for
that purpose.
+:::

You can explicitly change the default behavior through the following options in
the `.spec.backup.volumeSnapshot` stanza of the `Cluster` resource:
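The option list itself is elided from this diff; a hedged sketch of how those overrides might be expressed (values are illustrative, not recommendations):

```yaml
  backup:
    volumeSnapshot:
      className: csi-snapclass     # hypothetical VolumeSnapshotClass
      online: false                # request cold instead of hot backups
      onlineConfiguration:
        immediateCheckpoint: true  # faster backup start, at an I/O cost
        waitForArchive: true       # wait for the final WAL segment to archive
```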
@@ -241,13 +248,14 @@ In case a `VolumeSnapshot` is deleted, the `deletionPolicy` specified in the
- if set to `Retain`, the `VolumeSnapshotContent` object is kept
- if set to `Delete`, the `VolumeSnapshotContent` object is removed as well

-!!! Warning
+:::warning
`VolumeSnapshotContent` objects do not keep all the information regarding the
backup and the cluster they refer to (like the annotations and labels that
are contained in the `VolumeSnapshot` object). Although possible, restoring
from just this kind of object might not be straightforward. For this reason,
our recommendation is to always back up the `VolumeSnapshot` definitions,
even using a Kubernetes level data protection solution.
+:::

The value in `VolumeSnapshotContent` is determined by the `deletionPolicy` set
in the corresponding `VolumeSnapshotClass` definition, which is
@@ -350,11 +358,11 @@ The following example shows how to configure volume snapshot base backups on an
EKS cluster on AWS using the `ebs-sc` storage class and the `csi-aws-vsc`
volume snapshot class.

-!!! Important
+:::info[Important]
If you are interested in testing the example, please read
["Volume Snapshots" for the Amazon Elastic Block Store (EBS) CSI driver](https://github.com/kubernetes-sigs/aws-ebs-csi-driver/tree/master/examples/kubernetes/snapshot) <!-- wokeignore:rule=master -->
for detailed instructions on the installation process for the storage class and the snapshot class.
-
+:::

The following manifest creates a `Cluster` that is ready to be used for volume
snapshots and that stores the WAL archive in a S3 bucket via IAM role for the