
Commit 3f44c86

chore: update documentation for 1.28 - fixing admonitions
Signed-off-by: Leonardo Cecchi <[email protected]>
1 parent fa20188 commit 3f44c86


68 files changed: +3691 −7093 lines

versioned_docs/version-1.28/appendixes/backup_barmanobjectstore.md

Lines changed: 16 additions & 8 deletions
@@ -7,13 +7,14 @@ title: Appendix B - Backup on object stores

<!-- SPDX-License-Identifier: CC-BY-4.0 -->

-!!! Warning
+:::warning
As of CloudNativePG 1.26, **native Barman Cloud support is deprecated** in
favor of the **Barman Cloud Plugin**. This page has been moved to the appendix
for reference purposes. While the native integration remains functional for
now, we strongly recommend beginning a gradual migration to the plugin-based
interface after appropriate testing. For guidance, see
[Migrating from Built-in CloudNativePG Backup](https://cloudnative-pg.io/plugin-barman-cloud/docs/migration/).
+:::

CloudNativePG natively supports **online/hot backup** of PostgreSQL
clusters through continuous physical backup and WAL archiving on an object
@@ -35,19 +36,21 @@ You can use the image `ghcr.io/cloudnative-pg/postgresql` for this scope,
as it is composed of a community PostgreSQL image and the latest
`barman-cli-cloud` package.

-!!! Important
+:::info[Important]
Always ensure that you are running the latest version of the operands
in your system to take advantage of the improvements introduced in
Barman cloud (as well as improve the security aspects of your cluster).
+:::

-!!! Warning "Changes in Barman Cloud 3.16+ and Bucket Creation"
+:::warning[Changes in Barman Cloud 3.16+ and Bucket Creation]
Starting with Barman Cloud 3.16, most Barman Cloud commands no longer
automatically create the target bucket, assuming it already exists. Only the
`barman-cloud-check-wal-archive` command creates the bucket now. Whenever this
is not the first operation run on an empty bucket, CloudNativePG will throw an
error. As a result, to ensure reliable, future-proof operations and avoid
potential issues, we strongly recommend that you create and configure your
object store bucket *before* creating a `Cluster` resource that references it.
+:::

A backup is performed from a primary or a designated primary instance in a
`Cluster` (please refer to
@@ -71,9 +74,10 @@ in CloudNativePG.
The WAL archive is defined in the `.spec.backup.barmanObjectStore` stanza of
a `Cluster` resource.

-!!! Info
+:::info
Please refer to [`BarmanObjectStoreConfiguration`](https://pkg.go.dev/github.com/cloudnative-pg/barman-cloud/pkg/api#BarmanObjectStoreConfiguration)
in the barman-cloud API for a full list of options.
+:::

If required, you can choose to compress WAL files as soon as they
are uploaded and/or encrypt them:
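For orientation, here is a minimal sketch of such a stanza with WAL compression and encryption enabled; the bucket path, endpoint, and secret names are placeholders, not part of this documentation change:

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cluster-example
spec:
  instances: 3
  storage:
    size: 1Gi
  backup:
    barmanObjectStore:
      # Placeholder bucket path
      destinationPath: "s3://my-backup-bucket/"
      s3Credentials:
        accessKeyId:
          name: aws-creds
          key: ACCESS_KEY_ID
        secretAccessKey:
          name: aws-creds
          key: ACCESS_SECRET_KEY
      wal:
        # Compress WAL files on upload and encrypt them at rest
        compression: gzip
        encryption: AES256
```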
@@ -98,14 +102,15 @@ PostgreSQL implements a sequential archiving scheme, where the
`archive_command` will be executed sequentially for every WAL
segment to be archived.

-!!! Important
+:::info[Important]
By default, CloudNativePG sets `archive_timeout` to `5min`, ensuring
that WAL files, even in case of low workloads, are closed and archived
at least every 5 minutes, providing a deterministic time-based value for
your Recovery Point Objective ([RPO](../before_you_start.md#rpo)). Even though you change the value
of the [`archive_timeout` setting in the PostgreSQL configuration](https://www.postgresql.org/docs/current/runtime-config-wal.html#GUC-ARCHIVE-TIMEOUT),
our experience suggests that the default value set by the operator is
suitable for most use cases.
+:::

When the bandwidth between the PostgreSQL instance and the object
store allows archiving more than one WAL file in parallel, you
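The sentence above continues beyond this hunk; as a hedged illustration, parallel WAL archiving is exposed by the CloudNativePG API through the `wal.maxParallel` field of the same stanza (the value below is arbitrary):

```yaml
spec:
  backup:
    barmanObjectStore:
      destinationPath: "s3://my-backup-bucket/"
      wal:
        compression: gzip
        # Archive up to 4 WAL files concurrently when bandwidth allows
        maxParallel: 4
```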
@@ -164,7 +169,7 @@ spec:
retentionPolicy: "30d"
```

-!!! Note "There's more ..."
+:::note[There's more ...]
The **recovery window retention policy** is focused on the concept of
*Point of Recoverability* (`PoR`), a moving point in time determined by
`current time - recovery window`. The *first valid backup* is the first
@@ -174,6 +179,7 @@ spec:
file, starting from the first valid backup. Base backups that are older
than the first valid backup will be marked as *obsolete* and permanently
removed after the next backup is completed.
+:::

## Compression algorithms

@@ -338,16 +344,18 @@ are named `app` by default. If the PostgreSQL cluster being restored uses
different names, you must specify these names before exiting the recovery phase,
as documented in ["Configure the application database"](../recovery.md#configure-the-application-database).

-!!! Important
+:::info[Important]
By default, the `recovery` method strictly uses the `name` of the
cluster in the `externalClusters` section as the name of the main folder
of the backup data within the object store. This name is normally reserved
for the name of the server. You can specify a different folder name
using the `barmanObjectStore.serverName` property.
+:::

-!!! Note
+:::note
This example takes advantage of the parallel WAL restore feature,
dedicating up to 8 jobs to concurrently fetch the required WAL files from the
archive. This feature can appreciably reduce the recovery time. Make sure that
you plan ahead for this scenario and correctly tune the value of this parameter
for your environment. It will make a difference when you need it, and you will.
+:::
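As a companion to the two admonitions above, a hedged sketch of a recovery bootstrap that sets `barmanObjectStore.serverName` and dedicates up to 8 parallel jobs to WAL restore; all cluster, bucket, and secret names are placeholders:

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cluster-restored
spec:
  instances: 3
  storage:
    size: 1Gi
  bootstrap:
    recovery:
      source: origin
  externalClusters:
    - name: origin
      barmanObjectStore:
        # Folder name inside the bucket, if it differs from `name` above
        serverName: cluster-example
        destinationPath: "s3://my-backup-bucket/"
        s3Credentials:
          accessKeyId:
            name: aws-creds
            key: ACCESS_KEY_ID
          secretAccessKey:
            name: aws-creds
            key: ACCESS_SECRET_KEY
        wal:
          # Fetch up to 8 WAL files concurrently during recovery
          maxParallel: 8
```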

versioned_docs/version-1.28/appendixes/backup_volumesnapshot.md

Lines changed: 19 additions & 11 deletions
@@ -6,10 +6,11 @@ title: Appendix A - Backup on volume snapshots
# Appendix A - Backup on volume snapshots
<!-- SPDX-License-Identifier: CC-BY-4.0 -->

-!!! Important
+:::info[Important]
Please refer to the official Kubernetes documentation for a list of all
the supported [Container Storage Interface (CSI) drivers](https://kubernetes-csi.github.io/docs/drivers.html)
that provide snapshotting capabilities.
+:::

CloudNativePG is one of the first known cases of database operators that
directly leverages the Kubernetes native Volume Snapshot API for both
@@ -53,20 +54,22 @@ that is responsible to ensure that snapshots can be taken from persistent
volumes of a given storage class, and managed as `VolumeSnapshot` and
`VolumeSnapshotContent` resources.

-!!! Important
+:::info[Important]
It is your responsibility to verify with the third party vendor
that volume snapshots are supported. CloudNativePG only interacts
with the Kubernetes API on this matter, and we cannot support issues
at the storage level for each specific CSI driver.
+:::

## How to configure Volume Snapshot backups

CloudNativePG allows you to configure a given Postgres cluster for Volume
Snapshot backups through the `backup.volumeSnapshot` stanza.

-!!! Info
-Please refer to [`VolumeSnapshotConfiguration`](../cloudnative-pg.v1.md#postgresql-cnpg-io-v1-VolumeSnapshotConfiguration)
+:::info
+Please refer to [`VolumeSnapshotConfiguration`](../cloudnative-pg.v1.md#volumesnapshotconfiguration)
in the API reference for a full list of options.
+:::

A generic example with volume snapshots (assuming that PGDATA and WALs share
the same storage class) is the following:
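The example itself sits outside this hunk; a minimal sketch of what such a configuration might look like, assuming a `standard` storage class and a `csi-snapclass` snapshot class (both names are placeholders):

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: snapshot-cluster
spec:
  instances: 3
  storage:
    storageClass: standard
    size: 10Gi
  walStorage:
    storageClass: standard
    size: 5Gi
  backup:
    volumeSnapshot:
      # Default VolumeSnapshotClass used for all the cluster's volumes
      className: csi-snapclass
```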
@@ -102,32 +105,35 @@ As you can see, the `backup` section contains both the `volumeSnapshot` stanza
(controlling physical base backups on volume snapshots) and the
`plugins` one (controlling the [WAL archive](../wal_archiving.md)).

-!!! Info
+:::info
Once you have defined the `plugin`, you can decide to use
both volume snapshot and plugin backup strategies simultaneously
to take physical backups.
+:::

The `volumeSnapshot.className` option allows you to reference the default
`VolumeSnapshotClass` object used for all the storage volumes you have
defined in your PostgreSQL cluster.

-!!! Info
+:::info
In case you are using a different storage class for `PGDATA` and
WAL files, you can specify a separate `VolumeSnapshotClass` for
that volume through the `walClassName` option (which defaults to
the same value as `className`).
+:::

Once a cluster is defined for volume snapshot backups, you need to define
a `ScheduledBackup` resource that requests such backups on a periodic basis.

123128
## Hot and cold backups
124129

125-
!!! Warning
130+
:::warning
126131
As noted in the [backup document](../backup.md), a cold snapshot explicitly
127132
set to target the primary will result in the primary being fenced for
128133
the duration of the backup, making the cluster read-only during this
129134
period. For safety, in a cluster already containing fenced instances, a cold
130135
snapshot is rejected.
136+
:::
131137

132138
By default, CloudNativePG requests an online/hot backup on volume snapshots, using the
133139
[PostgreSQL defaults of the low-level API for base backups](https://www.postgresql.org/docs/current/continuous-archiving.html#BACKUP-LOWLEVEL-BASE-BACKUP):
@@ -136,12 +142,13 @@ By default, CloudNativePG requests an online/hot backup on volume snapshots, using the
- it waits for the WAL archiver to archive the last segment of the backup when
terminating the backup procedure

-!!! Important
+:::info[Important]
The default values are suitable for most production environments. Hot
backups are consistent and can be used to perform snapshot recovery, as we
ensure WAL retention from the start of the backup through a temporary
replication slot. However, our recommendation is to rely on cold backups for
that purpose.
+:::

You can explicitly change the default behavior through the following options in
the `.spec.backup.volumeSnapshot` stanza of the `Cluster` resource:
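The option list referenced above lies outside this hunk; as a hedged sketch, the relevant fields exposed by the CloudNativePG API are `online` and `onlineConfiguration` (consult the API reference linked earlier for the authoritative list):

```yaml
spec:
  backup:
    volumeSnapshot:
      className: csi-snapclass
      # Set to false to request cold (offline) backups instead of hot ones
      online: true
      onlineConfiguration:
        # Force an immediate checkpoint rather than waiting for a scheduled one
        immediateCheckpoint: true
        # Wait for the last WAL segment of the backup to be archived
        waitForArchive: true
```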
@@ -241,13 +248,14 @@ In case a `VolumeSnapshot` is deleted, the `deletionPolicy` specified in the
- if set to `Retain`, the `VolumeSnapshotContent` object is kept
- if set to `Delete`, the `VolumeSnapshotContent` object is removed as well

-!!! Warning
+:::warning
`VolumeSnapshotContent` objects do not keep all the information regarding the
backup and the cluster they refer to (like the annotations and labels that
are contained in the `VolumeSnapshot` object). Although possible, restoring
from just this kind of object might not be straightforward. For this reason,
our recommendation is to always backup the `VolumeSnapshot` definitions,
even using a Kubernetes level data protection solution.
+:::

The value in `VolumeSnapshotContent` is determined by the `deletionPolicy` set
in the corresponding `VolumeSnapshotClass` definition, which is
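The sentence above continues outside this hunk; for illustration, a `VolumeSnapshotClass` that retains snapshot contents when the corresponding `VolumeSnapshot` is deleted (the class name and driver below are example values, here the AWS EBS CSI driver):

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-snapclass
driver: ebs.csi.aws.com
# Keep the VolumeSnapshotContent (and the underlying storage snapshot)
# even after the VolumeSnapshot object is deleted
deletionPolicy: Retain
```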
@@ -350,11 +358,11 @@ The following example shows how to configure volume snapshot base backups on an
EKS cluster on AWS using the `ebs-sc` storage class and the `csi-aws-vsc`
volume snapshot class.

-!!! Important
+:::info[Important]
If you are interested in testing the example, please read
["Volume Snapshots" for the Amazon Elastic Block Store (EBS) CSI driver](https://github.com/kubernetes-sigs/aws-ebs-csi-driver/tree/master/examples/kubernetes/snapshot) <!-- wokeignore:rule=master -->
for detailed instructions on the installation process for the storage class and the snapshot class.
-
+:::

The following manifest creates a `Cluster` that is ready to be used for volume
snapshots and that stores the WAL archive in a S3 bucket via IAM role for the
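The manifest itself is not part of this hunk; a hedged sketch of what it might look like, combining the `ebs-sc` storage class, the `csi-aws-vsc` snapshot class, and IAM-role (IRSA) authentication for the WAL archive; the bucket and role ARN are placeholders:

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: eks-cluster
spec:
  instances: 3
  storage:
    storageClass: ebs-sc
    size: 10Gi
  walStorage:
    storageClass: ebs-sc
    size: 5Gi
  backup:
    volumeSnapshot:
      className: csi-aws-vsc
    barmanObjectStore:
      destinationPath: "s3://my-backup-bucket/"
      s3Credentials:
        # Use the IAM role bound to the pod's service account (IRSA)
        inheritFromIAMRole: true
  serviceAccountTemplate:
    metadata:
      annotations:
        eks.amazonaws.com/role-arn: "arn:aws:iam::123456789012:role/backup-role"
```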

versioned_docs/version-1.28/appendixes/object_stores.md

Lines changed: 6 additions & 3 deletions
@@ -6,13 +6,14 @@ title: Appendix C - Common object stores for backups
# Appendix C - Common object stores for backups
<!-- SPDX-License-Identifier: CC-BY-4.0 -->

-!!! Warning
+:::warning
As of CloudNativePG 1.26, **native Barman Cloud support is deprecated** in
favor of the **Barman Cloud Plugin**. While the native integration remains
functional for now, we strongly recommend beginning a gradual migration to
the plugin-based interface after appropriate testing. The Barman Cloud
Plugin documentation describes
[how to use common object stores](https://cloudnative-pg.io/plugin-barman-cloud/docs/object_stores/).
+:::

You can store the [backup](../backup.md) files in any service that is supported
by the Barman Cloud infrastructure. That is:
@@ -183,10 +184,11 @@ spec:
key: ca.crt
```

-!!! Note
+:::note
If you want ConfigMaps and Secrets to be **automatically** reloaded by instances, you can
add a label with key `cnpg.io/reload` to the Secrets/ConfigMaps. Otherwise, you will have to reload
the instances using the `kubectl cnpg reload` subcommand.
+:::
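As a quick illustration of the note above, a Secret labeled for automatic reload; the name and certificate body are placeholders, and only the label key matters, not its value:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: object-store-ca
  labels:
    # Presence of this label key makes instances reload the Secret automatically
    cnpg.io/reload: ""
type: Opaque
stringData:
  ca.crt: |
    -----BEGIN CERTIFICATE-----
    ...placeholder certificate...
    -----END CERTIFICATE-----
```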

## Azure Blob Storage

@@ -351,7 +353,8 @@ spec:

Now the operator will use the credentials to authenticate against Google Cloud Storage.

-!!! Important
+:::info[Important]
This way of authentication will create a JSON file inside the container with all the needed
information to access your Google Cloud Storage bucket, meaning that if someone gets access to the pod
will also have write permissions to the bucket.
+:::

versioned_docs/version-1.28/applications.md

Lines changed: 6 additions & 3 deletions
@@ -13,10 +13,11 @@ in the same Kubernetes cluster.
For more information on services and how to manage them, please refer to the
["Service management"](service_management.md) section.

-!!! Hint
+:::tip[Hint]
It is highly recommended using those services in your applications,
and avoiding connecting directly to a specific PostgreSQL instance, as the latter
can change during the cluster lifetime.
+:::

You can use these services in your applications through:

@@ -26,10 +27,11 @@ You can use these services in your applications through:
For the credentials to connect to PostgreSQL, you can
use the secrets generated by the operator.
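To make that concrete, a sketch of an application `Deployment` consuming the operator-generated `<cluster>-app` Secret; the cluster and image names are placeholders, and `uri` is one of the keys the operator stores in that Secret:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:latest
          env:
            # Full connection URI for the application user, generated by
            # the operator in the `cluster-example-app` Secret
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: cluster-example-app
                  key: uri
```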

-!!! Seealso "Connection Pooling"
+:::note[Connection Pooling]
Please refer to the ["Connection Pooling" section](connection_pooling.md) for
information about how to take advantage of PgBouncer as a connection pooler,
and create an access layer between your applications and the PostgreSQL clusters.
+:::

### DNS resolution

@@ -93,5 +95,6 @@ database.
The `-superuser` ones are supposed to be used only for administrative purposes,
and correspond to the `postgres` user.

-!!! Important
+:::info[Important]
Superuser access over the network is disabled by default.
+:::
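For completeness, superuser access can be explicitly opted into through `enableSuperuserAccess`; a minimal sketch, with the cluster name as a placeholder:

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cluster-example
spec:
  instances: 3
  storage:
    size: 1Gi
  # Explicitly opt in to network access for the `postgres` superuser;
  # the operator then manages the corresponding `-superuser` secret
  enableSuperuserAccess: true
```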
