
Commit 7e54c9a

chore: update documentation for 1.27 - fixing admonitions
Signed-off-by: Leonardo Cecchi <[email protected]>
1 parent 3f44c86 commit 7e54c9a

File tree

108 files changed (+3426 / -2468 lines)


versioned_docs/version-1.27/appendixes/backup_barmanobjectstore.md

Lines changed: 16 additions & 6 deletions
@@ -1,6 +1,6 @@
 ---
 id: backup_barmanobjectstore
-title: backup_barmanobjectstore
+title: Appendix B - Backup on object stores
 ---
 
 # Appendix B - Backup on object stores
@@ -36,12 +36,22 @@ You can use the image `ghcr.io/cloudnative-pg/postgresql` for this scope,
 as it is composed of a community PostgreSQL image and the latest
 `barman-cli-cloud` package.
 
-:::important
+:::info[Important]
 Always ensure that you are running the latest version of the operands
 in your system to take advantage of the improvements introduced in
 Barman cloud (as well as improve the security aspects of your cluster).
 :::
 
+:::warning[Changes in Barman Cloud 3.16+ and Bucket Creation]
+Starting with Barman Cloud 3.16, most Barman Cloud commands no longer
+automatically create the target bucket, assuming it already exists. Only the
+`barman-cloud-check-wal-archive` command creates the bucket now. Whenever this
+is not the first operation run on an empty bucket, CloudNativePG will throw an
+error. As a result, to ensure reliable, future-proof operations and avoid
+potential issues, we strongly recommend that you create and configure your
+object store bucket *before* creating a `Cluster` resource that references it.
+:::
+
 A backup is performed from a primary or a designated primary instance in a
 `Cluster` (please refer to
 [replica clusters](../replica_cluster.md)
@@ -92,7 +102,7 @@ PostgreSQL implements a sequential archiving scheme, where the
 `archive_command` will be executed sequentially for every WAL
 segment to be archived.
 
-:::important
+:::info[Important]
 By default, CloudNativePG sets `archive_timeout` to `5min`, ensuring
 that WAL files, even in case of low workloads, are closed and archived
 at least every 5 minutes, providing a deterministic time-based value for
@@ -159,7 +169,7 @@ spec:
 retentionPolicy: "30d"
 ```
 
-:::note There's more ...
+:::note[There's more ...]
 The **recovery window retention policy** is focused on the concept of
 *Point of Recoverability* (`PoR`), a moving point in time determined by
 `current time - recovery window`. The *first valid backup* is the first
@@ -334,7 +344,7 @@ are named `app` by default. If the PostgreSQL cluster being restored uses
 different names, you must specify these names before exiting the recovery phase,
 as documented in ["Configure the application database"](../recovery.md#configure-the-application-database).
 
-:::important
+:::info[Important]
 By default, the `recovery` method strictly uses the `name` of the
 cluster in the `externalClusters` section as the name of the main folder
 of the backup data within the object store. This name is normally reserved
@@ -348,4 +358,4 @@ as documented in ["Configure the application database"](../recovery.md#configure
 archive. This feature can appreciably reduce the recovery time. Make sure that
 you plan ahead for this scenario and correctly tune the value of this parameter
 for your environment. It will make a difference when you need it, and you will.
-:::
+:::
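For context on the hunks above, the following is a minimal sketch of the `backup` stanza this appendix describes, assuming a pre-existing S3 bucket (in line with the new Barman Cloud 3.16+ warning) and a `retentionPolicy` like the one shown in the hunk at new line 169. The bucket path and the `aws-creds` Secret are hypothetical, not taken from the changed file.

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cluster-example
spec:
  instances: 3
  storage:
    size: 10Gi
  backup:
    # The bucket referenced here should be created and configured
    # before this Cluster resource, as recommended in the new warning.
    barmanObjectStore:
      destinationPath: s3://my-existing-bucket/cluster-example
      s3Credentials:
        accessKeyId:
          name: aws-creds        # hypothetical Secret name
          key: ACCESS_KEY_ID
        secretAccessKey:
          name: aws-creds
          key: ACCESS_SECRET_KEY
    # Keep backups needed to cover a 30-day recovery window
    retentionPolicy: "30d"
```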

versioned_docs/version-1.27/appendixes/backup_volumesnapshot.md

Lines changed: 6 additions & 6 deletions
@@ -1,12 +1,12 @@
 ---
 id: backup_volumesnapshot
-title: backup_volumesnapshot
+title: Appendix A - Backup on volume snapshots
 ---
 
 # Appendix A - Backup on volume snapshots
 <!-- SPDX-License-Identifier: CC-BY-4.0 -->
 
-:::important
+:::info[Important]
 Please refer to the official Kubernetes documentation for a list of all
 the supported [Container Storage Interface (CSI) drivers](https://kubernetes-csi.github.io/docs/drivers.html)
 that provide snapshotting capabilities.
@@ -54,7 +54,7 @@ that is responsible to ensure that snapshots can be taken from persistent
 volumes of a given storage class, and managed as `VolumeSnapshot` and
 `VolumeSnapshotContent` resources.
 
-:::important
+:::info[Important]
 It is your responsibility to verify with the third party vendor
 that volume snapshots are supported. CloudNativePG only interacts
 with the Kubernetes API on this matter, and we cannot support issues
@@ -67,7 +67,7 @@ CloudNativePG allows you to configure a given Postgres cluster for Volume
 Snapshot backups through the `backup.volumeSnapshot` stanza.
 
 :::info
-Please refer to [`VolumeSnapshotConfiguration`](../cloudnative-pg.v1.md#postgresql-cnpg-io-v1-VolumeSnapshotConfiguration)
+Please refer to [`VolumeSnapshotConfiguration`](../cloudnative-pg.v1.md#volumesnapshotconfiguration)
 in the API reference for a full list of options.
 :::
 
@@ -142,7 +142,7 @@ By default, CloudNativePG requests an online/hot backup on volume snapshots, usi
 - it waits for the WAL archiver to archive the last segment of the backup when
 terminating the backup procedure
 
-:::important
+:::info[Important]
 The default values are suitable for most production environments. Hot
 backups are consistent and can be used to perform snapshot recovery, as we
 ensure WAL retention from the start of the backup through a temporary
@@ -358,7 +358,7 @@ The following example shows how to configure volume snapshot base backups on an
 EKS cluster on AWS using the `ebs-sc` storage class and the `csi-aws-vsc`
 volume snapshot class.
 
-:::important
+:::info[Important]
 If you are interested in testing the example, please read
 ["Volume Snapshots" for the Amazon Elastic Block Store (EBS) CSI driver](https://github.com/kubernetes-sigs/aws-ebs-csi-driver/tree/master/examples/kubernetes/snapshot) <!-- wokeignore:rule=master -->
 for detailed instructions on the installation process for the storage class and the snapshot class.
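As a reminder of what the `backup.volumeSnapshot` stanza referenced in these hunks looks like, here is a minimal sketch for the EKS scenario mentioned in the last hunk (`ebs-sc` storage class, `csi-aws-vsc` volume snapshot class); the cluster name and sizes are illustrative, not copied from the changed file.

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cluster-eks-example
spec:
  instances: 3
  storage:
    # Storage class backed by the EBS CSI driver
    storageClass: ebs-sc
    size: 10Gi
  backup:
    volumeSnapshot:
      # VolumeSnapshotClass provided by the EBS CSI driver
      className: csi-aws-vsc
```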

versioned_docs/version-1.27/appendixes/object_stores.md

Lines changed: 3 additions & 3 deletions
@@ -1,6 +1,6 @@
 ---
 id: object_stores
-title: object_stores
+title: Appendix C - Common object stores for backups
 ---
 
 # Appendix C - Common object stores for backups
@@ -353,8 +353,8 @@ spec:
 
 Now the operator will use the credentials to authenticate against Google Cloud Storage.
 
-:::important
+:::info[Important]
 This way of authentication will create a JSON file inside the container with all the needed
 information to access your Google Cloud Storage bucket, meaning that if someone gets access to the pod
 will also have write permissions to the bucket.
-:::
+:::
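The Google Cloud Storage hunk above refers to the credentials-file approach, where a JSON key is projected into the container. A minimal sketch of that configuration follows, assuming the `googleCredentials.applicationCredentials` secret reference described in this appendix; the bucket path and the `backup-creds` Secret are hypothetical.

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cluster-gcs-example
spec:
  instances: 3
  backup:
    barmanObjectStore:
      destinationPath: gs://my-existing-bucket/cluster-gcs-example
      googleCredentials:
        applicationCredentials:
          name: backup-creds     # hypothetical Secret holding the JSON key file
          key: gcsCredentials
```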

versioned_docs/version-1.27/applications.md

Lines changed: 7 additions & 7 deletions
@@ -1,10 +1,10 @@
 ---
 id: applications
-sidebar_position: 35
-title: Connecting from an Application
+sidebar_position: 340
+title: Connecting from an application
 ---
 
-# Connecting from an Application
+# Connecting from an application
 <!-- SPDX-License-Identifier: CC-BY-4.0 -->
 
 Applications are supposed to work with the services created by CloudNativePG
@@ -13,7 +13,7 @@ in the same Kubernetes cluster.
 For more information on services and how to manage them, please refer to the
 ["Service management"](service_management.md) section.
 
-:::info
+:::tip[Hint]
 It is highly recommended using those services in your applications,
 and avoiding connecting directly to a specific PostgreSQL instance, as the latter
 can change during the cluster lifetime.
@@ -27,7 +27,7 @@ You can use these services in your applications through:
 For the credentials to connect to PostgreSQL, you can
 use the secrets generated by the operator.
 
-:::tip Connection Pooling
+:::note[Connection Pooling]
 Please refer to the ["Connection Pooling" section](connection_pooling.md) for
 information about how to take advantage of PgBouncer as a connection pooler,
 and create an access layer between your applications and the PostgreSQL clusters.
@@ -95,6 +95,6 @@ database.
 The `-superuser` ones are supposed to be used only for administrative purposes,
 and correspond to the `postgres` user.
 
-:::important
+:::info[Important]
 Superuser access over the network is disabled by default.
-:::
+:::
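The hunks above describe connecting through the operator-created services and the operator-generated secrets. A minimal sketch of a stateless application consuming them follows, assuming a cluster named `cluster-example` (so the application secret is `cluster-example-app` and the read-write service is `cluster-example-rw`); the Deployment and image names are hypothetical.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:latest            # hypothetical application image
          env:
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: cluster-example-app   # secret generated by the operator
                  key: uri                    # connection string pointing to the -rw service
```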

versioned_docs/version-1.27/architecture.md

Lines changed: 23 additions & 23 deletions
@@ -1,13 +1,13 @@
 ---
 id: architecture
-sidebar_position: 5
+sidebar_position: 40
 title: Architecture
 ---
 
 # Architecture
 <!-- SPDX-License-Identifier: CC-BY-4.0 -->
 
-:::tip
+:::tip[Hint]
 For a deeper understanding, we recommend reading our article on the CNCF
 blog post titled ["Recommended Architectures for PostgreSQL in Kubernetes"](https://www.cncf.io/blog/2023/09/29/recommended-architectures-for-postgresql-in-kubernetes/),
 which provides valuable insights into best practices and design
@@ -57,7 +57,7 @@ used as a fallback option, for example, to store WAL files in an object store).
 Replicas are usually called *standby servers* and can also be used for
 read-only workloads, thanks to the *Hot Standby* feature.
 
-:::important
+:::info[Important]
 **We recommend against storage-level replication with PostgreSQL**, although
 CloudNativePG allows you to adopt that strategy. For more information, please refer
 to the talk given by Chris Milsted and Gabriele Bartolini at KubeCon NA 2022 entitled
@@ -91,7 +91,7 @@ The multi-availability zone Kubernetes architecture with three (3) or more
 zones is the one that we recommend for PostgreSQL usage.
 This scenario is typical of Kubernetes services managed by Cloud Providers.
 
-![Kubernetes cluster spanning over 3 independent data centers](/img/k8s-architecture-3-az.png)
+![Kubernetes cluster spanning over 3 independent data centers](./images/k8s-architecture-3-az.png)
 
 Such an architecture enables the CloudNativePG operator to control the full
 lifecycle of a `Cluster` resource across the zones within a single Kubernetes
@@ -113,15 +113,15 @@ to deploy distributed PostgreSQL topologies hosting "passive"
 managing them via declarative configuration. This setup is ideal for disaster
 recovery (DR), read-only operations, or cross-region availability.
 
-:::important
+:::info[Important]
 Each operator deployment can only manage operations within its local
 Kubernetes cluster. For operations across Kubernetes clusters, such as
 controlled switchover or unexpected failover, coordination must be handled
 manually (through GitOps, for example) or by using a higher-level cluster
 management tool.
 :::
 
-![Example of a multiple Kubernetes cluster architecture distributed over 3 regions each with 3 independent data centers](/img/k8s-architecture-multi.png)
+![Example of a multiple Kubernetes cluster architecture distributed over 3 regions each with 3 independent data centers](./images/k8s-architecture-multi.png)
 
 ### Single availability zone Kubernetes clusters
 
@@ -143,9 +143,9 @@ Kubernetes clusters in an active/passive configuration, with the second cluster
 primarily used for Disaster Recovery (see
 the [replica cluster feature](replica_cluster.md)).
 
-![Example of a Kubernetes architecture with only 2 data centers](/img/k8s-architecture-2-az.png)
+![Example of a Kubernetes architecture with only 2 data centers](./images/k8s-architecture-2-az.png)
 
-:::tip
+:::tip[Hint]
 If you are at an early stage of your Kubernetes journey, please share this
 document with your infrastructure team. The two data centers setup might
 be simply the result of a "lift-and-shift" transition to Kubernetes
@@ -179,7 +179,7 @@ is now fully declarative, automated failover across Kubernetes clusters is not
 within CloudNativePG's scope, as the operator can only function within a single
 Kubernetes cluster.
 
-:::important
+:::info[Important]
 CloudNativePG provides all the necessary primitives and probes to
 coordinate PostgreSQL active/passive topologies across different Kubernetes
 clusters through a higher-level operator or management tool.
@@ -195,7 +195,7 @@ PostgreSQL workloads is referred to as a **Postgres node** or `postgres` node.
 This approach ensures optimal performance and resource allocation for your
 database operations.
 
-:::hint
+:::tip[Hint]
 As a general rule of thumb, deploy Postgres nodes in multiples of
 three—ideally with one node per availability zone. Three nodes is
 an optimal number because it ensures that a PostgreSQL cluster with three
@@ -209,7 +209,7 @@ labels ensure that a node is capable of running `postgres` workloads, while
 taints help prevent any non-`postgres` workloads from being scheduled on that
 node.
 
-:::important
+:::info[Important]
 This methodology is the most straightforward way to ensure that PostgreSQL
 workloads are isolated from other workloads in terms of both computing
 resources and, when using locally attached disks, storage. While different
@@ -289,7 +289,7 @@ Kubernetes cluster, with the following specifications:
 * PostgreSQL instances should reside in different availability zones
 within the same Kubernetes cluster / region
 
-:::important
+:::info[Important]
 You can configure the above services through the `managed.services` section
 in the `Cluster` configuration. This can be done by reducing the number of
 services and selecting the type (default is `ClusterIP`). For more details,
@@ -302,26 +302,26 @@ architecture for a PostgreSQL cluster spanning across 3 different availability
 zones, running on separate nodes, each with dedicated local storage for
 PostgreSQL data.
 
-![Bird-eye view of the recommended shared nothing architecture for PostgreSQL in Kubernetes](/img/k8s-pg-architecture.png)
+![Bird-eye view of the recommended shared nothing architecture for PostgreSQL in Kubernetes](./images/k8s-pg-architecture.png)
 
 CloudNativePG automatically takes care of updating the above services if
 the topology of the cluster changes. For example, in case of failover, it
 automatically updates the `-rw` service to point to the promoted primary,
 making sure that traffic from the applications is seamlessly redirected.
 
-:::info Replication
+:::note[Replication]
 Please refer to the ["Replication" section](replication.md) for more
 information about how CloudNativePG relies on PostgreSQL replication,
 including synchronous settings.
 :::
 
-:::info Connecting from an application
+:::note[Connecting from an application]
 Please refer to the ["Connecting from an application" section](applications.md) for
 information about how to connect to CloudNativePG from a stateless
 application within the same Kubernetes cluster.
 :::
 
-:::info Connection Pooling
+:::note[Connection Pooling]
 Please refer to the ["Connection Pooling" section](connection_pooling.md) for
 information about how to take advantage of PgBouncer as a connection pooler,
 and create an access layer between your applications and the PostgreSQL clusters.
@@ -333,7 +333,7 @@ Applications can decide to connect to the PostgreSQL instance elected as
 *current primary* by the Kubernetes operator, as depicted in the following
 diagram:
 
-![Applications writing to the single primary](/img/architecture-rw.png)
+![Applications writing to the single primary](./images/architecture-rw.png)
 
 Applications can use the `-rw` suffix service.
 
@@ -343,7 +343,7 @@ service to another instance of the cluster.
 
 ### Read-only workloads
 
-:::important
+:::info[Important]
 Applications must be aware of the limitations that
 [Hot Standby](https://www.postgresql.org/docs/current/hot-standby.html)
 presents and familiar with the way PostgreSQL operates when dealing with
@@ -356,7 +356,7 @@ primary node.
 
 The following diagram shows the architecture:
 
-![Applications reading from hot standby replicas in round robin](/img/architecture-read-only.png)
+![Applications reading from hot standby replicas in round robin](./images/architecture-read-only.png)
 
 Applications can also access any PostgreSQL instance through the
 `-r` service.
@@ -400,7 +400,7 @@ cluster and the replica cluster is in the second. The second Kubernetes cluster
 acts as the company's disaster recovery cluster, ready to be activated in case
 of disaster and unavailability of the first one.
 
-![An example of multi-cluster deployment with a primary and a replica cluster](/img/multi-cluster.png)
+![An example of multi-cluster deployment with a primary and a replica cluster](./images/multi-cluster.png)
 
 A replica cluster can have the same architecture as the primary cluster.
 Instead of a primary instance, a replica cluster has a **designated primary**
@@ -427,7 +427,7 @@ This is typically triggered by:
 cluster-aware authority.
 :::
 
-:::important
+:::info[Important]
 CloudNativePG allows you to control the distributed topology via
 declarative configuration, enabling you to automate these procedures as part of
 your Infrastructure as Code (IaC) process, including GitOps.
@@ -442,10 +442,10 @@ CloudNativePG allows you to define topologies with multiple replica clusters.
 You can also define replica clusters with a lower number of replicas, and then
 increase this number when the cluster is promoted to primary.
 
-:::info Replica clusters
+:::note[Replica clusters]
 Please refer to the ["Replica Clusters" section](replica_cluster.md) for
 more detailed information on how physical replica clusters operate and how to
 define a distributed topology with read-only clusters across different
 Kubernetes clusters. This approach can significantly enhance your global
 disaster recovery and high availability (HA) strategy.
-:::
+:::
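One of the hunks above points to the `managed.services` section for reducing the default services and selecting their type. The following is a minimal sketch of that stanza, assuming the `disabledDefaultServices` and `additional` fields of the current API; the extra service name and the `LoadBalancer` type are illustrative only.

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cluster-example
spec:
  instances: 3
  storage:
    size: 10Gi
  managed:
    services:
      # Keep only the read-write service by disabling the default -ro and -r ones
      disabledDefaultServices: ["ro", "r"]
      # Expose an additional read-write service with a non-default type
      additional:
        - selectorType: rw
          serviceTemplate:
            metadata:
              name: cluster-example-rw-lb   # illustrative name
            spec:
              type: LoadBalancer
```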
