
Commit 21444af

chore: update documentation for 1.25 - fixing admonitions
Signed-off-by: Leonardo Cecchi <[email protected]>
1 parent 7e54c9a commit 21444af

File tree: 113 files changed (+3779 / -5304 lines)


versioned_docs/version-1.25/appendixes/object_stores.md

Lines changed: 128 additions & 12 deletions
@@ -1,20 +1,11 @@
 ---
 id: object_stores
-title: object_stores
+title: Appendix A - Common object stores for backups
 ---
 
-# Appendix C - Common object stores for backups
+# Appendix A - Common object stores for backups
 <!-- SPDX-License-Identifier: CC-BY-4.0 -->
 
-:::warning
-As of CloudNativePG 1.26, **native Barman Cloud support is deprecated** in
-favor of the **Barman Cloud Plugin**. While the native integration remains
-functional for now, we strongly recommend beginning a gradual migration to
-the plugin-based interface after appropriate testing. The Barman Cloud
-Plugin documentation describes
-[how to use common object stores](https://cloudnative-pg.io/plugin-barman-cloud/docs/object_stores/).
-:::
-
 You can store the [backup](../backup.md) files in any service that is supported
 by the Barman Cloud infrastructure. That is:
 
@@ -353,8 +344,133 @@ spec:
 
 Now the operator will use the credentials to authenticate against Google Cloud Storage.
 
-:::important
+:::info[Important]
 This way of authentication will create a JSON file inside the container with all the needed
 information to access your Google Cloud Storage bucket, meaning that if someone gets access to the pod
 will also have write permissions to the bucket.
 :::
+
+## MinIO Gateway
+
+Optionally, you can use MinIO Gateway as a common interface which
+relays backup objects to other cloud storage solutions, like S3 or GCS.
+For more information, please refer to [MinIO official documentation](https://docs.min.io/).
+
+Specifically, the CloudNativePG cluster can directly point to a local
+MinIO Gateway as an endpoint, using previously created credentials and service.
+
+MinIO secrets will be used by both the PostgreSQL cluster and the MinIO instance.
+Therefore, you must create them in the same namespace:
+
+```sh
+kubectl create secret generic minio-creds \
+  --from-literal=MINIO_ACCESS_KEY=<minio access key here> \
+  --from-literal=MINIO_SECRET_KEY=<minio secret key here>
+```
+
+:::note
+Cloud Object Storage credentials will be used only by MinIO Gateway in this case.
+:::
+
+:::info[Important]
+In order to allow PostgreSQL to reach MinIO Gateway, it is necessary to create a
+`ClusterIP` service on port `9000` bound to the MinIO Gateway instance.
+:::
+
+For example:
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+  name: minio-gateway-service
+spec:
+  type: ClusterIP
+  ports:
+  - port: 9000
+    targetPort: 9000
+    protocol: TCP
+  selector:
+    app: minio
+```
+
+:::warning
+At the time of writing this documentation, the official
+[MinIO Operator](https://github.com/minio/minio-operator/issues/71)
+for Kubernetes does not support the gateway feature. As such, we will use a
+`deployment` instead.
+:::
+
+The MinIO deployment will use cloud storage credentials to upload objects to the
+remote bucket and relay backup files to different locations.
+
+Here is an example using AWS S3 as Cloud Object Storage:
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+[...]
+spec:
+  containers:
+  - name: minio
+    image: minio/minio:RELEASE.2020-06-03T22-13-49Z
+    args:
+    - gateway
+    - s3
+    env:
+    # MinIO access key and secret key
+    - name: MINIO_ACCESS_KEY
+      valueFrom:
+        secretKeyRef:
+          name: minio-creds
+          key: MINIO_ACCESS_KEY
+    - name: MINIO_SECRET_KEY
+      valueFrom:
+        secretKeyRef:
+          name: minio-creds
+          key: MINIO_SECRET_KEY
+    # AWS credentials
+    - name: AWS_ACCESS_KEY_ID
+      valueFrom:
+        secretKeyRef:
+          name: aws-creds
+          key: ACCESS_KEY_ID
+    - name: AWS_SECRET_ACCESS_KEY
+      valueFrom:
+        secretKeyRef:
+          name: aws-creds
+          key: ACCESS_SECRET_KEY
+    # Uncomment the below section if session token is required
+    # - name: AWS_SESSION_TOKEN
+    #   valueFrom:
+    #     secretKeyRef:
+    #       name: aws-creds
+    #       key: ACCESS_SESSION_TOKEN
+    ports:
+    - containerPort: 9000
+```
+
+Proceed by configuring MinIO Gateway service as the `endpointURL` in the `Cluster`
+definition, then choose a bucket name to replace `BUCKET_NAME`:
+
+```yaml
+apiVersion: postgresql.cnpg.io/v1
+kind: Cluster
+[...]
+spec:
+  backup:
+    barmanObjectStore:
+      destinationPath: s3://BUCKET_NAME/
+      endpointURL: http://minio-gateway-service:9000
+      s3Credentials:
+        accessKeyId:
+          name: minio-creds
+          key: MINIO_ACCESS_KEY
+        secretAccessKey:
+          name: minio-creds
+          key: MINIO_SECRET_KEY
+[...]
+```
+
+Verify on `s3://BUCKET_NAME/` the presence of archived WAL files before
+proceeding with a backup.
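
One way to perform that final check is sketched below. It is an illustration only, with assumed names: the AWS CLI is available from a host that can reach the gateway service, the `Cluster` (and therefore the Barman Cloud server name) is `pg-cluster`, and WAL segments follow the usual Barman Cloud layout under `<destinationPath>/<serverName>/wals/`.

```sh
# Illustrative sketch: list archived WAL files through the MinIO Gateway.
# Assumes the AWS CLI, a cluster/server named "pg-cluster", and the
# <destinationPath>/<serverName>/wals/ layout used by Barman Cloud.
AWS_ACCESS_KEY_ID='<minio access key here>' \
AWS_SECRET_ACCESS_KEY='<minio secret key here>' \
aws s3 ls --recursive \
  --endpoint-url http://minio-gateway-service:9000 \
  s3://BUCKET_NAME/pg-cluster/wals/
```

An empty listing means WAL archiving has not produced any files yet, so that should be addressed before requesting a backup.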

versioned_docs/version-1.25/applications.md

Lines changed: 7 additions & 7 deletions
@@ -1,10 +1,10 @@
 ---
 id: applications
-sidebar_position: 35
-title: Connecting from an Application
+sidebar_position: 340
+title: Connecting from an application
 ---
 
-# Connecting from an Application
+# Connecting from an application
 <!-- SPDX-License-Identifier: CC-BY-4.0 -->
 
 Applications are supposed to work with the services created by CloudNativePG
@@ -13,7 +13,7 @@ in the same Kubernetes cluster.
 For more information on services and how to manage them, please refer to the
 ["Service management"](service_management.md) section.
 
-:::info
+:::tip[Hint]
 It is highly recommended using those services in your applications,
 and avoiding connecting directly to a specific PostgreSQL instance, as the latter
 can change during the cluster lifetime.
@@ -27,7 +27,7 @@ You can use these services in your applications through:
 For the credentials to connect to PostgreSQL, you can
 use the secrets generated by the operator.
 
-:::tip Connection Pooling
+:::note[Connection Pooling]
 Please refer to the ["Connection Pooling" section](connection_pooling.md) for
 information about how to take advantage of PgBouncer as a connection pooler,
 and create an access layer between your applications and the PostgreSQL clusters.
@@ -95,6 +95,6 @@ database.
 The `-superuser` ones are supposed to be used only for administrative purposes,
 and correspond to the `postgres` user.
 
-:::important
+:::info[Important]
 Superuser access over the network is disabled by default.
-:::
+:::
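
To make the combination of services and operator-generated secrets concrete, the sketch below wires them into a Deployment. The names are assumptions for illustration: a `Cluster` called `cluster-example`, its read-write service `cluster-example-rw`, and its application secret `cluster-example-app` with `username` and `password` keys; check the actual secret contents with `kubectl get secret cluster-example-app -o yaml` before relying on specific keys.

```yaml
# Illustrative only: an application consuming the CloudNativePG service and secret.
# Cluster name, database name, secret name, and secret keys are assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-app:latest
        env:
        # The -rw service always points to the current primary
        - name: PGHOST
          value: cluster-example-rw
        - name: PGDATABASE
          value: app
        - name: PGUSER
          valueFrom:
            secretKeyRef:
              name: cluster-example-app
              key: username
        - name: PGPASSWORD
          valueFrom:
            secretKeyRef:
              name: cluster-example-app
              key: password
```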

versioned_docs/version-1.25/architecture.md

Lines changed: 26 additions & 27 deletions
@@ -1,13 +1,13 @@
 ---
 id: architecture
-sidebar_position: 5
+sidebar_position: 40
 title: Architecture
 ---
 
 # Architecture
 <!-- SPDX-License-Identifier: CC-BY-4.0 -->
 
-:::tip
+:::tip[Hint]
 For a deeper understanding, we recommend reading our article on the CNCF
 blog post titled ["Recommended Architectures for PostgreSQL in Kubernetes"](https://www.cncf.io/blog/2023/09/29/recommended-architectures-for-postgresql-in-kubernetes/),
 which provides valuable insights into best practices and design
@@ -57,7 +57,7 @@ used as a fallback option, for example, to store WAL files in an object store).
 Replicas are usually called *standby servers* and can also be used for
 read-only workloads, thanks to the *Hot Standby* feature.
 
-:::important
+:::info[Important]
 **We recommend against storage-level replication with PostgreSQL**, although
 CloudNativePG allows you to adopt that strategy. For more information, please refer
 to the talk given by Chris Milsted and Gabriele Bartolini at KubeCon NA 2022 entitled
@@ -91,7 +91,7 @@ The multi-availability zone Kubernetes architecture with three (3) or more
 zones is the one that we recommend for PostgreSQL usage.
 This scenario is typical of Kubernetes services managed by Cloud Providers.
 
-![Kubernetes cluster spanning over 3 independent data centers](/img/k8s-architecture-3-az.png)
+![Kubernetes cluster spanning over 3 independent data centers](./images/k8s-architecture-3-az.png)
 
 Such an architecture enables the CloudNativePG operator to control the full
 lifecycle of a `Cluster` resource across the zones within a single Kubernetes
@@ -113,15 +113,15 @@ to deploy distributed PostgreSQL topologies hosting "passive"
 managing them via declarative configuration. This setup is ideal for disaster
 recovery (DR), read-only operations, or cross-region availability.
 
-:::important
+:::info[Important]
 Each operator deployment can only manage operations within its local
 Kubernetes cluster. For operations across Kubernetes clusters, such as
 controlled switchover or unexpected failover, coordination must be handled
 manually (through GitOps, for example) or by using a higher-level cluster
 management tool.
 :::
 
-![Example of a multiple Kubernetes cluster architecture distributed over 3 regions each with 3 independent data centers](/img/k8s-architecture-multi.png)
+![Example of a multiple Kubernetes cluster architecture distributed over 3 regions each with 3 independent data centers](./images/k8s-architecture-multi.png)
 
 ### Single availability zone Kubernetes clusters
 
@@ -143,9 +143,9 @@ Kubernetes clusters in an active/passive configuration, with the second cluster
 primarily used for Disaster Recovery (see
 the [replica cluster feature](replica_cluster.md)).
 
-![Example of a Kubernetes architecture with only 2 data centers](/img/k8s-architecture-2-az.png)
+![Example of a Kubernetes architecture with only 2 data centers](./images/k8s-architecture-2-az.png)
 
-:::tip
+:::tip[Hint]
 If you are at an early stage of your Kubernetes journey, please share this
 document with your infrastructure team. The two data centers setup might
 be simply the result of a "lift-and-shift" transition to Kubernetes
@@ -179,7 +179,7 @@ is now fully declarative, automated failover across Kubernetes clusters is not
 within CloudNativePG's scope, as the operator can only function within a single
 Kubernetes cluster.
 
-:::important
+:::info[Important]
 CloudNativePG provides all the necessary primitives and probes to
 coordinate PostgreSQL active/passive topologies across different Kubernetes
 clusters through a higher-level operator or management tool.
@@ -195,7 +195,7 @@ PostgreSQL workloads is referred to as a **Postgres node** or `postgres` node.
 This approach ensures optimal performance and resource allocation for your
 database operations.
 
-:::hint
+:::tip[Hint]
 As a general rule of thumb, deploy Postgres nodes in multiples of
 three—ideally with one node per availability zone. Three nodes is
 an optimal number because it ensures that a PostgreSQL cluster with three
@@ -209,7 +209,7 @@ labels ensure that a node is capable of running `postgres` workloads, while
 taints help prevent any non-`postgres` workloads from being scheduled on that
 node.
 
-:::important
+:::info[Important]
 This methodology is the most straightforward way to ensure that PostgreSQL
 workloads are isolated from other workloads in terms of both computing
 resources and, when using locally attached disks, storage. While different
@@ -289,7 +289,7 @@ Kubernetes cluster, with the following specifications:
 * PostgreSQL instances should reside in different availability zones
   within the same Kubernetes cluster / region
 
-:::important
+:::info[Important]
 You can configure the above services through the `managed.services` section
 in the `Cluster` configuration. This can be done by reducing the number of
 services and selecting the type (default is `ClusterIP`). For more details,
@@ -302,26 +302,26 @@ architecture for a PostgreSQL cluster spanning across 3 different availability
 zones, running on separate nodes, each with dedicated local storage for
 PostgreSQL data.
 
-![Bird-eye view of the recommended shared nothing architecture for PostgreSQL in Kubernetes](/img/k8s-pg-architecture.png)
+![Bird-eye view of the recommended shared nothing architecture for PostgreSQL in Kubernetes](./images/k8s-pg-architecture.png)
 
 CloudNativePG automatically takes care of updating the above services if
 the topology of the cluster changes. For example, in case of failover, it
 automatically updates the `-rw` service to point to the promoted primary,
 making sure that traffic from the applications is seamlessly redirected.
 
-:::info Replication
+:::note[Replication]
 Please refer to the ["Replication" section](replication.md) for more
 information about how CloudNativePG relies on PostgreSQL replication,
 including synchronous settings.
 :::
 
-:::info Connecting from an application
+:::note[Connecting from an application]
 Please refer to the ["Connecting from an application" section](applications.md) for
 information about how to connect to CloudNativePG from a stateless
 application within the same Kubernetes cluster.
 :::
 
-:::info Connection Pooling
+:::note[Connection Pooling]
 Please refer to the ["Connection Pooling" section](connection_pooling.md) for
 information about how to take advantage of PgBouncer as a connection pooler,
 and create an access layer between your applications and the PostgreSQL clusters.
@@ -333,7 +333,7 @@ Applications can decide to connect to the PostgreSQL instance elected as
 *current primary* by the Kubernetes operator, as depicted in the following
 diagram:
 
-![Applications writing to the single primary](/img/architecture-rw.png)
+![Applications writing to the single primary](./images/architecture-rw.png)
 
 Applications can use the `-rw` suffix service.
 
@@ -343,7 +343,7 @@ service to another instance of the cluster.
 
 ### Read-only workloads
 
-:::important
+:::info[Important]
 Applications must be aware of the limitations that
 [Hot Standby](https://www.postgresql.org/docs/current/hot-standby.html)
 presents and familiar with the way PostgreSQL operates when dealing with
@@ -356,7 +356,7 @@ primary node.
 
 The following diagram shows the architecture:
 
-![Applications reading from hot standby replicas in round robin](/img/architecture-read-only.png)
+![Applications reading from hot standby replicas in round robin](./images/architecture-read-only.png)
 
 Applications can also access any PostgreSQL instance through the
 `-r` service.
@@ -400,7 +400,7 @@ cluster and the replica cluster is in the second. The second Kubernetes cluster
 acts as the company's disaster recovery cluster, ready to be activated in case
 of disaster and unavailability of the first one.
 
-![An example of multi-cluster deployment with a primary and a replica cluster](/img/multi-cluster.png)
+![An example of multi-cluster deployment with a primary and a replica cluster](./images/multi-cluster.png)
 
 A replica cluster can have the same architecture as the primary cluster.
 Instead of a primary instance, a replica cluster has a **designated primary**
@@ -427,25 +427,24 @@ This is typically triggered by:
 cluster-aware authority.
 :::
 
-:::important
+:::info[Important]
 CloudNativePG allows you to control the distributed topology via
 declarative configuration, enabling you to automate these procedures as part of
 your Infrastructure as Code (IaC) process, including GitOps.
 :::
 
-In the example above, the designated primary receives WAL updates via streaming
-replication (`primary_conninfo`). As a fallback, it can retrieve WAL segments
-from an object store using file-based WAL shipping—for instance, with the
-Barman Cloud plugin through `restore_command` and `barman-cloud-wal-restore`.
+The designated primary in the above example is fed via WAL streaming
+(`primary_conninfo`), with fallback option for file-based WAL shipping through
+the `restore_command` and `barman-cloud-wal-restore`.
 
 CloudNativePG allows you to define topologies with multiple replica clusters.
 You can also define replica clusters with a lower number of replicas, and then
 increase this number when the cluster is promoted to primary.
 
-:::info Replica clusters
+:::note[Replica clusters]
 Please refer to the ["Replica Clusters" section](replica_cluster.md) for
 more detailed information on how physical replica clusters operate and how to
 define a distributed topology with read-only clusters across different
 Kubernetes clusters. This approach can significantly enhance your global
 disaster recovery and high availability (HA) strategy.
-:::
+:::
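
The sections above on dedicated `postgres` nodes describe pairing node labels and taints with matching scheduling constraints on the `Cluster`. The following is a sketch under stated assumptions: the label and taint names are invented for the example, and the `affinity.nodeSelector` and `affinity.tolerations` fields should be confirmed against the CloudNativePG API reference for the version in use.

```sh
# Hypothetical node preparation: reserve worker-1 for PostgreSQL workloads.
# The label/taint key is an arbitrary example, not a CloudNativePG requirement.
kubectl label node worker-1 node-role.kubernetes.io/postgres=
kubectl taint node worker-1 node-role.kubernetes.io/postgres=:NoSchedule
```

```yaml
# Sketch of a Cluster pinned to the labeled nodes and tolerating the taint.
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cluster-example
spec:
  instances: 3
  storage:
    size: 10Gi
  affinity:
    nodeSelector:
      node-role.kubernetes.io/postgres: ""
    tolerations:
    - key: node-role.kubernetes.io/postgres
      operator: Exists
      effect: NoSchedule
```

With one such node per availability zone, the three instances spread across zones, which matches the shared-nothing layout recommended in this file.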
