
Commit 0100114

docs: updated 1.23, 1.24 and 1.25 (#250)
Closes #246
Signed-off-by: Gabriele Bartolini <[email protected]>
1 parent 4b88ec1 commit 0100114

378 files changed, +68203 -1660 lines changed


assets/documentation/1.23/appendixes/object_stores/index.html

Lines changed: 27 additions & 9 deletions
@@ -269,6 +269,8 @@
 <li class="toctree-l3"><a class="reference internal" href="#s3-lifecycle-policy">S3 lifecycle policy</a>
 </li>
 <li class="toctree-l3"><a class="reference internal" href="#other-s3-compatible-object-storages-providers">Other S3-compatible Object Storages providers</a>
+</li>
+<li class="toctree-l3"><a class="reference internal" href="#using-object-storage-with-a-private-ca">Using Object Storage with a private CA</a>
 </li>
 </ul>
 </li>
@@ -435,12 +437,28 @@ <h3 id="other-s3-compatible-object-storages-providers">Other S3-compatible Objec
 s3Credentials:
 [...]
 </code></pre>
-<div class="admonition important">
-<p class="admonition-title">Important</p>
-<p>Suppose you configure an Object Storage provider which uses a certificate signed with a private CA,
-like when using MinIO via HTTPS. In that case, you need to set the option <code>endpointCA</code>
-referring to a secret containing the CA bundle so that Barman can verify the certificate correctly.</p>
-</div>
+<h3 id="using-object-storage-with-a-private-ca">Using Object Storage with a private CA</h3>
+<p>Suppose you configure an Object Storage provider which uses a certificate
+signed with a private CA, for example when using MinIO via HTTPS. In that case,
+you need to set the option <code>endpointCA</code> inside <code>barmanObjectStore</code> referring
+to a secret containing the CA bundle, so that Barman can verify the certificate
+correctly.
+You can find instructions on creating a secret using your cert files in the
+<a href="../../certificates/#example">certificates</a> document.
+Once you have created the secret, you can populate the <code>endpointCA</code> as in the
+following example:</p>
+<pre><code class="language-yaml">apiVersion: postgresql.cnpg.io/v1
+kind: Cluster
+[...]
+spec:
+[...]
+backup:
+barmanObjectStore:
+endpointURL: &lt;myEndpointURL&gt;
+endpointCA:
+name: my-ca-secret
+key: ca.crt
+</code></pre>
 <div class="admonition note">
 <p class="admonition-title">Note</p>
 <p>If you want ConfigMaps and Secrets to be <strong>automatically</strong> reloaded by instances, you can
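The new section defers to the certificates example for building that secret. As a rough sketch only (assuming the private CA bundle sits locally in ./ca.crt, and reusing the my-ca-secret name and ca.crt key from the manifest in the hunk above), the secret could be created like this:

# Hypothetical: build the CA bundle secret referenced by endpointCA.
# The local file path is illustrative; only the secret name and key need to
# match what the Cluster manifest declares under endpointCA.
kubectl create secret generic my-ca-secret \
  --from-file=ca.crt=./ca.crt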
@@ -475,7 +493,7 @@ <h2 id="azure-blob-storage">Azure Blob Storage</h2>
 <p>On the other side, using both <strong>Storage account access key</strong> or <strong>Storage account SAS Token</strong>,
 the credentials need to be stored inside a Kubernetes Secret, adding data entries only when
 needed. The following command performs that:</p>
-<pre><code>kubectl create secret generic azure-creds \
+<pre><code class="language-sh">kubectl create secret generic azure-creds \
 --from-literal=AZURE_STORAGE_ACCOUNT=&lt;storage account name&gt; \
 --from-literal=AZURE_STORAGE_KEY=&lt;storage account key&gt; \
 --from-literal=AZURE_STORAGE_SAS_TOKEN=&lt;SAS token&gt; \
@@ -508,14 +526,14 @@ <h2 id="azure-blob-storage">Azure Blob Storage</h2>
 </code></pre>
 <p>When using the Azure Blob Storage, the <code>destinationPath</code> fulfills the following
 structure:</p>
-<pre><code>&lt;http|https&gt;://&lt;account-name&gt;.&lt;service-name&gt;.core.windows.net/&lt;resource-path&gt;
+<pre><code class="language-sh">&lt;http|https&gt;://&lt;account-name&gt;.&lt;service-name&gt;.core.windows.net/&lt;resource-path&gt;
 </code></pre>
 <p>where <code>&lt;resource-path&gt;</code> is <code>&lt;container&gt;/&lt;blob&gt;</code>. The <strong>account name</strong>,
 which is also called <strong>storage account name</strong>, is included in the used host name.</p>
 <h3 id="other-azure-blob-storage-compatible-providers">Other Azure Blob Storage compatible providers</h3>
 <p>If you are using a different implementation of the Azure Blob Storage APIs,
 the <code>destinationPath</code> will have the following structure:</p>
-<pre><code>&lt;http|https&gt;://&lt;local-machine-address&gt;:&lt;port&gt;/&lt;account-name&gt;/&lt;resource-path&gt;
+<pre><code class="language-sh">&lt;http|https&gt;://&lt;local-machine-address&gt;:&lt;port&gt;/&lt;account-name&gt;/&lt;resource-path&gt;
 </code></pre>
 <p>In that case, <code>&lt;account-name&gt;</code> is the first component of the path.</p>
 <p>This is required if you are testing the Azure support via the Azure Storage
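For illustration only (the names are invented, not taken from the docs), a destinationPath following that structure could be https://myaccount.blob.core.windows.net/my-container/backups, where myaccount is the storage account name, blob the service name, and my-container/backups the resource path.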

assets/documentation/1.23/architecture/index.html

Lines changed: 6 additions & 5 deletions
@@ -517,11 +517,12 @@ <h2 id="deployments-across-kubernetes-clusters">Deployments across Kubernetes cl
 only write inside a single Kubernetes cluster, at any time.</p>
 <p>However, for business continuity objectives it is fundamental to:</p>
 <ul>
-<li>reduce global <strong>recovery point objectives</strong> (RPO) by storing PostgreSQL backup data
-in multiple locations, regions and possibly using different providers
-(Disaster Recovery)</li>
-<li>reduce global <strong>recovery time objectives</strong> (RTO) by taking advantage of PostgreSQL
-replication beyond the primary Kubernetes cluster (High Availability)</li>
+<li>reduce global <strong>recovery point objectives</strong> (<a href="../before_you_start/#rpo">RPO</a>)
+by storing PostgreSQL backup data in multiple locations, regions and possibly
+using different providers (Disaster Recovery)</li>
+<li>reduce global <strong>recovery time objectives</strong> (<a href="../before_you_start/#rto">RTO</a>)
+by taking advantage of PostgreSQL replication beyond the primary Kubernetes
+cluster (High Availability)</li>
 </ul>
 <p>In order to address the above concerns, CloudNativePG introduces the
 concept of a <em>PostgreSQL Replica Cluster</em>. Replica clusters are the

assets/documentation/1.23/backup/index.html

Lines changed: 6 additions & 5 deletions
@@ -373,7 +373,8 @@ <h2 id="wal-archive">WAL archive</h2>
 as they can simply rely on the WAL archive to synchronize across long
 distances, extending disaster recovery goals across different regions.</p>
 <p>When you <a href="../wal_archiving/">configure a WAL archive</a>, CloudNativePG provides
-out-of-the-box an RPO &lt;= 5 minutes for disaster recovery, even across regions.</p>
+out-of-the-box an <a href="../before_you_start/#rpo">RPO</a> &lt;= 5 minutes for disaster
+recovery, even across regions.</p>
 <div class="admonition important">
 <p class="admonition-title">Important</p>
 <p>Our recommendation is to always setup the WAL archive in production.
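The RPO figure above presumes a WAL archive is actually configured. As a minimal, hedged sketch (the cluster name, bucket, and credentials secret below are invented for illustration; the linked wal_archiving page remains the authoritative reference), this comes down to setting backup.barmanObjectStore on the Cluster, as the object-store docs in this same commit show:

# Hypothetical minimal WAL archive via an S3-compatible object store.
# All names (wal-archive-demo, my-bucket, aws-creds) are placeholders.
kubectl apply -f - <<EOF
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: wal-archive-demo
spec:
  instances: 3
  storage:
    size: 1Gi
  backup:
    barmanObjectStore:
      destinationPath: s3://my-bucket/
      s3Credentials:
        accessKeyId:
          name: aws-creds
          key: ACCESS_KEY_ID
        secretAccessKey:
          name: aws-creds
          key: ACCESS_SECRET_KEY
EOF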
@@ -419,9 +420,9 @@ <h2 id="object-stores-or-volume-snapshots-which-one-to-use">Object stores or vol
 <li>availability of a trusted storage class that supports volume snapshots</li>
 <li>size of the database: with object stores, the larger your database, the
 longer backup and, most importantly, recovery procedures take (the latter
-impacts RTO); in presence of Very Large Databases (VLDB), the general
-advice is to rely on Volume Snapshots as, thanks to copy-on-write, they
-provide faster recovery</li>
+impacts <a href="../before_you_start/#rto">RTO</a>); in presence of Very Large Databases
+(VLDB), the general advice is to rely on Volume Snapshots as, thanks to
+copy-on-write, they provide faster recovery</li>
 <li>data mobility and possibility to store or relay backup files on a
 secondary location in a different region, or any subsequent one</li>
 <li>other factors, mostly based on the confidence and familiarity with the
@@ -528,7 +529,7 @@ <h2 id="scheduled-backups">Scheduled backups</h2>
 are not included.</p>
 <div class="admonition hint">
 <p class="admonition-title">Hint</p>
-<p>Backup frequency might impact your recovery time object (RTO) after a
+<p>Backup frequency might impact your recovery time objective (<a href="../before_you_start/#rto">RTO</a>) after a
 disaster which requires a full or Point-In-Time recovery operation. Our
 advice is that you regularly test your backups by recovering them, and then
 measuring the time it takes to recover from scratch so that you can refine
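Backup frequency is the knob the hint refers to, typically expressed with a ScheduledBackup resource. A sketch only, assuming that resource and its six-field, seconds-first cron syntax (the names and schedule below are illustrative):

# Hypothetical daily backup at midnight for a cluster named cluster-example.
kubectl apply -f - <<EOF
apiVersion: postgresql.cnpg.io/v1
kind: ScheduledBackup
metadata:
  name: backup-cluster-example
spec:
  schedule: "0 0 0 * * *"
  cluster:
    name: cluster-example
EOF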

assets/documentation/1.23/backup_barmanobjectstore/index.html

Lines changed: 3 additions & 3 deletions
@@ -383,9 +383,9 @@ <h2 id="compression-algorithms">Compression algorithms</h2>
 <li>snappy</li>
 </ul>
 <p>The compression settings for backups and WALs are independent. See the
-<a href="../cloudnative-pg.v1/#postgresql-cnpg-io-v1-DataBackupConfiguration">DataBackupConfiguration</a> and
-<a href="../cloudnative-pg.v1/#postgresql-cnpg-io-v1-WalBackupConfiguration">WALBackupConfiguration</a> sections in
-the API reference.</p>
+<a href="https://pkg.go.dev/github.com/cloudnative-pg/barman-cloud/pkg/api#DataBackupConfiguration">DataBackupConfiguration</a> and
+<a href="https://pkg.go.dev/github.com/cloudnative-pg/barman-cloud/pkg/api#WalBackupConfiguration">WALBackupConfiguration</a> sections in
+the barman-cloud API reference.</p>
 <p>It is important to note that archival time, restore time, and size change
 between the algorithms, so the compression algorithm should be chosen according
 to your use case.</p>
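Since backup data and WAL compression are tuned independently, both settings sit under barmanObjectStore. A hedged sketch only (the cluster name and chosen algorithms are illustrative; the linked barman-cloud API reference is authoritative for the schema):

# Hypothetical: set independent compression for base backups (data) and WAL
# files (wal) on an existing cluster; a JSON merge patch leaves the other
# barmanObjectStore settings untouched.
kubectl patch clusters.postgresql.cnpg.io cluster-example --type merge \
  -p '{"spec":{"backup":{"barmanObjectStore":{
        "data":{"compression":"bzip2"},
        "wal":{"compression":"gzip"}}}}}'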

assets/documentation/1.23/before_you_start/index.html

Lines changed: 6 additions & 0 deletions
@@ -397,6 +397,12 @@ <h2 id="postgresql-terminology">PostgreSQL terminology</h2>
 <dd>A PVC group in CloudNativePG's terminology is a group of related PVCs
 belonging to the same PostgreSQL instance, namely the main volume containing
 the PGDATA (<code>storage</code>) and the volume for WALs (<code>walStorage</code>).</dd>
+<dt><a id="rto"></a>RTO</dt>
+<dd>Acronym for "recovery time objective", the amount of time a system can be
+unavailable without adversely impacting the application.</dd>
+<dt><a id="rpo"></a>RPO</dt>
+<dd>Acronym for "recovery point objective", a calculation of the level of
+acceptable data loss following a disaster recovery scenario.</dd>
 </dl>
 <h2 id="cloud-terminology">Cloud terminology</h2>
 <dl>

assets/documentation/1.23/bootstrap/index.html

Lines changed: 5 additions & 5 deletions
@@ -774,7 +774,7 @@ <h4 id="usernamepassword-authentication">Username/Password authentication</h4>
 <pre><code># A more restrictive rule for TLS and IP of origin is recommended
 host replication streaming_replica all md5
 </code></pre>
-<p>The following manifest creates a new PostgreSQL 17.0 cluster,
+<p>The following manifest creates a new PostgreSQL 17.2 cluster,
 called <code>target-db</code>, using the <code>pg_basebackup</code> bootstrap method
 to clone an external PostgreSQL cluster defined as <code>source-db</code>
 (in the <code>externalClusters</code> array). As you can see, the <code>source-db</code>
@@ -787,7 +787,7 @@ <h4 id="usernamepassword-authentication">Username/Password authentication</h4>
 name: target-db
 spec:
 instances: 3
-imageName: ghcr.io/cloudnative-pg/postgresql:17.0
+imageName: ghcr.io/cloudnative-pg/postgresql:17.2

 bootstrap:
 pg_basebackup:
@@ -806,7 +806,7 @@ <h4 id="usernamepassword-authentication">Username/Password authentication</h4>
 key: password
 </code></pre>
 <p>All the requirements must be met for the clone operation to work, including
-the same PostgreSQL version (in our case 17.0).</p>
+the same PostgreSQL version (in our case 17.2).</p>
 <h4 id="tls-certificate-authentication">TLS certificate authentication</h4>
 <p>The second authentication method supported by CloudNativePG
 with the <code>pg_basebackup</code> bootstrap is based on TLS client certificates.
@@ -818,7 +818,7 @@ <h4 id="tls-certificate-authentication">TLS certificate authentication</h4>
 <p>This example can be easily adapted to cover an instance that resides
 outside the Kubernetes cluster.</p>
 </div>
-<p>The manifest defines a new PostgreSQL 17.0 cluster called <code>cluster-clone-tls</code>,
+<p>The manifest defines a new PostgreSQL 17.2 cluster called <code>cluster-clone-tls</code>,
 which is bootstrapped using the <code>pg_basebackup</code> method from the <code>cluster-example</code>
 external cluster. The host is identified by the read/write service
 in the same cluster, while the <code>streaming_replica</code> user is authenticated
@@ -831,7 +831,7 @@ <h4 id="tls-certificate-authentication">TLS certificate authentication</h4>
 name: cluster-clone-tls
 spec:
 instances: 3
-imageName: ghcr.io/cloudnative-pg/postgresql:17.0
+imageName: ghcr.io/cloudnative-pg/postgresql:17.2

 bootstrap:
 pg_basebackup:
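On the same-version requirement updated in this file, a quick pre-flight check can save a failed clone. A hedged example only (host and user are placeholders, not values from the manifests above):

# Hypothetical pre-flight check before a pg_basebackup bootstrap: the source
# instance must run the same PostgreSQL version as the target's imageName.
psql "host=<source-db-host> port=5432 user=<user> dbname=postgres" \
  -tAc 'SHOW server_version;'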

assets/documentation/1.23/certificates/index.html

Lines changed: 2 additions & 2 deletions
@@ -431,11 +431,11 @@ <h4 id="example">Example</h4>
 <li><code>server.key</code> – The private key of the server TLS certificate.</li>
 </ul>
 <p>Create a secret containing the CA certificate:</p>
-<pre><code>kubectl create secret generic my-postgresql-server-ca \
+<pre><code class="language-sh">kubectl create secret generic my-postgresql-server-ca \
 --from-file=ca.crt=./server-ca.crt
 </code></pre>
 <p>Create a secret with the TLS certificate:</p>
-<pre><code>kubectl create secret tls my-postgresql-server \
+<pre><code class="language-sh">kubectl create secret tls my-postgresql-server \
 --cert=./server.crt --key=./server.key
 </code></pre>
 <p>Create a PostgreSQL cluster referencing those secrets:</p>
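The hunk ends right where the manifest it introduces would start. Purely as a hedged sketch (this is not the example from the docs; only the two secret names are taken from the commands above), a Cluster wiring in those secrets might look like:

# Hypothetical sketch: reference the CA and TLS secrets created above from the
# Cluster's server certificate configuration.
kubectl apply -f - <<EOF
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cluster-example
spec:
  instances: 3
  storage:
    size: 1Gi
  certificates:
    serverCASecret: my-postgresql-server-ca
    serverTLSSecret: my-postgresql-server
EOF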
