**charts/cluster/docs/Getting Started.md** (12 additions, 12 deletions)
The CNPG cluster chart follows a convention over configuration approach. This means you get a CNPG setup with sensible defaults. However, you can override these defaults to create a more customized setup. Note that you still need to configure backups and monitoring separately. The chart will not install a Prometheus stack for you.
_**Note**_ that this is an opinionated chart. It does not support all configuration options that CNPG supports. If you need a highly customized setup, you should manage your cluster via a Kubernetes CNPG cluster manifest instead of this chart. Refer to the [CNPG documentation](https://cloudnative-pg.io/documentation/current/) in that case.
## Installing the operator
To begin, make sure you install the CNPG operator in your cluster. It can be installed via a Helm chart as shown below or it can be installed via a Kubernetes manifest. For more information see the [CNPG documentation](https://cloudnative-pg.io/documentation/current/installation_upgrade/).
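A minimal sketch of the Helm-based installation, using the CloudNativePG project's published chart repository (the namespace `cnpg-system` and release name `cnpg` are conventional choices, not requirements):

```shell
# Add the CloudNativePG chart repository and install the operator.
helm repo add cnpg https://cloudnative-pg.github.io/charts
helm repo update

# Install (or upgrade) the operator into its own namespace.
helm upgrade --install cnpg cnpg/cloudnative-pg \
  --namespace cnpg-system \
  --create-namespace
```

After the release is deployed, the operator watches for CNPG `Cluster` resources such as the ones this chart renders.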
**charts/cluster/docs/Recovery.md** (2 additions, 2 deletions)
When performing a recovery you are strongly advised to use the same configuration as the original cluster.

To begin, create a `values.yaml` that contains the following:
1. Set `mode: recovery` to indicate that you want to bootstrap the new cluster from an existing one.
2. Set the `recovery.method` to the type of recovery you want to perform.
3. Set either the `recovery.backupName` or the Barman Object Store configuration - i.e. `recovery.provider` and the appropriate S3, Azure or GCS configuration. In case of `pg_basebackup`, complete the `recovery.pgBaseBackup` section.
4. Optionally set the `recovery.pitrTarget.time` in RFC3339 format to perform a point-in-time recovery (not applicable for `pgBaseBackup`).
5. Retain the identical PostgreSQL version and configuration as the original cluster.
6. Make sure you don't use the same backup section name as the original cluster. We advise you change the `path` within the storage location if you want to reuse the same storage location/bucket.
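The steps above can be sketched as a `values.yaml` for an object-store recovery from S3. The bucket, path, region and timestamp are placeholders, and the exact key names should be checked against your chart version's `values.yaml`:

```yaml
mode: recovery                 # step 1: bootstrap from an existing cluster
recovery:
  method: object_store         # step 2: type of recovery to perform
  provider: s3                 # step 3: Barman Object Store configuration
  s3:
    region: "eu-west-1"        # placeholder values
    bucket: "my-backups"
    path: "/old-cluster"
  pitrTarget:
    time: "2024-05-01T10:00:00+00:00"  # step 4 (optional): RFC3339 PITR target
```

For a backup-based recovery you would instead set `recovery.method` accordingly and provide `recovery.backupName`, per step 3.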
**charts/cluster/docs/runbooks/CNPGClusterHighPhysicalReplicationLagWarning.md** (2 additions, 2 deletions)
The `CNPGClusterHighPhysicalReplicationLagWarning` alert is triggered when physical replication lag exceeds the warning threshold.

## Impact
High physical replication lag can cause the cluster replicas to become out of sync. Queries to the `-r` and `-ro` endpoints may return stale data. In the event of a failover, data that has not yet been replicated from the primary to the replicas may be lost.
## Diagnosis
Inspect the disk IO statistics using the [CloudNativePG Grafana Dashboard](https://grafana.com/grafana/dashboards/20417-cloudnativepg/).

Inspect the `Stat Activity` section of the [CloudNativePG Grafana Dashboard](https://grafana.com/grafana/dashboards/20417-cloudnativepg/).
- Suboptimal PostgreSQL configuration, e.g. too few `max_wal_senders`. Set this to at least the number of cluster instances (default 10 is usually sufficient).
Inspect the `PostgreSQL Parameters` section of the [CloudNativePG Grafana Dashboard](https://grafana.com/grafana/dashboards/20417-cloudnativepg/).
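To complement the dashboard checks, the lag and the `max_wal_senders` setting can also be inspected directly on the primary instance. A hedged sketch, where the cluster name `my-cluster`, namespace `default` and pod `my-cluster-1` are assumptions for illustration:

```shell
# Per-standby replication lag in bytes, from the primary's
# pg_stat_replication view.
kubectl exec -n default my-cluster-1 -- psql -U postgres -Atc \
  "SELECT application_name,
          pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn)
   FROM pg_stat_replication;"

# Verify the max_wal_senders setting discussed above.
kubectl exec -n default my-cluster-1 -- psql -U postgres -Atc \
  "SHOW max_wal_senders;"
```

A lag that keeps growing across samples, rather than a momentarily high value, is the stronger signal that replicas are falling behind.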