docs/patroni_configuration.rst
PostgreSQL parameters controlled by Patroni
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Some of the PostgreSQL parameters **must hold the same values on the primary and the replicas**. For those, **values set either in the local Patroni configuration files or via environment variables take no effect**. To alter or set their values one must change the shared configuration in the DCS. Below is the actual list of such parameters together with their default and minimal values:
- **max_connections**: default value 100, minimal value 25
- **max_locks_per_transaction**: default value 64, minimal value 32
- **max_worker_processes**: default value 8, minimal value 2
- **max_prepared_transactions**: default value 0, minimal value 0
- **wal_level**: default value hot_standby, accepted values: hot_standby, replica, logical
- **track_commit_timestamp**: default value off
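Because these parameters are kept in the DCS, a change has to go through Patroni's dynamic configuration, for example by running ``patronictl edit-config`` and adjusting the relevant fragment. A minimal sketch (the value ``200`` is purely illustrative):

.. code-block:: yaml

    postgresql:
      parameters:
        max_connections: 200

Patroni then propagates the change to every member of the cluster; parameters that only take effect after a restart are typically reported as pending a restart until one happens.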

For the parameters below, PostgreSQL does not require equal values on the primary and all the replicas. However, since a replica may become the primary at any time, it does not really make sense to set them differently; therefore, **Patroni restricts setting their values to the** :ref:`dynamic configuration <dynamic_configuration>`.
- **max_wal_senders**: default value 10, minimal value 3
- **max_replication_slots**: default value 10, minimal value 4
- **wal_keep_segments**: default value 8, minimal value 1
- **wal_keep_size**: default value 128MB, minimal value 16MB
- **wal_log_hints**: on
These parameters are validated to ensure they are sane or meet a minimum value.
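The minimum-value validation can be illustrated with a small sketch. This is a hypothetical helper, not Patroni's actual code; the parameter names and minimums mirror the list above:

```python
# Hypothetical illustration of minimum-value validation; the names and
# minimums come from the documented list, not from Patroni's internals.
MINIMUMS = {'max_connections': 25, 'max_worker_processes': 2}

def validate(name: str, requested: int) -> int:
    """Clamp a requested setting to the documented minimum, if any."""
    return max(requested, MINIMUMS.get(name, requested))

validate('max_connections', 10)   # a request of 10 is raised to 25
```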
Make sure Patroni refreshes the ``etcd3`` lease at least once per HA loop.

- Recheck annotations on 409 status code when attempting to acquire leader lock (Alexander Kukushkin)

  Implement the same behavior as was done for the leader object read in Patroni version 4.0.3.

- Consider ``replay_lsn`` when advancing slots (Polina Bungina)

  Do not try to advance slots on replicas past the ``replay_lsn``. Additionally, advance the slot to the ``replay_lsn`` position if it is already past the ``confirmed_flush_lsn`` of this slot on the replica but the replica has still not replayed the actual ``LSN`` at which this slot is on the primary.
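The rule above can be sketched as a hypothetical helper, simplified from the description (LSNs are plain integers here, and the function name is illustrative, not Patroni's API):

```python
def slot_advance_target(primary_slot_lsn: int, replay_lsn: int,
                        confirmed_flush_lsn: int) -> int:
    """Position up to which a replica's copy of a slot may be advanced."""
    # Never advance past what the replica has actually replayed.
    target = min(primary_slot_lsn, replay_lsn)
    # Only move forward: if the slot is already at or past the target,
    # leave it where it is.
    return max(target, confirmed_flush_lsn)
```

For example, if the slot on the primary is at 1000 but the replica has only replayed up to 800, the slot is advanced to 800, not 1000.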

- Make sure ``CHECKPOINT`` is executed after promote (Alexander Kukushkin)

  It was possible that the checkpoint task wasn't reset on demote because ``CHECKPOINT`` hadn't yet finished, which resulted in a stale ``result`` being used when the next promote was triggered.
  In case of a slow shutdown, it might happen that the next heartbeat loop hits the DCS error handling method again, resulting in an ``AsyncExecutor is busy, demoting from the main thread`` warning and starting offline demotion again.
- Normalize the ``data_dir`` value before renaming the data directory on initialization failure (Waynerv)

  Prevent a trailing slash in the ``data_dir`` parameter value from breaking the renaming process after an initialization failure.
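The trailing-slash problem can be seen with a short sketch (the ``_failed`` suffix here is illustrative, not necessarily the exact name Patroni uses):

```python
import os.path

def rename_target(data_dir: str) -> str:
    # Without normalization, 'data/' + '_failed' would yield
    # 'data/_failed', a path *inside* the directory, instead of a
    # sibling name the directory can actually be renamed to.
    return os.path.normpath(data_dir) + '_failed'

rename_target('/var/lib/postgresql/data/')   # '/var/lib/postgresql/data_failed'
```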

- Check that ``synchronous_standby_names`` contains the expected value (Alexander Kukushkin)

  Previously, the mechanism implementing the state machine for non-quorum synchronous replication didn't check the actual value of ``synchronous_standby_names``, which resulted in a stale value of ``synchronous_standby_names`` being used when ``pg_stat_replication`` is a subset of ``synchronous_standby_names``.
docs/rest_api.rst
- ``GET /liveness``: returns HTTP status code **200** if the Patroni heartbeat loop is properly running and **503** if the last run was more than ``ttl`` seconds ago on the primary or ``2*ttl`` on the replica. Could be used for ``livenessProbe``.
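The liveness rule amounts to a simple threshold check, sketched here as a hypothetical helper (not Patroni's actual implementation):

```python
def liveness_ok(seconds_since_last_loop: float, ttl: float,
                is_primary: bool) -> bool:
    """True maps to HTTP 200, False to HTTP 503.

    The primary must have run its heartbeat loop within ttl seconds,
    a replica within 2*ttl seconds.
    """
    threshold = ttl if is_primary else 2 * ttl
    return seconds_since_last_loop <= threshold
```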

- ``GET /readiness?lag=<max-lag>&mode=apply|write``: returns HTTP status code **200** when the Patroni node is running as the leader or when PostgreSQL is up, replicating, and not too far behind the leader. The ``lag`` parameter sets how far a standby is allowed to be behind; it defaults to ``maximum_lag_on_failover``. Lag can be specified in bytes or in human-readable values, e.g. 16kB, 64MB, 1GB. The ``mode`` parameter sets whether the WAL needs to be replayed (``apply``) or just received (``write``); the default is ``apply``.
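Parsing the human-readable lag values could look roughly like this. This is a sketch assuming binary units (1kB = 1024 bytes); Patroni's actual parsing may differ:

```python
# Assumed binary unit factors for the suffixes mentioned in the docs.
UNITS = {'kB': 1024, 'MB': 1024 ** 2, 'GB': 1024 ** 3}

def parse_lag(value: str) -> int:
    """Convert '16kB', '64MB', '1GB', or a bare byte count to bytes."""
    for suffix, factor in UNITS.items():
        if value.endswith(suffix):
            return int(value[:-len(suffix)]) * factor
    return int(value)
```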

  When used as a Kubernetes ``readinessProbe``, it will make sure freshly started pods only become ready once they have caught up with the leader. Combined with a ``PodDisruptionBudget``, this protects against the leader being terminated too early during a rolling restart of nodes. It also makes sure that replicas that cannot keep up with replication do not serve read-only traffic. The endpoint could be used for ``readinessProbe`` when it is not possible to use Kubernetes endpoints for leader elections (OpenShift).

The ``liveness`` endpoint is very light-weight and does not execute any SQL. Probes should be configured in such a way that they start failing about the time when the leader key is expiring. With the default value of ``ttl``, which is ``30s``, example probes would look like:
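A sketch of such probes in a Kubernetes pod spec, assuming the default Patroni REST API port ``8008`` (the timing values are illustrative and should be derived from your ``ttl``):

.. code-block:: yaml

    livenessProbe:
      httpGet:
        path: /liveness
        port: 8008
      initialDelaySeconds: 3
      periodSeconds: 10
      timeoutSeconds: 5
      failureThreshold: 3
    readinessProbe:
      httpGet:
        path: /readiness
        port: 8008
      initialDelaySeconds: 3
      periodSeconds: 10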