
Commit c7c779d

Merge remote-tracking branch 'zalando/master' into multisite

2 parents 0a35451 + 9644a83


74 files changed: +1623 -845 lines

.github/workflows/install_deps.py

Lines changed: 10 additions & 5 deletions

@@ -27,12 +27,17 @@ def install_requirements(what):
     requirements += ['psycopg[binary]'] if sys.version_info >= (3, 8, 0) and\
         (sys.platform != 'darwin' or what == 'etcd3') else ['psycopg2-binary==2.9.9'
                                                             if sys.platform == 'darwin' else 'psycopg2-binary']
+
+    from pip._vendor.distlib.markers import evaluator, DEFAULT_CONTEXT
+    from pip._vendor.distlib.util import parse_requirement
+
     for r in read('requirements.txt').split('\n'):
-        r = r.strip()
-        if r != '':
-            extras = {e for e, v in EXTRAS_REQUIRE.items() if v and any(r.startswith(x) for x in v)}
-            if not extras or what == 'all' or what in extras:
-                requirements.append(r)
+        r = parse_requirement(r)
+        if not r or r.marker and not evaluator.evaluate(r.marker, DEFAULT_CONTEXT):
+            continue
+        extras = {e for e, v in EXTRAS_REQUIRE.items() if v and any(r.requirement.startswith(x) for x in v)}
+        if not extras or what == 'all' or what in extras:
+            requirements.append(r.requirement)
 
     return subprocess.call([sys.executable, '-m', 'pip', 'install'] + requirements)
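
The new loop delegates line parsing and environment-marker evaluation to the distlib copy vendored inside pip. A minimal sketch of how those helpers behave, assuming pip still vendors distlib at these import paths (the sample requirement line is illustrative, not taken from requirements.txt):

    # Sketch only: mirrors the marker-evaluation approach used in install_requirements().
    # Assumes pip still vendors distlib at these import paths.
    from pip._vendor.distlib.markers import evaluator, DEFAULT_CONTEXT
    from pip._vendor.distlib.util import parse_requirement

    line = "psycopg2-binary==2.9.9; sys_platform == 'darwin'"  # hypothetical entry
    r = parse_requirement(line)  # returns None for a blank line
    if r is not None:
        # r.requirement is the spec without the marker; r.marker is the parsed marker
        if not r.marker or evaluator.evaluate(r.marker, DEFAULT_CONTEXT):
            print('would install:', r.requirement)
        else:
            print('skipped by environment marker:', r.requirement)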

README.rst

Lines changed: 2 additions & 0 deletions

@@ -102,6 +102,8 @@ raft
     `pysyncobj` module in order to use python Raft implementation as DCS
 aws
     `boto3` in order to use AWS callbacks
+systemd
+    `systemd-python` in order to use sd_notify integration
 all
     all of the above (except psycopg family)
 psycopg3
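
For reference, sd_notify integration means reporting service state (such as readiness) to systemd. A minimal sketch of what the systemd-python package provides, purely illustrative and not Patroni's own integration code:

    # Illustrative only: readiness notification via the systemd-python package.
    # This only has an effect under a systemd unit (Type=notify); otherwise
    # NOTIFY_SOCKET is unset and notify() reports failure.
    from systemd import daemon

    if daemon.notify('READY=1'):
        print('readiness reported to systemd')
    else:
        print('not running under systemd supervision')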

docker/README.md

Lines changed: 42 additions & 43 deletions

@@ -55,7 +55,6 @@ Example session:
 2024-08-26 09:04:34,938 INFO: establishing a new patroni heartbeat connection to postgres
 2024-08-26 09:04:34,992 INFO: running post_bootstrap
 2024-08-26 09:04:35,004 WARNING: User creation via "bootstrap.users" will be removed in v4.0.0
-2024-08-26 09:04:35,009 WARNING: Could not activate Linux watchdog device: Can't open watchdog device: [Errno 2] No such file or directory: '/dev/watchdog'
 2024-08-26 09:04:35,189 INFO: initialized a new cluster
 2024-08-26 09:04:35,328 INFO: no action. I am (patroni1), the leader with the lock
 2024-08-26 09:04:43,824 INFO: establishing a new patroni restapi connection to postgres
@@ -65,13 +64,13 @@ Example session:
 
 $ docker exec -ti demo-patroni1 bash
 postgres@patroni1:~$ patronictl list
-+ Cluster: demo (7303838734793224214) --------+----+-----------+
-| Member   | Host       | Role    | State     | TL | Lag in MB |
-+----------+------------+---------+-----------+----+-----------+
-| patroni1 | 172.29.0.2 | Leader  | running   |  1 |           |
-| patroni2 | 172.29.0.6 | Replica | streaming |  1 |         0 |
-| patroni3 | 172.29.0.5 | Replica | streaming |  1 |         0 |
-+----------+------------+---------+-----------+----+-----------+
++ Cluster: demo (7303838734793224214) --------+----+-------------+-----+------------+-----+
+| Member   | Host       | Role    | State     | TL | Receive LSN | Lag | Replay LSN | Lag |
++----------+------------+---------+-----------+----+-------------+-----+------------+-----+
+| patroni1 | 172.18.0.8 | Leader  | running   |  2 |             |     |            |     |
+| patroni2 | 172.18.0.3 | Replica | streaming |  2 |   0/404D8A8 |   0 |  0/404D8A8 |   0 |
+| patroni3 | 172.18.0.6 | Replica | streaming |  2 |   0/404D8A8 |   0 |  0/404D8A8 |   0 |
++----------+------------+---------+-----------+----+-------------+-----+------------+-----+
 
 postgres@patroni1:~$ etcdctl get --keys-only --prefix /service/demo
 /service/demo/config
@@ -172,7 +171,7 @@ Example session:
 2024-08-26 08:21:18,202 INFO: running post_bootstrap
 2024-08-26 08:21:19.048 UTC [53] LOG: starting maintenance daemon on database 16385 user 10
 2024-08-26 08:21:19.048 UTC [53] CONTEXT: Citus maintenance daemon for database 16385 user 10
-2024-08-26 08:21:19,058 WARNING: Could not activate Linux watchdog device: Can't open watchdog device: [Errno 2] No such file or directory: '/dev/watchdog'
+2024-08-26 08:21:19,058 DEBUG: Could not activate Linux watchdog device: Can't open watchdog device: [Errno 2] No such file or directory: '/dev/watchdog'
 2024-08-26 08:21:19.250 UTC [37] LOG: checkpoint starting: immediate force wait
 2024-08-26 08:21:19,275 INFO: initialized a new cluster
 2024-08-26 08:21:22.946 UTC [37] LOG: checkpoint starting: immediate force wait
@@ -268,47 +267,47 @@ Example session:
 citus=# \q
 
 postgres@haproxy:~$ patronictl list
-+ Citus cluster: demo ----------+----------------+-----------+----+-----------+
-| Group | Member  | Host        | Role           | State     | TL | Lag in MB |
-+-------+---------+-------------+----------------+-----------+----+-----------+
-|     0 | coord1  | 172.19.0.8  | Leader         | running   |  1 |           |
-|     0 | coord2  | 172.19.0.7  | Quorum Standby | streaming |  1 |         0 |
-|     0 | coord3  | 172.19.0.11 | Quorum Standby | streaming |  1 |         0 |
-|     1 | work1-1 | 172.19.0.12 | Quorum Standby | streaming |  1 |         0 |
-|     1 | work1-2 | 172.19.0.2  | Leader         | running   |  1 |           |
-|     2 | work2-1 | 172.19.0.6  | Quorum Standby | streaming |  1 |         0 |
-|     2 | work2-2 | 172.19.0.9  | Leader         | running   |  1 |           |
-+-------+---------+-------------+----------------+-----------+----+-----------+
++ Citus cluster: demo ----------+----------------+-----------+----+-------------+-----+------------+-----+
+| Group | Member  | Host        | Role           | State     | TL | Receive LSN | Lag | Replay LSN | Lag |
++-------+---------+-------------+----------------+-----------+----+-------------+-----+------------+-----+
+|     0 | coord1  | 172.19.0.8  | Leader         | running   |  1 |             |     |            |     |
+|     0 | coord2  | 172.19.0.7  | Quorum Standby | streaming |  1 |   0/41C06A0 |   0 |  0/41C06A0 |   0 |
+|     0 | coord3  | 172.19.0.11 | Quorum Standby | streaming |  1 |   0/41C06A0 |   0 |  0/41C06A0 |   0 |
+|     1 | work1-1 | 172.19.0.12 | Quorum Standby | streaming |  1 |   0/31ED910 |   0 |  0/31ED910 |   0 |
+|     1 | work1-2 | 172.19.0.2  | Leader         | running   |  1 |             |     |            |     |
+|     2 | work2-1 | 172.19.0.6  | Quorum Standby | streaming |  1 |   0/31D22D0 |   0 |  0/31D22D0 |   0 |
+|     2 | work2-2 | 172.19.0.9  | Leader         | running   |  1 |             |     |            |     |
++-------+---------+-------------+----------------+-----------+----+-------------+-----+------------+-----+
 
 
 postgres@haproxy:~$ patronictl switchover --group 2 --force
 Current cluster topology
-+ Citus cluster: demo (group: 2, 7407360296219029527) ---+-----------+
-| Member  | Host       | Role           | State     | TL | Lag in MB |
-+---------+------------+----------------+-----------+----+-----------+
-| work2-1 | 172.19.0.6 | Quorum Standby | streaming |  1 |         0 |
-| work2-2 | 172.19.0.9 | Leader         | running   |  1 |           |
-+---------+------------+----------------+-----------+----+-----------+
++ Citus cluster: demo (group: 2, 7407360296219029527) ---+-------------+-----+------------+-----+
+| Member  | Host       | Role           | State     | TL | Receive LSN | Lag | Replay LSN | Lag |
++---------+------------+----------------+-----------+----+-------------+-----+------------+-----+
+| work2-1 | 172.19.0.6 | Quorum Standby | streaming |  1 |   0/31D22D0 |   0 |  0/31D22D0 |   0 |
+| work2-2 | 172.19.0.9 | Leader         | running   |  1 |             |     |            |     |
++---------+------------+----------------+-----------+----+-------------+-----+------------+-----+
 2024-08-26 08:31:45.92277 Successfully switched over to "work2-1"
-+ Citus cluster: demo (group: 2, 7407360296219029527) ------+
-| Member  | Host       | Role    | State   | TL | Lag in MB |
-+---------+------------+---------+---------+----+-----------+
-| work2-1 | 172.19.0.6 | Leader  | running |  1 |           |
-| work2-2 | 172.19.0.9 | Replica | stopped |    |   unknown |
-+---------+------------+---------+---------+----+-----------+
++ Citus cluster: demo (group: 2, 7407360296219029527) --------+---------+------------+---------+
+| Member  | Host       | Role    | State   | TL | Receive LSN |     Lag | Replay LSN |     Lag |
++---------+------------+---------+---------+----+-------------+---------+------------+---------+
+| work2-1 | 172.19.0.6 | Leader  | running |  1 |             |         |            |         |
+| work2-2 | 172.19.0.9 | Replica | stopped |    |     unknown | unknown |    unknown | unknown |
++---------+------------+---------+---------+----+-------------+---------+------------+---------+
 
 postgres@haproxy:~$ patronictl list
-+ Citus cluster: demo ----------+----------------+-----------+----+-----------+
-| Group | Member  | Host        | Role           | State     | TL | Lag in MB |
-+-------+---------+-------------+----------------+-----------+----+-----------+
-|     0 | coord1  | 172.19.0.8  | Leader         | running   |  1 |           |
-|     0 | coord2  | 172.19.0.7  | Quorum Standby | streaming |  1 |         0 |
-|     0 | coord3  | 172.19.0.11 | Quorum Standby | streaming |  1 |         0 |
-|     1 | work1-1 | 172.19.0.12 | Quorum Standby | streaming |  1 |         0 |
-|     1 | work1-2 | 172.19.0.2  | Leader         | running   |  1 |           |
-|     2 | work2-1 | 172.19.0.6  | Leader         | running   |  2 |           |
-|     2 | work2-2 | 172.19.0.9  | Quorum Standby | streaming |  2 |         0 |
-+-------+---------+-------------+----------------+-----------+----+-----------+
++ Citus cluster: demo ----------+----------------+-----------+----+-------------+-----+------------+-----+
+| Group | Member  | Host        | Role           | State     | TL | Receive LSN | Lag | Replay LSN | Lag |
++-------+---------+-------------+----------------+-----------+----+-------------+-----+------------+-----+
+|     0 | coord1  | 172.19.0.8  | Leader         | running   |  1 |             |     |            |     |
+|     0 | coord2  | 172.19.0.7  | Quorum Standby | streaming |  1 |   0/41C06A0 |   0 |  0/41C06A0 |   0 |
+|     0 | coord3  | 172.19.0.11 | Quorum Standby | streaming |  1 |   0/41C06A0 |   0 |  0/41C06A0 |   0 |
+|     1 | work1-1 | 172.19.0.12 | Quorum Standby | streaming |  1 |   0/31ED910 |   0 |  0/31ED910 |   0 |
+|     1 | work1-2 | 172.19.0.2  | Leader         | running   |  1 |             |     |            |     |
+|     2 | work2-1 | 172.19.0.6  | Leader         | running   |  2 |             |     |            |     |
+|     2 | work2-2 | 172.19.0.9  | Quorum Standby | streaming |  2 |   0/31D22D0 |   0 |  0/31D22D0 |   0 |
++-------+---------+-------------+----------------+-----------+----+-------------+-----+------------+-----+
 
 postgres@haproxy:~$ psql -h localhost -p 5000 -U postgres -d citus
 Password for user postgres: postgres
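
The patronictl tables above replace the old "Lag in MB" column with per-member Receive LSN and Replay LSN columns plus their lag. Conceptually these positions correspond to standard Postgres functions on the replica; a minimal sketch using psycopg, with placeholder connection details and an example leader LSN (this is not how patronictl itself gathers the values):

    # Sketch only: inspect a replica's receive/replay positions and its replay lag
    # behind a given leader LSN using standard Postgres functions.
    import psycopg

    leader_lsn = '0/404D8A8'  # example value, taken from the table above
    with psycopg.connect('host=172.18.0.3 user=postgres dbname=postgres') as conn:
        row = conn.execute(
            """
            SELECT pg_last_wal_receive_lsn(),
                   pg_last_wal_replay_lsn(),
                   pg_wal_lsn_diff(%s::pg_lsn, pg_last_wal_replay_lsn()) AS replay_lag_bytes
            """,
            (leader_lsn,),
        ).fetchone()
        print('receive LSN:', row[0], 'replay LSN:', row[1], 'replay lag (bytes):', row[2])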

docs/ENVIRONMENT.rst

Lines changed: 5 additions & 0 deletions

@@ -32,6 +32,10 @@ Log
 - **PATRONI\_LOG\_FILE\_NUM**: The number of application logs to retain.
 - **PATRONI\_LOG\_FILE\_SIZE**: Size of patroni.log file (in bytes) that triggers a log rolling.
 - **PATRONI\_LOG\_LOGGERS**: Redefine logging level per python module. Example ``PATRONI_LOG_LOGGERS="{patroni.postmaster: WARNING, urllib3: DEBUG}"``
+- **PATRONI\_LOG\_DEDUPLICATE\_HEARTBEAT\_LOGS**: If set to ``true``, successive identical heartbeat logs are not output. Default value is ``false``.
+
+.. warning::
+    The time at which the HA loop executes can be very valuable when diagnosing failovers caused by resource exhaustion and similar problems. When ``PATRONI_LOG_DEDUPLICATE_HEARTBEAT_LOGS`` is set to ``true``, no log line is generated for an HA loop execution (unless the leader changes), so this potentially useful information is not available from the logs.
 
 Citus
 -----
@@ -113,6 +117,7 @@ Kubernetes
 - **PATRONI\_KUBERNETES\_NAMESPACE**: (optional) Kubernetes namespace where the Patroni pod is running. Default value is `default`.
 - **PATRONI\_KUBERNETES\_LABELS**: Labels in format ``{label1: value1, label2: value2}``. These labels will be used to find existing objects (Pods and either Endpoints or ConfigMaps) associated with the current cluster. Also Patroni will set them on every object (Endpoint or ConfigMap) it creates.
 - **PATRONI\_KUBERNETES\_SCOPE\_LABEL**: (optional) name of the label containing cluster name. Default value is `cluster-name`.
+- **PATRONI\_KUBERNETES\_BOOTSTRAP\_LABELS**: (optional) Labels in format ``{label1: value1, label2: value2}``. These labels will be assigned to a Patroni pod when its state is either ``initializing new cluster``, ``running custom bootstrap script``, ``starting after custom bootstrap`` or ``creating replica``.
 - **PATRONI\_KUBERNETES\_ROLE\_LABEL**: (optional) name of the label containing role (`primary`, `replica` or other custom value). Patroni will set this label on the pod it runs in. Default value is ``role``.
 - **PATRONI\_KUBERNETES\_LEADER\_LABEL\_VALUE**: (optional) value of the pod label when Postgres role is `primary`. Default value is `primary`.
 - **PATRONI\_KUBERNETES\_FOLLOWER\_LABEL\_VALUE**: (optional) value of the pod label when Postgres role is `replica`. Default value is `replica`.
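
A minimal sketch of exporting the two newly documented settings before launching Patroni; the config file name and the label values are illustrative placeholders, not part of this commit:

    # Sketch only: set the new environment variables and start Patroni with them.
    # 'postgres0.yml' and the label values below are hypothetical placeholders.
    import os
    import subprocess

    env = dict(os.environ)
    env['PATRONI_LOG_DEDUPLICATE_HEARTBEAT_LOGS'] = 'true'
    env['PATRONI_KUBERNETES_BOOTSTRAP_LABELS'] = '{app: patroni, bootstrapping: "yes"}'
    subprocess.run(['patroni', 'postgres0.yml'], env=env, check=True)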
