Commit 2b10272 — "Fix some RST"
author: avandras
1 parent 5df8f57

1 file changed: docs/multisite.rst (59 additions, 58 deletions)

The configuration is very similar to the usual Patroni config. In fact, the key

An example configuration for two Patroni sites:

.. code:: yaml

   multisite:
     name: dc1
     namespace: /multisite/
     etcd3:  # <DCS>
       hosts:
         # dc1
         - 10.0.1.1:2379
         - 10.0.1.2:2379
         - 10.0.1.3:2379
         # dc2
         - 10.0.2.1:2379
         - 10.0.2.2:2379
         - 10.0.2.3:2379
         # dc3
         - 10.0.0.1:2379
     host: 10.0.1.1,10.0.1.2,10.0.1.3  # how the leader of the other site(s) can connect to the primary on this site
     port: 5432
     # multisite failover timeouts
     ttl: 90
     retry_timeout: 40
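The configuration for the second site would differ only in the site name and in the local connection details; the global DCS member list stays the same everywhere. A hypothetical sketch mirroring the addresses from the example above (the exact nesting of ``host``/``port`` is assumed to match the first example):

```yaml
multisite:
  name: dc2                          # must differ per site
  namespace: /multisite/             # same on all sites
  etcd3:                             # the same global DCS cluster as on dc1
    hosts:
      - 10.0.1.1:2379
      - 10.0.1.2:2379
      - 10.0.1.3:2379
      - 10.0.2.1:2379
      - 10.0.2.2:2379
      - 10.0.2.3:2379
      - 10.0.0.1:2379
  host: 10.0.2.1,10.0.2.2,10.0.2.3   # this site's potential primaries
  port: 5432
  ttl: 90
  retry_timeout: 40
```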

Details of the configuration parameters
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~


``name``
    The name of the site. All nodes that share the same value are considered to be part of the same site, so it must be different for each site.

``namespace``
    Optional path within the DCS where Patroni stores the multisite state. If used, it should be different from the namespace used by the base config, but the same on all sites.

``<DCS>`` (in the example ``etcd3``)
    The DCS implementation in use. Possible values are ``etcd``, ``etcd3``, ``zookeeper``, ``consul``, ``exhibitor``, ``kubernetes``, or ``raft`` (the latter is deprecated).

``<DCS>.hosts``
    A list of IP addresses of the nodes forming the global DCS cluster, including the extra (tiebreaking) node(s).

``host``
    Comma-separated list of IPs of the Patroni nodes that can become a primary on the present site.

``port``
    The Postgres port through which members of other sites can connect to this site. It can be specified once if all nodes use the same port, or as a comma-separated list of the different port numbers, in the order used in the ``host`` key.

``ttl``
    Time to live of the site leader lock. If the site is unable to elect a functioning leader within this timeout, a different site can take over the leader role. Must be a few times longer than the usual ``ttl`` value in order to prevent unnecessary site failovers.

``retry_timeout``
    How long the global etcd cluster can be inaccessible before the cluster is demoted. Must be a few times longer than the usual ``retry_timeout`` value in order to prevent unnecessary site failovers.
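If the listed nodes do not all use the same Postgres port, the ``port`` key can carry one entry per host, in matching order. An illustrative fragment (the addresses and port numbers are made up for the example):

```yaml
multisite:
  host: 10.0.1.1,10.0.1.2,10.0.1.3
  port: 5432,5433,5432   # one entry per host, in the same order as in "host"
```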

Passwords in the YAML configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Applications should be ready to try to connect to the new primary. See 'Connect

Glossary
++++++++


187-
DCS
188-
: distributed configuration store
189-
site
190-
: a Patroni cluster with any number of nodes, and the respective DCS - usually corresponding to a data centre
191-
primary
192-
: the writable PostgreSQL node, from which the other nodes replicate their data (either directly or in a cascading fashion)
193-
leader
194-
: the node which other nodes inside the same site replicate from - the leader can be a replica itself, in which case it's called a _standby leader_
195-
site switchover
196-
: a (manual) leader site switch performed when both sites are functioning fine
197-
site failover
198-
: when the main site goes down (meaning there is no Patroni leader and none of the remaining nodes (if any left) can become a leader), the standby leader will be promoted, becoming a leader proper, and the Postgres instance running there becoming the primary
199-
leader site
200-
: the site where the PostgreSQL primary instance is
201-
standby site
202-
: a site replicating from the leader site, and a potential target for site switchover/failover
203-
DCS quorum
204-
: more than half of the DCS nodes are available (and can take part in a leader race)
205-
multisite leader lock
206-
: just like under normal Patroni operation, the leader puts/updates an entry in DCS, thus notifying other sites that there is a functioning Postgres primary running. The entry mentioned is the multisite leader lock.
**DCS**
    distributed configuration store

**site**
    a Patroni cluster with any number of nodes, and the respective DCS, usually corresponding to a data centre

**primary**
    the writable PostgreSQL node, from which the other nodes replicate their data (either directly or in a cascading fashion)

**leader**
    the node from which the other nodes inside the same site replicate; the leader can be a replica itself, in which case it is called a *standby leader*

**site switchover**
    a (manual) switch of the leader site, performed while both sites are functioning fine

**site failover**
    when the main site goes down (meaning there is no Patroni leader, and none of the remaining nodes, if any are left, can become one), the standby leader is promoted to a full leader, and the Postgres instance running there becomes the primary

**leader site**
    the site where the PostgreSQL primary instance is running

**standby site**
    a site replicating from the leader site, and a potential target for a site switchover/failover

**DCS quorum**
    more than half of the DCS nodes are available (and can take part in a leader race)

**multisite leader lock**
    just like under normal Patroni operation, the leader puts/updates an entry in the DCS, notifying the other sites that there is a functioning Postgres primary. This entry is the multisite leader lock.
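The "more than half" rule behind DCS quorum explains why the example deploys an extra tiebreaking node: with a 3 + 3 + 1 layout, losing an entire site still leaves a strict majority. A minimal illustration of the arithmetic (not part of Patroni's API):

```python
def has_quorum(available: int, total: int) -> bool:
    """A DCS cluster has quorum when a strict majority of its nodes is reachable."""
    return available > total // 2

# 3 + 3 + 1 topology from the example: 7 DCS nodes in total.
# Losing one whole site leaves 4 nodes, which is still a majority.
print(has_quorum(4, 7))  # True
print(has_quorum(3, 7))  # False: 3 of 7 is not a strict majority
```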
