docs/content/advanced/multi-zone-design-considerations.md
When using the PostgreSQL Operator in a Kubernetes cluster consisting of nodes that span multiple zones, special consideration must be taken to ensure all pods and the associated volumes are scheduled and provisioned within the same zone.
Given that a pod is unable to mount a volume that is located in another zone, any volumes that are dynamically provisioned must be provisioned in a topology-aware manner according to the specific scheduling requirements for the pod.
This means that when a new PostgreSQL cluster is created, it is necessary to ensure that the volume containing the database files for the primary PostgreSQL database within the PostgreSQL cluster is provisioned in the same zone as the node containing the PostgreSQL primary pod that will be accessing the applicable volume.
#### Dynamic Provisioning of Volumes: Default Behavior
By default, the Kubernetes scheduler will ensure any pods created that claim a specific volume via a PVC are scheduled on a node in the same zone as that volume. This is part of the default Kubernetes [multi-zone support](https://kubernetes.io/docs/setup/multiple-zones/).
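As an illustration of how this zone binding works (the names and zone value below are hypothetical, not taken from the original docs), a provisioned volume carries a topology constraint recording its zone, which the scheduler honors when placing any pod that claims it:

```yaml
# Sketch of a zonal PersistentVolume. The nodeAffinity block records the zone
# the volume lives in; pods claiming this volume are only scheduled onto nodes
# in that zone. On older Kubernetes versions the label key may instead be
# failure-domain.beta.kubernetes.io/zone.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-zonal-pv          # hypothetical name
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: topology.kubernetes.io/zone
              operator: In
              values:
                - us-east-1a      # hypothetical zone
```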
However, when using Kubernetes [dynamic provisioning](https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/), volumes are not provisioned in a topology-aware manner.
Unfortunately, the default setting for dynamic provisioning of volumes in multi-…
Within the PostgreSQL Operator, a **node label** is implemented as a `preferredDuringSchedulingIgnoredDuringExecution` node affinity rule, which is an affinity rule that Kubernetes will attempt to adhere to when scheduling any pods for the cluster, but _will not guarantee_. More information on node affinity rules can be found [here](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity).
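For illustration, a `preferredDuringSchedulingIgnoredDuringExecution` rule of the kind described above looks roughly like this inside a pod spec (the label key and value here are hypothetical placeholders):

```yaml
# Sketch of a preferred (non-guaranteed) node affinity rule in a pod spec.
# The scheduler tries to place the pod on a matching node, but may fall back
# to any schedulable node if no match is available.
affinity:
  nodeAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
            - key: region         # hypothetical node label key
              operator: In
              values:
                - primary         # hypothetical node label value
```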
By using `Immediate` for the `volumeBindingMode` in a multi-zone cluster environment, the scheduler will ignore any requested _(but not mandatory)_ scheduling requirements if necessary to ensure the pod can be scheduled. The scheduler will ultimately schedule the pod on a node in the same zone as the volume, even if another node was requested for scheduling that pod.
As it relates to the PostgreSQL Operator specifically, a node label can be specified using the `--node-label` option when creating a cluster using the `pgo create cluster` command in order to target a specific node (or nodes) for the deployment of that cluster.
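For example, a cluster targeting nodes that carry a given label might be created as follows (the cluster name and label are illustrative; consult the `pgo` client reference for the exact flag syntax in your version):

```shell
# Hypothetical example: request scheduling onto nodes labeled region=primary.
pgo create cluster mycluster --node-label=region=primary
```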
Therefore, if the volume ends up in a zone other than the zone containing the node (or nodes) defined by the node label, the node label will be ignored, and the pod will be scheduled according to the zone containing the volume.
In order to overcome this default behavior, it is necessary to make the dynamically provisioned volumes topology-aware.
This is accomplished by setting the `volumeBindingMode` for the storage class to `WaitForFirstConsumer`, which delays the dynamic provisioning of a volume until a pod using it is created.
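A minimal sketch of such a storage class follows; the name and provisioner are placeholders (any topology-aware provisioner behaves the same way), and only the `volumeBindingMode` setting is the point of the example:

```yaml
# StorageClass sketch: WaitForFirstConsumer delays volume provisioning until a
# pod using the PVC exists, so the volume can be provisioned in that pod's zone.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: topology-aware-storage      # hypothetical name
provisioner: kubernetes.io/aws-ebs  # placeholder; use your cluster's provisioner
volumeBindingMode: WaitForFirstConsumer
```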
In other words, the PVC is no longer bound as soon as it is requested, but rather waits for a pod utilizing it to be created prior to binding. This change ensures that the volume can take into account the scheduling requirements for the pod, which in the …

From there, those storage configurations can then be selected when creating a new …