Remove references to node_exporter in config, installation, & docs.
The node_exporter container is no longer a part of the Crunchy
Container suite per:
CrunchyData/crunchy-containers@87107f4
This is because Kubernetes and Kube-derived builds provide their own
node exporters in each Kubelet, which expose their own metrics via
cAdvisor. Additionally, node_exporter would provide metrics on an
entire node rather than on the container itself, which might not
provide the requisite info for monitoring and diagnosing issues in
a particular container.
The following variables should be configured at a minimum to deploy the Crunchy
PostgreSQL Operator:

* `kubernetes_context`
* `openshift_user`
* `openshift_password`
* `openshift_token`
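For example, a Kubernetes deployment would typically set only the context variable and leave the OpenShift variables commented out (a sketch; the context name and credential values shown here are illustrative, not defaults):

```ini
# Kubernetes deployment: set the context, leave the OpenShift
# variables commented out (example context name shown)
kubernetes_context='kubernetes-admin@kubernetes'

# OpenShift deployment: comment out kubernetes_context above and
# set either a user/password pair or a token instead
# openshift_user='admin'
# openshift_password='password'
# openshift_token=''
```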
Additionally, `storage` variables will need to be defined to provide the Crunchy PGO with any required storage configuration. Guidance for defining `storage` variables can be found in the next section.
{{% notice tip %}}
Users should remove or comment out the `kubernetes` or `openshift` variables from the
inventory file if they are not being used. Both sets of variables cannot be used at the same time.
{{% /notice %}}
## Storage
Kubernetes and OpenShift offer support for a wide variety of different storage types, and by default, the `inventory` is
pre-populated with storage configurations for some of these storage types. However, the storage types defined
in the `inventory` can be modified or removed as needed, while additional storage configurations can also be
added to meet the specific storage requirements for your PG clusters.

The following `storage` variables are utilized to add or modify operator storage configurations in the `inventory`:
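As an illustration, a complete storage configuration follows the numbered `storageN_*` pattern shown below (a hypothetical NFS-backed example; the specific values are illustrative, not required settings):

```ini
# hypothetical NFS-backed storage configuration; values are illustrative
storage3_name='nfsstorage'
storage3_access_mode='ReadWriteMany'
storage3_size='1G'
storage3_type='create'
storage3_supplemental_groups=65534
```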
```ini
storage5_class='fast'
storage5_fs_group=26
```
To assign this storage definition to all `primary` pods created by the Operator, we
can configure the `primary_storage=storageos` variable in the inventory file.
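The same pattern extends to the other pod types. Assuming the inventory also exposes `replica_storage`, `backup_storage`, and `backrest_storage` variables alongside `primary_storage` (a sketch; the `nfsstorage` configuration name is hypothetical):

```ini
# assign the storageos configuration to primary and replica pods,
# and a (hypothetical) nfsstorage configuration to backup pods
primary_storage=storageos
replica_storage=storageos
backup_storage=nfsstorage
backrest_storage=nfsstorage
```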
#### GKE
The storage class provided by Google Kubernetes Engine (GKE) can be configured
to be used by the Operator by setting the following variables in the `inventory` file:

```ini
storage8_class='standard'
storage8_fs_group=26
```
To assign this storage definition to all `primary` pods created by the Operator, we
can configure the `primary_storage=gce` variable in the inventory file.
### Considerations for Multi-Zone Cloud Environments
When using the Operator in a Kubernetes cluster consisting of nodes that span
multiple zones, special consideration must be taken to ensure all pods and the
volumes they require are scheduled and provisioned within the same zone. Specifically,
because a pod is unable to mount a volume that is located in another zone, any
volumes that are dynamically provisioned must be provisioned in a topology-aware
manner according to the specific scheduling requirements for the pod. For instance,
this means ensuring that the volume containing the database files for the primary
database in a new PostgreSQL cluster is provisioned in the same zone as the node
containing the PostgreSQL primary pod that will be using it.
For instructions on setting up storage classes for multi-zone environments, see
the [PostgreSQL Operator Documentation](/gettingstarted/design/designoverview/).
## Resource Configuration
Kubernetes and OpenShift allow specific resource requirements to be specified for the various containers deployed inside of a pod.
This includes defining the required resources for each container, i.e. how much memory and CPU each container will need, while also
allowing resource limits to be defined, i.e. the maximum amount of memory and CPU a container will be allowed to consume.
In support of this capability, the Crunchy PGO allows any required resource configurations to be defined in the `inventory`, which
can then be utilized by the operator to set any desired resource requirements/limits for the various containers that will
be deployed by the Crunchy PGO when creating and managing PG clusters.
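A resource configuration might be defined in the inventory along these lines (a hypothetical sketch only: the `resourceN_*` variable names are assumed here by analogy with the `storageN_*` pattern above, and the values are illustrative):

```ini
# hypothetical "large" resource configuration; variable names are
# assumed by analogy with the storageN_* pattern, values illustrative
resource1_requests_memory='2Gi'
resource1_requests_cpu=2.0
resource1_limits_memory='4Gi'
resource1_limits_cpu=4.0
```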