Commit d689823

Jonathan S. Katz (jkatz) authored and committed
Remove references to node_exporter in config, installation, & docs.
The node_exporter container is no longer part of the Crunchy Container Suite per CrunchyData/crunchy-containers@87107f4. This is because Kubernetes and Kube-derived builds already provide node-level metrics through each Kubelet, which exposes them via cAdvisor. Additionally, node_exporter reported metrics for the entire node rather than for the container itself, which might not provide the requisite information for monitoring and diagnosing issues in a particular container.
1 parent: bea821e · commit: d689823
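For context on the rationale above, the cAdvisor metrics that each Kubelet already exposes can be inspected directly through the Kubernetes API server proxy. This is a minimal sketch, not part of the commit; the node name and the grep pattern are placeholders.

```bash
# List node names, then pull the Kubelet's built-in cAdvisor metrics for one node.
kubectl get nodes -o name
kubectl get --raw "/api/v1/nodes/<node-name>/proxy/metrics/cadvisor" \
  | grep -E '^container_(cpu_usage_seconds_total|memory_working_set_bytes)' | head
```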

File tree: 4 files changed, +37 −51 lines

ansible/roles/pgo-operator/files/pgo-configs/cluster-service.json

Lines changed: 0 additions & 6 deletions
@@ -18,12 +18,6 @@
 "targetPort": {{.Port}},
 "nodePort": 0
 }, {
-"name": "node-exporter",
-"protocol": "TCP",
-"port": 9100,
-"targetPort": 9100,
-"nodePort": 0
-}, {
 "name": "pgbadger",
 "protocol": "TCP",
 "port": {{.PGBadgerPort}},
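After this template change, Services created from `cluster-service.json` should no longer expose a `node-exporter` port. A minimal sanity check, assuming a pgcluster Service named `mycluster` in namespace `pgo` (both names are placeholders):

```bash
# Print the name and port of every port on the cluster Service;
# node-exporter/9100 should be absent, while the pgbadger port remains.
kubectl -n pgo get svc mycluster \
  -o jsonpath='{range .spec.ports[*]}{.name}{"\t"}{.port}{"\n"}{end}'
```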

conf/postgres-operator/cluster-service.json

Lines changed: 0 additions & 6 deletions
@@ -18,12 +18,6 @@
 "targetPort": {{.Port}},
 "nodePort": 0
 }, {
-"name": "node-exporter",
-"protocol": "TCP",
-"port": 9100,
-"targetPort": 9100,
-"nodePort": 0
-}, {
 "name": "pgbadger",
 "protocol": "TCP",
 "port": {{.PGBadgerPort}},

hugo/content/Installation/install-with-ansible/prerequisites.md

Lines changed: 37 additions & 38 deletions
@@ -26,7 +26,7 @@ The following is required prior to installing Crunchy PostgreSQL Operator using

 ## Installing from a Windows Host

-If the Crunchy PostgreSQL Operator is being installed from a Windows host the following
+If the Crunchy PostgreSQL Operator is being installed from a Windows host the following
 are required:

 * [Windows Subsystem for Linux (WSL)](https://docs.microsoft.com/en-us/windows/wsl/install-win10)
@@ -35,7 +35,7 @@ are required:
 ## Permissions

 The installation of the Crunchy PostgreSQL Operator requires elevated privileges.
-It is required that the playbooks are run as a `cluster-admin` to ensure the playbooks
+It is required that the playbooks are run as a `cluster-admin` to ensure the playbooks
 can install:

 * Custom Resource Definitions
@@ -52,17 +52,17 @@ There are two ways to obtain the Crunchy PostgreSQL Operator Roles:

 ### GitHub Installation

-All necessary files (inventory, main playbook and roles) can be found in the `ansible`
+All necessary files (inventory, main playbook and roles) can be found in the `ansible`
 directory in the [postgres-operator project](https://github.com/CrunchyData/postgres-operator).

 ### RPM Installation using Yum

-Available to Crunchy customers is an RPM containing all the necessary Ansible roles
-and files required for installation using Ansible. The RPM can be found in Crunchy's
-yum repository. For information on setting up `yum` to use the Crunchy repoistory,
+Available to Crunchy customers is an RPM containing all the necessary Ansible roles
+and files required for installation using Ansible. The RPM can be found in Crunchy's
+yum repository. For information on setting up `yum` to use the Crunchy repoistory,
 see the [Crunchy Access Portal](https://access.crunchydata.com/).

-To install the Crunchy PostgreSQL Operator Ansible roles using `yum`, run the following
+To install the Crunchy PostgreSQL Operator Ansible roles using `yum`, run the following
 command on a RHEL or CentOS host:

 ```bash
@@ -72,7 +72,7 @@ sudo yum install postgres-operator-playbooks
 * Ansible roles can be found in: `/usr/share/ansible/roles/crunchydata`
 * Ansible playbooks/inventory files can be found in: `/usr/share/ansible/postgres-operator/playbooks`

-Once installed users should take a copy of the `inventory` file included in the installation
+Once installed users should take a copy of the `inventory` file included in the installation
 using the following command:

 ```bash
@@ -81,8 +81,8 @@ cp /usr/share/ansible/postgres-operator/playbooks/inventory ${HOME?}

 ## Configuring the Inventory File

-The `inventory` file included with the PostgreSQL Operator Playbooks allows installers
-to configure how the operator will function when deployed into Kubernetes. This file
+The `inventory` file included with the PostgreSQL Operator Playbooks allows installers
+to configure how the operator will function when deployed into Kubernetes. This file
 should contain all configurable variables the playbooks offer.

 The following are the variables available for configuration:
@@ -164,10 +164,10 @@ kubectl config current-context

 ### Minimal Variable Requirements

-The following variables should be configured at a minimum to deploy the Crunchy
+The following variables should be configured at a minimum to deploy the Crunchy
 PostgreSQL Operator:

-* `kubernetes_context`
+* `kubernetes_context`
 * `openshift_user`
 * `openshift_password`
 * `openshift_token`
@@ -211,15 +211,15 @@ PostgreSQL Operator:
 Additionally, `storage` variables will need to be defined to provide the Crunchy PGO with any required storage configuration. Guidance for defining `storage` variables can be found in the next section.

 {{% notice tip %}}
-Users should remove or comment out the `kubernetes` or `openshift` variables if they're not being used
+Users should remove or comment out the `kubernetes` or `openshift` variables if they're not being used
 from the inventory file. Both sets of variables cannot be used at the same time.
 {{% /notice %}}

 ## Storage

-Kubernetes and OpenShift offer support for a wide variety of different storage types, and by default, the `inventory` is
+Kubernetes and OpenShift offer support for a wide variety of different storage types, and by default, the `inventory` is
 pre-populated with storage configurations for some of these storage types. However, the storage types defined
-in the `inventory` can be modified or removed as needed, while additional storage configurations can also be
+in the `inventory` can be modified or removed as needed, while additional storage configurations can also be
 added to meet the specific storage requirements for your PG clusters.

 The following `storage` variables are utilized to add or modify operator storage configurations in the `inventory`:
@@ -274,12 +274,12 @@ storage5_class='fast'
 storage5_fs_group=26
 ```

-To assign this storage definition to all `primary` pods created by the Operator, we
+To assign this storage definition to all `primary` pods created by the Operator, we
 can configure the `primary_storage=storageos` variable in the inventory file.

 #### GKE

-The storage class provided by Google Kubernetes Environment (GKE) can be configured
+The storage class provided by Google Kubernetes Environment (GKE) can be configured
 to be used by the Operator by setting the following variables in the `inventory` file:

 ```ini
@@ -291,30 +291,30 @@ storage8_class='standard'
 storage8_fs_group=26
 ```

-To assign this storage definition to all `primary` pods created by the Operator, we
+To assign this storage definition to all `primary` pods created by the Operator, we
 can configure the `primary_storage=gce` variable in the inventory file.

 ### Considerations for Multi-Zone Cloud Environments

-When using the Operator in a Kubernetes cluster consisting of nodes that span
-multiple zones, special consideration must betaken to ensure all pods and the
+When using the Operator in a Kubernetes cluster consisting of nodes that span
+multiple zones, special consideration must betaken to ensure all pods and the
 volumes they require are scheduled and provisioned within the same zone. Specifically,
-being that a pod is unable mount a volume that is located in another zone, any
-volumes that are dynamically provisioned must be provisioned in a topology-aware
-manner according to the specific scheduling requirements for the pod. For instance,
-this means ensuring that the volume containing the database files for the primary
-database in a new PostgreSQL cluster is provisioned in the same zone as the node
+being that a pod is unable mount a volume that is located in another zone, any
+volumes that are dynamically provisioned must be provisioned in a topology-aware
+manner according to the specific scheduling requirements for the pod. For instance,
+this means ensuring that the volume containing the database files for the primary
+database in a new PostgreSQL cluster is provisioned in the same zone as the node
 containing the PostgreSQL primary pod that will be using it.

-For instructions on setting up storage classes for multi-zone environments, see
+For instructions on setting up storage classes for multi-zone environments, see
 the [PostgreSQL Operator Documentation](/gettingstarted/design/designoverview/).

 ## Resource Configuration

-Kubernetes and OpenShift allow specific resource requirements to be specified for the various containers deployed inside of a pod.
+Kubernetes and OpenShift allow specific resource requirements to be specified for the various containers deployed inside of a pod.
 This includes defining the required resources for each container, i.e. how much memory and CPU each container will need, while also
 allowing resource limits to be defined, i.e. the maximum amount of memory and CPU a container will be allowed to consume.
-In support of this capability, the Crunchy PGO allows any required resource configurations to be defined in the `inventory`, which
+In support of this capability, the Crunchy PGO allows any required resource configurations to be defined in the `inventory`, which
 can the be utilized by the operator to set any desired resource requirements/limits for the various containers that will
 be deployed by the Crunchy PGO when creating and managing PG clusters.

@@ -355,13 +355,13 @@ With the configuration shown above, the `large` resource configuration would be

 ## Understanding `pgo_operator_namespace` & `namespace`

-The Crunchy PostgreSQL Operator can be configured to be deployed and manage a single
-namespace or manage several namespaces. The following are examples of different types
+The Crunchy PostgreSQL Operator can be configured to be deployed and manage a single
+namespace or manage several namespaces. The following are examples of different types
 of deployment models configurable in the `inventory` file.

 ### Single Namespace

-To deploy the Crunchy PostgreSQL Operator to work with a single namespace (in this example
+To deploy the Crunchy PostgreSQL Operator to work with a single namespace (in this example
 our namespace is named `pgo`), configure the following `inventory` settings:

 ```ini
@@ -382,12 +382,12 @@ namespace='pgouser1,pgouser2'
 ## Deploying Multiple Operators

 The 4.0 release of the Crunchy PostgreSQL Operator allows for multiple operator deployments in the same cluster.
-To install the Crunchy PostgreSQL Operator to multiple namespaces, it's recommended to have an `inventory` file
+To install the Crunchy PostgreSQL Operator to multiple namespaces, it's recommended to have an `inventory` file
 for each deployment of the operator.

 For each operator deployment the following inventory variables should be configured uniquely for each install.

-For example, operator could be deployed twice by changing the `pgo_operator_namespace` and `namespace` for those
+For example, operator could be deployed twice by changing the `pgo_operator_namespace` and `namespace` for those
 deployments:

 Inventory A would deploy operator to the `pgo` namespace and it would manage the `pgo` target namespace.
@@ -407,22 +407,21 @@ namespace='pgo2,pgo3'
 ...
 ```

-Each install of the operator will create a corresponding directory in `$HOME/.pgo/<PGO NAMESPACE>` which will contain
+Each install of the operator will create a corresponding directory in `$HOME/.pgo/<PGO NAMESPACE>` which will contain
 the TLS and `pgouser` client credentials.

 ## Deploying Grafana and Prometheus

 PostgreSQL clusters created by the operator can be configured to create additional containers for collecting metrics.
-These metrics are very useful for understanding the overall health and performance of PostgreSQL database deployments
+These metrics are very useful for understanding the overall health and performance of PostgreSQL database deployments
 over time. The collectors included by the operator are:

-* Node Exporter - Host metrics where the PostgreSQL containers are running
 * PostgreSQL Exporter - PostgreSQL metrics

-The operator, however, does not install the necessary timeseries database (Prometheus) for storing the collected
+The operator, however, does not install the necessary timeseries database (Prometheus) for storing the collected
 metrics or the front end visualization (Grafana) of those metrics.

-Included in these playbooks are roles for deploying Granfana and/or Prometheus. See the `inventory` file
+Included in these playbooks are roles for deploying Granfana and/or Prometheus. See the `inventory` file
 for options to install the metrics stack.

 {{% notice tip %}}
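With node_exporter removed, the PostgreSQL Exporter is the remaining collector that a playbook-deployed Prometheus would scrape. A rough way to confirm it is serving metrics, assuming a cluster Service named `mycluster` in namespace `pgo` that exposes the exporter on 9187 (names are placeholders):

```bash
# Forward the postgres-exporter port locally and fetch a well-known metric.
kubectl -n pgo port-forward svc/mycluster 9187:9187 &
sleep 2
curl -s http://localhost:9187/metrics | grep '^pg_up'
kill %1
```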

hugo/content/gettingstarted/prereq/_index.md

Lines changed: 0 additions & 1 deletion
@@ -34,7 +34,6 @@ This is a list of service ports that are used in the PostgreSQL Operator. Verify
 | pgpool | 5432 |
 | pgbouncer | 5432 |
 | pgbackrest | 2022 |
-| node-exporter | 9100 |
 | postgres-exporter | 9187 |

 ## Application Ports
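The table above asks installers to verify these service ports. A rough connectivity check, assuming `nc` is available and the placeholder DNS name `mycluster.pgo.svc.cluster.local` resolves from where the check is run:

```bash
# Probe the remaining service ports from the table; 9100 is intentionally gone.
for port in 5432 2022 9187; do
  nc -z -w 2 mycluster.pgo.svc.cluster.local "$port" && echo "port ${port} is open"
done
```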
