Commit f365164

smiyc and gbartolini committed

feat: add modular, multi-region monitoring stack (#38)

Introduce Prometheus and Grafana operators for modular deployments across multi-region playground clusters. The new `monitoring/setup.sh` script installs the monitoring stack either for all detected cnpg-playground clusters (when invoked with no arguments) or only for explicitly provided region names. Additionally, the CNPG demo now automatically creates PodMonitor resources for PostgreSQL clusters when the corresponding CRD is present in the target Kind cluster.

Signed-off-by: Daniel Chambre <smiyc@pm.me>
Signed-off-by: Gabriele Bartolini <gabriele.bartolini@enterprisedb.com>
Co-authored-by: Gabriele Bartolini <gabriele.bartolini@enterprisedb.com>

1 parent b1fd81e commit f365164
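The region-selection behaviour described in the commit message (no arguments means "all detected clusters", otherwise only the named regions) can be sketched in bash. The kind-based cluster detection below is an assumption for illustration, not the script's actual implementation:

```shell
# Sketch of the region-selection logic attributed to monitoring/setup.sh:
# with no arguments, fall back to detecting playground clusters; with
# arguments, use only the named regions.
select_regions() {
  if [ "$#" -eq 0 ]; then
    # Hypothetical detection of cnpg-playground clusters via kind;
    # the real script may detect them differently.
    kind get clusters 2>/dev/null | sed -n 's/^k8s-//p'
  else
    printf '%s\n' "$@"
  fi
}

# Example: explicit regions, as in "./setup.sh it de"
select_regions it de
```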

File tree

14 files changed: +317 additions, -3 deletions

.gitignore

Lines changed: 1 addition & 2 deletions

```diff
@@ -30,5 +30,4 @@
 go.work
 
 # minio data directories
-minio-eu
-minio-us
+minio-*
```

README.md

Lines changed: 8 additions & 0 deletions

````diff
@@ -155,6 +155,14 @@ You can also remove specific clusters by passing the region names as arguments.
 ./scripts/teardown.sh it
 ```
 
+## Monitoring with Prometheus and Grafana
+
+The [`monitoring`](./monitoring/) directory provides instructions and resources
+for setting up a monitoring environment based on Prometheus and Grafana.
+Although this component is optional, it is highly recommended—especially for
+demonstration and learning purposes—as it offers valuable insight into the
+system’s behavior and performance.
+
 ## Demonstration with CNPG Playground
 
 The **CNPG Playground** offers a great environment for exploring the
````

demo/README.md

Lines changed: 4 additions & 1 deletion

```diff
@@ -20,7 +20,10 @@ secondary (Disaster Recovery) cluster through the
 To follow this demonstration, ensure the following are installed on your system:
 
 1. **CNPG Playground**: Refer to the [installation guide](../README.md) for
-   setup instructions.
+   setup instructions. If you intend to use Prometheus together with the Grafana
+   dashboards, make sure that you also deploy the [monitoring](../monitoring/)
+   environment.
 2. **`cmctl` (cert-manager CLI)**: Required for secure communication between
    the operator and the `barman-cloud` plugin, which is used for backup and
    recovery with MinIO object stores.
```
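Before running the demo, it can help to confirm the prerequisite tools are on `PATH`. This helper is an illustration and not part of the repository; only `cmctl` appears explicitly in the list above, while `kubectl` is assumed from the playground setup:

```shell
# Sketch: check that demo prerequisites are available before running setup.
require() {
  command -v "$1" > /dev/null 2>&1 || {
    echo "missing required tool: $1" >&2
    return 1
  }
}

require kubectl || true   # assumed CNPG Playground prerequisite
require cmctl || true     # cert-manager CLI, listed above
```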

demo/setup.sh

Lines changed: 13 additions & 0 deletions

```diff
@@ -29,6 +29,12 @@ git_repo_root=$(git rev-parse --show-toplevel)
 kube_config_path=${git_repo_root}/k8s/kube-config.yaml
 demo_yaml_path=${git_repo_root}/demo/yaml
 
+check_crd_existence() {
+  # Check if the CRD exists in the cluster
+  kubectl get crd "$1" &> /dev/null
+  return $?
+}
+
 legacy=
 if [ "${LEGACY:-}" = "true" ]; then
   legacy="-legacy"
@@ -103,6 +109,13 @@ for region in eu us; do
   kubectl apply --context kind-k8s-${region} -f \
     ${demo_yaml_path}/${region}/pg-${region}${legacy}.yaml
 
+  # Create the PodMonitor if Prometheus has been installed
+  if check_crd_existence podmonitors.monitoring.coreos.com
+  then
+    kubectl apply --context kind-k8s-${region} -f \
+      ${demo_yaml_path}/${region}/pg-${region}-podmonitor.yaml
+  fi
+
   # Wait for the cluster to be ready
   kubectl wait --context kind-k8s-${region} \
     --timeout 30m \
```
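The diff above relies on `kubectl get crd` exiting non-zero when a CRD is absent. The guard pattern can be exercised on its own; the `echo` branches here are illustrative stand-ins for the `kubectl apply` calls in the script:

```shell
# Guard pattern: act on optional manifests only when the target cluster
# exposes the corresponding CRD. `kubectl get crd` exits non-zero when the
# CRD (or the cluster itself) is unavailable.
check_crd_existence() {
  kubectl get crd "$1" &> /dev/null
}

if check_crd_existence podmonitors.monitoring.coreos.com; then
  echo "PodMonitor CRD present: applying PodMonitor manifests"
else
  echo "PodMonitor CRD absent: skipping PodMonitor manifests"
fi
```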

monitoring/README.md

Lines changed: 68 additions & 0 deletions (new file)

````markdown
# Monitoring

This directory enables monitoring of your CloudNativePG clusters using the official
[CloudNativePG Grafana Dashboard](https://github.com/cloudnative-pg/grafana-dashboards).
The included script installs both the [Prometheus Operator](https://github.com/prometheus-operator/prometheus-operator)
and the [Grafana Operator](https://github.com/grafana/grafana-operator),
and deploys the dashboard on top of your existing playground environment.

---

## Setup

To install monitoring components for the environment you previously created (by
default consisting of two regions: `eu` and `us`), simply run:

```bash
./setup.sh
```

You may also specify one or more region names to match a customised setup:

```bash
# Monitoring setup for clusters named 'it' and 'de'
./setup.sh it de

# Monitoring setup for a single-region environment
./setup.sh local
```

The script will automatically deploy Prometheus, Grafana, and the CloudNativePG
dashboard in each region provided.

---

## Accessing the Dashboard

Once installation completes, you can access Grafana via port forwarding.
The `setup.sh` script prints the exact commands needed.
For the default two-region environment, they look similar to:

```bash
kubectl port-forward service/grafana-service 3000:3000 -n grafana --context kind-k8s-eu
kubectl port-forward service/grafana-service 3001:3000 -n grafana --context kind-k8s-us
```

After forwarding the port, open your browser at:

```
http://localhost:3000
```

Log in using:

- **Username:** `admin`
- **Password:** `admin`

Grafana will prompt you to choose a new password at first login.

You can find the dashboard under `Home > Dashboards > grafana > CloudNativePG`.

![dashboard](image.png)

## PodMonitor

To enable Prometheus to scrape metrics from your PostgreSQL pods, you must
create a `PodMonitor` resource as described in the
[documentation](https://cloudnative-pg.io/documentation/current/monitoring/#creating-a-podmonitor).
````
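The demo's `setup.sh` applies a per-region `pg-<region>-podmonitor.yaml`. As a hedged sketch of what such a manifest can look like: the cluster name `pg-eu` follows the demo's naming, and the label key and port name follow the CloudNativePG documentation linked above; verify both against the docs before use.

```yaml
# Hypothetical PodMonitor for a demo cluster named pg-eu; field values
# follow the CloudNativePG monitoring documentation.
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: pg-eu
spec:
  selector:
    matchLabels:
      cnpg.io/cluster: pg-eu
  podMetricsEndpoints:
    - port: metrics
```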
grafana_dashboard.yaml

Lines changed: 10 additions & 0 deletions (new file)

```yaml
apiVersion: grafana.integreatly.org/v1beta1
kind: GrafanaDashboard
metadata:
  name: cloudnativepg-dashboard
  namespace: grafana
spec:
  instanceSelector:
    matchLabels:
      dashboards: "grafana"
  url: "https://raw.githubusercontent.com/cloudnative-pg/grafana-dashboards/refs/heads/main/charts/cluster/grafana-dashboard.json"
```
grafana_datasource.yaml

Lines changed: 19 additions & 0 deletions (new file)

```yaml
apiVersion: grafana.integreatly.org/v1beta1
kind: GrafanaDatasource
metadata:
  name: prometheus
  namespace: grafana
spec:
  instanceSelector:
    matchLabels:
      dashboards: grafana
  allowCrossNamespaceImport: true
  datasource:
    access: proxy
    database: prometheus
    jsonData:
      timeInterval: 5s
      tlsSkipVerify: true
    name: DS_PROMETHEUS
    type: prometheus
    url: http://prometheus-operated.prometheus-operator.svc.cluster.local:9090
```
grafana_instance.yaml

Lines changed: 19 additions & 0 deletions (new file)

```yaml
apiVersion: grafana.integreatly.org/v1beta1
kind: Grafana
metadata:
  name: grafana
  labels:
    dashboards: "grafana"
spec:
  config:
    log:
      mode: "console"
    security:
      admin_user: admin
      admin_password: admin
  deployment:
    spec:
      template:
        spec:
          nodeSelector:
            node-role.kubernetes.io/infra: ""
```
kustomization.yaml

Lines changed: 7 additions & 0 deletions (new file)

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- grafana_instance.yaml
- grafana_datasource.yaml
- grafana_dashboard.yaml
namespace: grafana
```

monitoring/image.png

56.8 KB (new file, binary)
