Commit 469514e
feat(scripts): introduce modular, multi-region playground management (#33)

This overhauls the playground management scripts to provide a more flexible, powerful, and maintainable user experience. The entire workflow, from setup to teardown, is now dynamic and customizable. Key changes include:

- **Multi-Region Support**: `setup.sh` now accepts a list of region names as arguments to provision multiple, isolated Kubernetes clusters and MinIO instances. It defaults to `eu` and `us` if no arguments are provided, for backward compatibility.
- **New Teardown Script**: Adds a new `teardown.sh` script for cleanup. It operates in two modes:
  - Auto-detects and removes all managed clusters if run without arguments.
  - Selectively removes specific clusters when region names are provided.
- **Auto-Discovery**: `info.sh` now automatically discovers all running playground clusters created by the setup script, removing the need for manual input.
- **Modularization**: Introduces a `common.sh` file to centralize shared configuration, prerequisite checks, and functions. This reduces code duplication and simplifies the `setup`, `info`, and `teardown` scripts.

Closes #32

Signed-off-by: Gabriele Bartolini <[email protected]>
1 parent cef4cb3 commit 469514e
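The backward-compatible default described in the commit message can be sketched as follows. This is an illustrative reduction, not the actual `setup.sh`: the `resolve_regions` helper name is an assumption, and the provisioning step is replaced by an `echo`.

```shell
#!/usr/bin/env bash
# Sketch of the default-region behaviour described above; the real
# setup.sh may implement this differently.
set -euo pipefail

# Use the positional arguments as region names, defaulting to 'eu us'
# so that running the script with no arguments keeps the old behaviour.
resolve_regions() {
  if [ "$#" -gt 0 ]; then
    echo "$@"
  else
    echo "eu us"
  fi
}

# Stand-in for the per-region provisioning loop (cluster + MinIO).
for region in $(resolve_regions "$@"); do
  echo "would provision cluster and MinIO instance for region: ${region}"
done
```

Running it as `./sketch.sh it de` would iterate over `it` and `de`; with no arguments it falls back to `eu` and `us`.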

File tree

5 files changed: +332 additions, −180 deletions

README.md

Lines changed: 65 additions & 61 deletions

````diff
@@ -58,28 +58,52 @@ The architecture is illustrated in the diagram below:
 
 ![Local Environment Architecture](images/cnpg-playground-architecture.png)
 
-## Setting Up the Learning Environment
+## Usage
 
-To set up the environment, simply run the following script:
+This playground environment is managed by three main scripts located in the
+`/scripts` directory.
+
+| Script        | Description                                                  |
+| :------------ | :----------------------------------------------------------- |
+| `setup.sh`    | Creates and configures the multi-region Kubernetes clusters. |
+| `info.sh`     | Displays status and access information for active clusters.  |
+| `teardown.sh` | Removes clusters and all associated resources.               |
+
+### Setting Up the Learning Environment
+
+The `setup.sh` script provisions the entire environment. By default, it creates
+two regional clusters: `eu` and `us`.
 
 ```bash
+# Create the default two-region environment (eu, us)
 ./scripts/setup.sh
 ```
 
-## Connecting to the Kubernetes Clusters
+You can easily customize this by providing your own list of region names as
+arguments.
+
+```bash
+# Create a custom environment with 'it' and 'de' regions, simulating Italy and Germany
+./scripts/setup.sh it de
+
+# Create a single-region environment
+./scripts/setup.sh local
+```
+
+### Connecting to the Kubernetes Clusters
 
 To configure and interact with both Kubernetes clusters during the learning
-process, you will need to connect to them.
+process, you will need to connect to them. After setup, you can run the
+`info.sh` script at any time to see the status of your environment.
 
-The **setup** script provides detailed instructions for accessing the clusters.
-If you need to view the connection details again after the setup, you can
-retrieve them by running:
+It automatically detects all running playground clusters and displays their
+access instructions, and node status.
 
 ```bash
 ./scripts/info.sh
 ```
 
-## Inspecting Nodes in a Kubernetes Cluster
+### Inspecting Nodes in a Kubernetes Cluster
 
 To inspect the nodes in a Kubernetes cluster, you can use the following
 command:
@@ -93,19 +117,43 @@ output similar to:
 
 ```console
 NAME                   STATUS   ROLES           AGE     VERSION
-k8s-eu-control-plane   Ready    control-plane   10m     v1.33.0
-k8s-eu-worker          Ready    infra           9m58s   v1.33.0
-k8s-eu-worker2         Ready    app             9m58s   v1.33.0
-k8s-eu-worker3         Ready    postgres        9m58s   v1.33.0
-k8s-eu-worker4         Ready    postgres        9m58s   v1.33.0
-k8s-eu-worker5         Ready    postgres        9m58s   v1.33.0
+k8s-eu-control-plane   Ready    control-plane   10m     v1.34.0
+k8s-eu-worker          Ready    infra           9m58s   v1.34.0
+k8s-eu-worker2         Ready    app             9m58s   v1.34.0
+k8s-eu-worker3         Ready    postgres        9m58s   v1.34.0
+k8s-eu-worker4         Ready    postgres        9m58s   v1.34.0
+k8s-eu-worker5         Ready    postgres        9m58s   v1.34.0
 ```
 
 In this example:
 - The control plane node (`k8s-eu-control-plane`) manages the cluster.
 - Worker nodes have different roles, such as `infra` for infrastructure, `app`
   for application workloads, and `postgres` for PostgreSQL databases. Each node
-  runs Kubernetes version `v1.33.0`.
+  runs Kubernetes version `v1.34.0`.
+
+### Cleaning Up the Environment
+
+When you're finished, the `teardown.sh` script can remove the resources. It can
+be run in two ways:
+
+#### Full Cleanup
+
+Running the script with no arguments will auto-detect and remove all playground
+clusters and their resources, returning your system to its initial state.
+
+```bash
+# Destroy all created regions
+./scripts/teardown.sh
+```
+
+#### Selective Cleanup
+
+You can also remove specific clusters by passing the region names as arguments.
+
+```bash
+# Destroy only the 'it' cluster
+./scripts/teardown.sh it
+```
 
 ## Demonstration with CNPG Playground
 
@@ -121,8 +169,8 @@ distributed across two regions** within the playground. The symmetric
 architecture also includes **continuous backup** using the
 [Barman Cloud Plugin](https://cloudnative-pg.io/plugin-barman-cloud/).
 
-For complete instructions and supporting resources, refer to the [demo
-folder](./demo/README.md).
+For complete instructions and supporting resources, refer to the
+[demo folder](./demo/README.md).
 
 ## Installing CloudNativePG on the Control Plane
 
@@ -149,50 +197,6 @@ both the `kind-k8s-eu` and `kind-k8s-us` clusters.
 Ensure that you have the latest version of the `cnpg` plugin installed on your
 local machine.
 
-## Cleaning up the Learning Environment
-
-When you're ready to clean up and remove all resources from the learning
-environment, run the following script to tear down the containers and
-associated resources:
-
-```bash
-./scripts/teardown.sh
-```
-
-This will safely destroy all running containers and return your environment to
-its initial state.
-
-## Single Kubernetes Cluster Setup
-
-In some situations, you may prefer to have a single Kubernetes cluster
-playground without the object store. To create such a cluster, run the
-following command:
-
-```sh
-kind create cluster --config k8s/kind-cluster.yaml
-```
-
-Then, run:
-
-```sh
-kubectl label node -l postgres.node.kubernetes.io node-role.kubernetes.io/postgres=
-kubectl label node -l infra.node.kubernetes.io node-role.kubernetes.io/infra=
-kubectl label node -l app.node.kubernetes.io node-role.kubernetes.io/app=
-```
-
-The result is the following:
-
-```console
-$ kubectl get nodes
-NAME                 STATUS   ROLES           AGE   VERSION
-cnpg-control-plane   Ready    control-plane   22m   v1.33.0
-cnpg-worker          Ready    infra           22m   v1.33.0
-cnpg-worker2         Ready    app             22m   v1.33.0
-cnpg-worker3         Ready    postgres        22m   v1.33.0
-cnpg-worker4         Ready    postgres        22m   v1.33.0
-cnpg-worker5         Ready    postgres        22m   v1.33.0
-```
-
 ## Nix Flakes
 
 Do you use Nix flakes? If you do, this package have a configured
````
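The two teardown modes documented in the README diff above (full auto-detected cleanup vs. selective cleanup by region name) reduce to a small argument-handling decision. The sketch below is hypothetical: `select_targets` is an illustrative name, not a function from `teardown.sh`, and the actual cluster deletion is omitted.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of teardown.sh's mode selection; names assumed.
set -euo pipefail

# With region arguments: selective cleanup of just those regions.
# Without arguments: fall back to every detected region (full cleanup).
select_targets() {
  local detected="$1"
  shift
  if [ "$#" -gt 0 ]; then
    printf '%s\n' "$@"
  else
    # Intentionally unquoted so the space-separated list word-splits
    # into one region per line.
    printf '%s\n' $detected
  fi
}

detected_regions="eu us"
select_targets "$detected_regions"      # full cleanup targets
select_targets "$detected_regions" it   # selective cleanup target
```

Each emitted region name would then drive a `kind delete cluster` call (and MinIO container removal) in the real script.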

scripts/common.sh

Lines changed: 61 additions & 0 deletions

```diff
@@ -0,0 +1,61 @@
+#!/usr/bin/env bash
+#
+# This script contains common variables and functions shared by the setup,
+# info, and cleanup scripts for the CloudNativePG playground.
+#
+#
+# Copyright The CloudNativePG Contributors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+set -euo pipefail
+
+# --- Common Configuration ---
+# Kind base name for clusters
+K8S_BASE_NAME=${K8S_NAME:-k8s}
+
+# MinIO Configuration
+MINIO_IMAGE="${MINIO_IMAGE:-quay.io/minio/minio:RELEASE.2025-09-07T16-13-09Z}"
+MINIO_BASE_NAME="${MINIO_BASE_NAME:-minio}"
+MINIO_BASE_PORT=${MINIO_BASE_PORT:-9001}
+MINIO_ROOT_USER="${MINIO_ROOT_USER:-cnpg}"
+MINIO_ROOT_PASSWORD="${MINIO_ROOT_PASSWORD:-Cl0udNativePGRocks}"
+
+# --- Common Prerequisite Checks ---
+REQUIRED_COMMANDS="kind kubectl git grep sed"
+for cmd in $REQUIRED_COMMANDS; do
+  if ! command -v "$cmd" &> /dev/null; then
+    echo "❌ Error: Missing required command: $cmd"
+    exit 1
+  fi
+done
+
+# --- Common Setup ---
+# Find a supported container provider
+CONTAINER_PROVIDER=""
+for provider in docker podman; do
+  if command -v "$provider" &> /dev/null; then
+    CONTAINER_PROVIDER=$provider
+    break
+  fi
+done
+
+if [ -z "${CONTAINER_PROVIDER:-}" ]; then
+  echo "❌ Error: Missing container provider. Supported providers are: docker, podman"
+  exit 1
+fi
+
+# Determine project root and kubeconfig path
+GIT_REPO_ROOT=$(git rev-parse --show-toplevel)
+KUBE_CONFIG_PATH="${GIT_REPO_ROOT}/k8s/kube-config.yaml"
```
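Every configuration variable in `common.sh` uses the `${VAR:-default}` expansion, so callers can override any of them from the environment without editing the file. A minimal, self-contained demonstration of that pattern (the `resolve_port` helper is illustrative, not part of the playground scripts):

```shell
#!/usr/bin/env bash
# Demonstrates the environment-override pattern used by common.sh.
set -euo pipefail

# Ensure a clean slate for the demonstration.
unset MINIO_BASE_PORT

# Same expansion form as common.sh: use the environment value if set,
# otherwise fall back to the built-in default.
resolve_port() {
  echo "${MINIO_BASE_PORT:-9001}"
}

resolve_port                        # prints the default: 9001
MINIO_BASE_PORT=9999 resolve_port   # prints the override: 9999
```

This is why, for example, `MINIO_ROOT_PASSWORD=secret ./scripts/setup.sh` can change the MinIO credentials without any code change.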

scripts/info.sh

Lines changed: 55 additions & 15 deletions

```diff
@@ -1,5 +1,8 @@
 #!/usr/bin/env bash
 #
+# This script automatically detects running CloudNativePG playground clusters
+# and displays their status, including version, nodes, and pods.
+#
 # Copyright The CloudNativePG Contributors
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
@@ -15,23 +18,60 @@
 # limitations under the License.
 #
 
-set -eu
-
-git_repo_root=$(git rev-parse --show-toplevel)
-kube_config_path=${git_repo_root}/k8s/kube-config.yaml
-
-cat <<EOF
-To access the playground clusters, ensure you set the following environment
-variable:
+# Source the common setup script
+source "$(dirname "$0")/common.sh"
 
-export KUBECONFIG=${kube_config_path}
+# --- Script Setup ---
+if [ ! -f "${KUBE_CONFIG_PATH}" ]; then
+  echo "❌ Error: Kubeconfig file not found at '${KUBE_CONFIG_PATH}'"
+  echo "Please run the setup.sh script first."
+  exit 1
+fi
+export KUBECONFIG="${KUBE_CONFIG_PATH}"
 
-To switch between clusters, use the commands below:
+# --- Auto-detect Regions ---
+echo "🔎 Detecting active playground clusters..."
+REGIONS=($(kind get clusters | grep "^${K8S_BASE_NAME}-" | sed "s/^${K8S_BASE_NAME}-//" || true))
 
-kubectl config use-context kind-k8s-eu
-kubectl config use-context kind-k8s-us
+if [ ${#REGIONS[@]} -eq 0 ]; then
+  echo "🤷 No active playground clusters found with the prefix '${K8S_BASE_NAME}-'."
+  exit 0
+fi
+echo "✅ Found regions: ${REGIONS[*]}"
 
-To check which cluster you’re currently connected to:
+# --- Access Instructions ---
+echo
+echo "--------------------------------------------------"
+echo "🕹️ Cluster Access Instructions"
+echo "--------------------------------------------------"
+echo
+echo "To access your playground clusters, first set the KUBECONFIG environment variable:"
+echo "export KUBECONFIG=${KUBE_CONFIG_PATH}"
+echo
+echo "Available cluster contexts:"
+for region in "${REGIONS[@]}"; do
+  echo "  • kind-${K8S_BASE_NAME}-${region}"
+done
+echo
+echo "To switch to a specific cluster (e.g., the '${REGIONS[0]}' region), use:"
+echo "kubectl config use-context kind-${K8S_BASE_NAME}-${REGIONS[0]}"
+echo
 
-kubectl config current-context
-EOF
+# --- Main Info Loop ---
+echo "--------------------------------------------------"
+echo "ℹ️ Cluster Information"
+echo "--------------------------------------------------"
+for region in "${REGIONS[@]}"; do
+  CONTEXT="kind-${K8S_BASE_NAME}-${region}"
+  echo
+  echo "🔷 Cluster: ${CONTEXT}"
+  echo "==================================="
+  echo "🔹 Version:"
+  kubectl --context "${CONTEXT}" version
+  echo
+  echo "🔹 Nodes:"
+  kubectl --context "${CONTEXT}" get nodes -o wide
+  echo
+  echo "🔹 Secrets:"
+  kubectl --context "${CONTEXT}" get secrets
+done
```
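The auto-detection in `info.sh` is just prefix filtering on the output of `kind get clusters`: keep names starting with `${K8S_BASE_NAME}-`, then strip that prefix with `sed`. The sketch below isolates that grep/sed step so it runs without `kind`; the `detect_regions` wrapper and the hard-coded cluster list are illustrative stand-ins.

```shell
#!/usr/bin/env bash
# Standalone sketch of the region auto-detection used in info.sh.
set -euo pipefail

K8S_BASE_NAME=k8s

# $1: newline-separated cluster names, as 'kind get clusters' prints them.
# Emits one region per line; '|| true' keeps 'no matches' from being fatal,
# as in the original script.
detect_regions() {
  echo "$1" | grep "^${K8S_BASE_NAME}-" | sed "s/^${K8S_BASE_NAME}-//" || true
}

# Simulated 'kind get clusters' output, including an unmanaged cluster
# that must be ignored.
clusters=$'k8s-eu\nk8s-us\nother-cluster'
detect_regions "$clusters"
```

For the simulated list this emits `eu` and `us` while skipping `other-cluster`, mirroring how `info.sh` only reports clusters the setup script created.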
