A comprehensive set of scripts for managing Kubernetes clusters on various Linux distributions, including installation, configuration, and cleanup operations.
- Overview
- Supported Distributions
- Prerequisites
- Installation Guide
- Cleanup Guide
- Post-Installation Configuration
- Distribution-Specific Notes
- Troubleshooting
- Support
- Distribution Test Results
This repository provides two main scripts:
- `setup-k8s.sh`: installs and configures Kubernetes nodes
- `cleanup-k8s.sh`: safely removes Kubernetes components from nodes
Both scripts automatically detect your Linux distribution and use the appropriate package manager and configuration methods.
Blog and additional information: https://www.munenick.me/blog/k8s-setup-script
The scripts support the following Linux distribution families:
- **Debian-based:**
  - Ubuntu
  - Debian
- **Red Hat-based:**
  - RHEL (Red Hat Enterprise Linux)
  - CentOS
  - Fedora
  - Rocky Linux
  - AlmaLinux
- **SUSE-based:**
  - openSUSE
  - SLES (SUSE Linux Enterprise Server)
- **Arch-based:**
  - Arch Linux
  - Manjaro
For unsupported distributions, the scripts will attempt to use generic methods but may require manual intervention.
- One of the supported Linux distributions
- 2 CPUs or more
- 2GB of RAM per machine
- Full network connectivity between cluster machines
- Root privileges or sudo access
- Internet connectivity
- Open required ports for Kubernetes communication
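As a quick sanity check before running the installer, the CPU, memory, and swap requirements can be verified with a few read-only commands (a sketch; the setup script performs its own validation):

```shell
# Quick pre-flight check (read-only; thresholds from the prerequisites above)
cpus=$(nproc)
mem_mb=$(awk '/MemTotal/ {print int($2/1024)}' /proc/meminfo)

if [ "$cpus" -ge 2 ]; then echo "CPU count OK ($cpus)"; else echo "WARN: need >= 2 CPUs, found $cpus"; fi
if [ "$mem_mb" -ge 2000 ]; then echo "Memory OK (${mem_mb} MB)"; else echo "WARN: need >= 2 GB RAM, found ${mem_mb} MB"; fi

# kubeadm also requires swap to be off
if [ "$(tail -n +2 /proc/swaps | wc -l)" -eq 0 ]; then
  echo "Swap disabled"
else
  echo "WARN: swap is enabled; disable it with 'sudo swapoff -a' before installing"
fi
```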
Download and run the installation script:
```shell
curl -fsSL https://raw.githubusercontent.com/MuNeNICK/setup-k8s/main/setup-k8s.sh | sudo bash -s -- [options]
```
Manual download and inspection:
```shell
curl -fsSL https://raw.githubusercontent.com/MuNeNICK/setup-k8s/main/setup-k8s.sh -o setup-k8s.sh
less setup-k8s.sh
chmod +x setup-k8s.sh
sudo ./setup-k8s.sh [options]
```
Basic setup with default containerd:
```shell
curl -fsSL https://raw.githubusercontent.com/MuNeNICK/setup-k8s/main/setup-k8s.sh | sudo bash -s -- --node-type master
```
Setup with CRI-O runtime:
```shell
curl -fsSL https://raw.githubusercontent.com/MuNeNICK/setup-k8s/main/setup-k8s.sh | sudo bash -s -- \
  --node-type master \
  --cri crio
```
Advanced setup:
```shell
curl -fsSL https://raw.githubusercontent.com/MuNeNICK/setup-k8s/main/setup-k8s.sh | sudo bash -s -- \
  --node-type master \
  --kubernetes-version 1.29 \
  --cri containerd \
  --pod-network-cidr 192.168.0.0/16 \
  --apiserver-advertise-address 192.168.1.10 \
  --service-cidr 10.96.0.0/12
```
Setup with IPVS mode for better performance:
```shell
curl -fsSL https://raw.githubusercontent.com/MuNeNICK/setup-k8s/main/setup-k8s.sh | sudo bash -s -- \
  --node-type master \
  --proxy-mode ipvs \
  --pod-network-cidr 192.168.0.0/16
```
Setup with nftables mode (requires K8s 1.29+):
```shell
curl -fsSL https://raw.githubusercontent.com/MuNeNICK/setup-k8s/main/setup-k8s.sh | sudo bash -s -- \
  --node-type master \
  --proxy-mode nftables \
  --kubernetes-version 1.31 \
  --pod-network-cidr 192.168.0.0/16
```
Obtain join information from master node:
```shell
# Run on master node
kubeadm token create --print-join-command
```
Join worker node:
```shell
curl -fsSL https://raw.githubusercontent.com/MuNeNICK/setup-k8s/main/setup-k8s.sh | sudo bash -s -- \
  --node-type worker \
  --cri containerd \
  --join-token <token> \
  --join-address <address> \
  --discovery-token-hash <hash>
```
**Note:** The worker node must use the same CRI as the master node.
Option | Description | Example |
---|---|---|
--node-type | Type of node (master/worker) | --node-type master |
--proxy-mode | Kube-proxy mode (iptables/ipvs/nftables) | --proxy-mode nftables |
--kubernetes-version | Kubernetes version (1.28, 1.29, 1.30, 1.31, 1.32) | --kubernetes-version 1.28 |
--pod-network-cidr | Pod network CIDR | --pod-network-cidr 192.168.0.0/16 |
--apiserver-advertise-address | API server address | --apiserver-advertise-address 192.168.1.10 |
--control-plane-endpoint | Control plane endpoint | --control-plane-endpoint cluster.example.com |
--service-cidr | Service CIDR | --service-cidr 10.96.0.0/12 |
--cri | Container runtime (containerd or crio) | --cri containerd |
--join-token | Worker join token | --join-token abcdef.1234567890abcdef |
--join-address | Master address | --join-address 192.168.1.10:6443 |
--discovery-token-hash | Discovery token hash | --discovery-token-hash sha256:abc... |
--enable-completion | Enable shell completion setup (default: true) | --enable-completion false |
--completion-shells | Shells to configure (auto/bash/zsh/fish) | --completion-shells bash,zsh |
--install-helm | Install Helm package manager | --install-helm true |
Execute the cleanup script:
```shell
curl -fsSL https://raw.githubusercontent.com/MuNeNICK/setup-k8s/main/cleanup-k8s.sh | sudo bash -s -- [options]
```
- Drain and remove the node (run on the master):
  ```shell
  kubectl drain <worker-node-name> --ignore-daemonsets
  kubectl delete node <worker-node-name>
  ```
- Run cleanup on the worker:
  ```shell
  curl -fsSL https://raw.githubusercontent.com/MuNeNICK/setup-k8s/main/cleanup-k8s.sh | sudo bash -s -- --node-type worker
  ```
**Warning:** This will destroy your entire cluster.
- Ensure all worker nodes are removed first
- Run cleanup:
  ```shell
  curl -fsSL https://raw.githubusercontent.com/MuNeNICK/setup-k8s/main/cleanup-k8s.sh | sudo bash -s -- --node-type master
  ```
Option | Description | Example |
---|---|---|
--node-type | Type of node (master/worker) | --node-type master |
--force | Skip confirmation prompts | --force |
--preserve-cni | Keep CNI configurations | --preserve-cni |
--help | Show help message | --help |
The script supports three kube-proxy modes:

**iptables (default):**
- Uses iptables for the service proxy
- Works on all Kubernetes versions
- Lower CPU usage for small to medium clusters
- Most compatible with existing network plugins
**IPVS:**
- High-performance mode using IP Virtual Server (IPVS)
- Better for large clusters (1000+ services)
- Provides multiple load balancing algorithms
- Works on all Kubernetes versions
- Requires kernel modules: ip_vs, ip_vs_rr, ip_vs_wrr, ip_vs_sh
- Requires packages: ipvsadm, ipset
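The module and package requirements above can be checked without root before opting into `--proxy-mode ipvs` (a read-only sketch; module names as listed above):

```shell
# Check IPVS kernel modules: already loaded (lsmod) or at least available (modinfo)
missing=""
for m in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh; do
  if ! lsmod | grep -q "^$m " && ! modinfo "$m" >/dev/null 2>&1; then
    missing="$missing $m"
  fi
done
if [ -z "$missing" ]; then echo "all IPVS modules available"; else echo "missing modules:$missing"; fi

# Check the userspace tools
for t in ipvsadm ipset; do
  if command -v "$t" >/dev/null 2>&1; then echo "$t: installed"; else echo "$t: not installed"; fi
done
```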
**nftables:**
- Next-generation packet filtering framework
- Best performance for large clusters (5000+ services)
- Significantly faster rule updates than iptables
- More efficient packet processing in kernel
- Alpha in K8s 1.29-1.30, Beta in K8s 1.31+
- Requires kernel >= 3.13 (>= 4.14 recommended)
- Requires packages: nftables
- Note: NodePort services only accessible on default IPs (security improvement)
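The kernel and tooling requirements for nftables mode can be checked the same way (a sketch; 3.13 is the minimum noted above):

```shell
# Compare the running kernel against the 3.13 minimum using version sort
kernel=$(uname -r | cut -d- -f1)
lowest=$(printf '%s\n' "3.13" "$kernel" | sort -V | head -n1)
if [ "$lowest" = "3.13" ]; then
  echo "kernel $kernel: OK for nftables mode"
else
  echo "kernel $kernel: too old for nftables mode"
fi

command -v nft >/dev/null 2>&1 && echo "nft: installed" || echo "nft: not installed"
```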
In clusters with 5000-10000 services:
- the median (p50) latency of nftables is comparable to the best-case (p01) latency of iptables
- nftables processes rule changes roughly 10x faster than iptables
IPVS mode:
```shell
./setup-k8s.sh --node-type master --proxy-mode ipvs
```
nftables mode (requires K8s 1.29+):
```shell
./setup-k8s.sh --node-type master --proxy-mode nftables --kubernetes-version 1.31
```
Note: If prerequisites are not met, the script will automatically fall back to iptables mode.
Install a Container Network Interface (CNI) plugin:
```shell
# Example: Installing Calico
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
```
Remove control-plane taint for single-node clusters:
```shell
kubectl taint nodes --all node-role.kubernetes.io/control-plane-
```
Check cluster status:
```shell
kubectl get nodes
kubectl get pods --all-namespaces
```
**Debian-based (Ubuntu, Debian):**
- Uses apt/apt-get for package management
- Packages are held with apt-mark to prevent automatic updates

**Red Hat-based (RHEL, CentOS, Fedora, Rocky Linux, AlmaLinux):**
- Automatically detects and uses dnf or yum
- Package version locking is handled via the versionlock plugin

**SUSE-based (openSUSE, SLES):**
- Uses zypper for package management
- SLES may require a subscription for repositories

**Arch-based (Arch Linux, Manjaro):**
- Uses pacman and the AUR (Arch User Repository)
- Automatically installs the yay AUR helper if needed
**Installation issues:**
- Check the kubelet logs:
  ```shell
  journalctl -xeu kubelet
  ```
- Verify system requirements
- Confirm network connectivity
- Check distribution-specific logs for package-management issues
**Node join issues:**
- Verify network connectivity to the master
- Check token expiration (24-hour default)
- Confirm firewall settings
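If a join fails with an invalid-token error, the token may simply be malformed: kubeadm bootstrap tokens follow a fixed `<6 chars>.<16 chars>` pattern. A quick format check (the token below is a placeholder, not a real credential):

```shell
# kubeadm bootstrap tokens have the form [a-z0-9]{6}.[a-z0-9]{16}
token="abcdef.1234567890abcdef"   # placeholder value
if echo "$token" | grep -Eq '^[a-z0-9]{6}\.[a-z0-9]{16}$'; then
  echo "token format OK"
else
  echo "token format invalid"
fi
```

Expired tokens (24-hour default lifetime) must be regenerated on the master with `kubeadm token create --print-join-command`.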
**IPVS mode issues:**
- Check kernel module availability:
  ```shell
  lsmod | grep ip_vs
  ```
- Verify ipvsadm is installed:
  ```shell
  which ipvsadm
  ```
- Check the active kube-proxy mode:
  ```shell
  kubectl get configmap -n kube-system kube-proxy -o yaml | grep mode
  ```
- View IPVS rules:
  ```shell
  sudo ipvsadm -Ln
  ```
**nftables mode issues:**
- Check kernel module availability:
  ```shell
  lsmod | grep nf_tables
  ```
- Verify nft is installed:
  ```shell
  which nft
  ```
- Check the kernel version (>= 3.13 required):
  ```shell
  uname -r
  ```
- View nftables rules:
  ```shell
  sudo nft list ruleset
  ```
- Check that the Kubernetes version supports nftables (>= 1.29):
  ```shell
  kubectl version
  ```
- If NodePort services are not accessible:
  - nftables mode only exposes NodePorts on default IPs
  - Use the node's primary IP address, not 127.0.0.1
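Since nftables mode exposes NodePorts only on default IPs, test against the node's primary address. One way to find it (a sketch; assumes the `iproute2` tools are installed):

```shell
# The primary IP is the source address the kernel picks for outbound traffic
primary_ip=$(ip -4 route get 1.1.1.1 2>/dev/null \
  | awk '{for (i=1; i<=NF; i++) if ($i == "src") print $(i+1)}' | head -n1)
echo "primary IP: ${primary_ip:-<could not determine>}"
```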
**Cleanup issues:**
- Ensure the node was properly drained first
- Verify permissions
- Check the system logs:
  ```shell
  journalctl -xe
  ```
If the script fails to detect your distribution correctly:
- Check that `/etc/os-release` exists and contains valid information
- You may need to perform some steps manually on unsupported distributions
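Distribution detection generally keys on the `ID` and `ID_LIKE` fields of `/etc/os-release`; you can inspect what the script will see (a read-only sketch):

```shell
# Print the fields a detection script typically relies on
if [ -r /etc/os-release ]; then
  . /etc/os-release
  echo "ID=$ID"
  echo "ID_LIKE=${ID_LIKE:-<unset>}"
  echo "VERSION_ID=${VERSION_ID:-<unset>}"
else
  echo "/etc/os-release not found; generic fallback methods will be used"
fi
```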
- Issues and feature requests: Open an issue in the repository
- Additional assistance: Contact the maintainer
- Documentation updates: Submit a pull request
Distribution | Version | Test Date | Status | Notes |
---|---|---|---|---|
Ubuntu | 24.04 LTS | 2025-08-25 | ✅ Tested | |
Ubuntu | 22.04 LTS | 2025-08-25 | ✅ Tested | |
Ubuntu | 20.04 LTS | 2025-08-25 | ✅ Tested | |
Debian | 12 (Bookworm) | 2025-08-25 | ✅ Tested | |
Debian | 11 (Bullseye) | 2025-08-25 | ✅ Tested | |
RHEL | 9 | 2025-08-25 | 🚫 Untested | Subscription required |
RHEL | 8 | 2025-08-25 | 🚫 Untested | Subscription required |
CentOS | 7 | 2025-08-25 | 🚫 Untested | EOL |
CentOS Stream | 9 | 2025-08-25 | ✅ Tested | |
CentOS Stream | 8 | 2025-08-25 | 🚫 Untested | EOL |
Rocky Linux | 9 | 2025-08-25 | ✅ Tested | |
Rocky Linux | 8 | 2025-08-25 | ⚠️ Partial | Kernel 4.18; K8s 1.28 only¹ |
AlmaLinux | 9 | 2025-08-25 | ✅ Tested | |
AlmaLinux | 8 | 2025-08-25 | ⚠️ Partial | Kernel 4.18; K8s 1.28 only¹ |
Fedora | 41 | 2025-08-25 | ✅ Tested | |
Fedora | 39 | 2025-08-25 | 🚫 Untested | EOL |
openSUSE | Leap 15.5 | 2025-08-25 | ✅ Tested | |
SLES | 15 SP5 | 2025-08-25 | 🚫 Untested | Subscription required |
Arch Linux | Rolling | 2025-08-25 | ✅ Tested | |
Manjaro | Rolling | 2025-08-25 | 🚫 Untested | No cloud image |
Status Legend:
- ✅ Tested: Fully tested and working
- ⚠️ Partial: Works with some limitations or manual steps
- ❌ Failed: Not working or major issues
- 🚫 Untested: Not yet tested
Notes:
¹ Rocky Linux 8 and AlmaLinux 8 ship kernel 4.18, which is not supported by Kubernetes 1.29+. Use `--kubernetes-version 1.28` or upgrade the kernel.
Note: Test dates and results should be updated regularly. Please submit your test results via issues or pull requests.