A Kubernetes cluster at the edge, deployed on Raspberry Pis using the lightweight k3s distribution and bootstrapped with k3sup.
- Master Nodes
  - 2x Ubuntu 22.04 live-server VMs
- Worker Nodes
  - 4x Raspberry Pi 4
- Networking
  - TL-WR841N router
  - 4x LAN cables
via curl:

```shell
curl -sLS https://get.k3sup.dev | sh
sudo install k3sup /usr/local/bin/
```

via brew:

```shell
brew install k3sup
```

In this project, I set up the router in WISP mode to receive the hotspot from my phone and distribute internet access to all nodes.
Follow this documentation to do it on your own.
Then you can reserve a static IP for each node by mapping its MAC address to an IP address.
Nothing special here; just make sure the VMs use a bridged network, so they can access the internet and communicate on the same network as the Raspberry Pis.
Change the hostname, as every node must have a unique hostname:

```shell
sudo raspi-config
```

Permit the `pi` user to use sudo without a password:
```shell
sudo visudo
```

Then append this line at the end of the file:
```
pi ALL=(ALL) NOPASSWD: ALL
```

Enable container features in the kernel by editing `/boot/cmdline.txt`.
Add the following to the end of the line:
```
cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory
```

Then reboot:
```shell
sudo reboot
```

For convenience, we will use `ssh-copy-id` to push the SSH key to all nodes, since k3sup does not support password input or password variables.
In other words, the local machine (the one that runs k3sup) must be able to SSH into every instance without a password:
```shell
ssh-copy-id <user>@<ip>
```

If this fails, you may need to generate an SSH key first:
```shell
ssh-keygen
```

Configure `node.json` like this:
```json
[
  {
    "hostname": "master1",
    "ip": "192.168.0.104"
  },
  {
    "hostname": "master2",
    "ip": "192.168.0.105"
  },
  {
    "hostname": "jindamanee",
    "ip": "192.168.0.100"
  },
  {
    "hostname": "cream",
    "ip": "192.168.0.101"
  },
  {
    "hostname": "earth",
    "ip": "192.168.0.102"
  },
  {
    "hostname": "singto",
    "ip": "192.168.0.103"
  }
]
```

Run the k3sup plan via the Makefile:
```shell
make plan
```

Customize your `bootstrap.sh`, since the output of `k3sup plan` does not fully fit our setup, then run it:
```shell
./bootstrap.sh
```

For more detail on customizing `bootstrap.sh`, you can use my Makefile as a reference. There is a top-level control-plane config, called `server-args`, that achieves the following:
- Shorten the window for detecting a failed worker node from the default 5 min to 10 s
- Taint the master nodes so application pods are not scheduled on them, since the application images are ARM-only
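The Makefile itself is the reference, but as a hedged sketch, those two items map to extra k3s server flags passed through `k3sup install`. Flag names come from k3s and kube-apiserver; the master IP is taken from `node.json` above, while the `ubuntu` user and the taint key are assumptions:

```shell
# Hedged sketch, not the exact Makefile contents.
# - the two toleration flags cut the not-ready/unreachable pod-eviction
#   window from the 300 s default down to 10 s
# - the taint keeps application pods (ARM-only images) off the masters
SERVER_ARGS="--node-taint node-role.kubernetes.io/master=true:NoSchedule \
--kube-apiserver-arg default-not-ready-toleration-seconds=10 \
--kube-apiserver-arg default-unreachable-toleration-seconds=10"

# Passed through k3sup (printed here rather than executed):
echo k3sup install --ip 192.168.0.104 --user ubuntu --k3s-extra-args "$SERVER_ARGS"
```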
If nothing failed, copy the kubeconfig to your local machine to monitor the cluster:
```shell
export KUBECONFIG=`pwd`/kubeconfig
kubectl get node -o wide
```

Finished! You can deploy applications now :)
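Since the masters are tainted and the workers are ARM, application pods should land on the Pi nodes. A minimal sketch of such a deployment (hypothetical names; `nginx` stands in for your own ARM or multi-arch image):

```yaml
# Hypothetical example manifest, not part of the repo.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      # arm64 assumes a 64-bit Pi OS; use "arm" for a 32-bit one.
      nodeSelector:
        kubernetes.io/arch: arm64
      containers:
        - name: demo
          image: nginx
```

Apply it with `kubectl apply -f` and confirm the pods are scheduled on the Raspberry Pi workers.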
For master nodes:

```shell
sudo systemctl status k3s
```

For worker nodes:

```shell
sudo systemctl status k3s-agent
```

To stop all k3s processes across the whole cluster:
```shell
/usr/local/bin/k3s-killall.sh
```

Uninstall the server on a master node:
```shell
/usr/local/bin/k3s-uninstall.sh
```

Uninstall the agent on a worker node:
```shell
/usr/local/bin/k3s-agent-uninstall.sh
```

Build Docker images for each service and push them to Docker Hub under these specific names. The build and push are automated with GitHub Actions; the workflow configuration files are in `.github/workflows`.
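The repo's actual workflows in `.github/workflows` are the source of truth; as a hedged sketch, such a workflow typically looks like the fragment below (file name, secret names, and image tags are assumptions). QEMU emulation is included because the images must be built for the ARM workers:

```yaml
# Hypothetical .github/workflows/build.yml - a sketch, not the repo's actual workflow.
name: build-and-push
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: docker/setup-qemu-action@v3      # cross-build for the ARM workers
      - uses: docker/setup-buildx-action@v3
      - uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}   # assumed secret names
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - uses: docker/build-push-action@v6
        with:
          context: .
          platforms: linux/arm64
          push: true
          tags: <user>/<service>:latest        # placeholder image name
```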