Commit 5356e75

Merge pull request #4 from TryToLearnProgramming/k8s-cluseter
K8s cluseter
2 parents d79f01d + e51a1e9 commit 5356e75


12 files changed

+949
-141
lines changed


README.md

Lines changed: 57 additions & 2 deletions
@@ -59,6 +59,11 @@ graph TB
     <td align="center">🚀</td>
     <td>Ready-to-use Kubernetes cluster setup</td>
   </tr>
+  <tr>
+    <td align="center">📜</td>
+    <td>Easy-to-use Bash scripts for Kubernetes cluster setup - reduces typing errors</td>
+  </tr>
+
   <tr>
     <td align="center">🔒</td>
     <td>Secure communication between nodes</td>
@@ -177,16 +182,32 @@ sudo kubeadm config images pull
 
 Initialize the cluster:
 ```bash
-sudo kubeadm init --pod-network-cidr=10.201.0.0/16 --apiserver-advertise-address=192.168.63.11
+sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.63.11
 ```
 
-### 2. Install CNI (Container Network Interface)
+### 2a. Install Weave CNI (Container Network Interface)
 
 After the cluster initialization, install Weave CNI:
 ```bash
 kubectl apply -f https://github.com/weaveworks/weave/releases/download/v2.8.1/weave-daemonset-k8s.yaml
 ```
 
+### NOTE: Weave CNI has been discontinued
+
+With the shutdown of Weaveworks, Weave CNI has been effectively discontinued; its GitHub repository was archived in June 2024. A replacement CNI should therefore be used, and the first suggestion is Flannel.
+
+### 2b. Install Flannel CNI (Container Network Interface)
+
+First, install Flannel CNI:
+```bash
+kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
+```
+
+Then, restart the kubelet service:
+```bash
+sudo service kubelet restart
+```
+
 ### NOTE: Control Plane script 'cluster_init.sh' wraps steps 1. and 2.
 
 For ease of use, a single script `cluster_init.sh` was created as a function of the "vagrant up" command for the control plane(s) that performs all of the above steps:
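After installing a CNI, nodes typically take a minute or two to reach `Ready`. Readiness can be polled from the control plane; the helper below is a hedged sketch (not a script from this repo) and assumes the default `kubectl get nodes --no-headers` column layout:

```shell
#!/usr/bin/env bash
# Sketch: wait until every node reports Ready after CNI installation.
# Assumes the default "kubectl get nodes --no-headers" columns:
# NAME STATUS ROLES AGE VERSION

# Succeed only when every line's STATUS column is exactly "Ready".
all_ready() {
  awk '$2 != "Ready" { exit 1 }'
}

# Poll the cluster up to N times, 10 s apart (hypothetical helper).
wait_for_ready() {
  local tries=${1:-30} i
  for ((i = 0; i < tries; i++)); do
    if kubectl get nodes --no-headers | all_ready; then
      return 0
    fi
    sleep 10
  done
  return 1
}
```

Run `wait_for_ready` on the control plane after applying the CNI manifest; it returns non-zero if the nodes never settle.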
@@ -229,6 +250,38 @@ worker Ready <none> 2m14s v1.30.x
 
 > **Note**: The nodes may show `NotReady` status initially as the CNI (Container Network Interface) is being configured. Please wait a few minutes for the status to change to `Ready`.
 
+### 5. (Optional) Set Role for Worker Node(s)
+
+As the output above shows, there is no initial role set for worker nodes. You can set their role to "worker" with:
+
+```bash
+vagrant ssh cplane -c "./set_worker_role.sh"
+```
+
+This script can be run any time a new node is added.
+
+### 6. (Optional) Kubernetes Dashboard Installation
+
+The Kubernetes Dashboard is a web UI that allows you to manage your cluster: configure and manage aspects of the system, troubleshoot issues, and get an overview of the applications running on your cluster.
+
+First, log into the control plane node:
+```bash
+vagrant ssh cplane
+```
+
+Execute the Dashboard setup script:
+```bash
+./kub_dashboard.sh <option>
+```
+
+Where `<option>` is one of:
+* worker - deploy the dashboard on any worker node
+* cplane - deploy the dashboard on the control plane
+* token - show the dashboard credentials token (and the dashboard URL)
+
+Normally, the Kubernetes Dashboard would be deployed to one of the worker nodes; this would always be the case in a production
+Kubernetes cluster. However, for a small development cluster, it doesn't hurt to run the dashboard on the control plane.
+
 ### Troubleshooting
 
 If you encounter issues while joining the worker node, try these steps on both nodes:
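Section 5 of the README calls `set_worker_role.sh` without showing its contents. The conventional mechanism behind such a script is a `kubectl label` on each node whose ROLES column is still `<none>`; the sketch below is hypothetical (the helper names and exact label are assumptions, not code from this repo):

```shell
#!/usr/bin/env bash
# Hypothetical sketch of what a worker-role script might do; the repo's
# actual set_worker_role.sh is not shown in this diff and may differ.

# Pure helper: read "kubectl get nodes --no-headers" output on stdin
# and print the names whose ROLES column is "<none>".
unlabeled_nodes() {
  awk '$3 == "<none>" { print $1 }'
}

# Give each role-less node the conventional worker role label.
label_workers() {
  kubectl get nodes --no-headers | unlabeled_nodes | while read -r node; do
    kubectl label node "$node" node-role.kubernetes.io/worker=worker --overwrite
  done
}
```

Running `label_workers` again after adding a node is safe: already-labeled nodes no longer show `<none>` and are skipped.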

Vagrantfile

Lines changed: 13 additions & 119 deletions
@@ -1,7 +1,6 @@
 # Configuration parameters
-VAGRANT_BASE_OS = "bento/ubuntu-24.04" # "bento/ubuntu-22.04"
+VAGRANT_BASE_OS = "bento/ubuntu-24.04"
 PRIVATE_NETWORK = "private_network" # For Host -> VM and VM <-> VM (within the network)
-BASE_CIDR = "10.201.0.0" # Base address for pods
 
 # Create list of one or more Control Plane Nodes (but one is sufficient)
 CPLANE_NODES = [
@@ -12,9 +11,11 @@ CPLANE_NODES = [
 # Mindful of the 'name' and 'ip' values for each
 WORKER_NODES = [
   { name: "worker1", box: VAGRANT_BASE_OS, network: PRIVATE_NETWORK, ip: "192.168.63.12" }
+  # { name: "worker1", box: VAGRANT_BASE_OS, network: PRIVATE_NETWORK, ip: "192.168.63.12" },
+  # { name: "worker2", box: VAGRANT_BASE_OS, network: PRIVATE_NETWORK, ip: "192.168.63.13" }
 ]
 
-# Work out the "/etc/hosts" values to get copied in each node (cplanes and workers)
+# Work out the "/etc/hosts" values to be copied in each node (cplanes and workers)
 ALL_NODES = CPLANE_NODES + WORKER_NODES
 ETC_HOSTS = ALL_NODES.map { |n| "#{n[:ip]} #{n[:name]}" }.join("\n") + "\n"
 
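The Ruby `map`/`join` above emits one `ip name` line per node, newline-terminated, which the provisioners append to each VM's `/etc/hosts`. The same transformation expressed in shell, as a sketch for illustration (the repo does this in Ruby):

```shell
#!/usr/bin/env bash
# Sketch: the ETC_HOSTS transformation in shell. Reads "name ip" pairs
# on stdin and prints "ip name" lines, one per node, mirroring
# ALL_NODES.map { |n| "#{n[:ip]} #{n[:name]}" }.join("\n") + "\n".
build_etc_hosts() {
  local name ip
  while read -r name ip; do
    if [ -n "$name" ]; then
      printf '%s %s\n' "$ip" "$name"
    fi
  done
}
```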
@@ -24,94 +25,20 @@ Vagrant.configure("2") do |config|
   config.vm.define node[:name] do |cplane|
     cplane.vm.box = node[:box]
     cplane.vm.network node[:network], ip: node[:ip]
-    cplane.vm.network "forwarded_port", guest: 443, host: 8443 # Port Forward for k8s dashboard
     cplane.vm.hostname = node[:name]
     cplane.vm.provider "virtualbox" do |v|
       v.name = node[:name]
       v.memory = 2048
       v.cpus = 2
     end
     cplane.vm.provision "shell",
-      env: {
-        "ETC_HOSTS" => ETC_HOSTS,
-        "BASE_CIDR" => BASE_CIDR,
-        "API_SERVER_IP" => node[:ip] # API Server is the control plane host itself
-      },
+      env: { "ETC_HOSTS" => ETC_HOSTS },
       inline: <<-SHELL
-        # Add Nodes to /etc/hosts
-        sudo echo "# Added by Vagrant" >> /etc/hosts
-        sudo echo "#" >> /etc/hosts
-        echo -e "${ETC_HOSTS}" | while read -r hline; do
-          sudo echo ${hline} >> /etc/hosts
-        done
+        # Provision the Control Plane (Base and Specific)
+        /vagrant/scripts/provision/provision_base.sh
+        [ -f "/vagrant/scripts/provision/provision_cplane.sh" ] && /vagrant/scripts/provision/provision_cplane.sh
 
-        # Create Cluster Init Script:
-        echo "#!/bin/bash" > cluster_init.sh
-        echo "echo 'Pulling k8s Images'" >> cluster_init.sh
-        echo "sudo kubeadm config images pull" >> cluster_init.sh
-        echo "echo ''" >> cluster_init.sh
-        echo "echo 'Initializing Cluster'" >> cluster_init.sh
-        echo "sudo kubeadm init --pod-network-cidr=${BASE_CIDR}/16 --apiserver-advertise-address=${API_SERVER_IP}" >> cluster_init.sh
-        echo "if [ -f /etc/kubernetes/admin.conf ] ; then" >> cluster_init.sh
-        echo " echo ''" >> cluster_init.sh
-        echo " echo 'Create local .kube/config'" >> cluster_init.sh
-        echo " mkdir -p \\\${HOME}/.kube" >> cluster_init.sh
-        echo " sudo cp -i /etc/kubernetes/admin.conf \\\${HOME}/.kube/config" >> cluster_init.sh
-        echo " sudo chown \\\$(id -u):\\\$(id -g) \\\${HOME}/.kube/config" >> cluster_init.sh
-        echo " echo ''" >> cluster_init.sh
-        echo " echo 'Install Weave'" >> cluster_init.sh
-        echo " kubectl apply -f https://github.com/weaveworks/weave/releases/download/v2.8.1/weave-daemonset-k8s.yaml" >> cluster_init.sh
-        echo "fi" >> cluster_init.sh
-        chmod a+rx cluster_init.sh
-
-        # Show the k8s join command:
-        echo "#!/bin/bash" > join_cmd.sh
-        echo "echo 'k8s worker join command (may need sudo):'" >> join_cmd.sh
-        echo "echo ''" >> join_cmd.sh
-        echo "kubeadm token create --print-join-command" >> join_cmd.sh
-        echo "echo ''" >> join_cmd.sh
-        sudo chmod a+rx join_cmd.sh
-
-        # Apt Stuff:
-        sudo apt update
-        sudo apt install ca-certificates curl
-        sudo install -m 0755 -d /etc/apt/keyrings
-        sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
-        sudo chmod a+r /etc/apt/keyrings/docker.asc
-
-        # Add the repository to Apt sources:
-        echo \
-          "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
-          $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
-          sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
-        sudo apt update
-        sudo apt install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
-        sudo systemctl enable docker
-        sudo ufw disable
-        sudo swapoff -a
-        sudo apt update && sudo apt install -y apt-transport-https
-        curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
-        sudo apt update
-
-        # apt-transport-https may be a dummy package; if so, you can skip that package
-        sudo apt install -y apt-transport-https ca-certificates curl gpg
-        # Helm Deployment Manager
-        curl -fsSL https://packages.buildkite.com/helm-linux/helm-debian/gpgkey | gpg --dearmor | sudo tee /usr/share/keyrings/helm.gpg > /dev/null
-        echo "deb [signed-by=/usr/share/keyrings/helm.gpg] https://packages.buildkite.com/helm-linux/helm-debian/any/ any main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
-        sudo apt update
-        sudo apt install -y helm
-
-        # Containerd
-        curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
-        echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
-        sudo apt update
-        sudo apt install -y kubelet kubeadm kubectl
-        sudo apt-mark hold kubelet kubeadm kubectl
-        sudo systemctl enable --now kubelet
-        sudo containerd config default | sudo tee /etc/containerd/config.toml
-        sudo sed -i 's/ SystemdCgroup = false/ SystemdCgroup = true/' /etc/containerd/config.toml
-        sudo sed -i 's|sandbox_image = "registry.k8s.io/pause:3.8"|sandbox_image = "registry.k8s.io/pause:3.9"|g' /etc/containerd/config.toml
-        sudo systemctl restart containerd
+        cp -f /vagrant/scripts/cplane/*.sh . 2>/dev/null || true
       SHELL
   end
 end
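The provisioner now just runs `/vagrant/scripts/provision/provision_base.sh` plus a role-specific script, neither of which appears in this diff. A common pattern for such scripts is a stamp-file guard so that `vagrant provision` can be re-run without repeating completed steps; the following is a hypothetical sketch, not the repo's actual implementation:

```shell
#!/usr/bin/env bash
# Hypothetical idempotence guard for a provisioning script.
# STAMP_DIR records which named steps have already completed.
STAMP_DIR=${STAMP_DIR:-/var/lib/provision-stamps}

# run_once STEP CMD ARGS... : run CMD only if STEP hasn't succeeded before.
run_once() {
  local step=$1; shift
  mkdir -p "$STAMP_DIR"
  if [ -f "$STAMP_DIR/$step" ]; then
    echo "skip: $step (already done)"
    return 0
  fi
  "$@" && touch "$STAMP_DIR/$step"
}

# Example (a hypothetical step from the inline provisioner it replaced):
# run_once docker-install sudo apt-get install -y docker-ce
```

With this guard, a failed `vagrant provision` can simply be retried: completed steps are skipped and only the failed step runs again.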
@@ -130,44 +57,11 @@ Vagrant.configure("2") do |config|
     worker.vm.provision "shell",
       env: {"ETC_HOSTS" => ETC_HOSTS},
       inline: <<-SHELL
-        # Add Nodes to /etc/hosts
-        sudo echo "# Added by Vagrant" >> /etc/hosts
-        sudo echo "#" >> /etc/hosts
-        echo -e "${ETC_HOSTS}" | while read -r hline; do
-          sudo echo ${hline} >> /etc/hosts
-        done
-        # Apt Stuff:
-        sudo apt update
-        sudo apt install ca-certificates curl
-        sudo install -m 0755 -d /etc/apt/keyrings
-        sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
-        sudo chmod a+r /etc/apt/keyrings/docker.asc
+        # Provision the Worker (Base and Specific)
+        /vagrant/scripts/provision/provision_base.sh
+        [ -f "/vagrant/scripts/provision/provision_worker.sh" ] && /vagrant/scripts/provision/provision_worker.sh
 
-        # Add the repository to Apt sources:
-        echo \
-          "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
-          $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
-          sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
-        sudo apt update
-        sudo apt install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
-        sudo systemctl enable docker
-        sudo ufw disable
-        sudo swapoff -a
-        sudo apt update && sudo apt install -y apt-transport-https
-        curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
-        sudo apt update
-        # apt-transport-https may be a dummy package; if so, you can skip that package
-        sudo apt install -y apt-transport-https ca-certificates curl gpg
-        curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
-        echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
-        sudo apt update
-        sudo apt install -y kubelet kubeadm kubectl
-        sudo apt-mark hold kubelet kubeadm kubectl
-        sudo systemctl enable --now kubelet
-        sudo containerd config default | sudo tee /etc/containerd/config.toml
-        sudo sed -i 's/ SystemdCgroup = false/ SystemdCgroup = true/' /etc/containerd/config.toml
-        sudo sed -i 's|sandbox_image = "registry.k8s.io/pause:3.8"|sandbox_image = "registry.k8s.io/pause:3.9"|g' /etc/containerd/config.toml
-        sudo systemctl restart containerd
+        cp -f /vagrant/scripts/worker/*.sh . 2>/dev/null || true
       SHELL
   end
 end

instructions.txt

Lines changed: 15 additions & 20 deletions
@@ -1,26 +1,21 @@
-TO SPEED UP THE PROCESS 1ST RUN in the master NODE-
+TO SPEED UP THE PROCESS 1ST RUN in the master NODE:
 - sudo kubeadm config images pull
 
-TO INIT, RUN in the master NODE-
-- sudo kubeadm init --pod-network-cidr=10.201.0.0/16 --apiserver-advertise-address=192.168.63.1
-- you wull get a kubeadm join command run that in you master node after Install the CNI
+TO INIT, RUN in the master NODE:
+- sudo kubeadm init --pod-network-cidr=10.201.0.0/16 --apiserver-advertise-address=192.168.63.11
+- you will get a kubeadm join command; run it on the worker node(s) after installing the CNI
 
-TO INSTALL CNI (weave) -
-- kubectl apply -f https://github.com/weaveworks/weave/releases/download/v2.8.1/weave-daemonset-k8s.yaml
+TO INSTALL CNI (weave):
+- sudo kubectl apply -f https://github.com/weaveworks/weave/releases/download/v2.8.1/weave-daemonset-k8s.yaml
 
-IF face proble while jioning worker node, reatart master node and run below commands-
-sudo kubeadm reset
+If you face problems while joining a worker node, restart the master node and run the commands below:
+- sudo kubeadm reset
+- sudo swapoff -a => all nodes.
+- sudo systemctl restart kubelet
+- sudo iptables -F
+- sudo rm -rf /var/lib/cni/
+- sudo systemctl restart containerd
+- sudo systemctl daemon-reload
 
-sudo swapoff -a => all nodes.
+then try again to join... I hope it will work :')
 
-sudo systemctl restart kubelet
-
-sudo iptables -F
-
-sudo rm -rf /var/lib/cni/
-
-sudo systemctl restart containerd
-
-sudo systemctl daemon-reload
-
-then againg try to join... hope it will work :')
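The recovery steps in `instructions.txt` lend themselves to a single script. Below is a hedged sketch (not part of the repo) that wraps the exact command sequence from the file and adds a `DRY_RUN=1` mode that prints each command instead of executing it:

```shell
#!/usr/bin/env bash
# Sketch: the worker-join recovery steps from instructions.txt in one
# function. Set DRY_RUN=1 to print each command instead of running it.

run() {
  if [ "${DRY_RUN:-0}" = "1" ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

reset_node() {
  run sudo kubeadm reset
  run sudo swapoff -a          # repeat on all nodes
  run sudo systemctl restart kubelet
  run sudo iptables -F
  run sudo rm -rf /var/lib/cni/
  run sudo systemctl restart containerd
  run sudo systemctl daemon-reload
}
```

Calling `DRY_RUN=1 reset_node` first lets you review the destructive steps (the iptables flush and CNI state removal) before running them for real.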

scripts/README.md

Lines changed: 50 additions & 0 deletions
@@ -0,0 +1,46 @@
+# Vagrant Kubernetes Cluster
+
+[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
+[![Vagrant](https://img.shields.io/badge/vagrant-%231563FF.svg?style=for-the-badge&logo=vagrant&logoColor=white)](https://www.vagrantup.com/)
+[![Kubernetes](https://img.shields.io/badge/kubernetes-%23326ce5.svg?style=for-the-badge&logo=kubernetes&logoColor=white)](https://kubernetes.io/)
+[![Ubuntu](https://img.shields.io/badge/Ubuntu-E95420?style=for-the-badge&logo=ubuntu&logoColor=white)](https://ubuntu.com/)
+
+This project sets up a local Kubernetes cluster using Vagrant and VirtualBox. It creates two Ubuntu 24.04 virtual machines: one control plane node and one worker node, with automatic installation of Docker, Kubernetes components, and the necessary configuration.
+
+## Bash Scripts
+
+These scripts make it easy to bring up and maintain the Kubernetes cluster. Some are collections of the manual provisioning commands from the original project (which reduces manual typing errors); others are facilitators that manage the Control Plane and Worker nodes in an easy, repeatable fashion.
+
+<table>
+<tr>
+<td>🚜&nbsp;Provision</td>
+<td>Package installation and service management of the machines</td>
+</tr>
+<tr>
+<td>🚀&nbsp;Cplane</td>
+<td>To spin up and manage the Control Plane</td>
+</tr>
+<tr>
+<td>🛠&nbsp;Worker</td>
+<td>To join up and manage the Worker nodes</td>
+</tr>
+</table>
+
+## 📫 Support & Contribution
+
+If you encounter any issues or need assistance:
+
+[![Create Issue](https://img.shields.io/badge/Create-Issue-green.svg)](https://github.com/yourusername/vagrant-kubernetes/issues/new)
+[![Pull Request](https://img.shields.io/badge/Pull-Request-blue.svg)](https://github.com/yourusername/vagrant-kubernetes/pulls)
+
+## 📝 License
+
+This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
+
+Copyright (c) 2024 Vagrant Kubernetes Cluster
+
+---
+
+<div align="center">
+Made with ❤️ for the Kubernetes community
+</div>
