This project provisions a 3-node Kubernetes cluster on your local machine using Vagrant (libvirt provider) and Kubespray.
- Linux host with KVM + Vagrant enabled
  - Verify hardware virtualization support: `egrep -c '(vmx|svm)' /proc/cpuinfo` should be > 0
  - Install/config guide: infra-misc vagrant
- Packages:
  - `docker`, to run the Kubespray container (see Install Docker Engine on Ubuntu)
  - `make`
- SSH key pair for the Vagrant nodes:
  - Private: `~/.ssh/id_vagrant`
  - Public: `~/.ssh/id_vagrant.pub`
  - The public key is injected into the VMs as their `authorized_keys` (see `conf/Vagrantfile`)
Create the `id_vagrant` key pair in your `~/.ssh` directory:

```sh
ssh-keygen -t ed25519 -C "vagrant@dev-cluster" -f ~/.ssh/id_vagrant -N ""
```

Ensure the permissions are correct (usually set by default):

```sh
chmod 600 ~/.ssh/id_vagrant
chmod 644 ~/.ssh/id_vagrant.pub
```

- Create the libvirt storage pool used by the VMs:
  ```sh
  bash scripts/pool.sh
  ```
- Create the libvirt network `vagrant-10-10.8` with subnet `10.10.8.0/24`:
  ```sh
  bash scripts/net.sh
  ```
  This defines a bridge `virbr-k8s` and the DHCP range 10.10.8.100-200. The nodes use static IPs:
  - dev-kubernetes-1: 10.10.8.11
  - dev-kubernetes-2: 10.10.8.12
  - dev-kubernetes-3: 10.10.8.13
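For reference, a minimal sketch of the network definition that `scripts/net.sh` might apply (e.g. via `virsh net-define`). This is illustrative only: the name, bridge, and DHCP range come from this README, while the gateway address `10.10.8.1` is an assumption.

```xml
<!-- Illustrative sketch; see scripts/net.sh for the real definition.
     Gateway address 10.10.8.1 is assumed. -->
<network>
  <name>vagrant-10-10.8</name>
  <bridge name="virbr-k8s"/>
  <ip address="10.10.8.1" netmask="255.255.255.0">
    <dhcp>
      <range start="10.10.8.100" end="10.10.8.200"/>
    </dhcp>
  </ip>
</network>
```

The static node IPs (.11-.13) sit below the DHCP range, so fixed and dynamic addresses cannot collide.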
From the repo root:

```sh
make kubernetes
```

This performs:

- `vagrant up` with provider `${PROVIDER:-libvirt}` to create 3 Debian 12 VMs (see `conf/Vagrantfile`)
- Runs Kubespray in a container to configure Kubernetes using the inventory in `inventory/k8s_cluster`
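The inventory uses the standard Kubespray/Ansible INI layout. A hedged sketch of what `inventory/k8s_cluster/inventory.ini` might contain for these three nodes — the group names follow upstream Kubespray convention, but the control-plane/etcd role placement here is an assumption:

```ini
; Illustrative sketch only; the real file may assign roles differently.
[kube_control_plane]
dev-kubernetes-1 ansible_host=10.10.8.11 ip=10.10.8.11

[etcd]
dev-kubernetes-1

[kube_node]
dev-kubernetes-2 ansible_host=10.10.8.12 ip=10.10.8.12
dev-kubernetes-3 ansible_host=10.10.8.13 ip=10.10.8.13

[k8s_cluster:children]
kube_control_plane
kube_node
```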
You can also run the steps individually:

```sh
make nodes-up   # start VMs only
make kubespray  # run Kubespray only
```

- Kubespray is executed via `scripts/kubespray.sh`, which runs the image `quay.io/kubespray/kubespray:v2.28.0` with host networking and binds:
  - `inventory/k8s_cluster` → `/inventory`
  - `conf/ssh.conf` → `/root/.ssh/config.orig` (used for host aliasing)
  - `~/.ssh/id_vagrant` → `/root/.ssh/id_vagrant.orig` (private key for Ansible)
- Credentials are generated under `/var/tmp/kube-certs` and mounted to `/inventory/k8s_cluster/credentials`
- Inside the container, `scripts/entrypoint.sh` runs:
  ```sh
  ansible-playbook -i /inventory/k8s_cluster/inventory.ini cluster.yml
  ```

```sh
make nodes-status
```
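For context, `conf/ssh.conf` supplies the host aliases that Ansible resolves inside the container. A hedged sketch of one entry — the hostname and IP come from this README, while the user and the remaining options are assumptions (Vagrant boxes conventionally use the `vagrant` user):

```
# Illustrative sketch only; see conf/ssh.conf for the real entries.
Host dev-kubernetes-1
    HostName 10.10.8.11
    User vagrant
    IdentityFile ~/.ssh/id_vagrant
    StrictHostKeyChecking no
```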
```sh
make login                       # SSH into dev-kubernetes-1 (password: vagrant)
make nodes-ssh-dev-kubernetes-2
```

- Inventory: `inventory/k8s_cluster/inventory.ini`
- Global vars: `inventory/k8s_cluster/group_vars/all/*.yml`
- Cluster vars: `inventory/k8s_cluster/group_vars/k8s_cluster/*.yml`
Defaults include:

- Container runtime: `containerd`
- CNI: `calico` (`kube_network_plugin: calico`)
- API server LB port: `6443`
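These defaults typically live in the cluster group_vars. An illustrative excerpt — the variable names follow upstream Kubespray, but the exact file contents here may differ:

```yaml
# inventory/k8s_cluster/group_vars/k8s_cluster/k8s-cluster.yml (illustrative excerpt)
kube_network_plugin: calico    # CNI
container_manager: containerd  # container runtime
kube_apiserver_port: 6443      # API server LB port
```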
Adjust as needed before running `make kubernetes`.
Note: The cluster configuration files are copied from upstream Kubespray. You can modify them or replace them entirely with a newer Kubespray release (don't forget to update the Docker image version as well); just make sure `inventory/k8s_cluster/inventory.ini` matches your nodes, IPs, and SSH settings.
```sh
make nodes-down     # stop VMs
make nodes-destroy  # destroy VMs
```

- Ensure the libvirt network `vagrant-10-10.8` exists:
  ```sh
  virsh net-list --all | grep vagrant-10-10.8 || bash scripts/net.sh
  ```
- Ensure the storage pool `vagrant_pool` exists and is active:
  ```sh
  virsh pool-list --all | grep vagrant_pool || bash scripts/pool.sh
  ```
- Verify that `~/.ssh/id_vagrant(.pub)` exist and have proper permissions.
- If `vagrant up` fails due to provider issues, install the `vagrant-libvirt` plugin:
  ```sh
  vagrant plugin install vagrant-libvirt
  ```

- Box: `generic/debian12`
- Each VM: 4 vCPU, 4 GB RAM, 20 GB disk (see `conf/Vagrantfile`)
- Host DNS inside the VMs is set to Cloudflare (`1.1.1.1`) during provisioning.
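As a rough sketch (not the actual `conf/Vagrantfile`), the per-node settings above map onto `vagrant-libvirt` provider options roughly like this — the option names follow the vagrant-libvirt plugin, but the loop structure and network style are assumptions:

```ruby
# Illustrative sketch only; see conf/Vagrantfile for the real definition.
Vagrant.configure("2") do |config|
  config.vm.box = "generic/debian12"
  (1..3).each do |i|
    config.vm.define "dev-kubernetes-#{i}" do |node|
      node.vm.network :private_network, ip: "10.10.8.1#{i}"  # 10.10.8.11-13
      node.vm.provider :libvirt do |lv|
        lv.cpus = 4
        lv.memory = 4096              # MB
        lv.machine_virtual_size = 20  # GB disk
      end
    end
  end
end
```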