This repository contains the files used in the dedicated IsItObservable episode on using the Cluster API with a Proxmox cluster.
This tutorial shows how to set up the bootstrap cluster and how to observe it.
We will also utilize the following components:
- the OpenTelemetry Operator
- the Dynatrace Operator to report the health of the bootstrap cluster

All the observability data generated by the environment will be sent to Dynatrace.
The following tools need to be installed on your machine:
- jq
- kubectl
- git
- curl
- Helm
- kind
- ssh-keygen
- Proxmox cluster
If you don't have a Dynatrace tenant, I suggest creating a trial using the following link: Dynatrace Trial
Once you have your tenant, save the Dynatrace tenant URL in the variable DT_TENANT_URL
(for example: https://dedededfrf.live.dynatrace.com)
DT_TENANT_URL=<YOUR TENANT Host>
The Dynatrace Operator will require several tokens:
- a token to deploy and configure the various components
- a token to ingest metrics, logs, and traces

Create a token for the operator with the following scopes:
- Create ActiveGate tokens
- Read entities
- Read Settings
- Write Settings
- Access problem and event feed, metrics and topology
- Read configuration
- Write configuration
- Paas integration - installer downloader
Save the value of the token. We will use it later to store it in a Kubernetes secret.
API_TOKEN=<YOUR TOKEN VALUE>
Create a Dynatrace data ingest token with the following scopes:
- Ingest metrics (metrics.ingest)
- Ingest logs (logs.ingest)
- Ingest events (events.ingest)
- Ingest OpenTelemetry traces (openTelemetryTrace.ingest)
- Read metrics (metrics.read)
DATA_INGEST_TOKEN=<YOUR TOKEN VALUE>
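If you want a quick sanity check, you can send a test data point through the Dynatrace metrics ingest API with the new token (the metric key isitobservable.test is just an arbitrary example):

# A 202 response means the ingest token and tenant URL are working
curl -X POST "${DT_TENANT_URL}/api/v2/metrics/ingest" \
  -H "Authorization: Api-Token ${DATA_INGEST_TOKEN}" \
  -H "Content-Type: text/plain; charset=utf-8" \
  --data "isitobservable.test,source=tutorial 1"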
The clusterctl CLI tool handles the lifecycle of a Cluster API management cluster.
{{#tabs name:"install-clusterctl" tabs:"Linux,macOS,homebrew,Windows"}} {{#tab Linux}}
If you are unsure, you can determine your computer's architecture by running uname -a
Download for AMD64:
curl -L {{#releaselink repo:"https://github.com/kubernetes-sigs/cluster-api" gomodule:"sigs.k8s.io/cluster-api" asset:"clusterctl-linux-amd64" version:"1.10.x"}} -o clusterctl
Download for ARM64:
curl -L {{#releaselink repo:"https://github.com/kubernetes-sigs/cluster-api" gomodule:"sigs.k8s.io/cluster-api" asset:"clusterctl-linux-arm64" version:"1.10.x"}} -o clusterctl
Download for PPC64LE:
curl -L {{#releaselink repo:"https://github.com/kubernetes-sigs/cluster-api" gomodule:"sigs.k8s.io/cluster-api" asset:"clusterctl-linux-ppc64le" version:"1.10.x"}} -o clusterctl
Install clusterctl:
sudo install -o root -g root -m 0755 clusterctl /usr/local/bin/clusterctl
Test to ensure the version you installed is up-to-date:
clusterctl version
{{#/tab }} {{#tab macOS}}
If you are unsure, you can determine your computer's architecture by running uname -a
Download for AMD64:
curl -L {{#releaselink repo:"https://github.com/kubernetes-sigs/cluster-api" gomodule:"sigs.k8s.io/cluster-api" asset:"clusterctl-darwin-amd64" version:"1.10.x"}} -o clusterctl
Download for M1 CPU ("Apple Silicon") / ARM64:
curl -L {{#releaselink repo:"https://github.com/kubernetes-sigs/cluster-api" gomodule:"sigs.k8s.io/cluster-api" asset:"clusterctl-darwin-arm64" version:"1.10.x"}} -o clusterctl
Make the clusterctl binary executable.
chmod +x ./clusterctl
Move the binary into your PATH.
sudo mv ./clusterctl /usr/local/bin/clusterctl
Test to ensure the version you installed is up-to-date:
clusterctl version
{{#/tab }} {{#tab homebrew}}
Install the latest release using homebrew:
brew install clusterctl
Test to ensure the version you installed is up-to-date:
clusterctl version
{{#/tab }} {{#tab Windows}}
Go to the working directory where you want clusterctl downloaded.
Download the latest release; on Windows, type:
curl.exe -L {{#releaselink repo:"https://github.com/kubernetes-sigs/cluster-api" gomodule:"sigs.k8s.io/cluster-api" asset:"clusterctl-windows-amd64.exe" version:"1.10.x"}} -o clusterctl.exe
Append or prepend the path of that directory to the PATH
environment variable.
Test to ensure the version you installed is up-to-date:
clusterctl.exe version
{{#/tab }} {{#/tabs }}
Clone the following repository: https://github.com/kubernetes-sigs/image-builder
git clone https://github.com/kubernetes-sigs/image-builder
cd image-builder
Make sure to have all the requirements described in the following page.
When building the template, make sure the disk format is raw, not qcow2, by setting the following variable (each Packer variable needs its own -var flag):
export PACKER_FLAGS="-var disk_format=raw -var cpu_type=host"
Then follow the dedicated doc to build the image for your Proxmox cluster.
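A minimal sketch of the build itself, assuming the variable names documented on the image-builder Proxmox page (these are assumptions here; verify them against the doc for your version):

# Connection details for the Packer Proxmox builder used by image-builder.
# Note: image-builder's PROXMOX_TOKEN is the token *secret*, which differs
# from the CAPI variables set later in this tutorial.
export PROXMOX_URL="https://<your proxmox url or ip>:8006/api2/json"
export PROXMOX_USERNAME='<user>@pve!<tokenid>'
export PROXMOX_TOKEN='<token secret>'
export PROXMOX_NODE="<node to build on>"
export PROXMOX_ISO_POOL="local"
export PROXMOX_BRIDGE="vmbr0"
export PROXMOX_STORAGE_POOL="local-lvm"
cd images/capi
# Builds an Ubuntu 22.04 template; PACKER_FLAGS from above forces the raw disk format
make build-proxmox-ubuntu-2204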
git clone https://github.com/isitobservable/clusterAPI
cd clusterAPI
This will require several steps:
- create an SSH key pair to connect to the future Proxmox machines
- create a kind cluster
- deploy Cluster API in the kind cluster with the Proxmox infrastructure provider
- create a cluster using clusterctl
- and finally move the Cluster API management resources into Proxmox using clusterctl
Create the local SSH key pair using the following command:
ssh-keygen -t rsa -b 4096
Then copy the public key (.pub) to the local directory.
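For example, you can generate the key pair directly in the working directory, which also matches the VM_SSH_KEYS value used later (the empty -N passphrase is only acceptable for a lab setup):

# Produces ./id_rsa (private key) and ./id_rsa.pub (public key) in the current directory
ssh-keygen -t rsa -b 4096 -f ./id_rsa -N ""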
To achieve this you can either use the UI or the command line. With the command line, connect to one of the Proxmox nodes and run:
pveum user add capmox@pve
pveum aclmod / -user capmox@pve -role PVEAdmin
pveum user token add capmox@pve capi -privsep 0
Once the user is created, save the username, token id, and secret in the following variables.
The token id should look like capmox@pve!capi,
and your token secret should be a UUID.
export PROXMOX_TOKEN='<YOUR TOKEN ID>'
export PROXMOX_SECRET='<YOUR TOKEN SECRET>'
export PROXMOX_URL="https://<your proxmox url or ip>:8006"
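You can verify the token against the Proxmox API before going further. A minimal check, assuming a self-signed lab certificate (hence -k):

# Expect a JSON version payload back if the token is valid
curl -k -H "Authorization: PVEAPIToken=${PROXMOX_TOKEN}=${PROXMOX_SECRET}" \
  "${PROXMOX_URL}/api2/json/version"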
All the steps to configure the Proxmox cluster require creating a set of variables related to your Proxmox instance:
export PROXMOX_SOURCENODE="<proxmox node name holding the template VM built by the image builder>"
# The template VM ID used for cloning VMs
export TEMPLATE_VMID=<ID of your template created by the image builder>
# The ssh authorized keys used to ssh to the machines.
export VM_SSH_KEYS="id_rsa"
# The future ip used for the k8s control plane
export CONTROL_PLANE_ENDPOINT_IP=10.0.0.4
# The IP ranges for Cluster nodes
export NODE_IP_RANGES="[10.0.0.5-10.0.0.50, 10.0.0.55-10.0.0.70]"
# The gateway for the machines network-config.
export GATEWAY="10.0.0.1"
# Subnet Mask in CIDR notation for your node IP ranges
export IP_PREFIX=24
# The Proxmox network device for VMs
export BRIDGE="vmbr0"
# The dns nameservers for the machines network-config.
export DNS_SERVERS="[10.0.0.1]"
# The Proxmox nodes used for VM deployments
export ALLOWED_NODES="[<List of the proxmox nodes>]"
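Before moving on, a quick sanity check can save a failed deployment later. A minimal sketch (bash-specific, since it uses indirect expansion) that only verifies the variables above are non-empty:

# Print an error for any required variable that is still unset or empty
for v in PROXMOX_URL PROXMOX_TOKEN PROXMOX_SECRET PROXMOX_SOURCENODE TEMPLATE_VMID \
         VM_SSH_KEYS CONTROL_PLANE_ENDPOINT_IP NODE_IP_RANGES GATEWAY IP_PREFIX \
         BRIDGE DNS_SERVERS ALLOWED_NODES; do
  [ -n "${!v}" ] || echo "missing: $v"
done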
Configure the settings for the future VMs created by clusterctl:
export BOOT_VOLUME_DEVICE="scsi0"
export BOOT_VOLUME_SIZE="15"
export NUM_SOCKETS="2"
export NUM_CORES="2"
export MEMORY_MIB="8048"
export EXP_CLUSTER_RESOURCE_SET="true"
export CLUSTER_TOPOLOGY="true"
kind create cluster
clusterctl init --infrastructure proxmox --ipam in-cluster
clusterctl generate cluster capi-host \
--kubernetes-version v1.32.0 \
--control-plane-machine-count=1 \
--worker-machine-count=2 \
> capi.yaml
The generated capi.yaml file will by default use the Proxmox disk format qcow2. Depending on your Proxmox version, qcow2 may not be supported. Let's replace qcow2 with raw in the generated manifest file.
sed -i '' "s,qcow2,raw," capi.yaml
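Note that the -i '' form above is the BSD/macOS sed syntax; on Linux, GNU sed takes -i without the empty argument:

# GNU sed (Linux) equivalent of the in-place replacement above
sed -i "s,qcow2,raw," capi.yaml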
Now we can create our first cluster in Proxmox:
kubectl apply -f capi.yaml
We need to check the deployment progress using the following command:
clusterctl describe cluster capi-host
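You can also watch the underlying Cluster API objects directly, since Machine is a standard CAPI resource:

# Watch machine provisioning across all namespaces
kubectl get machines -A -w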
Once the control plane and the worker node machines are deployed, we need to deploy a CNI to get the control plane and the various worker nodes ready.
Let's retrieve the kubeconfig of our cluster:
clusterctl get kubeconfig capi-host > capi-host.kubeconfig
Then we can install the CNI of your choice; in my case I'm deploying Calico:
kubectl --kubeconfig=./capi-host.kubeconfig apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/calico.yaml
Let's watch the nodes until they are all ready:
kubectl --kubeconfig=./capi-host.kubeconfig get nodes -w
Now install the Cluster API components into the new Proxmox-hosted cluster and pivot the management resources into it:
clusterctl init --infrastructure proxmox --ipam in-cluster --kubeconfig ./capi-host.kubeconfig
clusterctl move --to-kubeconfig ./capi-host.kubeconfig
Once the bootstrap cluster has moved, we can delete our kind cluster:
kind delete cluster
Merge the new cluster's kubeconfig into your default kubeconfig (keeping a backup first):
cp ~/.kube/config ~/.kube/config-bck
export KUBECONFIG=~/.kube/config:./capi-host.kubeconfig
kubectl config view
kubectl config view --flatten > one-config.yaml
mv one-config.yaml ~/.kube/config
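You can confirm the merge worked by listing the available contexts:

# The capi-host context should now appear alongside your previous ones
kubectl config get-contexts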
NAME="PROXMOX-BOOTSTRAP"
chmod 777 deployment.sh
./deployment.sh --clustername "${NAME}" --dturl "${DT_TENANT_URL}" --dtingesttoken "${DATA_INGEST_TOKEN}" --dtoperatortoken "${API_TOKEN}"
Use the kubeconfig context of our Cluster API management cluster, now stored in our Proxmox cluster, and run:
clusterctl generate cluster capi-test \
--kubernetes-version v1.32.0 \
--control-plane-machine-count=1 \
--worker-machine-count=2 \
> capi_test.yaml
kubectl apply -f capi_test.yaml
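As before, you can follow the provisioning of this new cluster with clusterctl:

# Shows the rollout status of the capi-test workload cluster
clusterctl describe cluster capi-test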
Let's deploy the dashboard located at dynatrace/clusterapi.json.
In Dynatrace, open the Dashboards application and click on Upload.
This dashboard will keep track of the Cluster API using:
- Metrics
- Logs
- and the status shared in the Cluster API CRDs
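The CRD status the dashboard relies on can also be inspected directly with kubectl, for example:

# Cluster and Machine conditions reported by the Cluster API controllers
kubectl get clusters,machines -A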