57 changes: 41 additions & 16 deletions CONTRIBUTING.md
@@ -31,7 +31,7 @@ make -C capi-lab node-network firewall
A basic cluster configuration that relies on `config/clusterctl-templates/cluster-template.yaml` and uses the aforementioned node network can be generated and applied to the management cluster using a make target.

```bash
make apply-sample-cluster
make -C capi-lab apply-sample-cluster
```

Once the control plane node has phoned home, run:
@@ -43,20 +43,14 @@ make -C capi-lab mtu-fix
After the control plane node has been provisioned, you can obtain the kubeconfig like this:

```bash
kubectl get secret metal-test-kubeconfig -o jsonpath='{.data.value}' | base64 -d > .capms-cluster-kubeconfig.yaml
```

For now, the provider ID has to be manually added to the node object because we did not integrate the [metal-ccm](https://github.com/metal-stack/metal-ccm) yet:

```bash
kubectl --kubeconfig=.capms-cluster-kubeconfig.yaml patch node <control-plane-node-name> --patch='{"spec":{"providerID": "metal://<machine-id>"}}'
kubectl get secret metal-test-kubeconfig -o jsonpath='{.data.value}' | base64 -d > capi-lab/.capms-cluster-kubeconfig.yaml
```
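
For a quick sanity check, you can list the cluster's nodes with the freshly written kubeconfig (they will report `NotReady` until a CNI has been deployed):

```bash
kubectl --kubeconfig=capi-lab/.capms-cluster-kubeconfig.yaml get nodes -o wide
```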

You are now expected to deploy a CNI to the cluster:

```bash
kubectl --kubeconfig=.capms-cluster-kubeconfig.yaml create -f https://raw.githubusercontent.com/projectcalico/calico/v3.28.2/manifests/tigera-operator.yaml
cat <<EOF | kubectl --kubeconfig=.capms-cluster-kubeconfig.yaml create -f -
kubectl --kubeconfig=capi-lab/.capms-cluster-kubeconfig.yaml create -f https://raw.githubusercontent.com/projectcalico/calico/v3.28.2/manifests/tigera-operator.yaml
cat <<EOF | kubectl --kubeconfig=capi-lab/.capms-cluster-kubeconfig.yaml create -f -
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
@@ -81,10 +75,42 @@ EOF
> [!NOTE]
> Ideally, Calico should be configured with BGP (no overlay), eBPF, and DSR. An example will be proposed in this repository at a later point in time.

As soon as the worker node was provisioned, the same provider ID patch as above is required:
The node's provider ID is set by the [metal-ccm](https://github.com/metal-stack/metal-ccm), which needs to be deployed into the cluster:

```bash
make -C capi-lab deploy-metal-ccm
```
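
Once the metal-ccm is running, it sets the provider IDs on the nodes; you can verify this, for example, with:

```bash
kubectl --kubeconfig=capi-lab/.capms-cluster-kubeconfig.yaml get nodes \
  -o custom-columns=NAME:.metadata.name,PROVIDERID:.spec.providerID
```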

If you want the metal-ccm to provide services of type LoadBalancer through MetalLB, you need to deploy MetalLB:

```bash
kubectl --kubeconfig=.capms-cluster-kubeconfig.yaml patch node <worker-node-name> --patch='{"spec":{"providerID": "metal://<machine-id>"}}'
kubectl --kubeconfig capi-lab/.capms-cluster-kubeconfig.yaml apply --kustomize capi-lab/metallb
```
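
Before continuing, you can check that the MetalLB pods in the `metallb-system` namespace come up, for example with:

```bash
kubectl --kubeconfig=capi-lab/.capms-cluster-kubeconfig.yaml -n metallb-system get pods
```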

For each node in your Kubernetes cluster, you need to create a BGP peer configuration. Replace the placeholders (`${NODE_ASN}`, `${NODE_HOSTNAME}`, and `${NODE_ROUTER_ID}`) with the appropriate values for each node, either by editing the manifest or by exporting them as shell variables before running the command below.

```bash
cat <<EOF | kubectl --kubeconfig=capi-lab/.capms-cluster-kubeconfig.yaml create -f -
apiVersion: metallb.io/v1beta2
kind: BGPPeer
metadata:
name: ${NODE_HOSTNAME}
namespace: metallb-system
spec:
holdTime: 1m30s
keepaliveTime: 0s
myASN: ${NODE_ASN}
nodeSelectors:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- ${NODE_HOSTNAME}
passwordSecret: {}
peerASN: ${NODE_ASN}
peerAddress: ${NODE_ROUTER_ID}
EOF
```
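
If you prefer not to paste the manifest once per node, a loop sketch like the following applies it for several nodes at once; `bgp-peer-template.yaml` is a hypothetical file containing the manifest above, and the hostnames, ASNs, and router IDs are placeholders that you have to look up for your own nodes:

```bash
# Sketch: apply the BGPPeer template for several nodes in one go.
# The values below are placeholders, not real node data.
while read -r NODE_HOSTNAME NODE_ASN NODE_ROUTER_ID; do
  export NODE_HOSTNAME NODE_ASN NODE_ROUTER_ID
  envsubst < bgp-peer-template.yaml \
    | kubectl --kubeconfig=capi-lab/.capms-cluster-kubeconfig.yaml create -f -
done <<'EOF'
node-a 4200000001 10.1.0.1
node-b 4200000002 10.1.0.2
EOF
```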

That's it!
@@ -112,21 +138,20 @@ make install
make deploy IMG=<some-registry>/cluster-api-provider-metal-stack:tag
```

> **NOTE**: If you encounter RBAC errors, you may need to grant yourself cluster-admin
privileges or be logged in as admin.
> **NOTE**: If you encounter RBAC errors, you may need to grant yourself cluster-admin privileges or be logged in as admin.
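
For example, a cluster-admin binding for your user could be created like this (the binding name and user are placeholders):

```sh
kubectl create clusterrolebinding my-cluster-admin-binding \
  --clusterrole=cluster-admin \
  --user=<your-user>
```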

**Create instances of your solution**
You can apply the sample cluster configuration:

```sh
make apply-sample-cluster
make -C capi-lab apply-sample-cluster
```

### To Uninstall
**Delete the instances (CRs) from the cluster:**

```sh
make delete-sample-cluster
make -C capi-lab delete-sample-cluster
```

**Delete the APIs (CRDs) from the cluster:**
33 changes: 0 additions & 33 deletions Makefile
@@ -241,36 +241,3 @@ mv $(1) $(1)-$(3) ;\
} ;\
ln -sf $(1)-$(3) $(1)
endef

# mini-lab developer environment

export METAL_PARTITION ?= mini-lab
export METAL_PROJECT_ID ?= 00000000-0000-0000-0000-000000000001
export METAL_NODE_NETWORK_ID ?= $(shell metalctl network list --name metal-test -o template --template '{{ .id }}')
export CONTROL_PLANE_MACHINE_IMAGE ?= ubuntu-24.04
export CONTROL_PLANE_MACHINE_SIZE ?= v1-small-x86
export WORKER_MACHINE_IMAGE ?= ubuntu-24.04
export WORKER_MACHINE_SIZE ?= v1-small-x86

.PHONY: up
up: bake deploy-cloud-stack

.PHONY: apply-sample-cluster
apply-sample-cluster: generate manifests
clusterctl generate cluster metal-test \
--kubeconfig=$(KUBECONFIG) \
--worker-machine-count 1 \
--control-plane-machine-count 1 \
--kubernetes-version 1.30.6 \
--from config/clusterctl-templates/cluster-template.yaml \
| kubectl --kubeconfig=$(KUBECONFIG) apply -f -

.PHONY: delete-sample-cluster
delete-sample-cluster: generate manifests
clusterctl generate cluster metal-test \
--kubeconfig=$(KUBECONFIG) \
--worker-machine-count 1 \
--control-plane-machine-count 1 \
--kubernetes-version 1.30.6 \
--from config/clusterctl-templates/cluster-template.yaml \
| kubectl --kubeconfig=$(KUBECONFIG) delete -f -
8 changes: 2 additions & 6 deletions README.md
@@ -57,9 +57,5 @@ clusterctl generate cluster example --kubernetes-version v1.30.6 --infrastructur
> Due to the early development stage, the following manual actions are needed for the cluster to operate.

1. The firewall needs to be created manually.
2. You need to install your CNI of choice. This is required due to CAPI.
3. Control plane and worker nodes need to be patched.

```bash
kubectl patch node <worker-node-name> --patch='{"spec":{"providerID": "metal://<machine-id>"}}'
```
2. The metal-ccm has to be deployed.
3. You need to install your CNI of choice. This is required due to CAPI.
39 changes: 39 additions & 0 deletions capi-lab/Makefile
@@ -5,9 +5,20 @@ ANSIBLE_EXTRA_VARS_FILE=$(shell pwd)/mini-lab-overrides/extra-vars.yaml
KIND_EXPERIMENTAL_DOCKER_NETWORK=mini_lab_ext
KUBECONFIG := $(shell pwd)/mini-lab/.kubeconfig
MINI_LAB_FLAVOR=capms

METAL_API_URL=http://metal.203.0.113.1.nip.io:8080
METAL_API_HMAC=metal-admin
METALCTL_API_URL=http://metal.203.0.113.1.nip.io:8080
METALCTL_HMAC=metal-admin

METAL_PARTITION ?= mini-lab
METAL_PROJECT_ID ?= 00000000-0000-0000-0000-000000000001

CONTROL_PLANE_MACHINE_IMAGE ?= ubuntu-24.04
CONTROL_PLANE_MACHINE_SIZE ?= v1-small-x86
WORKER_MACHINE_IMAGE ?= ubuntu-24.04
WORKER_MACHINE_SIZE ?= v1-small-x86

IMG ?= ghcr.io/metal-stack/cluster-api-metal-stack-controller:latest

.PHONY: up
@@ -47,7 +58,35 @@ firewall:
node-network:
metalctl network allocate --description "node network for metal-test cluster" --name metal-test --project 00000000-0000-0000-0000-000000000001 --partition mini-lab

.PHONY: apply-sample-cluster
apply-sample-cluster:
$(eval METAL_NODE_NETWORK_ID = $(shell metalctl network list --name metal-test -o template --template '{{ .id }}'))
clusterctl generate cluster metal-test \
--kubeconfig=$(KUBECONFIG) \
--worker-machine-count 1 \
--control-plane-machine-count 1 \
--kubernetes-version 1.30.6 \
--from ../config/clusterctl-templates/cluster-template.yaml \
| kubectl --kubeconfig=$(KUBECONFIG) apply -f -

.PHONY: delete-sample-cluster
delete-sample-cluster:
$(eval METAL_NODE_NETWORK_ID = $(shell metalctl network list --name metal-test -o template --template '{{ .id }}'))
clusterctl generate cluster metal-test \
--kubeconfig=$(KUBECONFIG) \
--worker-machine-count 1 \
--control-plane-machine-count 1 \
--kubernetes-version 1.30.6 \
--from ../config/clusterctl-templates/cluster-template.yaml \
| kubectl --kubeconfig=$(KUBECONFIG) delete -f -

.PHONY: mtu-fix
mtu-fix:
cd mini-lab && ssh -F files/ssh/config leaf01 'ip link set dev vtep-1001 mtu 9100 && echo done'
cd mini-lab && ssh -F files/ssh/config leaf02 'ip link set dev vtep-1001 mtu 9100 && echo done'

.PHONY: deploy-metal-ccm
deploy-metal-ccm:
$(eval METAL_CLUSTER_ID = $(shell kubectl get metalstackclusters.infrastructure.cluster.x-k8s.io metal-test -ojsonpath='{.metadata.uid}'))
$(eval METAL_NODE_NETWORK_ID = $(shell metalctl network list --name metal-test -o template --template '{{ .id }}'))
cat metal-ccm.yaml | envsubst | kubectl --kubeconfig=.capms-cluster-kubeconfig.yaml apply -f -
6 changes: 6 additions & 0 deletions capi-lab/firewall-rules.yaml
@@ -13,6 +13,12 @@ egress:
protocol: TCP
to:
- 0.0.0.0/0
- comment: allow outgoing traffic to control plane for ccm
ports:
- 8080
protocol: TCP
to:
- 203.0.113.0/24
- comment: allow outgoing DNS and NTP traffic via UDP
ports:
- 53