diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index 941c975..c4e153f 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -44,6 +44,8 @@ When the control plane node was provisioned, you can obtain the kubeconfig like:
 
 ```bash
 kubectl get secret metal-test-kubeconfig -o jsonpath='{.data.value}' | base64 -d > capi-lab/.capms-cluster-kubeconfig.yaml
+# alternatively:
+clusterctl get kubeconfig metal-test > capi-lab/.capms-cluster-kubeconfig.yaml
 ```
 
 It is now expected to deploy a CNI to the cluster:
diff --git a/README.md b/README.md
index c5df2ec..25c741f 100644
--- a/README.md
+++ b/README.md
@@ -5,11 +5,11 @@ The Cluster API provider for metal-stack (CAPMS) implements the declarative mana
 > [!CAUTION]
 > This project is currently under heavy development and is not advised to be used in production any time soon.
 > Please use our stack on top of [Gardener](https://docs.metal-stack.io/stable/installation/deployment/#Gardener-with-metal-stack) instead.
-> User documentation will follow as soon. Until then head to our [CONTRIBUTING.md](/CONTRIBUTING.md)
+> User documentation will follow soon. Until then, head to our [CONTRIBUTING.md](/CONTRIBUTING.md).
 
-Currently we provide the following custom resources:
+Currently, we provide the following custom resources:
 
-- [`MetalStackCluster`](./api/v1alpha1/metalstackcluster_types.go) can be used as [infrastructure cluster](https://cluster-api.sigs.k8s.io/developer/providers/contracts/infra-cluster) and ensures that the metal-stack network and firewall are being prepared.
+- [`MetalStackCluster`](./api/v1alpha1/metalstackcluster_types.go) can be used as an [infrastructure cluster](https://cluster-api.sigs.k8s.io/developer/providers/contracts/infra-cluster) and ensures that there is a control plane IP for the cluster.
 - [`MetalStackMachine`](./api/v1alpha1/metalstackmachine_types.go) bridges between [infrastructure machines](https://cluster-api.sigs.k8s.io/developer/providers/contracts/infra-machine) and metal-stack machines.
 
 > [!note]
@@ -20,42 +20,172 @@ Currently we provide the following custom resources:
 **Prerequisites:**
 
-- a running metal-stack installation
-- CRDs for Prometheus
-- CRDs for the Firewall Controller Manager
+- A running metal-stack installation. See our [installation](https://docs.metal-stack.io/stable/installation/deployment/) section on how to get started with metal-stack.
+- A management cluster with network access to the metal-stack infrastructure.
+- The metalctl CLI for communicating with the metal-api. Installation instructions can be found in the corresponding [repository](https://github.com/metal-stack/metalctl); a sample configuration is sketched below this list.
+- The clusterctl CLI. Installation instructions can be found in the [Cluster API quickstart](https://cluster-api.sigs.k8s.io/user/quick-start).
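+
+For reference, metalctl reads its configuration from `~/.metalctl/config.yaml`. A minimal sketch (context name and values are placeholders for your own installation):
+
+```yaml
+# ~/.metalctl/config.yaml
+current: capms
+contexts:
+  capms:
+    # endpoint and HMAC of the metal-api to talk to
+    url: <url>
+    hmac: <hmac>
+```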
 
-First add the metal-stack infrastructure provider to your `clusterctl.yaml`:
+First, add the metal-stack infrastructure provider to your `clusterctl.yaml`:
 
 ```yaml
 # ~/.config/cluster-api/clusterctl.yaml
 providers:
   - name: "metal-stack"
-    url: "https://github.com/metal-stack/cluster-api-provider-metal-stack/releases/latest/infrastructure-components.yaml"
+    url: "https://github.com/metal-stack/cluster-api-provider-metal-stack/releases/latest/download/infrastructure-components.yaml"
     type: InfrastructureProvider
 ```
 
-Now you are able to install the CAPMS into your cluster:
+Now you can install the CAPMS into your management cluster:
 
 ```bash
-export METAL_API_URL=http://metal.203.0.113.1.nip.io:8080
-export METAL_API_HMAC=metal-admin
+# export the following environment variables
+export METAL_API_URL=<url>
+export METAL_API_HMAC=<hmac>
 export EXP_KUBEADM_BOOTSTRAP_FORMAT_IGNITION=true
 
+# initialize the management cluster
 clusterctl init --infrastructure metal-stack
 ```
 
-Now you should be able to create Clusters on top of metal-stack.
-For your first cluster it is advised to start with our generated template.
+> [!CAUTION]
+> **Manual steps needed:**
+> Due to the early development stage, some metal-stack resources are not yet reconciled automatically and need to be created manually for the cluster to operate.
+
+First, a node network needs to be created:
 
 ```bash
-# to display all env variables that need to be set
-clusterctl generate cluster example --kubernetes-version v1.30.6 --infrastructure metal-stack --list-variables
+export METAL_PARTITION=<partition-id>
+export METAL_PROJECT_ID=<project-id>
+metalctl network allocate --description "<description>" --name <network-name> --project $METAL_PROJECT_ID --partition $METAL_PARTITION
+
+# export the network ID for use in the next steps
+export METAL_NODE_NETWORK_ID=$(metalctl network list --name <network-name> -o template --template '{{ .id }}')
 ```
 
-> [!CAUTION]
-> **Manual steps needed:**
-> Due to the early development stage the following manual actions are needed for the cluster to operate.
+Next, a firewall needs to be created with appropriate firewall rules. An example rule set can be found at [firewall-rules.yaml](capi-lab/firewall-rules.yaml).
 
+```bash
+# export environment variables for the firewall image and size
+export FIREWALL_MACHINE_IMAGE=<image-id>
+export FIREWALL_MACHINE_SIZE=<size-id>
+
+metalctl firewall create --description "<description>" --name <firewall-name> --hostname <hostname> --project $METAL_PROJECT_ID --partition $METAL_PARTITION --image $FIREWALL_MACHINE_IMAGE --size $FIREWALL_MACHINE_SIZE --firewall-rules-file=<path-to-firewall-rules.yaml> --networks internet,$METAL_NODE_NETWORK_ID
+```
+
+For your first cluster, it is advised to start with our generated template.
+
+```bash
+# display the required environment variables
+clusterctl generate cluster <cluster-name> --infrastructure metal-stack --list-variables
+
+# set the additional environment variables
+export CONTROL_PLANE_MACHINE_IMAGE=<image-id>
+export CONTROL_PLANE_MACHINE_SIZE=<size-id>
+export WORKER_MACHINE_IMAGE=<image-id>
+export WORKER_MACHINE_SIZE=<size-id>
+
+# generate the cluster manifest
+clusterctl generate cluster <cluster-name> --kubernetes-version v1.30.6 --infrastructure metal-stack
+```
+
+Apply the generated manifest from the `clusterctl` output:
+
+```bash
+kubectl apply -f <path-to-generated-manifest>
+```
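+
+While the machines are being provisioned, progress can be watched from the management cluster. A short sketch, assuming the generated cluster is named `metal-test`:
+
+```bash
+# show the cluster and its machines as a tree, including readiness conditions
+clusterctl describe cluster metal-test
+
+# list the Cluster API machine objects and their current phases
+kubectl get machines
+```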
+Once your control plane and worker machines have been provisioned, you need to install a CNI of your choice into the created cluster. Cluster API requires a CNI before the cluster can become operational. An example using Calico is provided below:
+
+```bash
+# get the kubeconfig
+clusterctl get kubeconfig metal-test > capms-cluster.kubeconfig
+
+# install the calico operator
+kubectl --kubeconfig=capms-cluster.kubeconfig create -f https://raw.githubusercontent.com/projectcalico/calico/v3.28.2/manifests/tigera-operator.yaml
+
+# install the calico CNI (a minimal BGP-enabled installation;
+# adjust the ipPool to match the POD_CIDR the cluster was generated with)
+cat <<EOF | kubectl --kubeconfig=capms-cluster.kubeconfig apply -f -
+apiVersion: operator.tigera.io/v1
+kind: Installation
+metadata:
+  name: default
+spec:
+  calicoNetwork:
+    bgp: Enabled
+    ipPools:
+      - cidr: 10.240.0.0/12
+        encapsulation: None
+EOF
+```
+
+As metal-stack machines are connected to their networks through BGP, Calico needs a `BGPPeer` resource for every worker node:
+
+```bash
+# gather the required information from the metal-api
+export NODE_ID=<machine-id-of-the-worker>
+export NODE_HOSTNAME=$(metalctl machine describe $NODE_ID -o template --template '{{ .allocation.hostname }}')
+export NODE_ASN=$(metalctl machine describe $NODE_ID -o template --template '{{ printf "%.0f" (index .allocation.networks 0).asn }}')
+export NODE_ROUTER_ID=$(metalctl machine describe $NODE_ID -o template --template '{{ (index (index .allocation.networks 0).ips 0) }}')
+
+# for each worker machine generate and apply the BGPPeer resource
+cat <<EOF | kubectl --kubeconfig=capms-cluster.kubeconfig apply -f -
+apiVersion: projectcalico.org/v3
+kind: BGPPeer
+metadata:
+  name: ${NODE_HOSTNAME}
+spec:
+  nodeSelector: kubernetes.io/hostname == '${NODE_HOSTNAME}'
+  peerIP: ${NODE_ROUTER_ID}
+  asNumber: ${NODE_ASN}
+EOF
+```
+
+## FAQ
+
+### How do I get a static control plane IP?
+
+Create a static IP in advance and set it as the `MetalStackCluster`'s `.spec.controlPlaneIP`.
+
+```bash
+metalctl network ip create --name <ip-name> --project $METAL_PROJECT_ID --type static
+```
+
+### I'd like to have a specific Pod CIDR. How can I achieve this?
+
+When generating your cluster, set `POD_CIDR` to your desired value.
+
+```bash
+export POD_CIDR=["10.240.0.0/12"]
+```
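+
+Note that the Pod CIDR also has to be reflected in your CNI configuration; for the Calico example above, the ipPool must match. A quick way to double-check against the workload cluster (a sketch based on the commands used earlier):
+
+```bash
+# compare the Calico ipPool with the POD_CIDR the cluster was generated with
+kubectl --kubeconfig=capms-cluster.kubeconfig get installation default -o jsonpath='{.spec.calicoNetwork.ipPools[*].cidr}'
+```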