From f146963cbdf3a854bef5141053fd2ed1e1ef3524 Mon Sep 17 00:00:00 2001
From: Gerrit
Date: Thu, 13 Feb 2025 11:14:05 +0100
Subject: [PATCH 1/2] Split development guide from contributing.md.

The CONTRIBUTING.md is a bit special in our GitHub org. I would like to
separate the development guide from this document.
---
 CONTRIBUTING.md | 191 +-----------------------------------------------
 DEVELOPMENT.md  | 188 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 189 insertions(+), 190 deletions(-)
 create mode 100644 DEVELOPMENT.md

diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index c4e153f..f2859ed 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -1,192 +1,3 @@
-# Contributing to CAPMS
+# Contributing
 
 Please check out the [contributing section](https://docs.metal-stack.io/stable/development/contributing/) in our [docs](https://docs.metal-stack.io/).
-
-## Getting Started
-
-### Local Development
-
-This project comes with a preconfigured version of the [mini-lab](https://github.com/metal-stack/mini-lab) in [capi-lab](./capi-lab) which runs a local metal-stack instance and all prerequisites required by this provider.
-
-```bash
-make -C capi-lab
-
-# allows access using metalctl and kubectl
-eval $(make -C capi-lab --silent dev-env)
-```
-
-Next install our CAPMS provider into the cluster.
-
-```bash
-# repeat this whenever you make changes
-make push-to-capi-lab
-```
-
-Before creating a cluster some manual steps are required beforehand: you need to allocate a node network and a firewall.
-
-```bash
-make -C capi-lab node-network firewall
-```
-
-A basic cluster configuration that relies on `config/clusterctl-templates/cluster-template.yaml` and uses the aforementioned node network can be generated and applied to the management cluster using a make target.
-
-```bash
-make -C capi-lab apply-sample-cluster
-```
-
-Once the control plane node has phoned home, run:
-
-```bash
-make -C capi-lab mtu-fix
-```
-
-When the control plane node was provisioned, you can obtain the kubeconfig like:
-
-```bash
-kubectl get secret metal-test-kubeconfig -o jsonpath='{.data.value}' | base64 -d > capi-lab/.capms-cluster-kubeconfig.yaml
-# alternatively:
-clusterctl get kubeconfig metal-test > capi-lab/.capms-cluster-kubeconfig.yaml
-```
-
-It is now expected to deploy a CNI to the cluster:
-
-```bash
-kubectl --kubeconfig=capi-lab/.capms-cluster-kubeconfig.yaml create -f https://raw.githubusercontent.com/projectcalico/calico/v3.28.2/manifests/tigera-operator.yaml
-cat <
-```
-
-> [!note]
-> Actually, Calico should be configured using BGP (no overlay), eBPF and DSR. An example will be proposed in this repository at a later point in time.
-
-The node's provider ID is provided by the [metal-ccm](https://github.com/metal-stack/metal-ccm), which needs to be deployed into the cluster:
-
-```bash
-make -C capi-lab deploy-metal-ccm
-```
-
-If you want to provide service's of type load balancer through MetalLB by the metal-ccm, you need to deploy MetalLB:
-
-```bash
-kubectl --kubeconfig capi-lab/.capms-cluster-kubeconfig.yaml apply --kustomize capi-lab/metallb
-```
-
-For each node in your Kubernetes cluster, you need to create a BGP peer configuration. Replace the placeholders ({{
-NODE_ASN }}, {{ NODE_HOSTNAME }}, and {{ NODE_ROUTER_ID }}) with the appropriate values for each node.
-
-```bash
-cat <
-make docker-build docker-push IMG=<some-registry>/cluster-api-provider-metal-stack:tag
-```
-
-**NOTE:** This image ought to be published in the personal registry you specified.
-And it is required to have access to pull the image from the working environment.
-Make sure you have the proper permission to the registry if the above commands don’t work.
-
-**Install the CRDs into the cluster:**
-
-```sh
-make install
-```
-
-**Deploy the Manager to the cluster with the image specified by `IMG`:**
-
-```sh
-make deploy IMG=<some-registry>/cluster-api-provider-metal-stack:tag
-```
-
-> **NOTE**: If you encounter RBAC errors, you may need to grant yourself cluster-admin privileges or be logged in as admin.
-
-**Create instances of your solution**
-You can apply the sample cluster configuration:
-
-```sh
-make -C capi-lab apply-sample-cluster
-```
-
-### To Uninstall
-**Delete the instances (CRs) from the cluster:**
-
-```sh
-make -C capi-lab delete-sample-cluster
-```
-
-**Delete the APIs(CRDs) from the cluster:**
-
-```sh
-make uninstall
-```
-
-**UnDeploy the controller from the cluster:**
-
-```sh
-make undeploy
-```
-
-## Project Distribution
-
-Following are the steps to build the installer and distribute this project to users.
-
-1. Build the installer for the image built and published in the registry:
-
-```sh
-make build-installer IMG=<some-registry>/cluster-api-provider-metal-stack:tag
-```
-
-NOTE: The makefile target mentioned above generates an 'install.yaml'
-file in the dist directory. This file contains all the resources built
-with Kustomize, which are necessary to install this project without
-its dependencies.
-
-2. Using the installer
-
-Users can just run kubectl apply -f <URL for YAML BUNDLE> to install the project, i.e.:
-
-```sh
-kubectl apply -f https://raw.githubusercontent.com/<org>/cluster-api-provider-metal-stack/<tag or branch>/dist/install.yaml
-```
diff --git a/DEVELOPMENT.md b/DEVELOPMENT.md
new file mode 100644
index 0000000..5487aa9
--- /dev/null
+++ b/DEVELOPMENT.md
@@ -0,0 +1,188 @@
+# Development
+
+## Getting Started Locally
+
+This project comes with a preconfigured version of the [mini-lab](https://github.com/metal-stack/mini-lab) in [capi-lab](./capi-lab), which runs a local metal-stack instance and all prerequisites required by this provider.
+
+```bash
+make -C capi-lab
+
+# allows access using metalctl and kubectl
+eval $(make -C capi-lab --silent dev-env)
+```
+
+Next, install our CAPMS provider into the cluster.
+
+```bash
+# repeat this whenever you make changes
+make push-to-capi-lab
+```
+
+Before creating a cluster, a few manual steps are required: you need to allocate a node network and a firewall.
+
+```bash
+make -C capi-lab node-network firewall
+```
+
+A basic cluster configuration that relies on `config/clusterctl-templates/cluster-template.yaml` and uses the aforementioned node network can be generated and applied to the management cluster using a make target.
+
+```bash
+make -C capi-lab apply-sample-cluster
+```
+
+Once the control plane node has phoned home, run:
+
+```bash
+make -C capi-lab mtu-fix
+```
+
+Once the control plane node has been provisioned, you can obtain the kubeconfig like this:
+
+```bash
+kubectl get secret metal-test-kubeconfig -o jsonpath='{.data.value}' | base64 -d > capi-lab/.capms-cluster-kubeconfig.yaml
+# alternatively:
+clusterctl get kubeconfig metal-test > capi-lab/.capms-cluster-kubeconfig.yaml
+```
+
+You are now expected to deploy a CNI into the cluster:
+
+```bash
+kubectl --kubeconfig=capi-lab/.capms-cluster-kubeconfig.yaml create -f https://raw.githubusercontent.com/projectcalico/calico/v3.28.2/manifests/tigera-operator.yaml
+cat <
+```
+
+> [!note]
+> Actually, Calico should be configured using BGP (no overlay), eBPF and DSR. An example will be proposed in this repository at a later point in time.
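To make the note above more concrete: a minimal, hypothetical Calico `Installation` resource that enables BGP and disables overlay encapsulation (eBPF and DSR would still need additional configuration) could look roughly like the sketch below. The pod CIDR is an assumption and has to match the sample cluster template.

```yaml
# Hypothetical sketch only, not the configuration proposed by the project.
# Enables BGP and disables overlay encapsulation for the default IP pool.
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    bgp: Enabled
    ipPools:
      - cidr: 10.240.0.0/12 # assumption: use the pod CIDR of your cluster template
        encapsulation: None
        natOutgoing: Enabled
        nodeSelector: all()
```

Such a manifest could be piped to `kubectl --kubeconfig=capi-lab/.capms-cluster-kubeconfig.yaml apply -f -` as the body of the heredoc above.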
+
+The node's provider ID is set by the [metal-ccm](https://github.com/metal-stack/metal-ccm), which needs to be deployed into the cluster:
+
+```bash
+make -C capi-lab deploy-metal-ccm
+```
+
+If you want to provide services of type `LoadBalancer` through the metal-ccm and MetalLB, you need to deploy MetalLB:
+
+```bash
+kubectl --kubeconfig capi-lab/.capms-cluster-kubeconfig.yaml apply --kustomize capi-lab/metallb
+```
+
+For each node in your Kubernetes cluster, you need to create a BGP peer configuration. Replace the placeholders ({{
+NODE_ASN }}, {{ NODE_HOSTNAME }}, and {{ NODE_ROUTER_ID }}) with the appropriate values for each node.
+
+```bash
+cat <
+make docker-build docker-push IMG=<some-registry>/cluster-api-provider-metal-stack:tag
+```
+
+**NOTE:** This image ought to be published in the personal registry you specified.
+And you need to have access to pull the image from the working environment.
+Make sure you have the proper permissions for the registry if the above commands don’t work.
+
+**Install the CRDs into the cluster:**
+
+```sh
+make install
+```
+
+**Deploy the Manager to the cluster with the image specified by `IMG`:**
+
+```sh
+make deploy IMG=<some-registry>/cluster-api-provider-metal-stack:tag
+```
+
+> **NOTE**: If you encounter RBAC errors, you may need to grant yourself cluster-admin privileges or be logged in as admin.
+
+**Create instances of your solution**
+You can apply the sample cluster configuration:
+
+```sh
+make -C capi-lab apply-sample-cluster
+```
+
+### To Uninstall
+**Delete the instances (CRs) from the cluster:**
+
+```sh
+make -C capi-lab delete-sample-cluster
+```
+
+**Delete the APIs (CRDs) from the cluster:**
+
+```sh
+make uninstall
+```
+
+**Undeploy the controller from the cluster:**
+
+```sh
+make undeploy
+```
+
+## Project Distribution
+
+The following steps build the installer and distribute this project to users.
+
+1. Build the installer for the image built and published in the registry:
+
+```sh
+make build-installer IMG=<some-registry>/cluster-api-provider-metal-stack:tag
+```
+
+NOTE: The Makefile target mentioned above generates an 'install.yaml'
+file in the `dist` directory. This file contains all the resources built
+with Kustomize, which are necessary to install this project without
+its dependencies.
+
+2. Using the installer
+
+Users can just run kubectl apply -f <URL for YAML BUNDLE> to install the project, i.e.:
+
+```sh
+kubectl apply -f https://raw.githubusercontent.com/<org>/cluster-api-provider-metal-stack/<tag or branch>/dist/install.yaml
+```

From d3a1a5d8b0ae802f6141c7d15a8270eedf097afd Mon Sep 17 00:00:00 2001
From: Gerrit
Date: Thu, 20 Feb 2025 11:24:44 +0100
Subject: [PATCH 2/2] Update README to reference the development guide.

---
 README.md | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/README.md b/README.md
index 17c9de9..ad75b39 100644
--- a/README.md
+++ b/README.md
@@ -5,7 +5,7 @@ The Cluster API provider for metal-stack (CAPMS) implements the declarative mana
 > [!CAUTION]
 > This project is currently under heavy development and is not advised to be used in production any time soon.
 > Please use our stack on top of [Gardener](https://docs.metal-stack.io/stable/installation/deployment/#Gardener-with-metal-stack) instead.
-> User documentation will follow as soon. Until then, head to our [CONTRIBUTING.md](/CONTRIBUTING.md).
+> For developing this project, head to our [DEVELOPMENT.md](/DEVELOPMENT.md).
 
 Currently, we provide the following custom resources:
 
@@ -20,7 +20,7 @@ Currently, we provide the following custom resources:
 
 **Prerequisites:**
 
-- Running metal-stack installation. See our [installation](https://docs.metal-stack.io/stable/installation/deployment/) section on how to get started with metal-stack.
+- Running metal-stack installation. See our [installation](https://docs.metal-stack.io/stable/installation/deployment/) section on how to get started with metal-stack.
 - Management cluster (with network access to the metal-stack infrastructure).
 - CLI metalctl installed for communicating with the metal-api. Installation instructions can be found in the corresponding [repository](https://github.com/metal-stack/metalctl).
 - CLI clusterctl
@@ -93,15 +93,15 @@ Apply the generated manifest from the `clusterctl` output.
 kubectl apply -f 
 ```
-Once your control plane and worker machines have been provisioned, you need to install your CNI of choice into your created cluster. This is required due to CAPI. An example is provided below: 
+Once your control plane and worker machines have been provisioned, you need to install your CNI of choice into your created cluster. This is required due to CAPI. An example is provided below:
 
 ```bash
 # get the kubeconfig
 clusterctl get kubeconfig metal-test > capms-cluster.kubeconfig
- 
+
 # install the calico operator
 kubectl --kubeconfig=capms-cluster.kubeconfig create -f https://raw.githubusercontent.com/projectcalico/calico/v3.28.2/manifests/tigera-operator.yaml
- 
+
 # install the calico CNI
 cat <
 export NODE_HOSTNAME=$(metalctl machine describe $NODE_ID -o template --template '{{ .allocation.hostname }}')
 export NODE_ASN=$(metalctl machine describe $NODE_ID -o template --template '{{ printf "%.0f" (index .allocation.networks 0).asn }}')
 export NODE_ROUTER_ID=$(metalctl machine describe $NODE_ID -o template --template '{{ (index (index .allocation.networks 0).ips 0) }}')
- 
+
 # for each worker machine generate and apply the BGPPeer resource
 cat <
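A rough, hypothetical sketch of the `BGPPeer` generated per worker node — assuming MetalLB's `metallb.io/v1beta2` API and that each MetalLB speaker peers with its own node's router using the `NODE_ASN`, `NODE_HOSTNAME`, and `NODE_ROUTER_ID` values exported above — could look like this:

```yaml
# Hypothetical sketch only; the exact peering settings must come from your metal-stack setup.
# One BGPPeer per worker node, restricted to that node via a node selector.
apiVersion: metallb.io/v1beta2
kind: BGPPeer
metadata:
  name: ${NODE_HOSTNAME}
  namespace: metallb-system
spec:
  myASN: ${NODE_ASN}   # assumption: the speaker reuses the node's ASN
  peerASN: ${NODE_ASN}
  peerAddress: ${NODE_ROUTER_ID}
  nodeSelectors:
    - matchLabels:
        kubernetes.io/hostname: ${NODE_HOSTNAME}
```

When rendered inside a heredoc like the one above and piped to `kubectl --kubeconfig=capms-cluster.kubeconfig apply -f -`, the shell expands the exported variables before the manifest reaches the cluster.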