
Commit cf3b482

vknabel, robertvolkmann, simcod
authored
Update docs for latest version (#65)
Co-authored-by: Robert Volkmann <[email protected]>
Co-authored-by: Simon Mayer <[email protected]>
1 parent e15bc72 commit cf3b482

File tree

2 files changed (+153, -21 lines)

CONTRIBUTING.md

Lines changed: 2 additions & 0 deletions
````diff
@@ -44,6 +44,8 @@ When the control plane node was provisioned, you can obtain the kubeconfig like:
 
 ```bash
 kubectl get secret metal-test-kubeconfig -o jsonpath='{.data.value}' | base64 -d > capi-lab/.capms-cluster-kubeconfig.yaml
+# alternatively:
+clusterctl get kubeconfig metal-test > capi-lab/.capms-cluster-kubeconfig.yaml
 ```
 
 It is now expected to deploy a CNI to the cluster:
````
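As a quick sanity check that the fetched kubeconfig works, a minimal sketch (nodes will report `NotReady` until a CNI is deployed):

```bash
# list the nodes of the workload cluster via the kubeconfig obtained above
kubectl --kubeconfig capi-lab/.capms-cluster-kubeconfig.yaml get nodes -o wide
```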

README.md

Lines changed: 151 additions & 21 deletions
````diff
@@ -5,11 +5,11 @@ The Cluster API provider for metal-stack (CAPMS) implements the declarative management
 > [!CAUTION]
 > This project is currently under heavy development and is not advised to be used in production any time soon.
 > Please use our stack on top of [Gardener](https://docs.metal-stack.io/stable/installation/deployment/#Gardener-with-metal-stack) instead.
-> User documentation will follow as soon. Until then head to our [CONTRIBUTING.md](/CONTRIBUTING.md)
+> User documentation will follow soon. Until then, head to our [CONTRIBUTING.md](/CONTRIBUTING.md).
 
-Currently we provide the following custom resources:
+Currently, we provide the following custom resources:
 
-- [`MetalStackCluster`](./api/v1alpha1/metalstackcluster_types.go) can be used as [infrastructure cluster](https://cluster-api.sigs.k8s.io/developer/providers/contracts/infra-cluster) and ensures that the metal-stack network and firewall are being prepared.
+- [`MetalStackCluster`](./api/v1alpha1/metalstackcluster_types.go) can be used as [infrastructure cluster](https://cluster-api.sigs.k8s.io/developer/providers/contracts/infra-cluster) and ensures that there is a control plane IP for the cluster.
 - [`MetalStackMachine`](./api/v1alpha1/metalstackmachine_types.go) bridges between [infrastructure machines](https://cluster-api.sigs.k8s.io/developer/providers/contracts/infra-machine) and metal-stack machines.
 
 > [!note]
````
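As an illustration of how these custom resources slot into the usual Cluster API object graph, here is a minimal sketch of a `Cluster` pointing to a `MetalStackCluster` via `infrastructureRef`. The `infrastructure.cluster.x-k8s.io/v1alpha1` group/version is an assumption based on the provider convention and the `api/v1alpha1` package; the manifests generated by `clusterctl generate cluster` (shown further below) are authoritative.

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: metal-test
spec:
  infrastructureRef:  # links the CAPI cluster to its metal-stack infrastructure counterpart
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1  # assumed group/version
    kind: MetalStackCluster
    name: metal-test
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1  # assumed group/version
kind: MetalStackCluster
metadata:
  name: metal-test
spec: {}  # the controller ensures a control plane IP; see the FAQ below for providing a static one
```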
````diff
@@ -20,42 +20,172 @@ Currently we provide the following custom resources:
 
 **Prerequisites:**
 
-- a running metal-stack installation
-- CRDs for Prometheus
-- CRDs for the Firewall Controller Manager
+- Running metal-stack installation. See our [installation](https://docs.metal-stack.io/stable/installation/deployment/) section on how to get started with metal-stack.
+- Management cluster (with network access to the metal-stack infrastructure).
+- CLI metalctl installed for communicating with the metal-api. Installation instructions can be found in the corresponding [repository](https://github.com/metal-stack/metalctl).
+- CLI clusterctl
 
-First add the metal-stack infrastructure provider to your `clusterctl.yaml`:
+First, add the metal-stack infrastructure provider to your `clusterctl.yaml`:
 
 ```yaml
 # ~/.config/cluster-api/clusterctl.yaml
 providers:
   - name: "metal-stack"
-    url: "https://github.com/metal-stack/cluster-api-provider-metal-stack/releases/latest/infrastructure-components.yaml"
+    url: "https://github.com/metal-stack/cluster-api-provider-metal-stack/releases/latest/download/infrastructure-components.yaml"
     type: InfrastructureProvider
 ```
 
-Now you are able to install the CAPMS into your cluster:
+Now, you are able to install the CAPMS into your management cluster:
 
 ```bash
-export METAL_API_URL=http://metal.203.0.113.1.nip.io:8080
-export METAL_API_HMAC=metal-admin
+# export the following environment variables
+export METAL_API_URL=<url>
+export METAL_API_HMAC=<hmac>
 export EXP_KUBEADM_BOOTSTRAP_FORMAT_IGNITION=true
 
+# initialize the management cluster
 clusterctl init --infrastructure metal-stack
 ```
 
-Now you should be able to create Clusters on top of metal-stack.
-For your first cluster it is advised to start with our generated template.
+> [!CAUTION]
+> **Manual steps needed:**
+> Due to the early development stage, manual actions are needed for the cluster to operate. Some metal-stack resources need to be created manually.
 
+A node network needs to be created.
 ```bash
-# to display all env variables that need to be set
-clusterctl generate cluster example --kubernetes-version v1.30.6 --infrastructure metal-stack --list-variables
+export METAL_PARTITION=<partition>
+export METAL_PROJECT_ID=<project-id>
+metalctl network allocate --description "<description>" --name <name> --project $METAL_PROJECT_ID --partition $METAL_PARTITION
+
+# export environment variable for use in the next steps
+export METAL_NODE_NETWORK_ID=$(metalctl network list --name <name> -o template --template '{{ .id }}')
 ```
 
-> [!CAUTION]
-> **Manual steps needed:**
-> Due to the early development stage the following manual actions are needed for the cluster to operate.
+A firewall needs to be created with appropriate firewall rules. An example can be found at [firewall-rules.yaml](capi-lab/firewall-rules.yaml).
+```bash
+# export environment variable for the firewall image and size
+export FIREWALL_MACHINE_IMAGE=<firewall-image>
+export FIREWALL_MACHINE_SIZE=<machine-size>
+
+metalctl firewall create --description <description> --name <name> --hostname <hostname> --project $METAL_PROJECT_ID --partition $METAL_PARTITION --image $FIREWALL_MACHINE_IMAGE --size $FIREWALL_MACHINE_SIZE --firewall-rules-file=<rules.yaml> --networks internet,$METAL_NODE_NETWORK_ID
+```
+
+For your first cluster, it is advised to start with our generated template.
+
+```bash
+# display required environment variables
+clusterctl generate cluster <cluster-name> --infrastructure metal-stack --list-variables
+
+# set additional environment variables
+export CONTROL_PLANE_MACHINE_IMAGE=<machine-image>
+export CONTROL_PLANE_MACHINE_SIZE=<machine-size>
+export WORKER_MACHINE_IMAGE=<machine-image>
+export WORKER_MACHINE_SIZE=<machine-size>
+
+# generate manifest
+clusterctl generate cluster <cluster-name> --kubernetes-version v1.30.6 --infrastructure metal-stack
+```
+
+Apply the generated manifest from the `clusterctl` output.
+
+```bash
+kubectl apply -f <manifest>
+```
+
````
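To follow provisioning from the management cluster, a brief sketch using standard Cluster API tooling (`<cluster-name>` as used above):

```bash
# watch the CAPI machines until the control plane and workers report a Running phase
kubectl get machines -A -w

# inspect the full cluster topology and conditions
clusterctl describe cluster <cluster-name>
```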
````diff
+Once your control plane and worker machines have been provisioned, you need to install your CNI of choice into your created cluster. This is required by CAPI. An example is provided below:
+
+```bash
+# get the kubeconfig
+clusterctl get kubeconfig metal-test > capms-cluster.kubeconfig
+
+# install the calico operator
+kubectl --kubeconfig=capms-cluster.kubeconfig create -f https://raw.githubusercontent.com/projectcalico/calico/v3.28.2/manifests/tigera-operator.yaml
+
+# install the calico CNI
+cat <<EOF | kubectl --kubeconfig=capms-cluster.kubeconfig create -f -
+apiVersion: operator.tigera.io/v1
+kind: Installation
+metadata:
+  name: default
+spec:
+  # Configures Calico networking.
+  calicoNetwork:
+    bgp: Disabled
+    ipPools:
+    - name: default-ipv4-ippool
+      blockSize: 26
+      cidr: 10.240.0.0/12
+      encapsulation: None
+      mtu: 1440
+  cni:
+    ipam:
+      type: HostLocal
+    type: Calico
+EOF
+```
 
-1. The firewall needs to be created manually.
-2. The metal-ccm has to be deployed
-3. You need to install your CNI of choice. This is required due to CAPI.
+Additionally, the `metal-ccm` has to be deployed for the machines to reach the `Running` phase. For this, use the [template](capi-lab/metal-ccm.yaml) and fill in the required variables.
+
+```bash
+cat capi-lab/metal-ccm.yaml | envsubst | kubectl --kubeconfig capms-cluster.kubeconfig apply -f -
+```
+
+If you want the `metal-ccm` to provide services of type `LoadBalancer` through MetalLB, you need to deploy MetalLB:
+
+```bash
+kubectl --kubeconfig capms-cluster.kubeconfig apply --kustomize capi-lab/metallb
+```
+
+For each worker node in your Kubernetes cluster, you need to create a BGP peer configuration. Replace the placeholders ({{ NODE_ASN }}, {{ NODE_HOSTNAME }}, and {{ NODE_ROUTER_ID }}) with the appropriate values for each node.
+
+```bash
+# in metal-stack, list all machines of your cluster
+metalctl machine ls --project $METAL_PROJECT_ID
+
+# for each worker machine collect the information as follows
+export NODE_ID=<worker-machine-id>
+export NODE_HOSTNAME=$(metalctl machine describe $NODE_ID -o template --template '{{ .allocation.hostname }}')
+export NODE_ASN=$(metalctl machine describe $NODE_ID -o template --template '{{ printf "%.0f" (index .allocation.networks 0).asn }}')
+export NODE_ROUTER_ID=$(metalctl machine describe $NODE_ID -o template --template '{{ (index (index .allocation.networks 0).ips 0) }}')
+
+# for each worker machine generate and apply the BGPPeer resource
+cat <<EOF | kubectl --kubeconfig=capms-cluster.kubeconfig create -f -
+apiVersion: metallb.io/v1beta2
+kind: BGPPeer
+metadata:
+  name: ${NODE_HOSTNAME}
+  namespace: metallb-system
+spec:
+  holdTime: 1m30s
+  keepaliveTime: 0s
+  myASN: ${NODE_ASN}
+  nodeSelectors:
+  - matchExpressions:
+    - key: kubernetes.io/hostname
+      operator: In
+      values:
+      - ${NODE_HOSTNAME}
+  passwordSecret: {}
+  peerASN: ${NODE_ASN}
+  peerAddress: ${NODE_ROUTER_ID}
+EOF
+```
+
````
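To confirm that a peer was created for each worker, a quick check against the workload cluster (a sketch; resource names as applied above):

```bash
# list the BGPPeer resources created in the metallb-system namespace
kubectl --kubeconfig capms-cluster.kubeconfig -n metallb-system get bgppeers
```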
````diff
+## Frequently Asked Questions
+
+### I need to know the Control Plane IP address in advance. Can I provide a static IP address?
+
+Yes, simply create a static IP address and set it to `metalstackcluster/<name>.spec.controlPlaneIP`.
+
+```bash
+metalctl network ip create --name <name> --project $METAL_PROJECT_ID --type static
+```
+
````
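For illustration, a minimal sketch of how the allocated address might be set on the `MetalStackCluster` (group/version assumed from `api/v1alpha1`; the generated template is authoritative):

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1  # assumed group/version
kind: MetalStackCluster
metadata:
  name: metal-test
spec:
  controlPlaneIP: <static-ip>  # the address created with `metalctl network ip create` above
```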
````diff
+### I'd like to have a specific Pod CIDR. How can I achieve this?
+
+When generating your cluster, set `POD_CIDR` to your desired value.
+
+```bash
+export POD_CIDR=["10.240.0.0/12"]
+```
````
