
Commit 3aa8cbb

Split development guide from contributing.md. (#74)
1 parent c22d4c0 commit 3aa8cbb

File tree

3 files changed (+196, −197 lines)


CONTRIBUTING.md

Lines changed: 1 addition & 190 deletions
@@ -1,192 +1,3 @@
-# Contributing to CAPMS
+# Contributing
 
 Please check out the [contributing section](https://docs.metal-stack.io/stable/development/contributing/) in our [docs](https://docs.metal-stack.io/).
(The remaining deleted lines moved to the new DEVELOPMENT.md, shown below.)

DEVELOPMENT.md

Lines changed: 188 additions & 0 deletions
@@ -0,0 +1,188 @@
# Development

## Getting Started Locally

This project comes with a preconfigured version of the [mini-lab](https://github.com/metal-stack/mini-lab) in [capi-lab](./capi-lab), which runs a local metal-stack instance together with all prerequisites required by this provider.

```bash
make -C capi-lab

# allows access using metalctl and kubectl
eval $(make -C capi-lab --silent dev-env)
```
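
The `dev-env` target prints shell `export` lines, and `eval` loads them into the current shell. A minimal sketch of that pattern, using a stand-in function and a hypothetical kubeconfig path instead of the real make target:

```bash
# Stand-in for `make -C capi-lab --silent dev-env`, which emits export
# lines for metalctl and kubectl; the path below is hypothetical.
dev_env() {
  printf 'export KUBECONFIG=capi-lab/.kubeconfig\n'
}

# eval-ing the printed lines sets the variables in the current shell
eval "$(dev_env)"
echo "$KUBECONFIG"
```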

Next, install the CAPMS provider into the cluster:

```bash
# repeat this whenever you make changes
make push-to-capi-lab
```

Before creating a cluster, two manual steps are required: you need to allocate a node network and create a firewall.

```bash
make -C capi-lab node-network firewall
```

A basic cluster configuration based on `config/clusterctl-templates/cluster-template.yaml`, using the aforementioned node network, can be generated and applied to the management cluster with a make target.

```bash
make -C capi-lab apply-sample-cluster
```

Once the control plane node has phoned home, run:

```bash
make -C capi-lab mtu-fix
```

Once the control plane node has been provisioned, you can obtain the kubeconfig:

```bash
kubectl get secret metal-test-kubeconfig -o jsonpath='{.data.value}' | base64 -d > capi-lab/.capms-cluster-kubeconfig.yaml
# alternatively:
clusterctl get kubeconfig metal-test > capi-lab/.capms-cluster-kubeconfig.yaml
```
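
Kubernetes stores secret values base64-encoded, which is why the `jsonpath` output above is piped through `base64 -d`. A small sketch of that decoding step, with a stand-in payload in place of the real kubeconfig:

```bash
# encode a stand-in kubeconfig snippet the way Kubernetes stores it in .data.value
encoded=$(printf 'apiVersion: v1\nkind: Config\n' | base64)

# decoding recovers the original plaintext, as `base64 -d` does in the pipeline above
decoded=$(printf '%s' "$encoded" | base64 -d)
echo "$decoded"
```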

Next, a CNI needs to be deployed to the cluster:

```bash
kubectl --kubeconfig=capi-lab/.capms-cluster-kubeconfig.yaml create -f https://raw.githubusercontent.com/projectcalico/calico/v3.28.2/manifests/tigera-operator.yaml
cat <<EOF | kubectl --kubeconfig=capi-lab/.capms-cluster-kubeconfig.yaml create -f -
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  # Configures Calico networking.
  calicoNetwork:
    bgp: Disabled
    ipPools:
    - name: default-ipv4-ippool
      blockSize: 26
      cidr: 10.240.0.0/12
      encapsulation: None
      mtu: 1440
  cni:
    ipam:
      type: HostLocal
    type: Calico
EOF
```

> [!NOTE]
> Ideally, Calico should be configured with BGP (no overlay), eBPF, and DSR. An example will be proposed in this repository at a later point in time.

The node's provider ID is set by the [metal-ccm](https://github.com/metal-stack/metal-ccm), which needs to be deployed into the cluster:

```bash
make -C capi-lab deploy-metal-ccm
```

If you want the metal-ccm to provide services of type `LoadBalancer` through MetalLB, you need to deploy MetalLB:

```bash
kubectl --kubeconfig capi-lab/.capms-cluster-kubeconfig.yaml apply --kustomize capi-lab/metallb
```

For each node in your Kubernetes cluster, you need to create a BGP peer configuration. Set the environment variables `NODE_ASN`, `NODE_HOSTNAME`, and `NODE_ROUTER_ID` to the appropriate values for each node.

```bash
cat <<EOF | kubectl --kubeconfig=capi-lab/.capms-cluster-kubeconfig.yaml create -f -
apiVersion: metallb.io/v1beta2
kind: BGPPeer
metadata:
  name: ${NODE_HOSTNAME}
  namespace: metallb-system
spec:
  holdTime: 1m30s
  keepaliveTime: 0s
  myASN: ${NODE_ASN}
  nodeSelectors:
  - matchExpressions:
    - key: kubernetes.io/hostname
      operator: In
      values:
      - ${NODE_HOSTNAME}
  passwordSecret: {}
  peerASN: ${NODE_ASN}
  peerAddress: ${NODE_ROUTER_ID}
EOF
```
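
Since the manifest is parameterized with environment variables, the peers for all nodes can be generated in one pass. A sketch with made-up hostnames, ASNs, and router IDs, trimmed to the essential fields (in a real setup, pipe the output to `kubectl --kubeconfig=capi-lab/.capms-cluster-kubeconfig.yaml create -f -`):

```bash
# emit one BGPPeer manifest per node; arguments: hostname, ASN, router ID
generate_bgp_peer() {
  cat <<EOF
apiVersion: metallb.io/v1beta2
kind: BGPPeer
metadata:
  name: $1
  namespace: metallb-system
spec:
  myASN: $2
  peerASN: $2
  peerAddress: $3
---
EOF
}

# made-up per-node values for illustration
peers=$(
  generate_bgp_peer worker-1 4210000001 10.0.0.1
  generate_bgp_peer worker-2 4210000002 10.0.0.2
)
echo "$peers"
```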

That's it!

### To Deploy on the cluster

**Build and push your image to the location specified by `IMG`:**

```sh
make docker-build docker-push IMG=<some-registry>/cluster-api-provider-metal-stack:tag
```

**NOTE:** The image must be published to the registry you specified, and your working environment needs permission to pull it from there. If the commands above fail, verify your access to the registry.

**Install the CRDs into the cluster:**

```sh
make install
```

**Deploy the Manager to the cluster with the image specified by `IMG`:**

```sh
make deploy IMG=<some-registry>/cluster-api-provider-metal-stack:tag
```

> **NOTE**: If you encounter RBAC errors, you may need to grant yourself cluster-admin privileges or be logged in as admin.

**Create instances of your solution:**

You can apply the sample cluster configuration:

```sh
make -C capi-lab apply-sample-cluster
```

### To Uninstall

**Delete the instances (CRs) from the cluster:**

```sh
make -C capi-lab delete-sample-cluster
```

**Delete the APIs (CRDs) from the cluster:**

```sh
make uninstall
```

**Undeploy the controller from the cluster:**

```sh
make undeploy
```

## Project Distribution

The following steps build the installer and distribute this project to users.

1. Build the installer for the image built and published in the registry:

   ```sh
   make build-installer IMG=<some-registry>/cluster-api-provider-metal-stack:tag
   ```

   **NOTE:** The makefile target mentioned above generates an `install.yaml` file in the `dist` directory. This file contains all the resources built with Kustomize, which are necessary to install this project without its dependencies.

2. Using the installer:

   Users can install the project by running `kubectl apply -f <URL for YAML BUNDLE>`, e.g.:

   ```sh
   kubectl apply -f https://raw.githubusercontent.com/<org>/cluster-api-provider-metal-stack/<tag or branch>/dist/install.yaml
   ```
