Commit 1eec3b9

Merge pull request #11754 from fabriziopandini/CAPDev-proposal

📖 CAPD(Dev) Proposal

2 parents 07c0565 + 4acae25

1 file changed: +162 −0 lines changed

---
title: From CAPD(docker) to CAPD(dev)
authors:
- "@fabriziopandini"
reviewers:
- ""
creation-date: 2025-01-25
last-updated: 2025-01-25
status: implementable
see-also: []
replaces: []
superseded-by: []
---

# From CAPD(docker) to CAPD(dev)

## Table of Contents

- [Summary](#summary)
- [Motivation](#motivation)
- [Goals](#goals)
- [Non-Goals](#non-goals)
- [Proposal](#proposal)
- [Implementation History](#implementation-history)

## Summary

This document proposes to evolve the CAPD(docker) provider into a more generic CAPD(dev) provider
supporting multiple backends: docker, in-memory, and possibly also kubemark.

## Motivation

In the Cluster API core repository we currently have two infrastructure providers designed for use during development
and test: CAPD(docker) and CAPIM(in-memory).

If we look at all the Kubernetes SIG Cluster Lifecycle sub-projects, there is also a third infrastructure provider
designed for development and test, CAPK(Kubemark).

Maintaining all those providers requires a certain amount of work, and this work mostly falls on a small set
of maintainers taking care of CI/release signal.

This proposal aims to reduce the above toil and to reduce the size/complexity of the test machinery in Cluster API
by bringing together all the infrastructure providers designed for development and test listed above.

### Goals

- Reduce toil and maintenance effort for the infrastructure providers designed for development and test.

### Non-Goals

- Host any infrastructure provider designed for development and test outside of the scope defined above.

## Proposal

This proposal aims to evolve the CAPD(docker) provider into a more generic CAPD(dev) provider
capable of supporting multiple backends: docker, in-memory, and possibly also kubemark.

The transition will happen in two phases.

In phase 1, targeting CAPI 1.10 (the current release cycle), three new kinds will be introduced in CAPD: `DevCluster`, `DevMachine` and `DevMachinePool`.

`DevCluster`, `DevMachine` and `DevMachinePool` will have:

- A `docker` backend, functionally equivalent to `DockerCluster`, `DockerMachine` and `DockerMachinePool`.
- An `inMemory` backend, functionally equivalent to CAPIM's `InMemoryCluster` and `InMemoryMachine` (there is no `InMemoryMachinePool`).
- If and when the maintainers of CAPK(Kubemark) choose to converge on CAPD(dev), a `kubemark` backend,
  functionally equivalent to `KubemarkMachine` (there is no `KubemarkCluster` nor a `KubemarkMachinePool`).
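
Following the same pattern, a `DevCluster` selecting the docker backend could look like the sketch below. This is an assumption by analogy with the `DevMachine` shape proposed in this document; the exact fields nested under `backend.docker` would mirror today's `DockerCluster` spec and are not prescribed here.

```yaml
# Hypothetical sketch: a DevCluster selecting the docker backend.
# The fields under `backend.docker` are assumed to mirror today's
# DockerCluster spec; an empty backend selects docker with defaults.
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: DevCluster
metadata:
  name: my-cluster
spec:
  backend:
    docker: {}
```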

Below you can find an example of a `DockerMachine` and the corresponding `DevMachine` using the docker backend:

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: DockerMachine
metadata:
  name: controlplane
spec:
  extraMounts:
  - containerPath: "/var/run/docker.sock"
    hostPath: "/var/run/docker.sock"
```

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: DevMachine
metadata:
  name: controlplane
spec:
  backend:
    docker:
      extraMounts:
      - containerPath: "/var/run/docker.sock"
        hostPath: "/var/run/docker.sock"
```
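
The same wrapping would presumably apply to `DevMachinePool`, with the existing `DockerMachinePool` fields nested under the docker backend. A sketch under that assumption (the `template.customImage` field and its value are illustrative, not prescribed by this proposal):

```yaml
# Hypothetical sketch: a DevMachinePool selecting the docker backend.
# Fields under `backend.docker` are assumed to mirror today's
# DockerMachinePool spec; the customImage value is illustrative only.
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: DevMachinePool
metadata:
  name: worker-pool
spec:
  backend:
    docker:
      template:
        customImage: "kindest/node:v1.32.0"  # illustrative value
```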

And an example of an `InMemoryMachine` and the corresponding `DevMachine` using the in-memory backend:

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
kind: InMemoryMachine
metadata:
  name: in-memory-control-plane
spec:
  behaviour:
    inMemory:
      vm:
        provisioning:
          startupDuration: "10s"
          startupJitter: "0.2"
      node:
        provisioning:
          startupDuration: "2s"
          startupJitter: "0.2"
      apiServer:
        provisioning:
          startupDuration: "2s"
          startupJitter: "0.2"
      etcd:
        provisioning:
          startupDuration: "2s"
          startupJitter: "0.2"
```

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: DevMachine
metadata:
  name: in-memory-control-plane
spec:
  backend:
    inMemory:
      vm:
        provisioning:
          startupDuration: "10s"
          startupJitter: "0.2"
      node:
        provisioning:
          startupDuration: "2s"
          startupJitter: "0.2"
      apiServer:
        provisioning:
          startupDuration: "2s"
          startupJitter: "0.2"
      etcd:
        provisioning:
          startupDuration: "2s"
          startupJitter: "0.2"
```
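
If the CAPK(Kubemark) maintainers do choose to converge on CAPD(dev), a kubemark-backed `DevMachine` would presumably follow the same pattern. A purely hypothetical sketch; the backend name and its (empty) field set are assumptions, not part of this proposal:

```yaml
# Hypothetical sketch: a DevMachine selecting a kubemark backend.
# Fields, if any, would mirror today's KubemarkMachine spec.
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: DevMachine
metadata:
  name: hollow-worker
spec:
  backend:
    kubemark: {}
```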

Please note that at the end of phase 1:

- `DockerCluster`, `DockerMachine` and `DockerMachinePool` will continue to exist, giving maintainers and users
  time to migrate progressively to the new kinds; however, it is important to note that
  `DockerCluster`, `DockerMachine` and `DockerMachinePool` should be considered deprecated from now on.
- `InMemoryCluster` and `InMemoryMachine` will be removed immediately, as well as the entire CAPIM provider
  (CAPIM is used only for CAPI scale tests, so users are less impacted and the migration can be executed faster).

Phase 2 consists of the actual removal of `DockerCluster`, `DockerMachine` and `DockerMachinePool`; this phase will happen after maintainers
complete the transition of all the E2E tests to `DevCluster`, `DevMachine` and `DevMachinePool`. Considering that we have upgrade tests using
older releases of CAPD, completing this phase will likely require a few release cycles, tentatively targeting CAPI 1.13 (April 2026).

## Implementation History

- [ ] 2025-01-25: Open proposal PR
- [ ] 2025-01-29: Present proposal at a community meeting
