@@ -20,14 +20,64 @@ workflow that offers easy deployments and rapid iterative builds.
 ## Getting started
 
 ### Create a kind cluster
-A script to create a KIND cluster along with a local Docker registry and the correct mounts to run CAPD is included in the hack/ folder.
 
-To create a pre-configured cluster run:
+The following CAPI infrastructure providers are suitable for local development:
+
+- [CAPD](https://github.com/kubernetes-sigs/cluster-api/blob/main/test/infrastructure/docker/README.md) - uses Docker containers as workload cluster nodes
+- [CAPK](https://github.com/kubernetes-sigs/cluster-api-provider-kubevirt) - uses KubeVirt VMs as workload cluster nodes
+
+CAPD is the default as it's more lightweight and requires less setup. KubeVirt is useful when
+Docker isn't suitable for your environment.
+
+{{#tabs name:"tab-management-cluster-creation" tabs:"Docker,KubeVirt"}}
+{{#tab Docker}}
+
+To create a kind cluster with CAPD, run the following:
+
+```bash
+make kind-cluster
+```
+
+{{#/tab }}
+{{#tab KubeVirt}}
+To create a kind cluster with CAPK, run the following:
 
 ```bash
-./hack/kind-install-for-capd.sh
+make kind-cluster-kubevirt
 ```
 
+<aside class="note">
+
+KubeVirt uses *container disks* to create VMs inside pods. These are special container images which
+need to be pulled from a registry. To support pulling container disks from private registries, as
+well as to avoid being rate-limited by Docker Hub (if used), the CAPK script mounts your Docker
+config file inside the kind cluster so that the kubelet can access your credentials.
+
+The script looks for the Docker config file at `$HOME/.docker/config.json` by default. To specify
+a different path, set the following variable before running the Make target above:
+
+```bash
+export DOCKER_CONFIG_FILE="/foo/config.json"
+```
+
+</aside>
+
+<aside class="note">
+
+The CAPK script needs to determine your container runtime's IP prefix to allow communication with
+workload clusters. The script assumes Docker is used. If you use a different runtime, specify
+your container runtime's IP prefix manually (the first two octets only):
+
+```bash
+export CAPI_IP_PREFIX="172.20"
+```
+
+</aside>
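+If you are unsure which prefix your runtime uses, it can usually be read from the runtime's
+default bridge network. A sketch using the Docker CLI (adjust the command for your runtime):
+
+```bash
+# Print the subnet of Docker's default bridge network, e.g. "172.17.0.0/16",
+# then keep only the first two octets.
+subnet="$(docker network inspect bridge --format '{{ (index .IPAM.Config 0).Subnet }}')"
+export CAPI_IP_PREFIX="$(echo "${subnet}" | cut -d. -f1,2)"
+```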
+
+{{#/tab }}
+{{#/tabs }}
+
 You can see the status of the cluster with:
 
 ```bash
@@ -36,7 +86,10 @@ kubectl cluster-info --context kind-capi-test
 
 ### Create a tilt-settings file
 
-Next, create a `tilt-settings.yaml` file and place it in your local copy of `cluster-api`. Here is an example that uses the components from the CAPI repo:
+Next, create a `tilt-settings.yaml` file and place it in your local copy of `cluster-api`.
+
+{{#tabs name:"tab-tilt-settings" tabs:"Docker,KubeVirt"}}
+{{#tab Docker}}
 
 ```yaml
 default_registry: gcr.io/your-project-name-here
@@ -46,7 +99,33 @@ enable_providers:
 - kubeadm-control-plane
 ```
 
-To use tilt to launch a provider with its own repo, using Cluster API Provider AWS here, `tilt-settings.yaml` should look like:
+{{#/tab }}
+{{#tab KubeVirt}}
+
+```yaml
+enable_providers:
+- kubevirt
+- kubeadm-bootstrap
+- kubeadm-control-plane
+provider_repos:
+# Path to a local clone of CAPK (replace with actual path)
+- ../cluster-api-provider-kubevirt
+kustomize_substitutions:
+  # CAPK needs access to the containerd socket (replace with actual path)
+  CRI_PATH: "/var/run/containerd/containerd.sock"
+  KUBERNETES_VERSION: "v1.30.1"
+  # An example - replace with an appropriate container disk image for the desired k8s version
+  NODE_VM_IMAGE_TEMPLATE: "quay.io/capk/ubuntu-2204-container-disk:v1.30.1"
+# Allow deploying CAPK workload clusters from the Tilt UI (optional)
+template_dirs:
+  kubevirt:
+  - ../cluster-api-provider-kubevirt/templates
+```
+
+{{#/tab }}
+{{#/tabs }}
+
+Other infrastructure providers may be added to the cluster using local clones and a configuration similar to the following:
 
 ```yaml
 default_registry: gcr.io/your-project-name-here