## About this project

A cluster provider for [OpenMCP](https://github.com/openmcp-project/openmcp-operator) that uses [kind](https://kind.sigs.k8s.io/) (Kubernetes IN Docker) to provision and manage Kubernetes clusters. This provider enables you to create and manage multiple Kubernetes clusters running as Docker containers, making it ideal for:
- **Local Development**: Quickly spin up multiple clusters for testing multi-cluster scenarios
- **E2E Testing**: Automated testing of multi-cluster applications and operators
- **CI/CD Pipelines**: Lightweight cluster provisioning for testing environments
## Prerequisites

Before using this cluster provider, ensure you have:

- **Docker**: A running Docker daemon with its socket accessible
- **kubectl**: For interacting with Kubernetes clusters
## Installation

### Production Deployment

In combination with the [OpenMCP Operator](https://github.com/openmcp-project/openmcp-operator), this operator can be deployed via a simple Kubernetes resource:

```yaml
apiVersion: openmcp.cloud/v1alpha1
kind: ClusterProvider
# …
```
### Local Development

To run the operator locally for development:

1. **Start a platform kind cluster**:
   ```shell
   kind create cluster --name platform
   kubectl config use-context kind-platform
   ```
2. **Install the Platform CRDs of the openmcp-operator**:
   Apply the CRDs from the OpenMCP operator repository [here](https://github.com/openmcp-project/openmcp-operator/tree/main/api/crds/manifests).
3. **Initialize the CRDs**:
   ```shell
   go run ./cmd/cluster-provider-kind/main.go init
   ```
4. **Run the operator**:
   ```shell
   go run ./cmd/cluster-provider-kind/main.go run
   ```
## Usage Examples

### Creating a Cluster

Create a new kind cluster by applying a `Cluster` resource:

```yaml
apiVersion: clusters.openmcp.cloud/v1alpha1
kind: Cluster
metadata:
  name: my-managedcontrolplane
  namespace: default
spec:
  profile: kind # This tells the kind provider to handle this cluster
  tenancy: Exclusive
```

```shell
kubectl apply -f cluster.yaml
```
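Once applied, the provider reconciles the `Cluster` resource. A hedged way to watch progress (the resource's printed columns and the kind container names are assumptions, not guaranteed by this provider):

```shell
# Watch the Cluster resource until the provider reports it ready.
kubectl get clusters.clusters.openmcp.cloud -n default -w

# The backing kind cluster runs as Docker containers on the host;
# kind names control-plane containers with a "-control-plane" suffix.
docker ps --filter "name=control-plane"
```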
### Requesting Access to a Cluster

Create an `AccessRequest` to get a kubeconfig for a cluster:

```yaml
apiVersion: clusters.openmcp.cloud/v1alpha1
kind: AccessRequest
metadata:
  name: my-access
  namespace: default
spec:
  clusterRef:
    name: my-managedcontrolplane
    namespace: default
  permissions: []
```

The kubeconfig will be stored in a Secret in the same namespace as the `AccessRequest`.
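Once the `AccessRequest` has been reconciled, the kubeconfig can be pulled from that Secret. The Secret name and data key below are assumptions; check the `AccessRequest` status for the actual reference:

```shell
# Secret name ("my-access") and key ("kubeconfig") are assumptions,
# not guaranteed names; inspect the AccessRequest status to find them.
kubectl get secret my-access -n default -o jsonpath='{.data.kubeconfig}' | base64 -d > my-cluster.kubeconfig

# Use the extracted kubeconfig against the new cluster.
kubectl --kubeconfig my-cluster.kubeconfig get nodes
```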
## How it works

### Docker Socket Access

In order to create new kind clusters from within a kind cluster, the Docker socket (usually `/var/run/docker.sock`) needs to be available to the `cluster-provider-kind` pod. As a prerequisite, the Docker socket of the host machine must be mounted into the nodes of the platform kind cluster. In this case, there is only a single node (`platform-control-plane`). The socket can then be mounted by the cluster-provider-kind pod using a `hostPath` volume.

```mermaid
%% …
style HostMachine fill:#eee
```
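The `hostPath` mount described above can be sketched as a fragment of the provider's pod spec; the container and volume names below are illustrative assumptions, not the actual deployment manifest:

```yaml
# Illustrative fragment of the cluster-provider-kind pod spec; names are assumptions.
spec:
  containers:
  - name: cluster-provider-kind
    volumeMounts:
    - name: docker-sock
      mountPath: /var/run/docker.sock   # socket as seen by the provider
  volumes:
  - name: docker-sock
    hostPath:
      path: /var/run/docker.sock        # socket on the kind node
      type: Socket
```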
### Platform Cluster Configuration

The kind configuration for the platform cluster may look like this:

```yaml
# …
nodes:
- role: control-plane
  extraMounts:
  - hostPath: /var/run/docker.sock
    containerPath: /var/run/docker.sock
```
### Testing Docker Socket Access

In order to test that the socket is functional, a simple pod can be deployed:
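The repository's actual test manifest is not shown here; as a sketch, a pod along these lines (image, names, and paths are assumptions) can check that the mounted socket answers Docker API calls:

```yaml
# Hypothetical test pod; the image and names are illustrative assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: docker-socket-test
spec:
  restartPolicy: Never
  containers:
  - name: docker-cli
    image: docker:cli
    command: ["docker", "ps"]   # succeeds only if the socket is functional
    volumeMounts:
    - name: docker-sock
      mountPath: /var/run/docker.sock
  volumes:
  - name: docker-sock
    hostPath:
      path: /var/run/docker.sock
      type: Socket
```

If the pod completes successfully, the Docker socket is usable from within the cluster.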