## About this project
A cluster provider for [OpenMCP](https://github.com/openmcp-project/openmcp-operator) that uses [kind](https://kind.sigs.k8s.io/) (Kubernetes IN Docker) to provision and manage Kubernetes clusters. This provider enables you to create and manage multiple Kubernetes clusters running as Docker containers, making it ideal for:

- **Local Development**: Quickly spin up multiple clusters for testing multi-cluster scenarios
- **E2E Testing**: Automated testing of multi-cluster applications and operators
- **CI/CD Pipelines**: Lightweight cluster provisioning for testing environments

## Prerequisites
Before using this cluster provider, ensure you have:

- **Docker**: Running Docker daemon with an accessible socket
- **kubectl**: For interacting with Kubernetes clusters

## Installation
### Production Deployment
In combination with the [OpenMCP Operator](https://github.com/openmcp-project/openmcp-operator), this operator can be deployed via a simple Kubernetes resource:
2. Install the Platform CRDs of the openmcp-operator:
Apply the CRDs from the OpenMCP operator repository [here](https://github.com/openmcp-project/openmcp-operator/tree/main/api/crds/manifests).
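   A sketch of applying them, assuming the openmcp-operator repository is cloned locally (the path comes from the linked repository layout):

   ```shell
   kubectl apply -f ./api/crds/manifests/
   ```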
3. **Initialize the CRDs**:

   ```shell
   go run ./cmd/cluster-provider-kind/main.go init
   ```
4. **Run the operator**:

   ```shell
   go run ./cmd/cluster-provider-kind/main.go run
   ```
## Usage Examples
### Creating a Cluster
Create a new kind cluster by applying a Cluster resource:

```yaml
apiVersion: clusters.openmcp.cloud/v1alpha1
kind: Cluster
metadata:
  name: my-managedcontrolplane
  namespace: default
spec:
  profile: kind # This tells the kind provider to handle this cluster
  tenancy: Exclusive
```

```shell
kubectl apply -f cluster.yaml
```
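Once the resource is applied, the provider provisions a matching kind cluster in the background. A few illustrative commands to observe progress (the plural resource name and the container naming pattern are assumptions):

```shell
# Inspect the Cluster resource created above
kubectl get clusters.clusters.openmcp.cloud -n default
# kind clusters show up as Docker containers on the host
docker ps --filter "name=control-plane"
```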
### Requesting Access to a Cluster
Create an AccessRequest to get a kubeconfig for a cluster:

```yaml
apiVersion: clusters.openmcp.cloud/v1alpha1
kind: AccessRequest
metadata:
  name: my-access
  namespace: default
spec:
  clusterRef:
    name: my-managedcontrolplane
    namespace: default
  permissions: []
```
The kubeconfig will be stored in a Secret in the same namespace as the `AccessRequest`.
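A sketch for extracting and using that kubeconfig, assuming the Secret is named after the AccessRequest and stores the data under a `kubeconfig` key (both assumptions):

```shell
# Secret name and data key are assumptions; adjust to what the provider writes
kubectl get secret my-access -n default -o jsonpath='{.data.kubeconfig}' | base64 -d > my-cluster.kubeconfig
kubectl --kubeconfig my-cluster.kubeconfig get nodes
```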
## How it works
### Docker Socket Access
In order to create new kind clusters from within a kind cluster, the Docker socket (usually `/var/run/docker.sock`) needs to be available to the `cluster-provider-kind` pod. As a prerequisite, the Docker socket of the host machine must be mounted into the nodes of the platform kind cluster. In this case, there is only a single node (`platform-control-plane`). The socket can then be mounted by the cluster-provider-kind pod using a `hostPath` volume.
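The resulting mount on the provider pod can be sketched as a `hostPath` volume (container and volume names are illustrative; only the socket paths come from the text above):

```yaml
# Hypothetical fragment of the cluster-provider-kind pod spec
spec:
  containers:
  - name: cluster-provider-kind
    volumeMounts:
    - name: docker-socket
      mountPath: /var/run/docker.sock        # path the provider uses to reach Docker
  volumes:
  - name: docker-socket
    hostPath:
      path: /var/run/host-docker.sock        # host socket as mounted into the kind node
      type: Socket
```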
```mermaid
end
style HostMachine fill:#eee
```
### Platform Cluster Configuration
The kind configuration for the platform cluster may look like this:
```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraMounts:
  - hostPath: /var/run/docker.sock
    containerPath: /var/run/host-docker.sock
```
### Testing Docker Socket Access
In order to test that the socket is functional, a simple pod can be deployed:
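For example, a one-shot pod that lists the host's containers through the mounted socket (image, names, and command are illustrative assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: docker-socket-test
spec:
  restartPolicy: Never
  containers:
  - name: docker-cli
    image: docker:cli                  # official Docker CLI image
    command: ["docker", "ps"]          # uses /var/run/docker.sock by default
    volumeMounts:
    - name: docker-socket
      mountPath: /var/run/docker.sock
  volumes:
  - name: docker-socket
    hostPath:
      path: /var/run/host-docker.sock  # socket path inside the kind node
      type: Socket
```

If the pod completes successfully, its logs should list the containers running on the host.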