## Create Root Shard

In addition to a running etcd, the root shard requires a reference to a cert-manager `Issuer` to issue its PKI. Create a self-signing one:

```yaml
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: selfsigned
spec:
  selfSigned: {}
```
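Assuming the manifest above has been saved to a file (the name `issuer.yaml` is just an example), it can be applied with kubectl:

```sh
# apply the self-signed Issuer; requires cert-manager to be installed
kubectl apply -f issuer.yaml
```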

Afterward, create the first `RootShard` object. API documentation is available in the [CRD reference](../reference/crd/operator.kcp.io/rootshards.md).

The main change to make is replacing `example.operator.kcp.io` with the hostname to be used for the kcp instance. The DNS entry for it does not need to exist yet.

```yaml
apiVersion: operator.kcp.io/v1alpha1
kind: RootShard
metadata:
  name: root
spec:
  external:
    hostname: example.operator.kcp.io
    port: 6443
  certificates:
    # this references the issuer created above
    issuerRef:
      group: cert-manager.io
      kind: Issuer
      name: selfsigned
  cache:
    embedded:
      # kcp comes with a cache server accessible to all shards,
      # in this case it is fine to enable the embedded instance
      enabled: true
  etcd:
    endpoints:
      # this is the service URL to etcd. Replace if the Helm chart was
      # installed under a different name or the namespace is not "default"
      - http://etcd.default.svc.cluster.local:2379
```

kcp-operator will now create the resources required to start a `Deployment` of a kcp root shard, as well as the necessary PKI infrastructure (via cert-manager).
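One way to check that the `Deployment` has been created and its pods are starting (plain kubectl, with no assumptions about generated resource names):

```sh
# list the resources kcp-operator created in the current namespace
kubectl get deployments,pods
```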

## Set up Front Proxy

Every kcp instance deployed with kcp-operator needs at least one instance of kcp-front-proxy to be fully functional. Multiple front-proxy instances can exist to provide access to a complex, multi-shard geo-distributed setup.

For getting started, a `FrontProxy` object can look like this:

```yaml
apiVersion: operator.kcp.io/v1alpha1
kind: FrontProxy
metadata:
  name: frontproxy
spec:
  rootShard:
    ref:
      # the name of the RootShard object created before
      name: root
  serviceTemplate:
    spec:
      # expose this front-proxy via a load balancer
      type: LoadBalancer
```

kcp-operator will deploy a kcp-front-proxy installation based on this and connect it to the `root` root shard created before.

### DNS Setup

Once the `Service` `<Object Name>-front-proxy` has successfully been reconciled, it should have either an IP address or a DNS name (depending on which load-balancer integration is active on the Kubernetes cluster). A DNS entry for the chosen external hostname (as set in the `RootShard`) must be created, pointing to the IP address (with an A/AAAA record) or the DNS name (with a CNAME record).

Assuming the `frontproxy-front-proxy` `Service` reports an external DNS name of `XYZ.eu-central-1.elb.amazonaws.com`, a CNAME entry from `example.operator.kcp.io` to `XYZ.eu-central-1.elb.amazonaws.com` is required.

!!! hint
    Tools like [external-dns](https://github.com/kubernetes-sigs/external-dns) can help with automating this step to avoid manual DNS configuration.

## Initial Access

Once deployed, a `Kubeconfig` object can be created to generate credentials to initially access the kcp setup. An admin kubeconfig can be generated like this:

```yaml
apiVersion: operator.kcp.io/v1alpha1
kind: Kubeconfig
metadata:
  name: kubeconfig-kcp-admin
spec:
  # the user name embedded in the kubeconfig
  username: kcp-admin
  groups:
    # system:kcp:admin is a special privileged group in kcp.
    # the kubeconfig generated from this should be kept secure at all times
    - system:kcp:admin
  # the kubeconfig will be valid for 365d but will be automatically refreshed
  validity: 8766h
  secretRef:
    # the name of the secret that the assembled kubeconfig should be written to
    name: admin-kubeconfig
  target:
    # a reference to the frontproxy deployed previously so the kubeconfig is accepted by it
    frontProxyRef:
      name: frontproxy
```

Once `admin-kubeconfig` has been created, the generated kubeconfig can be fetched from the `Secret` and written to a local file such as `admin.kubeconfig`.
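A possible way to do this, assuming the `Secret` stores the kubeconfig under the `kubeconfig` data key (this key name is an assumption and may differ):

```sh
# extract and decode the kubeconfig from the Secret into a local file
kubectl get secret admin-kubeconfig -o jsonpath='{.data.kubeconfig}' | base64 -d > admin.kubeconfig
```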

To use this kubeconfig, set the `KUBECONFIG` environment variable appropriately:

```sh
export KUBECONFIG=$(pwd)/admin.kubeconfig
```

It is now possible to connect to the kcp instance and manage workspaces via the [kcp kubectl plugin](https://docs.kcp.io/kcp/latest/setup/kubectl-plugin/). First, list the existing workspaces:

```sh
kubectl get ws
```

Initially, the command should return that no workspaces exist yet:

```
No resources found
```

To create a workspace, run:

```sh
kubectl create-workspace test
```

Output should look like this:

```
Workspace "test" (type root:organization) created. Waiting for it to be ready...
Workspace "test" (type root:organization) is ready to use.
```

Congratulations, you've successfully set up kcp and connected to it! :tada:

<!-- TODO(embik):
## Optional: Additional Shards

kcp can be sharded, so kcp-operator supports joining additional kcp instances to the existing setup to act as shards.
-->

## Further Reading

- Check out the [CRD documentation](../reference/index.md) for all configuration options.