
Commit 16d3a85

Merge pull request #65 from embik/getting-started-guide

Add root shard, front proxy and kubeconfig to getting started guide

2 parents: 9cb8391 + fcf4ae2

File tree: 4 files changed (+148, -14 lines)

config/samples/operator.kcp.io_v1alpha1_kubeconfig_frontproxy.yaml renamed to config/samples/operator.kcp.io_v1alpha1_kubeconfig.yaml

File renamed without changes.

docs/content/setup/index.md (4 additions, 2 deletions)

@@ -2,7 +2,7 @@
 
 ## Requirements
 
-- [cert-manager](https://cert-manager.io/)
+- [cert-manager](https://cert-manager.io/) (see [Installing with Helm](https://cert-manager.io/docs/installation/helm/))
 
 ## Helm Chart
 
@@ -15,9 +15,11 @@ helm repo add kcp https://kcp-dev.github.io/helm-charts
 And then install the chart:
 
 ```sh
-helm upgrade --install --create-namespace --namespace kcp-operator kcp-operator kcp/kcp-operator
+helm install --create-namespace --namespace kcp-operator kcp-operator kcp/kcp-operator
 ```
 
+For full configuration options, check out the Chart [values](https://github.com/kcp-dev/helm-charts/blob/main/charts/kcp-operator/values.yaml).
+
 ## Further Reading
 
 {% include "partials/section-overview.html" %}

docs/content/setup/quickstart.md (139 additions, 8 deletions)
@@ -1,24 +1,26 @@
 ---
 description: >
-  Take your first steps after installing kcp-operator.
+  Create your first objects after installing kcp-operator.
 ---
 
 # Quickstart
 
-Make sure you have kcp-operator installed according to the instructions given in [Setup](./index.md).
+kcp-operator has to be installed according to the instructions given in [Setup](./index.md) before starting the steps below.
 
-## RootShard
+## etcd
 
 !!! warning
     Never deploy etcd like below in production as it sets up an etcd instance without authentication or TLS.
 
-Running a root shard requires a running etcd instance/cluster. You can set up a simple one via Helm:
+Running a root shard requires a running etcd instance/cluster. A simple one can be set up with Helm and the Bitnami etcd chart:
 
 ```sh
-$ helm install etcd oci://registry-1.docker.io/bitnamicharts/etcd --set auth.rbac.enabled=false --set auth.rbac.create=false
+helm install etcd oci://registry-1.docker.io/bitnamicharts/etcd --set auth.rbac.enabled=false --set auth.rbac.create=false
 ```
 
-In addition, the root shard requires a reference to a cert-manager `Issuer` to issue its PKI CAs. You can create a self-signing one:
+## Create Root Shard
+
+In addition to a running etcd, the root shard requires a reference to a cert-manager `Issuer` to issue its PKI. Create a self-signing one:
 
 ```yaml
 apiVersion: cert-manager.io/v1
@@ -29,7 +31,9 @@ spec:
   selfSigned: {}
 ```
 
-Afterward, create a `RootShard` object. You can find documentation for it in the [CRD reference](../reference/crd/operator.kcp.io/rootshards.md).
+Afterward, create the first `RootShard` object. API documentation is available in the [CRD reference](../reference/crd/operator.kcp.io/rootshards.md).
+
+The main change to make is replacing `example.operator.kcp.io` with a hostname to be used for the kcp instance. The DNS entry should not be set yet.
 
 ```yaml
 apiVersion: operator.kcp.io/v1alpha1
@@ -42,16 +46,143 @@ spec:
     hostname: example.operator.kcp.io
     port: 6443
   certificates:
+    # this references the Issuer created above
     issuerRef:
       group: cert-manager.io
       kind: Issuer
       name: selfsigned
   cache:
     embedded:
+      # kcp comes with a cache server accessible to all shards;
+      # in this case it is fine to enable the embedded instance
      enabled: true
   etcd:
     endpoints:
+      # this is the service URL to etcd. Replace it if the Helm chart was
+      # installed under a different name or the namespace is not "default"
       - http://etcd.default.svc.cluster.local:2379
 ```
 
-kcp-operator will create the necessary resources to start a `Deployment` of a kcp root shard.
+kcp-operator will create the necessary resources to start a `Deployment` of a kcp root shard and the necessary PKI infrastructure (via cert-manager).
+
+## Set up Front Proxy
+
+Every kcp instance deployed with kcp-operator needs at least one kcp-front-proxy instance to be fully functional. Multiple front-proxy instances can exist to provide access to a complex, geo-distributed multi-shard setup.
+
+For getting started, a `FrontProxy` object can look like this:
+
+```yaml
+apiVersion: operator.kcp.io/v1alpha1
+kind: FrontProxy
+metadata:
+  name: frontproxy
+spec:
+  rootShard:
+    ref:
+      # the name of the RootShard object created before
+      name: root
+  serviceTemplate:
+    spec:
+      # expose this front proxy via a load balancer
+      type: LoadBalancer
+```
+
+kcp-operator will deploy a kcp-front-proxy installation based on this object and connect it to the `root` root shard created before.
+
+### DNS Setup
+
+Once the `<object name>-front-proxy` `Service` has been reconciled successfully, it will have either an IP address or a DNS name, depending on which load-balancing integration is active on the Kubernetes cluster. A DNS entry for the external hostname chosen in the `RootShard` has to point to that IP address (via an A/AAAA record) or DNS name (via a CNAME record).
+
+Check what the `frontproxy-front-proxy` `Service` looks like:
+
+```sh
+kubectl get svc frontproxy-front-proxy
+```
+
+Output should look like this:
+
+```
+NAME                     TYPE           CLUSTER-IP     EXTERNAL-IP                          PORT(S)          AGE
+frontproxy-front-proxy   LoadBalancer   10.240.30.54   XYZ.eu-central-1.elb.amazonaws.com   6443:32032/TCP   3m13s
+```
+
+In this example, a CNAME record from `example.operator.kcp.io` to `XYZ.eu-central-1.elb.amazonaws.com` is required.
+
+!!! hint
+    Tools like [external-dns](https://github.com/kubernetes-sigs/external-dns) can help with automating this step to avoid manual DNS configuration.
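[Editor's aside] The external-dns hint above can be sketched concretely. external-dns watches `Service` objects for its documented `external-dns.alpha.kubernetes.io/hostname` annotation and creates the matching DNS record. Whether `FrontProxy.spec.serviceTemplate` passes `metadata.annotations` through to the generated `Service` is an assumption here, not confirmed by this diff:

```yaml
# Sketch only: assumes serviceTemplate forwards metadata to the Service.
apiVersion: operator.kcp.io/v1alpha1
kind: FrontProxy
metadata:
  name: frontproxy
spec:
  rootShard:
    ref:
      name: root
  serviceTemplate:
    metadata:
      annotations:
        # external-dns picks this up and creates the A/AAAA or CNAME record
        external-dns.alpha.kubernetes.io/hostname: example.operator.kcp.io
    spec:
      type: LoadBalancer
```

If the annotation does not propagate, it can instead be set directly on the reconciled `Service`.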
+## Initial Access
+
+Once deployed, a `Kubeconfig` object can be created to generate credentials for initial access to the kcp setup. An admin kubeconfig can be generated like this:
+
+```yaml
+apiVersion: operator.kcp.io/v1alpha1
+kind: Kubeconfig
+metadata:
+  name: kubeconfig-kcp-admin
+spec:
+  # the user name embedded in the kubeconfig
+  username: kcp-admin
+  groups:
+    # system:kcp:admin is a special privileged group in kcp;
+    # the kubeconfig generated from this should be kept secure at all times
+    - system:kcp:admin
+  # the kubeconfig will be valid for one year (8766h) but will be refreshed automatically
+  validity: 8766h
+  secretRef:
+    # the name of the secret that the assembled kubeconfig should be written to
+    name: admin-kubeconfig
+  target:
+    # a reference to the front proxy deployed previously so the kubeconfig is accepted by it
+    frontProxyRef:
+      name: frontproxy
+```
+
+Once the `admin-kubeconfig` `Secret` has been created, the generated kubeconfig can be fetched from it:
+
+```sh
+kubectl get secret admin-kubeconfig -o jsonpath="{.data.kubeconfig}" | base64 -d > admin.kubeconfig
+```
+
+To use this kubeconfig, set the `KUBECONFIG` environment variable accordingly:
+
+```sh
+export KUBECONFIG=$(pwd)/admin.kubeconfig
+```
+
+It is now possible to connect to the kcp instance and list workspaces via the [kubectl ws plugin](https://docs.kcp.io/kcp/latest/setup/kubectl-plugin/):
+
+```sh
+kubectl get ws
+```
+
+Initially, the command should report that no workspaces exist yet:
+
+```
+No resources found
+```
+
+To create a workspace, run:
+
+```sh
+kubectl create-workspace test
+```
+
+Output should look like this:
+
+```
+Workspace "test" (type root:organization) created. Waiting for it to be ready...
+Workspace "test" (type root:organization) is ready to use.
+```
+
+Congratulations, you've successfully set up kcp and connected to it! :tada:
+
+<!-- TODO(embik):
+## Optional: Additional Shards
+
+kcp can be sharded, so kcp-operator supports joining additional kcp instances to the existing setup to act as shards.
+-->
+
+## Further Reading
+
+- Check out the [CRD documentation](../reference/index.md) for all configuration options.
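[Editor's aside] The quickstart's `Secret`-to-kubeconfig extraction can be exercised without a cluster: Kubernetes stores `Secret` data base64-encoded, which is exactly what the `jsonpath` output contains. A minimal offline sketch using a hypothetical kubeconfig snippet in place of the real `Secret` data:

```shell
# Stand-in for the Secret's .data.kubeconfig field: encode a
# (hypothetical) kubeconfig the way Kubernetes stores Secret data.
encoded=$(printf 'apiVersion: v1\nkind: Config\n' | base64 | tr -d '\n')

# Mirror of the quickstart's pipeline: decode and write the file.
printf '%s' "$encoded" | base64 -d > admin.kubeconfig

cat admin.kubeconfig
# prints:
# apiVersion: v1
# kind: Config
```

With a real cluster, `kubectl get secret ... -o jsonpath="{.data.kubeconfig}"` supplies the `encoded` value.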

docs/main.py (5 additions, 4 deletions)

@@ -13,6 +13,7 @@
 # limitations under the License.
 
 import copy
+import os.path
 
 def define_env(env):
     """
@@ -55,10 +56,10 @@ def section_items(page, nav, config):
 
         # Copy so we don't modify the original
         child = copy.deepcopy(child)
-
-        # Subsection nesting that works across any level of nesting
-        # Replaced mkdocs fix_url function
-        child.file.url = child.url.replace(page.url, "./")
+
+        # mkdocs hates if a link in the generated Markdown (!) is already a fully-fledged URL
+        # and not a link to a file anymore, so we replace the URL with the file path here
+        child.file.url = os.path.basename(child.file.src_uri)
         siblings.append(child)
 
     return siblings
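[Editor's aside] The `docs/main.py` change swaps a URL-rewriting heuristic for `os.path.basename` on the child page's source path. A small standalone sketch of the difference (the paths are made up for illustration and only mimic mkdocs' `page.url`/`file.src_uri` shapes):

```python
import os.path

# Hypothetical mkdocs-style values for illustration only.
page_url = "setup/"                    # URL of the section index page
child_url = "setup/quickstart/"        # URL mkdocs computed for a child page
child_src_uri = "setup/quickstart.md"  # path of the child's Markdown source

# Old approach: rewrite the child's URL relative to the page URL.
old_link = child_url.replace(page_url, "./")
print(old_link)  # ./quickstart/

# New approach: link to the Markdown file itself, which mkdocs resolves
# when it renders the generated Markdown.
new_link = os.path.basename(child_src_uri)
print(new_link)  # quickstart.md
```

The new form emits a plain file link rather than a fully-fledged URL, which matches the commit's stated reason for the change.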
