---
description: >
  A general overview of the resources provided by the kcp-operator.
---

# Basics

kcp-operator ships with a number of resources that, together, can be used to create a kcp installation.

## Resource Relationships

```mermaid
flowchart TD
    FrontProxy -- ObjRef (n:1) --> RootShard
    Shard -- ObjRef (n:1) --> RootShard
    RootShard -- ObjRef (1:1, optional) --> CacheServer
    Shard -- ObjRef (n:1, optional) --> CacheServer
    Kubeconfig -- ObjRef (n:1) --> Shard
    Kubeconfig -- ObjRef (n:1) --> RootShard
    Kubeconfig -- ObjRef (n:1) --> FrontProxy
```

### `RootShard` & `Shard`

Each kcp installation consists of at least one root shard and one [front-proxy](front-proxy.md), but you can add additional "regular" shards to scale kcp horizontally.

Creating a new `RootShard` object means creating a new kcp installation, since an installation cannot have more than one root shard. Multiple installations can, however, run in the same Kubernetes namespace (though this is not necessarily recommended). The `RootShard` object's name has no significance inside kcp itself.

Each additional shard is created via a `Shard` object, which references the root shard it belongs to. Shard names are relevant in kcp: each shard registers itself on its root shard under the name of its `Shard` object.
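
To make the relationship concrete, here is a minimal sketch of a `RootShard` and a `Shard` referencing it. The API group/version and the field names (`external`, `certificates.issuerRef`, `etcd.endpoints`, `rootShard.ref`) are illustrative assumptions; consult the kcp-operator CRD reference for the authoritative schema.

```yaml
apiVersion: operator.kcp.io/v1alpha1
kind: RootShard
metadata:
  name: root
  namespace: kcp-system
spec:
  external:
    # Hostname under which the installation is reachable (assumed field).
    hostname: kcp.example.com
    port: 6443
  certificates:
    # cert-manager issuer used to sign the installation's CAs (assumed field).
    issuerRef:
      group: cert-manager.io
      kind: Issuer
      name: selfsigned
  etcd:
    endpoints:
      - https://etcd.kcp-system.svc:2379
---
apiVersion: operator.kcp.io/v1alpha1
kind: Shard
metadata:
  # The shard registers itself on the root shard under this name.
  name: aardvark
  namespace: kcp-system
spec:
  rootShard:
    ref:
      name: root  # ObjRef (n:1) to the RootShard above
  etcd:
    endpoints:
      - https://etcd-aardvark.kcp-system.svc:2379
```

Note that each shard brings its own etcd endpoints; the `RootShard` name (`root`) only matters for the object reference, while the `Shard` name (`aardvark`) becomes the shard's identity inside kcp.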
| 30 | + |
### `FrontProxy`

The kcp front-proxy can be used to provide access to a whole kcp installation or to a subset of it. Its main purpose is to act as a gateway: it builds up a runtime map of all existing workspaces across all shards that it targets, so it knows to route a request for `/clusters/root:my-team` to the shard where the logical cluster for that workspace resides.

A kcp installation can contain multiple front-proxies with different settings; one might perform additional authentication while another might pass unauthenticated requests to the shards (which will then perform their own authentication).

For developing controllers against kcp it is often necessary to access the shards directly, so front-proxies are not the only way to access kcp.
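
A front-proxy sketch might look like the following; the field names (`rootShard.ref`, `replicas`) are assumptions for illustration, so check the CRD reference before use.

```yaml
apiVersion: operator.kcp.io/v1alpha1
kind: FrontProxy
metadata:
  name: frontproxy
  namespace: kcp-system
spec:
  rootShard:
    ref:
      name: root  # ObjRef (n:1) to the RootShard this proxy fronts
  # Multiple replicas of one FrontProxy, and multiple FrontProxy objects
  # with different auth settings, can coexist in one installation.
  replicas: 2
```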
| 38 | + |
### `Kubeconfig`

`Kubeconfig` objects allow the easy creation of credentials to access kcp. As a sharded system, kcp relies on client certificate authentication, and kcp-operator ensures that the correct certificates are generated and then neatly wrapped up in ready-to-use kubeconfig `Secret`s.

Kubeconfigs can be configured to point to a specific shard or to a front-proxy instance, which determines which client CA is used to generate the certificates.
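
As a sketch, a `Kubeconfig` targeting a front-proxy could look like this. The `username`, `groups`, `validity`, `secretRef`, and `target` fields are assumed for illustration and may differ from the actual CRD schema.

```yaml
apiVersion: operator.kcp.io/v1alpha1
kind: Kubeconfig
metadata:
  name: kubeconfig-kcp-admin
  namespace: kcp-system
spec:
  # Identity embedded in the generated client certificate (assumed fields).
  username: kcp-admin
  groups:
    - system:masters
  # Lifetime of the client certificate (assumed field).
  validity: 8766h
  secretRef:
    # Secret the operator writes the ready-to-use kubeconfig into.
    name: admin-kubeconfig
  target:
    # Picking a front-proxy (vs. a shard) selects which client CA
    # signs the certificate, as described above.
    frontProxyRef:
      name: frontproxy
```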
| 44 | + |
## Cross-Namespace/Cluster References

Due to the potentially "global" nature of a kcp setup, it might be necessary to run kcp-operator on multiple clusters that together form a single kcp setup with multiple shards and front-proxies.

To make this possible, resources with object references (see above) could gain a secondary way of reading the necessary configuration (instead of a `corev1.LocalObjectReference`). This could be a reference to a `ConfigMap` or a `Secret` (to be determined) that is automatically generated for the various resource types. A sync process (outside of kcp-operator) could then sync the `ConfigMap` (or the `Secret`, or a custom resource type) across namespaces or even clusters, so that e.g. a `Shard` object references a `Secret` that was generated for a `RootShard` on another cluster.