diff --git a/docs/content/GOALS.md b/docs/content/GOALS.md
index cb217fdbf89..6119cb8f08b 100644
--- a/docs/content/GOALS.md
+++ b/docs/content/GOALS.md
@@ -44,7 +44,7 @@ Not every idea below may bear fruit, but it's never the wrong time to look for n
Finally, the bar to writing controllers is still high. Lowering the friction of automation and integration is to everyone's benefit - whether that's a bash script, a Terraform configuration, or custom SRE services. If we can reduce the cost of both infrastructure as code and new infrastructure APIs, we can potentially make operational investments more composable.
- See the [investigations doc for minimal API server](./developers/investigations/minimal-api-server.md) for more on
+ See the [investigations doc for minimal API server](./developers/investigations/minimal-api-server.md) for more on
improving the composability of the Kube API server.
@@ -79,4 +79,3 @@ Principles are the high level guiding rules we'd like to frame designs around. T
6. Consolidate efforts in the ecosystem into a more focused effort
Kubernetes is mature and changes to the core happen slowly. By concentrating use cases among a number of participants we can better articulate common needs, focus the design time spent in the core project into a smaller set of efforts, and bring new investment into common shared problems strategically. We should make fast progress and be able to suggest high-impact changes without derailing other important Kubernetes initiatives.
-
diff --git a/docs/content/concepts/apis/admission-webhooks.md b/docs/content/concepts/apis/admission-webhooks.md
index 46b2a2c4ab9..f1b581e5894 100644
--- a/docs/content/concepts/apis/admission-webhooks.md
+++ b/docs/content/concepts/apis/admission-webhooks.md
@@ -14,7 +14,7 @@ flowchart TD
schema["Widgets APIResourceSchema
(widgets.v1.example.org)"]
webhook["Mutating/ValidatingWebhookConfiguration
ValidatingAdmissionPolicy
for widgets.v1.example.org
Handle a from ws2 (APIResourceSchema)
Handle b from ws3 (APIResourceSchema)
Handle a from ws1 (CRD)"]
crd["Widgets CustomResourceDefinition
(widgets.v1.example.org)"]
-
+
export --> schema
schema --> webhook
webhook --> crd
@@ -64,7 +64,7 @@ Consider a scenario where:
- An `APIExport` for `cowboys.wildwest.dev`
- A `ValidatingAdmissionPolicy` that rejects cowboys with `intent: "bad"`
- A `ValidatingAdmissionPolicyBinding` that binds the policy
-
+
- **Consumer workspace** (`root:consumer`) has:
- An `APIBinding` that binds to the provider's `APIExport`
- A user trying to create a cowboy with `intent: "bad"`
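The provider's policy in this scenario might be sketched as follows. This is an illustrative assumption, not taken from the kcp test suite: the group, version, and the `spec.intent` field path are inferred from the text above.

```shell
# Illustrative ValidatingAdmissionPolicy for the scenario above; names and
# the CEL expression are assumptions based on the surrounding text.
POLICY=$(cat <<'EOF'
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: reject-bad-cowboys
spec:
  matchConstraints:
    resourceRules:
    - apiGroups: ["wildwest.dev"]
      apiVersions: ["v1alpha1"]
      resources: ["cowboys"]
      operations: ["CREATE", "UPDATE"]
  validations:
  - expression: "object.spec.intent != 'bad'"
    message: "cowboys with bad intent are rejected"
EOF
)
echo "$POLICY"
```

A `ValidatingAdmissionPolicyBinding` would then reference `reject-bad-cowboys` via `spec.policyName` to activate the policy in the provider workspace.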
diff --git a/docs/content/concepts/apis/rest-access-patterns.md b/docs/content/concepts/apis/rest-access-patterns.md
index 2ecafa65ca6..86c59157377 100644
--- a/docs/content/concepts/apis/rest-access-patterns.md
+++ b/docs/content/concepts/apis/rest-access-patterns.md
@@ -11,16 +11,16 @@ This describes the various REST access patterns the kcp apiserver supports.
These requests are all prefixed with `/clusters/`. Here are some example URLs:
-- `GET /clusters/root/apis/tenancy.kcp.io/v1alpha1/workspaces` - lists all kcp Workspaces in the
+- `GET /clusters/root/apis/tenancy.kcp.io/v1alpha1/workspaces` - lists all kcp Workspaces in the
`root` workspace.
- `GET /clusters/root:compute/api/v1/namespaces/test` - gets the namespace `test` from the `root:compute` workspace
-- `GET /clusters/yqzkjxmzl9turgsf/api/v1/namespaces/test` - same as above, using the logical cluster name for
+- `GET /clusters/yqzkjxmzl9turgsf/api/v1/namespaces/test` - same as above, using the logical cluster name for
`root:compute`
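The pattern generalizes: take the server's base URL, append `/clusters/` plus either a workspace path or a logical cluster name, then the standard Kubernetes API path. A quick sketch (the hostname is a placeholder):

```shell
# Base URL of the kcp front-proxy (placeholder hostname).
SERVER="https://myhost:6443"

# Workspace-scoped request: /clusters/<workspace-path> + a standard Kubernetes path.
URL="$SERVER/clusters/root:compute/api/v1/namespaces/test"
echo "$URL"

# The same request, addressing the workspace by its logical cluster name instead:
URL_BY_NAME="$SERVER/clusters/yqzkjxmzl9turgsf/api/v1/namespaces/test"
echo "$URL_BY_NAME"
```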
## Typical requests for resources through the APIExport virtual workspace
-An APIExport provides a view into workspaces that contain APIBindings that are bound to the APIExport. This allows
-the service provider - the owner of the APIExport - to access data in its consumers' workspaces. Here is an example
+An APIExport provides a view into workspaces that contain APIBindings that are bound to the APIExport. This allows
+the service provider - the owner of the APIExport - to access data in its consumers' workspaces. Here is an example
APIExport virtual workspace URL:
```
@@ -39,13 +39,13 @@ Let's break down the segments in the URL path:
## Setting up shared informers for a virtual workspace
-A virtual workspace typically allows the service provider to set up shared informers that can list and watch
-resources across all the consumer workspaces bound to or supported by the virtual workspace. For example, the
-APIExport virtual workspace lets you inform across all workspaces that have an APIBinding to your APIExport. The
-syncer virtual workspace lets a syncer inform across all workspaces that have a Placement on the syncer's associated
+A virtual workspace typically allows the service provider to set up shared informers that can list and watch
+resources across all the consumer workspaces bound to or supported by the virtual workspace. For example, the
+APIExport virtual workspace lets you inform across all workspaces that have an APIBinding to your APIExport. The
+syncer virtual workspace lets a syncer inform across all workspaces that have a Placement on the syncer's associated
SyncTarget.
-To set up shared informers to span multiple workspaces, you use a special cluster called the **wildcard cluster**,
+To set up shared informers to span multiple workspaces, you use a special cluster called the **wildcard cluster**,
denoted by `*`. An example URL you would use when constructing a shared informer in this manner might be:
```
diff --git a/docs/content/concepts/quickstart-tenancy-and-apis.md b/docs/content/concepts/quickstart-tenancy-and-apis.md
index 535dcd858c3..6406f7a874c 100644
--- a/docs/content/concepts/quickstart-tenancy-and-apis.md
+++ b/docs/content/concepts/quickstart-tenancy-and-apis.md
@@ -108,7 +108,7 @@ NAME TYPE PHASE URL
b universal Ready https://myhost:6443/clusters/root:a:b
```
-Here is a quick collection of commands showing the navigation between the workspaces you've just created.
+Here is a quick collection of commands showing the navigation between the workspaces you've just created.
Note the usage of `..` to switch to the parent workspace and `-` to switch to the previously selected workspace.
```console
diff --git a/docs/content/concepts/sharding/index.md b/docs/content/concepts/sharding/index.md
index db047fd3603..13efc92763c 100644
--- a/docs/content/concepts/sharding/index.md
+++ b/docs/content/concepts/sharding/index.md
@@ -3,4 +3,3 @@
## Pages
{% include "partials/section-overview.html" %}
-
diff --git a/docs/content/concepts/workspaces/mounts.md b/docs/content/concepts/workspaces/mounts.md
index 8deea51c70d..2162ce640b4 100644
--- a/docs/content/concepts/workspaces/mounts.md
+++ b/docs/content/concepts/workspaces/mounts.md
@@ -56,7 +56,7 @@ root/
└── org1/
├── project-a/ # Traditional LogicalCluster workspace
│ ├── LogicalCluster object # ✓ Has backing logical cluster
- │ ├── /api/v1/configmaps # ✓ Served by kcp directly
+ │ ├── /api/v1/configmaps # ✓ Served by kcp directly
│ └── /api/v1/secrets # ✓ Standard Kubernetes APIs
│
└── project-b/ # Mounted workspace
@@ -119,7 +119,7 @@ While the mount object can be any Custom Resource, you still need a controller t
- Implement and run the actual API server/proxy that serves requests at the `status.URL`
- Handle authentication, authorization, and any request filtering if needed
-The kcp mounting machinery handles the workspace-to-mount routing, but the actual API implementation is entirely up to you.
+The kcp mounting machinery handles the workspace-to-mount routing, but the actual API implementation is entirely up to you.
### Creating a Mounted Workspace
@@ -141,7 +141,7 @@ spec:
#### Mount Field Requirements
- `ref.apiVersion`: The API version of the mount object
-- `ref.kind`: The kind of the mount object
+- `ref.kind`: The kind of the mount object
- `ref.name`: The name of the mount object
- `ref.namespace`: (Optional) The namespace of the mount object if it's namespaced
@@ -238,4 +238,4 @@ The workspace mounts controller (`kcp-workspace-mounts`) manages the integration
## References
-1. https://github.com/kcp-dev/contrib/tree/main/20241013-kubecon-saltlakecity/mounts-vw - Example mount controller and proxy implementation
\ No newline at end of file
+1. https://github.com/kcp-dev/contrib/tree/main/20241013-kubecon-saltlakecity/mounts-vw - Example mount controller and proxy implementation
diff --git a/docs/content/concepts/workspaces/workspace-termination.md b/docs/content/concepts/workspaces/workspace-termination.md
index c4a00741f1c..3ec36a2b786 100644
--- a/docs/content/concepts/workspaces/workspace-termination.md
+++ b/docs/content/concepts/workspaces/workspace-termination.md
@@ -106,5 +106,5 @@ You can use this url to construct a kubeconfig for your controller. To do so, us
When writing a custom terminator controller, the following needs to be taken into account:
-* We strongly recommend to use [multicluster-runtime](github.com/kcp-dev/multicluster-runtime) to build your controller in order to properly handle which `LogicalCluster` originates from which workspace
+* We strongly recommend using [multicluster-runtime](https://github.com/kcp-dev/multicluster-runtime) to build your controller in order to properly track which `LogicalCluster` originates from which workspace
* You need to update `LogicalClusters` using patches; they cannot be modified via the update API
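For example, a terminator controller might apply a JSON merge patch rather than an update; the annotation key below is a hypothetical placeholder, not a key defined by kcp:

```shell
# LogicalClusters cannot be updated via the update API; build a JSON merge
# patch body instead (the annotation key is a hypothetical example).
PATCH='{"metadata":{"annotations":{"example.com/terminated":"true"}}}'
echo "$PATCH"

# It could then be applied with something like:
#   kubectl patch logicalcluster cluster --type=merge -p "$PATCH"
```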
diff --git a/docs/content/contributing/continuous-integration/index.md b/docs/content/contributing/continuous-integration/index.md
index c0c03721669..954b792f50c 100644
--- a/docs/content/contributing/continuous-integration/index.md
+++ b/docs/content/contributing/continuous-integration/index.md
@@ -56,4 +56,3 @@ Then, to have your test use that shared kcp server, you add `-args --use-default
```shell
go test ./test/e2e/apibinding -count 20 -failfast -args --use-default-kcp-server
```
-
diff --git a/docs/content/contributing/governance/general-technical-review.md b/docs/content/contributing/governance/general-technical-review.md
index 48d93098ee6..84a673d59b2 100644
--- a/docs/content/contributing/governance/general-technical-review.md
+++ b/docs/content/contributing/governance/general-technical-review.md
@@ -17,11 +17,11 @@ title: General Technical Review
### Scope
* **Describe the roadmap process, how scope is determined for mid to long term features, as well as how the roadmap maps back to current contributions and maintainer ladder?**
-
+
Our public roadmap is tracked in [GitHub milestones](https://github.com/kcp-dev/kcp/milestones). Scope is usually determined in the bi-weekly community calls, i.e. ideas with a larger impact on kcp as a project are brought there to be discussed and scheduled into the overall development roadmap for the next few releases.
-
+
Roadmap disputes are at worst solved by a maintainer vote on the public mailing list. If maintainers couldn't agree, they would seek an outside arbiter.
-
+
An enhancement proposal process has been decided upon but not yet implemented.
* **Describe the target persona or user(s) for the project?**
@@ -37,9 +37,9 @@ title: General Technical Review
The project directly supports the use case of publishing Kubernetes CRDs from multiple Kubernetes clusters into a central kcp instance as a global control plane through the [api-syncagent](https://github.com/kcp-dev/api-syncagent) project. *Service Consumers* can then create objects in kcp that get synchronized back to the target Kubernetes cluster.
* **Explain which use cases have been identified as unsupported by the project.**
-
+
In the past, kcp included a "transparent multi-cluster" (TMC) component in the project core. Since then, it has been identified as out-of-scope for the "core" kcp project, but it is feasible to be implemented as an application on top of kcp.
-
+
In general, adding application-specific logic into kcp itself is considered out of scope for the project. Specifically, container orchestration is out-of-scope for kcp.
* **Describe the intended types of organizations who would benefit from adopting this project. (i.e. financial services, any software manufacturer, organizations providing platform engineering services)?**
@@ -58,9 +58,9 @@ title: General Technical Review
### Usability
* **How should the target personas interact with your project?**
-
+
All personas primarily interact with kcp via `kubectl`, the Kubernetes command line client. kcp provides several [kubectl plugins](https://docs.kcp.io/kcp/latest/setup/kubectl-plugin/) for navigating multi-tenancy concepts not known to `kubectl`.
-
+
Navigation between workspaces happens with the `kubectl-ws` plugin, which allows changing workspaces similar to changing directories:
```bash
@@ -83,7 +83,7 @@ title: General Technical Review
* **User Interface**: kcp doesn't provide its own user interface and instead relies on users using `kubectl` or other user interfaces to interact with kcp through the Kubernetes Resource Model.
* **Describe how this project integrates with other projects in a production environment.**
-
+
kcp integrates with a variety of other projects when used in a production environment. It is recommended to install production setups on Kubernetes. Like Kubernetes, kcp provides several interfaces that allow plugging in different projects. For example:
* [Admission Webhooks](https://docs.kcp.io/kcp/latest/concepts/apis/admission-webhooks/) to integrate with any project that supports Kubernetes admission webhooks like [OPA Gatekeeper](https://open-policy-agent.github.io/gatekeeper/website/).
@@ -94,7 +94,7 @@ title: General Technical Review
### Design
* **Explain the design principles and best practices the project is following.**
-
+
Design principles are documented [here](https://docs.kcp.io/kcp/latest/GOALS/#principles). Below is a list of them:
* Convention over configuration / optimize for the user's benefit.
@@ -105,7 +105,7 @@ title: General Technical Review
* Consolidate efforts in the ecosystem into a more focused effort.
* **Outline or link to the project’s architecture requirements? Describe how they differ for Proof of Concept, Development, Test and Production environments, as applicable.**
-
+
kcp can be installed on top of a Kubernetes cluster and primarily requires a means to expose its API endpoint (e.g. load balancer support in the Kubernetes cluster). The requirements for that don't significantly change between environments. Specifically for development, the `kcp` binary has an "all-in-one" mode that makes local development against kcp possible.
For test and production environments, it is strongly encouraged to run a sharded setup to validate that integrations work correctly with multiple shards.
@@ -121,7 +121,7 @@ title: General Technical Review
kcp builds on top of Kubernetes' kube-apiserver code and as such, implements similar authentication and authorization methods. Specifically, kcp supports Kubernetes' Role-Based Access Control (RBAC) to assign permissions to user identities. kcp adds a few verbs and subresources to "stock" Kubernetes RBAC, which are documented [here](https://docs.kcp.io/kcp/latest/concepts/authorization/authorizers/).
* **Describe how the project has addressed sovereignty.**
-
+
kcp can be entirely self-hosted on a Kubernetes cluster. All data is stored in an etcd instance, which can be fully managed by the installation owner. Except for container images, no access to internet resources is required, and thus a kcp setup can be run fully air-gapped to address any data sovereignty concerns.
* **Describe any compliance requirements addressed by the project.**
@@ -135,7 +135,7 @@ title: General Technical Review
* **Describe the project’s resource requirements, including CPU, Network and Memory.**
A default installation from the Helm chart requires at least:
-
+
* 1.5 cpu + 6GB RAM for three-node etcd cluster
* 0.1 cpu + 512MB RAM for kcp server
* 0.1 cpu + 128MB RAM for kcp-front-proxy
@@ -160,9 +160,9 @@ title: General Technical Review
* **Outline any additional configurations from default to make reasonable use of the project**
kcp provides a multitude of command line options to configure its behaviour. A complete list can be accessed by running `kcp start options`, with most of the options derived from kube-apiserver.
-
+
A few configuration options that would be useful are:
-
+
* `--authorization-webhook-config-file` allows referencing a [webhook configuration file](https://docs.kcp.io/kcp/latest/concepts/authorization/authorizers/#webhook-authorizer) for authorization via a webhook.
* Several `--oidc-*` flags exist to enable and configure OIDC authentication. Alternatively, `--authentication-config` can be used to reference a [structured authentication configuration file](https://docs.kcp.io/kcp/main/concepts/authentication/oidc/#configure-kcp-oidc-authentication-using-structured-authentication-configuration).
* `--audit-webhook-config-file` allows referencing a configuration file for an audit webhook endpoint. An audit policy can be configured via `--audit-policy-file`.
@@ -176,7 +176,7 @@ title: General Technical Review
* **Describe compatibility of any new or changed APIs with API servers, including the Kubernetes API server**
Since kcp implements the Kubernetes Resource Model and is in fact based on the kube-apiserver code, it is compatible with most tools and clients meant for Kubernetes.
-
+
The main addition of kcp to a Kubernetes-style API is the concept of logical clusters, kcp's multi-tenancy unit. A kcp instance doesn't provide Kubernetes API resources under one unified endpoint; instead, it provides access to multiple endpoints that each act as fully independent Kubernetes API endpoints. This means that e.g. `/clusters/a` and `/clusters/b` are both Kubernetes-compatible API endpoints, but they return different API resources and objects.
As such, each logical cluster can be accessed with a Kubernetes client (e.g. `kubectl`) and switching between them is possible via a `kubectl` plugin provided by the kcp project. It can also be done manually by updating server URLs (see the `/clusters/` schema above). Logical clusters have dedicated resources, objects and RBAC.
@@ -189,13 +189,13 @@ title: General Technical Review
kcp has its [release process publicly documented](https://docs.kcp.io/kcp/main/contributing/guides/publishing-a-new-kcp-release/).
Releases are published by the CI/CD pipelines (Prow and GitHub Actions) after a git tag has been pushed. As such, automation handles the majority of the release process.
-
+
Minor and patch releases are relatively uniform in their release process; the main difference is which branch the new release is cut from. New major releases have not been cut so far and would require bumping Go modules to include the major version in the module path, which would entail changes across the codebase.
### Installation
* **Describe how the project is installed and initialized, e.g. a minimal install with a few lines of code or does it require more complex integration and configuration?**
-
+
* A Helm chart is available for installation on Kubernetes. A full installation walkthrough is available [here](https://github.com/kcp-dev/helm-charts/tree/main/charts/kcp), but generally speaking, installation is as easy as `helm install`.
The main consideration is how to make the kcp API endpoint accessible. Several expose strategies are documented, the primary task outside of configuring the Helm chart is setting up the proper DNS records for the chosen external DNS name.
@@ -238,9 +238,9 @@ title: General Technical Review
* **Describe how each of the cloud native principles apply to your project.**
* kcp is **secure** by default by encrypting (with TLS), authenticating (with mTLS or OIDC) and authorizing (with Kubernetes RBAC) requests made to it.
-
+
* kcp is **resilient** by supporting a High Availability setup, in which individual kcp processes can crash or restart without the kcp instance having reduced availability.
-
+
* kcp is **manageable** by being KRM-driven and exposing its main configuration primitives via its Kubernetes-like API, i.e. `Workspaces` allow creating new units of its multi-tenancy boundary.
* kcp is **sustainable** by avoiding a vendor lock-in and instead building on top of the Kubernetes Resource Model, which subsequently allows the project to build on top of Kubernetes technology both on the server and the client side (i.e. to interact with kcp, you can use known client tools or client libraries with some extensions that the kcp project develops).
@@ -254,11 +254,11 @@ title: General Technical Review
* **Security Hygiene**
* **Please describe the frameworks, practices and procedures the project uses to maintain the basic health and security of the project.**
- * Vulnerability Management: kcp has a defined security policy and a private process for reporting vulnerabilities, either through GitHub's security advisory feature or a dedicated private email address (kcp-dev-private@googlegroups.com). This allows for coordinated disclosure.
+ * Vulnerability Management: kcp has a defined security policy and a private process for reporting vulnerabilities, either through GitHub's security advisory feature or a dedicated private email address (kcp-dev-private@googlegroups.com). This allows for coordinated disclosure.
- * Security Response Committee: A formal committee of project maintainers is responsible for triaging and responding to security reports in a timely manner.
+ * Security Response Committee: A formal committee of project maintainers is responsible for triaging and responding to security reports in a timely manner.
- * Public Advisories: Once a vulnerability is addressed, kcp publishes public security advisories on GitHub to inform users. Past advisories for both "Critical" and "Moderate" severity issues are available, demonstrating the process is active.
+ * Public Advisories: Once a vulnerability is addressed, kcp publishes public security advisories on GitHub to inform users. Past advisories for both "Critical" and "Moderate" severity issues are available, demonstrating the process is active.
* Dependency Scanning: as shown in release notes, dependencies are regularly updated to address known CVEs, showing that dependency scanning is part of the release process. kcp uses GitHub's Dependabot feature to be informed about known dependency vulnerabilities.
@@ -278,9 +278,9 @@ title: General Technical Review
* **Describe how the project is following and implementing [secure software supply chain best practices](https://project.linuxfoundation.org/hubfs/CNCF\_SSCP\_v1.pdf)**
kcp secures the source code by ensuring contributors have minimal permissions on the GitHub repositories. Instead, PR merge automation in the form of Prow is enabled. Through the Prow configuration, it is not possible for a PR author to approve their own PR, enforcing a four-eyes principle. Branch protection is automated via configuration in the [kcp-dev/infra](https://github.com/kcp-dev/infra) repository. kcp uses GitHub features to track dependencies and vulnerabilities in them and ensure no secrets are pushed.
-
+
The kcp build infrastructure is deployed with OpenTofu, with deployment to it automated via the same kcp-dev/infra repository and minimal direct access (i.e. only a small subset of maintainers have access to troubleshoot issues in automation). Since Prow is based on containers, it is easy to reproduce build environments by using the same container image referenced in a Prow job definition. Job / pipeline definition is stored in code and is subject to the same review process as application code changes.
-
+
The kcp project uses GitHub to define teams and associate those teams with specific permissions. As such, only maintainers have elevated permissions on the GitHub organization, and most members have read-only access to the repositories and settings of [github.com/kcp-dev](https://github.com/kcp-dev).
## Day 1 – Installation and Deployment Phase
@@ -290,7 +290,7 @@ title: General Technical Review
* **Describe what project installation and configuration look like.**
Installation depends on the exact method chosen: Currently, kcp supports a [Helm chart](https://github.com/kcp-dev/helm-charts) and an [operator](https://github.com/kcp-dev/kcp-operator).
-
+
* The Helm chart is configured via a Helm values file. This file configures a variety of behavior for the deployed kcp installation, most importantly the external hostname under which the kcp instance will be accessible.
A minimal Helm values file would look like this:
@@ -311,7 +311,7 @@ title: General Technical Review
```
kcp would then start and look similar to this:
-
+
```bash
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
@@ -323,7 +323,7 @@ title: General Technical Review
kcp-front-proxy-7f6b7dfdbf-7fr4d 1/1 Running 0 1d
kcp-front-proxy-7f6b7dfdbf-7thwx 1/1 Running 0 1d
```
-
+
To generate credentials to access it, [a client certificate needs to be generated](https://github.com/kcp-dev/helm-charts/tree/main/charts/kcp#initial-access).
Eventually, accessing kcp is possible via `kubectl`:
@@ -430,7 +430,7 @@ title: General Technical Review
kcp is run like any service within a cluster, hence there isn't a way to enable or disable it beyond installing or removing its resources. Specifically, it is its own control plane and therefore doesn't directly integrate with the Kubernetes API in any critical capacity.
* **Describe how enabling the project changes any default behavior of the cluster or running workloads.**
-
+
As a standalone control plane, kcp does not change behavior of the underlying cluster or workloads running alongside it.
* **Describe how the project tests enablement and disablement.**
@@ -438,13 +438,13 @@ title: General Technical Review
Since no enablement/disablement exists for kcp, this is also not tested.
* **How does the project clean up any resources created, including CRDs?**
-
+
The two installation methods (Helm chart or operator) both include cleanup logic (Helm cleans up resources created by it, the operator uses owner references) when uninstalling a kcp instance.
### Rollout, Upgrade and Rollback Planning
* **How does the project intend to provide and maintain compatibility with infrastructure and orchestration management tools like Kubernetes and with what frequency?**
-
+
kcp intends to maintain compatibility with all upstream supported Kubernetes minor versions at the time of a kcp release. kcp minor releases happen every 3-4 months, which is the frequency at which compatibility with Kubernetes is evaluated.
* **Describe how the project handles rollback procedures.**
@@ -456,7 +456,7 @@ title: General Technical Review
A rollout/rollback could fail if the new/old version of kcp cannot read the data format in etcd written by the other version (e.g. if a new API version has been used already). This would cause the control plane to become unavailable. However, this would not impact already running workloads on any physical clusters orchestrated by additional components such as api-syncagent, as they run independently. The impact would be an inability to schedule new service instances, update existing ones, or otherwise interact with the Kubernetes-like API of kcp until the control plane is restored.
* **Describe any specific metrics that should inform a rollback.**
-
+
Request-level metrics derived from the kube-apiserver codebase that kcp is based on, in particular a high rate of 5xx HTTP responses.
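As a sketch, a PromQL expression over the standard kube-apiserver request metric could watch for such a spike; the window and any alert threshold are illustrative choices left to the operator:

```shell
# Fraction of requests answered with a 5xx code over the last 5 minutes,
# using the apiserver_request_total metric that kcp inherits from the
# kube-apiserver codebase it is built on.
QUERY='sum(rate(apiserver_request_total{code=~"5.."}[5m])) / sum(rate(apiserver_request_total[5m]))'
echo "$QUERY"
```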
* **Explain how upgrades and rollbacks were tested and how the upgrade-\>downgrade-\>upgrade path was tested.**
@@ -466,11 +466,11 @@ title: General Technical Review
* **Explain how the project informs users of deprecations and removals of features and APIs.**
APIs are marked as deprecated in API field descriptions and release notes. As this follows the same patterns that Kubernetes (and CRDs) use, tooling (like linters) will inform integrators that import the kcp API SDK about API field deprecations.
-
+
API removal happens after a grace period following deprecation and is communicated in the release notes.
* **Explain how the project permits utilization of alpha and beta capabilities as part of a rollout.**
-
+
kcp provides access to Kubernetes feature gates and adds its own feature gates to make sure such capabilities are only used intentionally. Feature gates are configured via the `--feature-gates` flag on the kcp binary.
diff --git a/docs/content/contributing/governance/security-self-assessment.md b/docs/content/contributing/governance/security-self-assessment.md
index 1ce45e19c3d..500acd476fc 100644
--- a/docs/content/contributing/governance/security-self-assessment.md
+++ b/docs/content/contributing/governance/security-self-assessment.md
@@ -80,7 +80,7 @@ kcp provides a set of plugins for `kubectl`, the Kubernetes command line client.
kcp provides resources dedicated to managing available APIs in a Workspace.
* **APIExport**: Allows a service provider in one workspace to publish an API for consumption by other workspaces.
-
+
* **APIBinding**: Allows a service consumer in one workspace to bind to an APIExport from another workspace, making the published API available in the local workspace.
The ability to bind APIs across workspaces (a security boundary) is guarded by RBAC checks.
@@ -260,7 +260,7 @@ There is a template for incident response for reference [here](https://github.co
### Known Issues over Time
-* [GHSA-c7xh-gjv4-4jgv](https://github.com/kcp-dev/kcp/security/advisories/GHSA-c7xh-gjv4-4jgv): Impersonation allows access to global administrative groups
+* [GHSA-c7xh-gjv4-4jgv](https://github.com/kcp-dev/kcp/security/advisories/GHSA-c7xh-gjv4-4jgv): Impersonation allows access to global administrative groups
* [GHSA-w2rr-38wv-8rrp](https://github.com/kcp-dev/kcp/security/advisories/GHSA-w2rr-38wv-8rrp): Unauthorized creation and deletion of objects in arbitrary workspaces through APIExport Virtual Workspace
### OpenSSF Best Practices
diff --git a/docs/content/contributing/guides/publishing-a-new-kcp-release.md b/docs/content/contributing/guides/publishing-a-new-kcp-release.md
index f4c56a0f3ea..b274295d254 100644
--- a/docs/content/contributing/guides/publishing-a-new-kcp-release.md
+++ b/docs/content/contributing/guides/publishing-a-new-kcp-release.md
@@ -29,7 +29,7 @@ description: >
1. If your git remote for kcp-dev/kcp is named something other than `upstream`, change `REF` accordingly
2. If you are creating a release from a release branch, change `main` in `REF` accordingly, or you can
make `REF` a commit hash.
-
+
```shell
REF=upstream/main
TAG= # e.g. v1.2.3
@@ -114,7 +114,7 @@ release-notes \
--branch main \
--start-rev $PREV_TAG \
--end-rev $TAG \
- --output CHANGELOG.md
+ --output CHANGELOG.md
```
Don't commit the `CHANGELOG.md` to the repository, just keep it around to update the release on GitHub (next step).
diff --git a/docs/content/contributing/guides/replicate-new-resource.md b/docs/content/contributing/guides/replicate-new-resource.md
index 74121b773ed..2c743d75ee5 100644
--- a/docs/content/contributing/guides/replicate-new-resource.md
+++ b/docs/content/contributing/guides/replicate-new-resource.md
@@ -7,7 +7,7 @@ description: >
As of today, adding a new resource for replication is a manual process that consists of the following steps:
-1. You need to register a new CRD in the cache server.
+1. You need to register a new CRD in the cache server.
Registration is required; otherwise, the cache server won't be able to serve the new resource.
It boils down to adding a new entry into [an array](https://github.com/kcp-dev/kcp/blob/53fdaf580d46686686871f77e4a629bc3c234051/pkg/cache/server/bootstrap/bootstrap.go#L46).
If you don’t have a CRD definition file for your type, you can use [the crdpuller](https://github.com/kcp-dev/kcp/tree/53fdaf580d46686686871f77e4a629bc3c234051/cmd/crd-puller) against any kube-apiserver to create the required manifest.
diff --git a/docs/content/developers/internals/etcd-structure.md b/docs/content/developers/internals/etcd-structure.md
index 95dc8432174..9cd0ec8184b 100644
--- a/docs/content/developers/internals/etcd-structure.md
+++ b/docs/content/developers/internals/etcd-structure.md
@@ -4,7 +4,7 @@ description: Changes kcp has made to etcd storage paths.
# etcd structure
-kcp has made some changes to etcd storage paths to support logical clusters and APIExport identities. Please see
+kcp has made some changes to etcd storage paths to support logical clusters and APIExport identities. Please see
below for details.
## Built-in APIs
@@ -55,7 +55,7 @@ Let's break down the segments in the etcd path for this example APIBinding insta
## "Bound" custom resource instances
-Custom resource instances for an API provided by an APIExport, bound by an APIBinding, use the following storage
+Custom resource instances for an API provided by an APIExport, bound by an APIBinding, use the following storage
path structure:
```
diff --git a/docs/content/developers/investigations/transparent-multi-cluster.md b/docs/content/developers/investigations/transparent-multi-cluster.md
index 425a2405d56..f5ff091a5ae 100644
--- a/docs/content/developers/investigations/transparent-multi-cluster.md
+++ b/docs/content/developers/investigations/transparent-multi-cluster.md
@@ -7,7 +7,7 @@ description: >
!!! warning
- This was a prototype that was not continued. The ideas here are still valid and could be picked up by a future project.
+ This was a prototype that was not continued. The ideas here are still valid and could be picked up by a future project.
A key tenet of Kubernetes is that workload placement is node-agnostic until the user needs it to be - Kube offers a homogeneous compute surface that admins or app devs can "break-glass" and set constraints all the way down to writing software that deeply integrates with nodes. But for the majority of workloads a cluster is no more important than a node - it's a detail determined by some human or automated process.
@@ -15,7 +15,7 @@ A key area of investigation for `kcp` is exploring transparency of workloads to
## Goal: The majority of applications and teams should have workflows where cluster is a detail
-A number of projects have explored this since the beginning of Kubernetes - this prototype should explore in detail
+A number of projects have explored this since the beginning of Kubernetes - this prototype should explore in detail
how we can make a normal Kubernetes flow for most users be cluster-independent but still "break-glass" and describe placement in detail. Since this is a broad topic, and we want to benefit the majority of users, we need to also add constraints that maximize the chance of these approaches being adopted.
### Constraint: The workflows and practices teams use today should be minimally disrupted
diff --git a/docs/content/developers/storage-to-rest-patterns.md b/docs/content/developers/storage-to-rest-patterns.md
index 7cb56454df6..a0012562782 100644
--- a/docs/content/developers/storage-to-rest-patterns.md
+++ b/docs/content/developers/storage-to-rest-patterns.md
@@ -2,17 +2,17 @@
## Logical Clusters
-kcp promises to support 1 million logical clusters.
-A logical cluster is like a Kubernetes endpoint, i.e. an endpoint usual Kubernetes client tooling (client-go, controller-runtime and others)
+kcp promises to support 1 million logical clusters.
+A logical cluster is like a Kubernetes endpoint, i.e. an endpoint that the usual Kubernetes client tooling (client-go, controller-runtime, and others)
and user interfaces (kubectl, helm, web console, ...) can talk to, just like to a Kubernetes cluster.
Thus creating a logical cluster must be efficient, both in terms of storage and compute.
It also must provide isolation, just like regular clusters.
## etcd
-etcd is the primary datastore used by kcp.
-It stores data in a key-value store.
-The store’s logical view is a flat binary key space.
+etcd is the primary datastore used by kcp.
+It stores data in a key-value store.
+The store’s logical view is a flat binary key space.
The key space has a lexically sorted index on byte string keys.
In order to create a logical hierarchy, keys are usually joined with `/`, e.g. `/company/branch/location`
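The range-scan behavior this section goes on to describe can be sketched in a few lines. The following is an illustrative Python snippet (not kcp or etcd code): it shows how a lexically sorted key space answers `--prefix`-style queries as one contiguous scan, without any directory entry such as `/acme` needing to exist.

```python
import bisect

def prefix_scan(sorted_keys, prefix):
    """Return every key sharing `prefix`, as one contiguous range scan.

    Mimics the idea behind `etcdctl get --prefix`: because the key index
    is lexically sorted, all matching keys sit next to each other.
    """
    start = bisect.bisect_left(sorted_keys, prefix)
    matches = []
    for key in sorted_keys[start:]:
        if not key.startswith(prefix):
            break  # sorted order: once a key stops matching, none follow
        matches.append(key)
    return matches

keys = sorted(["/acme/branch/a", "/acme/branch/b", "/acorn/x", "/zeta"])
print(prefix_scan(keys, "/acme/"))  # → ['/acme/branch/a', '/acme/branch/b']
```

Note that only byte comparisons are involved: the scan starts at the first key not less than the prefix and stops at the first non-matching key.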
@@ -45,32 +45,32 @@ foo2
foo
```
-Note that those queries are based on byte comparisons.
-We didn't create an `/acme` key.
+Note that those queries are based on byte comparisons.
+We didn't create an `/acme` key.
Everything matching the prefix will be returned when using the `--prefix` parameter.
-This is the key idea behind workspaces in kcp.
+This is the key idea behind workspaces in kcp.
Creating a workspace on the storage layer is very efficient because it boils down to concatenating a string.
It also provides isolation on the lowest possible level.
Data is filtered by the database engine.
## Generic Registry
-kcp is based on the generic apiserver library provided by Kubernetes.
-The central type provided by the library that interacts with a storage layer is called the generic registry.
-It connects an API endpoint (REST) with a database layer.
+kcp is based on the generic apiserver library provided by Kubernetes.
+The central type provided by the library that interacts with a storage layer is called the generic registry.
+It connects an API endpoint (REST) with a database layer.
Almost all API types make use of it.
-The generic apiserver library keeps an in-memory representation of the store for each resource in an API group.
+The generic apiserver library keeps an in-memory representation of the store for each resource in an API group.
For example, `secrets` resources in the `core` API group get their own storage.
-From the perspective of this document, we can assume that the most important feature of generic storage
+From the perspective of this document, we can assume that the most important feature of generic storage
is to compute the key that is passed to the database to find the data the user wants.

-When the server starts it precomputes the `ResourcePrefix` with a group and a resource name.
-Everything else, like a workspace name, is added dynamically.
-For example, a request with a URL of `/clusters/acme/core/secrets` finds a storage responsible
-for `core/secrets` resources and adds `acme` to the `ResourcePrefix`.
+When the server starts it precomputes the `ResourcePrefix` with a group and a resource name.
+Everything else, like a workspace name, is added dynamically.
+For example, a request with a URL of `/clusters/acme/core/secrets` finds a storage responsible
+for `core/secrets` resources and adds `acme` to the `ResourcePrefix`.
The new string becomes a key that is passed to etcd to find only resources for the `acme` cluster.
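The key composition described above can be sketched as follows. This is a hypothetical Python illustration — the helper name and exact segment layout are assumptions for clarity, not kcp's actual key scheme; the point is that the per-resource prefix is precomputed while the cluster segment is appended per request.

```python
def storage_key(resource_prefix, cluster, namespace, name):
    """Compose an etcd key from a precomputed ResourcePrefix.

    Hypothetical layout for illustration only: the static per-resource
    prefix (group/resource) is extended dynamically with the cluster
    name, the optional namespace, and the object name.
    """
    parts = [resource_prefix, cluster]
    if namespace:
        parts.append(namespace)
    parts.append(name)
    return "/" + "/".join(parts)

# A request for /clusters/acme/core/secrets resolves to keys like:
print(storage_key("registry/secrets", "acme", "default", "tls-cert"))
# → /registry/secrets/acme/default/tls-cert
```

Combined with the prefix scan shown earlier, a single range query over this composed key returns only objects belonging to the `acme` cluster.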
diff --git a/docs/content/setup/integrations.md b/docs/content/setup/integrations.md
index 308a9eeb10a..636e3374f66 100644
--- a/docs/content/setup/integrations.md
+++ b/docs/content/setup/integrations.md
@@ -174,4 +174,3 @@ workspaces ws tenancy.kcp.io/v1alpha1 false Workspace
logicalclusters core.kcp.io/v1alpha1 false LogicalCluster
...
```
-
diff --git a/docs/content/setup/production/.pages b/docs/content/setup/production/.pages
index 1cb80f3d6f7..680fcef1b94 100644
--- a/docs/content/setup/production/.pages
+++ b/docs/content/setup/production/.pages
@@ -1,8 +1,9 @@
title: Production Deployment
nav:
- index.md
- - overview.md
+ - overview.md
- prerequisites.md
- kcp-dekker.md
- kcp-vespucci.md
- - kcp-comer.md
\ No newline at end of file
+ - kcp-comer.md
+ - audit-logging.md
diff --git a/docs/content/setup/production/audit-logging.md b/docs/content/setup/production/audit-logging.md
index 6c593e6f2c7..20e44c373ae 100644
--- a/docs/content/setup/production/audit-logging.md
+++ b/docs/content/setup/production/audit-logging.md
@@ -71,7 +71,7 @@ This command lists configmaps in the `default` namespace within the `root:consum
## Cross-Workspace Audit Logging
-The workspace path annotations (`kcp.io/path`, `tenancy.kcp.io/workspace`) are especially important when accessing resources across workspaces via [APIExport and APIBinding](/docs/concepts/apis/). When you claim resources from another workspace through an APIBinding, audit events are generated for the consumer workspace, allowing you to track which workspace is accessing which resources.
+The workspace path annotations (`kcp.io/path`, `tenancy.kcp.io/workspace`) are especially important when accessing resources across workspaces via [APIExport and APIBinding](../../concepts/apis/index.md). When you claim resources from another workspace through an APIBinding, audit events are generated for the consumer workspace, allowing you to track which workspace is accessing which resources.
### Setting Up Cross-Workspace Event Logging
diff --git a/docs/content/setup/production/index.md b/docs/content/setup/production/index.md
index ef2dd711316..0c0289c54b0 100644
--- a/docs/content/setup/production/index.md
+++ b/docs/content/setup/production/index.md
@@ -8,7 +8,7 @@ description: >
This document provides comprehensive guidance for deploying kcp in production environments with enterprise-grade reliability, security, and scalability. If you are looking for "hands-on" deployment instructions, please refer to the specific deployment variant guides [linked below](#deployment-variants).
!!! note
- We are working on extending this documentation further, to include multiple site deployment, where indivudual shards are deployed in the different regions.
+    We are working on extending this documentation further to include multi-site deployment, where individual shards are deployed in different regions.
This would allow for geo-distributed deployments to mimic real-world usage scenarios.
## Overview
@@ -144,7 +144,7 @@ See [Audit Logging](audit-logging.md) for details on the annotations (`kcp.io/pa
Whichever variant you choose, kcp production deployments require careful consideration of:
- **Certificate Management**: Self-signed, Let's Encrypt, or enterprise CA integration
-- **High Availability**: Multi-shard deployment with proper load distribution
+- **High Availability**: Multi-shard deployment with proper load distribution
- **Network Architecture**: Front-proxy configuration and shard accessibility patterns
- **Security**: TLS encryption, RBAC, and authentication integration
- **Observability**: Monitoring, logging, and alerting
@@ -159,7 +159,7 @@ We provide three reference deployment patterns:
- **Access pattern**: Only front-proxy is publicly accessible, shards are private
- **Network**: Simple single-cluster deployment
-### [kcp-vespucci](kcp-vespucci.md) - External Certificates
+### [kcp-vespucci](kcp-vespucci.md) - External Certificates
- **Best for**: Production environments requiring trusted certificates
- **Certificate approach**: Let's Encrypt for front-proxy, self-signed certificates for shards
- **Access pattern**: Both front-proxy and shards are publicly accessible
@@ -180,7 +180,7 @@ We provide three reference deployment patterns:
## Getting Started
1. **[Prerequisites](prerequisites.md)**: Install shared components (etcd-druid, cert-manager, kcp-operator, OIDC)
-2. **[Architecture Overview](overview.md)**: Understand kcp component communication patterns
+2. **[Architecture Overview](overview.md)**: Understand kcp component communication patterns
3. **Choose Deployment**: Select the appropriate variant for your environment
## Support Matrix
@@ -194,4 +194,4 @@ We provide three reference deployment patterns:
| Multi-region | ✓ | ✓ | ✓ | ✓ |
| OIDC authentication | ✓ | ✓ | ✓ | ✓ |
-Choose the deployment that best matches your security, compliance, and operational requirements.
\ No newline at end of file
+Choose the deployment that best matches your security, compliance, and operational requirements.
diff --git a/docs/content/setup/production/kcp-comer.md b/docs/content/setup/production/kcp-comer.md
index 90f73b1ebc3..40fd5435703 100644
--- a/docs/content/setup/production/kcp-comer.md
+++ b/docs/content/setup/production/kcp-comer.md
@@ -78,7 +78,7 @@ kubectl apply -f contrib/production/kcp-comer/kcp-front-proxy-internal.yaml
4.1. Get the LoadBalancer IP:
```bash
-kubectl get svc -n kcp-comer
+kubectl get svc -n kcp-comer
```
Configure DNS records in CloudFlare (or your chosen CDN).
@@ -90,10 +90,10 @@ nslookup api.comer.example.com
4.3 Verify deployment:
```bash
-kubectl get pods -n kcp-comer
+kubectl get pods -n kcp-comer
```
-### CloudFlare Configuration:
+### CloudFlare Configuration
Configure your CloudFlare dashboard:
diff --git a/docs/content/setup/production/overview.md b/docs/content/setup/production/overview.md
index 21f49032c69..254db281c1e 100644
--- a/docs/content/setup/production/overview.md
+++ b/docs/content/setup/production/overview.md
@@ -44,7 +44,7 @@ kcp supports running virtual workspaces outside shards, but the recommended appr
After deployment, you can verify the configuration by checking shard objects:
```bash
-kubectl get shards
+kubectl get shards
```
Output example:
@@ -128,9 +128,9 @@ if shard.Spec.VirtualWorkspaceURL == "" {
- **Edge encryption**: CloudFlare integration
- **Certificate management**: Mixed (edge + internal)
-In this scenario we have two front-proxy. One secured by CloudFlare, but working only with OIDC auth, and another internal front-proxy
+In this scenario we have two front-proxies: one secured by CloudFlare, accepting only OIDC auth, and an internal front-proxy
secured by an internal CA for internal clients.

-Understanding these patterns will help you choose the appropriate deployment strategy and configure networking correctly for your environment.
\ No newline at end of file
+Understanding these patterns will help you choose the appropriate deployment strategy and configure networking correctly for your environment.
diff --git a/docs/content/setup/production/prerequisites.md b/docs/content/setup/production/prerequisites.md
index f1696f5c4a4..4ba204cefc7 100644
--- a/docs/content/setup/production/prerequisites.md
+++ b/docs/content/setup/production/prerequisites.md
@@ -8,7 +8,7 @@ description: >
Before deploying any kcp production variant, you must install shared components that all deployments depend on. This guide covers the installation and configuration of these foundational components.
- A Kubernetes cluster with sufficient resources
-- `kubectl` configured to access your cluster
+- `kubectl` configured to access your cluster
- `helm` CLI tool installed
- DNS management capability (manual or automated)
- (Optional) CloudFlare account for DNS01 challenges
@@ -18,7 +18,7 @@ Before deploying any kcp production variant, you must install shared components
All kcp production deployments require:
1. **etcd-druid operator** - Database storage management
-2. **cert-manager** - Certificate lifecycle management
+2. **cert-manager** - Certificate lifecycle management
3. **kcp-operator** - kcp resource lifecycle management
4. **OIDC provider (dex)** - Authentication services
5. **DNS configuration** - Domain name resolution
@@ -72,7 +72,7 @@ Optional:
We are going to use the CloudFlare DNS01 challenge solver for Let's Encrypt certificates in some deployment variants. If you plan to use CloudFlare, install the cert-manager CloudFlare DNS01 solver:
```bash
-cp contrib/production/cert-manager/cluster-issuer.yaml.template contrib/production/cert-manager/cluster-issuer.yaml
+cp contrib/production/cert-manager/cluster-issuer.yaml.template contrib/production/cert-manager/cluster-issuer.yaml
# Edit contrib/production/cert-manager/cluster-issuer.yaml to add your Email.
kubectl apply -f contrib/production/cert-manager/cluster-issuer.yaml
@@ -142,7 +142,7 @@ helm upgrade -i dex dex/dex \
--create-namespace \
--namespace oidc \
-f contrib/production/oidc-dex/values.yaml
-```
+```
### 5. DNS Configuration
@@ -156,7 +156,7 @@ api.dekker.example.com → LoadBalancer IP
#### kcp-vespucci (External Certs)
```
api.vespucci.example.com → LoadBalancer IP
-root.vespucci.example.com → LoadBalancer IP
+root.vespucci.example.com → LoadBalancer IP
alpha.vespucci.example.com → LoadBalancer IP
beta.vespucci.example.com → LoadBalancer IP - remote
```
@@ -187,7 +187,7 @@ Minimum recommended resources for shared components:
| Component | CPU | Memory | Storage |
|-----------|-----|--------|---------|
| etcd-druid | 100m | 128Mi | - |
-| cert-manager | 100m | 128Mi | - |
+| cert-manager | 100m | 128Mi | - |
| kcp-operator | 100m | 128Mi | - |
| dex | 100m | 64Mi | - |
| **Total** | **400m** | **448Mi** | - |
diff --git a/docs/content/setup/sharding.md b/docs/content/setup/sharding.md
index f9f80d70005..8c0d583d02b 100644
--- a/docs/content/setup/sharding.md
+++ b/docs/content/setup/sharding.md
@@ -96,8 +96,8 @@ Each shard has its own provider workspace. Consumers bind to their local shard's
### Strategy 3: Partitioned APIExportEndpointSlices
-Partitions combined with `APIExportEndpointSlice` resources provide a mechanism to distribute API access across shards,
-enabling continued operation even when the provider's home shard is unavailable.
+Partitions combined with `APIExportEndpointSlice` resources provide a mechanism to distribute API access across shards,
+enabling continued operation even when the provider's home shard is unavailable.
In this scenario, "home shard" refers to the shard where the provider workspace (and its APIExport) is hosted.
diff --git a/docs/generators/crd-ref/crd.template.md b/docs/generators/crd-ref/crd.template.md
index 7ac1aa50329..9b277bea300 100644
--- a/docs/generators/crd-ref/crd.template.md
+++ b/docs/generators/crd-ref/crd.template.md
@@ -4,8 +4,8 @@ description: |
{{- if .Description }}
{{ .Description | indent 2 }}
{{- else }}
- Custom resource definition (CRD) schema reference page for the {{ .Title }}
- resource ({{ .NamePlural }}.{{ .Group }}), as part of the Giant Swarm
+ Custom resource definition (CRD) schema reference page for the {{ .Title }}
+ resource ({{ .NamePlural }}.{{ .Group }}), as part of the Giant Swarm
Management API documentation.
{{- end }}
weight: {{ .Weight }}
diff --git a/docs/mkdocs.yml b/docs/mkdocs.yml
index 203e0310120..816dd16c189 100644
--- a/docs/mkdocs.yml
+++ b/docs/mkdocs.yml
@@ -52,8 +52,12 @@ theme:
name: Switch to system preference
extra_css:
+ - stylesheets/version_banner.css
- stylesheets/crd.css
+extra_javascript:
+ - scripts/version_banner.js
+
extra:
version:
# Enable mike for multi-version selection
diff --git a/docs/overrides/main.html b/docs/overrides/main.html
deleted file mode 100644
index b0f5d7fcf75..00000000000
--- a/docs/overrides/main.html
+++ /dev/null
@@ -1,155 +0,0 @@
-{% extends "base.html" %} {% block extrahead %} {{ super() }}
-
-
-
-{% endblock %}
diff --git a/docs/overrides/scripts/version_banner.js b/docs/overrides/scripts/version_banner.js
new file mode 100644
index 00000000000..42a3035f0c4
--- /dev/null
+++ b/docs/overrides/scripts/version_banner.js
@@ -0,0 +1,61 @@
+(function () {
+ "use strict";
+
+ const UNRELEASED_VERSION = "main";
+
+ function getCurrentVersion() {
+ const path = window.location.pathname;
+ const versionMatch = path.match(/\/(main|v\d+\.\d+[^\/]*)/);
+ return versionMatch ? versionMatch[1] : null;
+ }
+
+ function getAvailableVersions() {
+ const versionList = document.querySelector("ul.md-version__list");
+ if (!versionList) return [];
+
+ const links = Array.from(
+ versionList.querySelectorAll("a.md-version__link")
+ );
+
+ return links
+ .map((link) => {
+ const href = link.href || link.getAttribute("href");
+ const match = href.match(/\/(v\d+\.\d+[^\/]*)\//);
+ return match ? match[1] : null;
+ })
+ .filter((v) => v && /^v\d+\.\d+/.test(v))
+ .filter((v, i, arr) => arr.indexOf(v) === i);
+ }
+
+  function getLatestVersionPath() {
+    const versions = getAvailableVersions();
+    const latestVersion = versions[0];
+    if (!latestVersion) {
+      // No released version found in the selector; fall back to the site root.
+      return "/";
+    }
+
+    // String.prototype.replace always returns a string, so no fallback
+    // expression is needed after it.
+    return window.location.pathname.replace(/\/main(\/|$)/, `/${latestVersion}$1`);
+  }
+
+ function createBanner() {
+ const banner = document.createElement("div");
+ banner.id = "version-banner";
+    banner.innerHTML = `
+      You are viewing the docs for an unreleased version.
+      <a id="latest-version-link">Click here</a> to go to the latest stable version.
+    `;
+
+ banner.querySelector("#latest-version-link").href = getLatestVersionPath();
+
+ document.body.insertBefore(banner, document.body.firstChild);
+
+ return banner;
+ }
+
+ if (getCurrentVersion() !== UNRELEASED_VERSION) {
+ return;
+ }
+
+ createBanner();
+})();
diff --git a/docs/overrides/stylesheets/version_banner.css b/docs/overrides/stylesheets/version_banner.css
new file mode 100644
index 00000000000..92fd379496e
--- /dev/null
+++ b/docs/overrides/stylesheets/version_banner.css
@@ -0,0 +1,28 @@
+#version-banner {
+ display: block;
+ position: sticky;
+ top: 0;
+ left: 0;
+ right: 0;
+ width: 100%;
+ background-color: #448aff;
+ color: white;
+ padding: 8px 16px;
+ text-align: center;
+ font-size: 0.7rem;
+ z-index: 10000;
+}
+
+#version-banner a {
+ color: white;
+ text-decoration: underline;
+ margin-left: 8px;
+}
+
+#version-banner a:hover {
+ text-decoration: none;
+}
+
+.md-header {
+ transition: top 0.2s ease;
+}