Merged
3 changes: 1 addition & 2 deletions docs/content/GOALS.md
Original file line number Diff line number Diff line change
@@ -44,7 +44,7 @@ Not every idea below may bear fruit, but it's never the wrong time to look for n

Finally, the bar to writing controllers is still high. Lowering the friction of automation and integration is to everyone's benefit - whether that's a bash script, a Terraform configuration, or custom SRE services. If we can reduce the cost of both infrastructure as code and new infrastructure APIs, we can potentially make operational investments more composable.

See the [investigations doc for minimal API server](./developers/investigations/minimal-api-server.md) for more on
improving the composability of the Kube API server.


@@ -79,4 +79,3 @@ Principles are the high level guiding rules we'd like to frame designs around. T
6. Consolidate efforts in the ecosystem into a more focused effort

Kubernetes is mature and changes to the core happen slowly. By concentrating use cases among a number of participants we can better articulate common needs, focus the design time spent in the core project into a smaller set of efforts, and bring new investment into common shared problems strategically. We should make fast progress and be able to suggest high-impact changes without derailing other important Kubernetes initiatives.

4 changes: 2 additions & 2 deletions docs/content/concepts/apis/admission-webhooks.md
@@ -14,7 +14,7 @@ flowchart TD
schema["Widgets APIResourceSchema<br/>(widgets.v1.example.org)"]
webhook["Mutating/ValidatingWebhookConfiguration<br/>ValidatingAdmissionPolicy<br/>for widgets.v1.example.org<br/><br/>Handle a from ws2 (APIResourceSchema)<br/>Handle b from ws3 (APIResourceSchema)<br/>Handle a from ws1 (CRD)"]
crd["Widgets CustomResourceDefinition<br/>(widgets.v1.example.org)"]

export --> schema
schema --> webhook
webhook --> crd
@@ -64,7 +64,7 @@ Consider a scenario where:
- An `APIExport` for `cowboys.wildwest.dev`
- A `ValidatingAdmissionPolicy` that rejects cowboys with `intent: "bad"`
- A `ValidatingAdmissionPolicyBinding` that binds the policy

- **Consumer workspace** (`root:consumer`) has:
- An `APIBinding` that binds to the provider's `APIExport`
- A user trying to create a cowboy with `intent: "bad"`
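A minimal sketch of such a policy, assuming the cowboys resource lives in group `wildwest.dev` at version `v1alpha1` with a `spec.intent` field (these names are assumptions, not confirmed by this page):

```yaml
# Hypothetical policy matching the provider-workspace scenario above:
# reject any cowboy whose spec.intent is "bad".
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: reject-bad-cowboys
spec:
  matchConstraints:
    resourceRules:
      - apiGroups: ["wildwest.dev"]
        apiVersions: ["v1alpha1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["cowboys"]
  validations:
    - expression: 'object.spec.intent != "bad"'
      message: "cowboys with bad intent are not allowed"
```

A `ValidatingAdmissionPolicyBinding` referencing `reject-bad-cowboys` would still be needed to put the policy into effect.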
18 changes: 9 additions & 9 deletions docs/content/concepts/apis/rest-access-patterns.md
@@ -11,16 +11,16 @@ This describes the various REST access patterns the kcp apiserver supports.

These requests are all prefixed with `/clusters/<workspace path | logical cluster name>`. Here are some example URLs:

- `GET /clusters/root/apis/tenancy.kcp.io/v1alpha1/workspaces` - lists all kcp Workspaces in the
`root` workspace.
- `GET /clusters/root:compute/api/v1/namespaces/test` - gets the namespace `test` from the `root:compute` workspace
- `GET /clusters/yqzkjxmzl9turgsf/api/v1/namespaces/test` - same as above, using the logical cluster name for
`root:compute`
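As a sketch, the path construction above can be expressed in a small shell fragment (the server address and workspace are placeholder values, not real endpoints):

```shell
# Build a workspace-scoped request URL by prefixing the usual Kubernetes
# API path with /clusters/<workspace path or logical cluster name>.
KCP_SERVER="https://myhost:6443"   # placeholder server
WORKSPACE="root:compute"           # workspace path, or a logical cluster name

url="${KCP_SERVER}/clusters/${WORKSPACE}/api/v1/namespaces/test"
echo "${url}"
```

With real credentials you could then issue the request, for example `curl -H "Authorization: Bearer $TOKEN" "$url"`.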

## Typical requests for resources through the APIExport virtual workspace

An APIExport provides a view into workspaces that contain APIBindings that are bound to the APIExport. This allows
the service provider - the owner of the APIExport - to access data in its consumers' workspaces. Here is an example
APIExport virtual workspace URL:

```
@@ -39,13 +39,13 @@ Let's break down the segments in the URL path:

## Setting up shared informers for a virtual workspace

A virtual workspace typically allows the service provider to set up shared informers that can list and watch
resources across all the consumer workspaces bound to or supported by the virtual workspace. For example, the
APIExport virtual workspace lets you inform across all workspaces that have an APIBinding to your APIExport. The
syncer virtual workspace lets a syncer inform across all workspaces that have a Placement on the syncer's associated
SyncTarget.

To set up shared informers to span multiple workspaces, you use a special cluster called the **wildcard cluster**,
denoted by `*`. An example URL you would use when constructing a shared informer in this manner might be:

```
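For illustration, a wildcard list URL against an APIExport virtual workspace could be assembled like this (the service host, export path, and resource group/version are assumed values, not taken from this page):

```shell
# The wildcard cluster "*" spans all workspaces bound to the APIExport,
# which is what a shared informer lists and watches against.
EXPORT_VW="https://myhost:6443/services/apiexport/root:provider/my-export"  # placeholder

url="${EXPORT_VW}/clusters/*/apis/wildwest.dev/v1alpha1/cowboys"
echo "${url}"
```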
2 changes: 1 addition & 1 deletion docs/content/concepts/quickstart-tenancy-and-apis.md
@@ -108,7 +108,7 @@ NAME TYPE PHASE URL
b universal Ready https://myhost:6443/clusters/root:a:b
```

Here is a quick collection of commands showing the navigation between the workspaces you've just created.
Note the usage of `..` to switch to the parent workspace and `-` to switch to the previously selected workspace.

```console
1 change: 0 additions & 1 deletion docs/content/concepts/sharding/index.md
@@ -3,4 +3,3 @@
## Pages

{% include "partials/section-overview.html" %}

8 changes: 4 additions & 4 deletions docs/content/concepts/workspaces/mounts.md
@@ -56,7 +56,7 @@ root/
└── org1/
├── project-a/ # Traditional LogicalCluster workspace
│ ├── LogicalCluster object # ✓ Has backing logical cluster
│ ├── /api/v1/configmaps # ✓ Served by kcp directly
│ └── /api/v1/secrets # ✓ Standard Kubernetes APIs
└── project-b/ # Mounted workspace
@@ -119,7 +119,7 @@ While the mount object can be any Custom Resource, you still need a controller t
- Implement and run the actual API server/proxy that serves requests at the `status.URL`
- Handle authentication, authorization, and any request filtering if needed

The kcp mounting machinery handles the workspace-to-mount routing, but the actual API implementation is entirely up to you.

### Creating a Mounted Workspace

@@ -141,7 +141,7 @@ spec:
#### Mount Field Requirements

- `ref.apiVersion`: The API version of the mount object
- `ref.kind`: The kind of the mount object
- `ref.name`: The name of the mount object
- `ref.namespace`: (Optional) The namespace of the mount object if it's namespaced
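Put together, a mounted Workspace using these fields might look like the following sketch (the mount object's group, kind, and name are illustrative assumptions, not values from this page):

```yaml
apiVersion: tenancy.kcp.io/v1alpha1
kind: Workspace
metadata:
  name: project-b
spec:
  mount:
    ref:
      # All values below are hypothetical; point these at your own mount object.
      apiVersion: proxy.example.dev/v1alpha1
      kind: KubeCluster
      name: my-proxy-cluster
```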

Expand Down Expand Up @@ -238,4 +238,4 @@ The workspace mounts controller (`kcp-workspace-mounts`) manages the integration

## References

1. https://github.com/kcp-dev/contrib/tree/main/20241013-kubecon-saltlakecity/mounts-vw - Example mount controller and proxy implementation
2 changes: 1 addition & 1 deletion docs/content/concepts/workspaces/workspace-termination.md
@@ -106,5 +106,5 @@ You can use this url to construct a kubeconfig for your controller. To do so, us

When writing a custom terminator controller, the following needs to be taken into account:

* We strongly recommend using [multicluster-runtime](https://github.com/kcp-dev/multicluster-runtime) to build your controller in order to properly handle which `LogicalCluster` originates from which workspace
* You need to update `LogicalClusters` using patches; they cannot be updated via the update API
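As a sketch of the second point, a merge patch payload for a `LogicalCluster` could be prepared like this (the finalizer mutation is a hypothetical example; the `kubectl` invocation is shown only as a comment because it requires a live workspace kubeconfig):

```shell
# LogicalCluster objects must be modified via patch, not update.
# Prepare a JSON merge patch; here we (hypothetically) clear finalizers.
patch='{"metadata":{"finalizers":null}}'
echo "${patch}"

# Against a live workspace you would then run something like:
#   kubectl patch logicalcluster cluster --type=merge -p "${patch}"
```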
1 change: 0 additions & 1 deletion docs/content/contributing/continuous-integration/index.md
@@ -56,4 +56,3 @@ Then, to have your test use that shared kcp server, you add `-args --use-default
```shell
go test ./test/e2e/apibinding -count 20 -failfast -args --use-default-kcp-server
```
