diff --git a/KCP_RELATED_CHANGES.md b/KCP_RELATED_CHANGES.md
new file mode 100644
index 0000000000000..1fe2076592476
--- /dev/null
+++ b/KCP_RELATED_CHANGES.md
@@ -0,0 +1,97 @@
+# Why this forked repository?
+
+This repository carries the prototype branch that accumulates the hacks, prototypes, proto-KEP experiments, and workarounds required to make [KCP](https://github.com/kcp-dev/kcp/blob/main/README.md) a reality.
+It is currently based on K8S 1.22, and commits are identified with labels such as HACK/FEATURE/WORKAROUND.
+
+# Summary of changes
+
+A detailed explanation of the changes made on top of the Kubernetes code can be found in both the commit messages and the comments in the associated code.
+
+However, here is a summary of the changes, along with the underlying requirements and motivations. Reading the linked investigation documents first will help.
+
+## A. Minimal API Server
+
+__Investigation document:__ [minimal-api-server.md](https://github.com/kcp-dev/kcp/blob/main/docs/investigations/minimal-api-server.md)
+
+1. New generic control plane based on kube api-server
+
+ It is mainly provided by code that was:
+
+ 1. initially duplicated from the kube api-server main code and legacy scheme (`DUPLICATE` commits),
+
+ 2. then stripped of unnecessary machinery (egress, API aggregation, webhooks) and APIs (Pods, Nodes, Deployments, etc.) (`NEW` commits)
+
+2. Support adding K8S built-in resources (`core/Pod`, `apps/Deployment`, ...) as CRDs
+
+ This is required since the new generic control plane scheme no longer contains those resources.
+
+ This is provided by:
+
+ - hacks (`HACK` commits) that:
+
+ 1. allow the go-restful server to be bypassed for those resources and route them to the CRD handler
+ 2. allow the CRD handler, and the OpenAPI publisher, to support resources of the `core` group
+ 3. convert the `protobuf` requests sent to those resources into requests with the `application/json` content type before letting the CRD handler serve them
+ 4. replace the table converter of the CRDs that bring back those resources with the default table converter of the related built-in resource
+
+ - a new feature, or potential kube fix (`KUBEFIX` commit), that:
+
+ 5. introduces support for strategic merge patch (SMP) on CRDs.
+ This support uses the OpenAPI v3 schema of the CRD to drive the SMP execution, but it is only a minimal implementation and doesn't fully support OpenAPI schemas that lack the expected `patchStrategy` and `patchMergeKey` annotations.
+ In order to avoid changing the behavior of existing client tools, this support is only added for those K8S built-in resources (see the sketch below).
+
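+For illustration, here is a minimal sketch of how the `patchStrategy` and `patchMergeKey` metadata drive SMP, using the upstream `strategicpatch` package. The `Spec` and `Container` types are hypothetical stand-ins that carry the metadata as Go struct tags, the way K8S built-in types do; the KCP change derives the same information from the CRD OpenAPI v3 schema instead:
+
+```go
+package main
+
+import (
+	"fmt"
+
+	"k8s.io/apimachinery/pkg/util/strategicpatch"
+)
+
+// Container stands in for a built-in type; the names are hypothetical.
+type Container struct {
+	Name  string `json:"name"`
+	Image string `json:"image"`
+}
+
+// Spec declares that the containers list is merged by the "name" key
+// instead of being replaced wholesale, which is the essence of SMP.
+type Spec struct {
+	Containers []Container `json:"containers" patchStrategy:"merge" patchMergeKey:"name"`
+}
+
+func main() {
+	original := []byte(`{"containers":[{"name":"a","image":"a:1"},{"name":"b","image":"b:1"}]}`)
+	patch := []byte(`{"containers":[{"name":"b","image":"b:2"}]}`)
+
+	// The Spec zero value supplies the patch metadata via its struct tags.
+	merged, err := strategicpatch.StrategicMergePatch(original, patch, Spec{})
+	if err != nil {
+		panic(err)
+	}
+	// Container "b" is updated in place; "a" is preserved.
+	fmt.Println(string(merged))
+}
+```
+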
+## B. Logical clusters
+
+__Investigation document:__ [logical-clusters.md](https://github.com/kcp-dev/kcp/blob/main/docs/investigations/logical-clusters.md)
+
+1. Logical clusters represented as a prefix in etcd
+
+ It is mainly provided by hacks (`HACK` commits) that:
+
+ 1. allow intercepting the API server handler chain to set the expected logical cluster context value from either a given suffix of the request's API server base URL, or a given header in the HTTP request
+
+ 2. change the etcd storage layer to use the logical cluster as a prefix in the etcd key (see the sketch after this list)
+
+ 3. allow wildcard watches that retrieve objects from all the logical clusters
+
+ 4. correctly get or set the `clusterName` metadata field in the storage layer operations based on the etcd key and its new prefix
+
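+As a rough illustration of the prefixing idea (the exact key layout in kcp may differ, so treat the path segments below as assumptions), the storage layer conceptually builds keys like this, which is also what makes cross-cluster wildcard watches possible:
+
+```go
+package main
+
+import (
+	"fmt"
+	"path"
+)
+
+// clusterScopedKey sketches the storage-layer hack: the logical cluster name
+// becomes one more segment of the etcd key, giving every logical cluster a
+// disjoint key space under the shared prefix.
+func clusterScopedKey(prefix, resource, clusterName, namespace, name string) string {
+	return path.Join(prefix, resource, clusterName, namespace, name)
+}
+
+func main() {
+	fmt.Println(clusterScopedKey("/registry", "configmaps", "root:org:ws", "default", "kube-root-ca.crt"))
+	// Output: /registry/configmaps/root:org:ws/default/kube-root-ca.crt
+	// A wildcard watch simply ranges over "/registry/configmaps/" across all clusters.
+}
+```
+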
+2. Support of logical clusters (== tenancy) in the CRD management, OpenAPI and discovery endpoints, and clients used by controllers
+
+ It is mainly provided by a hack (`HACK` commit) that adds CRD tenancy by ensuring that logical clusters are taken into account in:
+ - CRD-related controllers
+ - APIServices-related controllers
+ - Discovery + OpenAPI endpoints
+
+ In the current Kubernetes design, those three areas are highly coupled and intertwined, which explains why this commit had to hack the code at several levels:
+ - the client-go level
+ - the controllers level
+ - the HTTP handlers level
+
+ While this gives a detailed idea of which code needs to be touched to enable CRD tenancy, a clean implementation would first require some refactoring to build the abstraction layers that would allow decoupling those areas. The sketch below shows the cluster-aware keys that the patched controllers rely on.
+
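+For the controllers, the cluster-awareness visible throughout this patch boils down to carrying the logical cluster inside workqueue keys. A small, self-contained example using the `kcp-dev/apimachinery` helpers that the patched controllers below rely on:
+
+```go
+package main
+
+import (
+	"fmt"
+
+	kcpcache "github.com/kcp-dev/apimachinery/v2/pkg/cache"
+)
+
+func main() {
+	// Cluster-aware keys extend the usual "namespace/name" workqueue keys
+	// with the logical cluster, so a single controller instance can serve
+	// many logical clusters at once.
+	key := kcpcache.ToClusterAwareKey("root:org:ws", "default", "my-configmap")
+
+	clusterName, namespace, name, err := kcpcache.SplitMetaClusterNamespaceKey(key)
+	if err != nil {
+		panic(err)
+	}
+	fmt.Println(clusterName, namespace, name)
+}
+```
+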
+# Potential client problems
+
+Although these changes to the K8S codebase were made to preserve compatibility with Kubernetes client tools, some problems may remain:
+
+## Incomplete protobuf support for built-in resources
+
+In some contexts, like the `controller-runtime` library used by the Operator SDK, all the resources of the `client-go` scheme are created or updated using the `application/vnd.kubernetes.protobuf` content type.
+
+However, when these resources are in fact added as CRDs, as in the KCP minimal API server scenario, they cannot be created or updated, since protobuf (de)serialization is not (and probably cannot be) supported for CRDs.
+For now, the [A.2.3 hack mentioned above](#A-2-3) simply converts the `protobuf` request to a `json` one, but this might not cover all use cases or corner cases.
+
+The clean solution would probably be serialization-type negotiation in `client-go`, which we haven't implemented yet, but which would work like this:
+when the server receives a request with an unsupported serialization, it should reject it with a 406
+and provide a list of supported content types. `client-go` should then examine whether it can satisfy the request by encoding the object with a different serializer.
+This would require a KEP, but it is at least in keeping with the content negotiation already done for GET / WATCH in Kube. A rough sketch of the idea follows.
+
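+A rough, hypothetical sketch of the fallback behavior (the helper name and the plain `net/http` usage are illustrative only; real support would live inside `client-go`'s request machinery):
+
+```go
+package main
+
+import (
+	"bytes"
+	"net/http"
+)
+
+// createWithFallback tries the protobuf content type first and, if the server
+// answers 406 Not Acceptable, re-sends the same object encoded as JSON.
+func createWithFallback(url string, protobufBody, jsonBody []byte) (*http.Response, error) {
+	resp, err := http.Post(url, "application/vnd.kubernetes.protobuf", bytes.NewReader(protobufBody))
+	if err != nil {
+		return nil, err
+	}
+	if resp.StatusCode != http.StatusNotAcceptable {
+		return resp, nil
+	}
+	resp.Body.Close()
+	// The 406 response would carry the list of supported content types;
+	// here we simply fall back to JSON, which CRDs always support.
+	return http.Post(url, "application/json", bytes.NewReader(jsonBody))
+}
+
+func main() {}
+```
+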
+## Incomplete Strategic merge patch support for built-in resources
+
+Client tools like `kubectl` assume that all K8S native resources (== `client-go` scheme resources)
+support strategic merge patch and use it by default when updating or patching a resource.
+
+Currently in Kube, strategic merge patch is not supported for CRDs, which would break compatibility with client tools for all the K8S native resources that are in fact added as CRDs in the KCP minimal API server.
+The [A.2.5 change mentioned above](#A-2-5) tries to fix this by using the CRD OpenAPI v3 schema as the source of the information that drives the strategic merge patch execution.
+
+While this fixes the problem in most cases, there might still be errors if the OpenAPI v2 schema for such a resource, as imported from the CRD OpenAPI v3 schema, is missing the `x-kubernetes-patch-strategy` and `x-kubernetes-patch-merge-key` annotations.
\ No newline at end of file
diff --git a/cmd/kube-apiserver/app/config.go b/cmd/kube-apiserver/app/config.go
index 7f03d42f9d8e1..8047fa65fc624 100644
--- a/cmd/kube-apiserver/app/config.go
+++ b/cmd/kube-apiserver/app/config.go
@@ -18,7 +18,9 @@ package app
import (
apiextensionsapiserver "k8s.io/apiextensions-apiserver/pkg/apiserver"
+ "k8s.io/apiextensions-apiserver/pkg/apiserver/conversion"
"k8s.io/apimachinery/pkg/runtime"
+ "k8s.io/apiserver/pkg/informerfactoryhack"
"k8s.io/apiserver/pkg/util/webhook"
aggregatorapiserver "k8s.io/kube-aggregator/pkg/apiserver"
aggregatorscheme "k8s.io/kube-aggregator/pkg/apiserver/scheme"
@@ -92,14 +94,23 @@ func NewConfig(opts options.CompletedOptions) (*Config, error) {
}
c.KubeAPIs = kubeAPIs
- apiExtensions, err := controlplaneapiserver.CreateAPIExtensionsConfig(*kubeAPIs.ControlPlane.Generic, kubeAPIs.ControlPlane.VersionedInformers, pluginInitializer, opts.CompletedOptions, opts.MasterCount,
- serviceResolver, webhook.NewDefaultAuthenticationInfoResolverWrapper(kubeAPIs.ControlPlane.ProxyTransport, kubeAPIs.ControlPlane.Generic.EgressSelector, kubeAPIs.ControlPlane.Generic.LoopbackClientConfig, kubeAPIs.ControlPlane.Generic.TracerProvider))
+ authInfoResolver := webhook.NewDefaultAuthenticationInfoResolverWrapper(kubeAPIs.ControlPlane.ProxyTransport, kubeAPIs.ControlPlane.Generic.EgressSelector, kubeAPIs.ControlPlane.Generic.LoopbackClientConfig, kubeAPIs.ControlPlane.Generic.TracerProvider)
+ conversionFactory, err := conversion.NewCRConverterFactory(serviceResolver, authInfoResolver)
+ if err != nil {
+ return nil, err
+ }
+
+ // TODO(ntnn): upstream uses kubeAPIs.ControlPlane.VersionedInformers instead of the versionedInformers returned by BuildGenericConfig.
+ // Check if these are equivalent or if this is a deliberate divergence.
+ apiExtensions, err := controlplaneapiserver.CreateAPIExtensionsConfig(*kubeAPIs.ControlPlane.Generic, informerfactoryhack.Wrap(versionedInformers), pluginInitializer, opts.CompletedOptions, opts.MasterCount, conversionFactory)
if err != nil {
return nil, err
}
c.ApiExtensions = apiExtensions
- aggregator, err := controlplaneapiserver.CreateAggregatorConfig(*kubeAPIs.ControlPlane.Generic, opts.CompletedOptions, kubeAPIs.ControlPlane.VersionedInformers, serviceResolver, kubeAPIs.ControlPlane.ProxyTransport, kubeAPIs.ControlPlane.Extra.PeerProxy, pluginInitializer)
+ // TODO(ntnn): Here as well.
+ aggregator, err := controlplaneapiserver.CreateAggregatorConfig(*kubeAPIs.ControlPlane.Generic, opts.CompletedOptions, informerfactoryhack.Wrap(versionedInformers), serviceResolver, kubeAPIs.ControlPlane.ProxyTransport, kubeAPIs.ControlPlane.Extra.PeerProxy, pluginInitializer)
if err != nil {
return nil, err
}
diff --git a/cmd/kube-apiserver/app/server.go b/cmd/kube-apiserver/app/server.go
index 042ddc9714a09..0ac23aac0e523 100644
--- a/cmd/kube-apiserver/app/server.go
+++ b/cmd/kube-apiserver/app/server.go
@@ -25,12 +25,14 @@ import (
"net/url"
"os"
+ kcpinformers "github.com/kcp-dev/client-go/informers"
"github.com/spf13/cobra"
apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
utilerrors "k8s.io/apimachinery/pkg/util/errors"
utilruntime "k8s.io/apimachinery/pkg/util/runtime"
"k8s.io/apiserver/pkg/admission"
genericapifilters "k8s.io/apiserver/pkg/endpoints/filters"
+ "k8s.io/apiserver/pkg/informerfactoryhack"
genericapiserver "k8s.io/apiserver/pkg/server"
"k8s.io/apiserver/pkg/server/egressselector"
serverstorage "k8s.io/apiserver/pkg/server/storage"
@@ -184,7 +186,7 @@ func CreateServerChain(config CompletedConfig) (*aggregatorapiserver.APIAggregat
}
// aggregator comes last in the chain
- aggregatorServer, err := controlplaneapiserver.CreateAggregatorServer(config.Aggregator, kubeAPIServer.ControlPlane.GenericAPIServer, apiExtensionsServer.Informers.Apiextensions().V1().CustomResourceDefinitions(), crdAPIEnabled, apiVersionPriorities)
+ aggregatorServer, err := controlplaneapiserver.CreateAggregatorServer(config.Aggregator, kubeAPIServer.ControlPlane.GenericAPIServer, apiExtensionsServer.Informers.Apiextensions().V1().CustomResourceDefinitions().Cluster(controlplaneapiserver.LocalAdminCluster), crdAPIEnabled, apiVersionPriorities)
if err != nil {
// we don't need special handling for innerStopCh because the aggregator server doesn't create any go routines
return nil, err
@@ -197,7 +199,7 @@ func CreateServerChain(config CompletedConfig) (*aggregatorapiserver.APIAggregat
func CreateKubeAPIServerConfig(
opts options.CompletedOptions,
genericConfig *genericapiserver.Config,
- versionedInformers clientgoinformers.SharedInformerFactory,
+ versionedInformers kcpinformers.SharedInformerFactory,
storageFactory *serverstorage.DefaultStorageFactory,
) (
*controlplane.Config,
@@ -215,7 +217,7 @@ func CreateKubeAPIServerConfig(
return nil, nil, nil, fmt.Errorf("failed to create admission plugin initializer: %w", err)
}
- serviceResolver := buildServiceResolver(opts.EnableAggregatorRouting, genericConfig.LoopbackClientConfig.Host, versionedInformers)
+ serviceResolver := buildServiceResolver(opts.EnableAggregatorRouting, genericConfig.LoopbackClientConfig.Host, informerfactoryhack.Wrap(versionedInformers))
controlplaneConfig, admissionInitializers, err := controlplaneapiserver.CreateConfig(opts.CompletedOptions, genericConfig, versionedInformers, storageFactory, serviceResolver, kubeInitializers)
if err != nil {
return nil, nil, nil, err
diff --git a/cmd/kube-controller-manager/app/core.go b/cmd/kube-controller-manager/app/core.go
index 54c4a1dd7df10..e221118a9fc6e 100644
--- a/cmd/kube-controller-manager/app/core.go
+++ b/cmd/kube-controller-manager/app/core.go
@@ -28,6 +28,8 @@ import (
"time"
v1 "k8s.io/api/core/v1"
+ metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+ "k8s.io/apimachinery/pkg/runtime/schema"
genericfeatures "k8s.io/apiserver/pkg/features"
"k8s.io/apiserver/pkg/quota/v1/generic"
utilfeature "k8s.io/apiserver/pkg/util/feature"
@@ -577,7 +579,9 @@ func startModifiedNamespaceController(ctx context.Context, controllerContext Con
return nil, true, err
}
- discoverResourcesFn := namespaceKubeClient.Discovery().ServerPreferredNamespacedResources
+ discoverResourcesFn := func(clusterName logicalcluster.Name) ([]*metav1.APIResourceList, error) {
+ return namespaceKubeClient.Discovery().ServerPreferredNamespacedResources()
+ }
namespaceController := namespacecontroller.NewNamespaceController(
ctx,
diff --git a/hack/kcp/garbage_collector_patch.go b/hack/kcp/garbage_collector_patch.go
new file mode 100644
index 0000000000000..7d267a7bb1c81
--- /dev/null
+++ b/hack/kcp/garbage_collector_patch.go
@@ -0,0 +1,109 @@
+/*
+Copyright 2022 The KCP Authors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package main
+
+import (
+ "bufio"
+ "bytes"
+ "fmt"
+ "go/ast"
+ "go/format"
+ "go/parser"
+ "go/token"
+ "log"
+ "strings"
+)
+
+/*
+Process:
+
+1. go run ./hack/kcp/garbage_collector_patch.go > pkg/controller/garbagecollector/garbagecollector_kcp.go
+(you may need to add -mod=readonly)
+
+2. goimports -w pkg/controller/garbagecollector/garbagecollector_kcp.go
+
+3. reapply patch for kcp to pkg/controller/garbagecollector/garbagecollector_kcp.go
+*/
+
+func main() {
+ fileSet := token.NewFileSet()
+
+ file, err := parser.ParseFile(fileSet, "pkg/controller/garbagecollector/garbagecollector.go", nil, parser.ParseComments)
+ if err != nil {
+ log.Fatal(err)
+ }
+
+ // n stores a reference to the node for the function declaration for Sync
+ var n ast.Node
+
+ ast.Inspect(file, func(node ast.Node) bool {
+ switch x := node.(type) {
+ case *ast.FuncDecl:
+ if x.Name.Name == "Sync" {
+ // Store the reference
+ n = node
+ // Stop further inspection
+ return false
+ }
+ }
+
+ // Continue recursing
+ return true
+ })
+
+ startLine := fileSet.Position(n.Pos()).Line
+ endLine := fileSet.Position(n.End()).Line
+
+ // To preserve the comments from within the function body itself, we have to write out the entire file to a buffer,
+ // then extract only the lines we care about (the function body).
+ var buf bytes.Buffer
+ if err := format.Node(&buf, fileSet, file); err != nil {
+ log.Fatal(err)
+ }
+
+ // Convert the buffer to a slice of lines, so we can grab the portion we want
+ var lines []string
+ scanner := bufio.NewScanner(&buf)
+ for scanner.Scan() {
+ lines = append(lines, scanner.Text())
+ }
+ if err := scanner.Err(); err != nil {
+ log.Fatal(err)
+ }
+
+ fmt.Println(`/*
+Copyright 2022 The KCP Authors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package garbagecollector
+`)
+
+ // Finally, print the line range we need
+ fmt.Println(strings.Join(lines[startLine-1:endLine], "\n"))
+}
diff --git a/hack/kcp/resource_quota_controller_patch.go b/hack/kcp/resource_quota_controller_patch.go
new file mode 100644
index 0000000000000..173997c530e0e
--- /dev/null
+++ b/hack/kcp/resource_quota_controller_patch.go
@@ -0,0 +1,109 @@
+/*
+Copyright 2022 The KCP Authors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package main
+
+import (
+ "bufio"
+ "bytes"
+ "fmt"
+ "go/ast"
+ "go/format"
+ "go/parser"
+ "go/token"
+ "log"
+ "strings"
+)
+
+/*
+Process:
+
+1. go run ./hack/kcp/resource_quota_controller_patch.go > pkg/controller/resourcequota/resource_quota_controller_kcp.go
+(you may need to add -mod=readonly)
+
+2. goimports -w pkg/controller/resourcequota/resource_quota_controller_kcp.go
+
+3. reapply patch for kcp to pkg/controller/resourcequota/resource_quota_controller_kcp.go
+*/
+
+func main() {
+ fileSet := token.NewFileSet()
+
+ file, err := parser.ParseFile(fileSet, "pkg/controller/resourcequota/resource_quota_controller.go", nil, parser.ParseComments)
+ if err != nil {
+ log.Fatal(err)
+ }
+
+ // n stores a reference to the node for the function declaration for Sync
+ var n ast.Node
+
+ ast.Inspect(file, func(node ast.Node) bool {
+ switch x := node.(type) {
+ case *ast.FuncDecl:
+ if x.Name.Name == "Sync" {
+ // Store the reference
+ n = node
+ // Stop further inspection
+ return false
+ }
+ }
+
+ // Continue recursing
+ return true
+ })
+
+ startLine := fileSet.Position(n.Pos()).Line
+ endLine := fileSet.Position(n.End()).Line
+
+ // To preserve the comments from within the function body itself, we have to write out the entire file to a buffer,
+ // then extract only the lines we care about (the function body).
+ var buf bytes.Buffer
+ if err := format.Node(&buf, fileSet, file); err != nil {
+ log.Fatal(err)
+ }
+
+ // Convert the buffer to a slice of lines, so we can grab the portion we want
+ var lines []string
+ scanner := bufio.NewScanner(&buf)
+ for scanner.Scan() {
+ lines = append(lines, scanner.Text())
+ }
+ if err := scanner.Err(); err != nil {
+ log.Fatal(err)
+ }
+
+ fmt.Println(`/*
+Copyright 2022 The KCP Authors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package resourcequota
+`)
+
+ // Finally, print the line range we need
+ fmt.Println(strings.Join(lines[startLine-1:endLine], "\n"))
+}
diff --git a/hack/pin-dependency.sh b/hack/pin-dependency.sh
index 65440a9bb1122..a14ded830b394 100755
--- a/hack/pin-dependency.sh
+++ b/hack/pin-dependency.sh
@@ -69,6 +69,12 @@ if [[ -z "${dep}" || -z "${replacement}" || -z "${sha}" ]]; then
exit 1
fi
+replacementPath=""
+if [ -d "$replacement" ]; then
+ replacementPath="$(realpath "$replacement")"
+ replacement="$dep"
+fi
+
# Find the resolved version before trying to use it.
echo "Running: go mod download ${replacement}@${sha}"
if meta=$(go mod download -json "${replacement}@${sha}"); then
@@ -88,6 +94,9 @@ go mod edit -require "${dep}@${rev}"
if [ "${replacement}" != "${dep}" ]; then
echo "Running: go mod edit -replace ${dep}=${replacement}@${rev}"
go mod edit -replace "${dep}=${replacement}@${rev}"
+elif [ -d "$replacementPath" ]; then
+ echo "Running: go mod edit -replace ${dep}=${replacementPath}"
+ go mod edit -replace "${dep}=${replacementPath}"
fi
# Propagate pinned version to staging repos
@@ -103,7 +112,11 @@ for repo in $(kube::util::list_staging_repos); do
# isn't that important to get this exactly right.
if [ "${replacement}" != "${dep}" ]; then
find . -name go.mod -print | while read -r modfile; do
+ if [ -d "$replacementPath" ]; then
+ (cd "$(dirname "${modfile}")" && go mod edit -replace "${dep}=${replacementPath}")
+ else
(cd "$(dirname "${modfile}")" && go mod edit -replace "${dep}=${replacement}@${rev}")
+ fi
done
fi
popd >/dev/null 2>&1
diff --git a/hack/update-codegen.sh b/hack/update-codegen.sh
index 98e4048542078..56e2df6423fcf 100755
--- a/hack/update-codegen.sh
+++ b/hack/update-codegen.sh
@@ -776,6 +776,7 @@ function codegen::clients() {
|| true) \
| xargs -0 rm -f
+ # kcp: TODO(gman0) re-add `--prefers-protobuf` once kcp-dev/{client-go,kcp} support the protobuf codec.
client-gen \
-v "${KUBE_VERBOSE}" \
--go-header-file "${BOILERPLATE_FILENAME}" \
@@ -785,10 +786,10 @@ function codegen::clients() {
--input-base="k8s.io/api" \
--plural-exceptions "${PLURAL_EXCEPTIONS}" \
--apply-configuration-package "${APPLYCONFIG_PKG}" \
- --prefers-protobuf \
$(printf -- " --input %s" "${gv_dirs[@]}") \
"$@"
+
if [[ "${DBG_CODEGEN}" == 1 ]]; then
kube::log::status "Generated client code"
fi
diff --git a/pkg/controller/certificates/rootcacertpublisher/publisher.go b/pkg/controller/certificates/rootcacertpublisher/publisher.go
index 36127e883e3ad..81967ea355589 100644
--- a/pkg/controller/certificates/rootcacertpublisher/publisher.go
+++ b/pkg/controller/certificates/rootcacertpublisher/publisher.go
@@ -22,14 +22,16 @@ import (
"reflect"
"time"
+ kcpcache "github.com/kcp-dev/apimachinery/v2/pkg/cache"
+ kcpcorev1informers "github.com/kcp-dev/client-go/informers/core/v1"
+ kcpkubernetesclientset "github.com/kcp-dev/client-go/kubernetes"
+ kcpcorev1listers "github.com/kcp-dev/client-go/listers/core/v1"
+ "github.com/kcp-dev/logicalcluster/v3"
v1 "k8s.io/api/core/v1"
apierrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
utilruntime "k8s.io/apimachinery/pkg/util/runtime"
"k8s.io/apimachinery/pkg/util/wait"
- coreinformers "k8s.io/client-go/informers/core/v1"
- clientset "k8s.io/client-go/kubernetes"
- corelisters "k8s.io/client-go/listers/core/v1"
"k8s.io/client-go/tools/cache"
"k8s.io/client-go/util/workqueue"
"k8s.io/klog/v2"
@@ -51,7 +53,7 @@ func init() {
// NewPublisher construct a new controller which would manage the configmap
// which stores certificates in each namespace. It will make sure certificate
// configmap exists in each namespace.
-func NewPublisher(cmInformer coreinformers.ConfigMapInformer, nsInformer coreinformers.NamespaceInformer, cl clientset.Interface, rootCA []byte) (*Publisher, error) {
+func NewPublisher(cmInformer kcpcorev1informers.ConfigMapClusterInformer, nsInformer kcpcorev1informers.NamespaceClusterInformer, cl kcpkubernetesclientset.ClusterInterface, rootCA []byte) (*Publisher, error) {
e := &Publisher{
client: cl,
rootCA: rootCA,
@@ -84,13 +86,13 @@ func NewPublisher(cmInformer coreinformers.ConfigMapInformer, nsInformer coreinf
// Publisher manages certificate ConfigMap objects inside Namespaces
type Publisher struct {
- client clientset.Interface
+ client kcpkubernetesclientset.ClusterInterface
rootCA []byte
// To allow injection for testing.
syncHandler func(ctx context.Context, key string) error
- cmLister corelisters.ConfigMapLister
+ cmLister kcpcorev1listers.ConfigMapClusterLister
cmListerSynced cache.InformerSynced
nsListerSynced cache.InformerSynced
@@ -127,7 +129,12 @@ func (c *Publisher) configMapDeleted(obj interface{}) {
if cm.Name != RootCACertConfigMapName {
return
}
- c.queue.Add(cm.Namespace)
+
+ key := getNamespaceKey(cm)
+ if key == "" {
+ return
+ }
+ c.queue.Add(key)
}
func (c *Publisher) configMapUpdated(_, newObj interface{}) {
@@ -139,12 +146,24 @@ func (c *Publisher) configMapUpdated(_, newObj interface{}) {
if cm.Name != RootCACertConfigMapName {
return
}
- c.queue.Add(cm.Namespace)
+
+ key := getNamespaceKey(cm)
+ if key == "" {
+ return
+ }
+ c.queue.Add(key)
}
func (c *Publisher) namespaceAdded(obj interface{}) {
namespace := obj.(*v1.Namespace)
- c.queue.Add(namespace.Name)
+
+ key, err := kcpcache.MetaClusterNamespaceKeyFunc(namespace)
+ if err != nil {
+ utilruntime.HandleError(err)
+ return
+ }
+
+ c.queue.Add(key)
}
func (c *Publisher) namespaceUpdated(oldObj interface{}, newObj interface{}) {
@@ -152,7 +171,14 @@ func (c *Publisher) namespaceUpdated(oldObj interface{}, newObj interface{}) {
if newNamespace.Status.Phase != v1.NamespaceActive {
return
}
- c.queue.Add(newNamespace.Name)
+
+ key, err := kcpcache.MetaClusterNamespaceKeyFunc(newNamespace)
+ if err != nil {
+ utilruntime.HandleError(err)
+ return
+ }
+
+ c.queue.Add(key)
}
func (c *Publisher) runWorker(ctx context.Context) {
@@ -179,17 +205,24 @@ func (c *Publisher) processNextWorkItem(ctx context.Context) bool {
return true
}
-func (c *Publisher) syncNamespace(ctx context.Context, ns string) (err error) {
+func (c *Publisher) syncNamespace(ctx context.Context, key string) (err error) {
startTime := time.Now()
defer func() {
recordMetrics(startTime, err)
- klog.FromContext(ctx).V(4).Info("Finished syncing namespace", "namespace", ns, "elapsedTime", time.Since(startTime))
+ klog.FromContext(ctx).V(4).Info("Finished syncing namespace", "key", key, "elapsedTime", time.Since(startTime))
}()
- cm, err := c.cmLister.ConfigMaps(ns).Get(RootCACertConfigMapName)
+ // Get the clusterName and the namespace (stored in the key's name position) from the key.
+ clusterName, _, name, err := kcpcache.SplitMetaClusterNamespaceKey(key)
+ if err != nil {
+ utilruntime.HandleError(err)
+ return err
+ }
+ cm, err := c.cmLister.Cluster(clusterName).ConfigMaps(name).Get(RootCACertConfigMapName)
+
switch {
case apierrors.IsNotFound(err):
- _, err = c.client.CoreV1().ConfigMaps(ns).Create(ctx, &v1.ConfigMap{
+ _, err = c.client.Cluster(clusterName.Path()).CoreV1().ConfigMaps(name).Create(ctx, &v1.ConfigMap{
ObjectMeta: metav1.ObjectMeta{
Name: RootCACertConfigMapName,
Annotations: map[string]string{DescriptionAnnotation: Description},
@@ -224,7 +257,7 @@ func (c *Publisher) syncNamespace(ctx context.Context, ns string) (err error) {
}
cm.Annotations[DescriptionAnnotation] = Description
- _, err = c.client.CoreV1().ConfigMaps(ns).Update(ctx, cm, metav1.UpdateOptions{})
+ _, err = c.client.Cluster(clusterName.Path()).CoreV1().ConfigMaps(name).Update(ctx, cm, metav1.UpdateOptions{})
return err
}
@@ -242,3 +275,7 @@ func convertToCM(obj interface{}) (*v1.ConfigMap, error) {
}
return cm, nil
}
+
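+// getNamespaceKey returns a cluster-aware key whose name position carries the
+// ConfigMap's namespace; syncNamespace splits the key back apart.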
+func getNamespaceKey(configmap *v1.ConfigMap) string {
+ return kcpcache.ToClusterAwareKey(logicalcluster.From(configmap).String(), "", configmap.GetNamespace())
+}
diff --git a/pkg/controller/clusterroleaggregation/clusterroleaggregation_controller.go b/pkg/controller/clusterroleaggregation/clusterroleaggregation_controller.go
index bebc45160ebaa..a500abec4626b 100644
--- a/pkg/controller/clusterroleaggregation/clusterroleaggregation_controller.go
+++ b/pkg/controller/clusterroleaggregation/clusterroleaggregation_controller.go
@@ -22,7 +22,11 @@ import (
"sort"
"time"
- rbacv1ac "k8s.io/client-go/applyconfigurations/rbac/v1"
+ kcpcache "github.com/kcp-dev/apimachinery/v2/pkg/cache"
+ kcprbacinformers "github.com/kcp-dev/client-go/informers/rbac/v1"
+ kcprbacclient "github.com/kcp-dev/client-go/kubernetes/typed/rbac/v1"
+ kcprbaclisters "github.com/kcp-dev/client-go/listers/rbac/v1"
+ "github.com/kcp-dev/logicalcluster/v3"
"k8s.io/klog/v2"
rbacv1 "k8s.io/api/rbac/v1"
@@ -32,19 +36,15 @@ import (
"k8s.io/apimachinery/pkg/labels"
utilruntime "k8s.io/apimachinery/pkg/util/runtime"
"k8s.io/apimachinery/pkg/util/wait"
- rbacinformers "k8s.io/client-go/informers/rbac/v1"
- rbacclient "k8s.io/client-go/kubernetes/typed/rbac/v1"
- rbaclisters "k8s.io/client-go/listers/rbac/v1"
+ rbacv1ac "k8s.io/client-go/applyconfigurations/rbac/v1"
"k8s.io/client-go/tools/cache"
"k8s.io/client-go/util/workqueue"
-
- "k8s.io/kubernetes/pkg/controller"
)
// ClusterRoleAggregationController is a controller to combine cluster roles
type ClusterRoleAggregationController struct {
- clusterRoleClient rbacclient.ClusterRolesGetter
- clusterRoleLister rbaclisters.ClusterRoleLister
+ clusterRoleClient kcprbacclient.ClusterRolesClusterGetter
+ clusterRoleLister kcprbaclisters.ClusterRoleClusterLister
clusterRolesSynced cache.InformerSynced
syncHandler func(ctx context.Context, key string) error
@@ -52,7 +52,7 @@ type ClusterRoleAggregationController struct {
}
// NewClusterRoleAggregation creates a new controller
-func NewClusterRoleAggregation(clusterRoleInformer rbacinformers.ClusterRoleInformer, clusterRoleClient rbacclient.ClusterRolesGetter) *ClusterRoleAggregationController {
+func NewClusterRoleAggregation(clusterRoleInformer kcprbacinformers.ClusterRoleClusterInformer, clusterRoleClient kcprbacclient.ClusterRolesClusterGetter) *ClusterRoleAggregationController {
c := &ClusterRoleAggregationController{
clusterRoleClient: clusterRoleClient,
clusterRoleLister: clusterRoleInformer.Lister(),
@@ -69,24 +69,24 @@ func NewClusterRoleAggregation(clusterRoleInformer rbacinformers.ClusterRoleInfo
clusterRoleInformer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
AddFunc: func(obj interface{}) {
- c.enqueue()
+ c.enqueue(obj)
},
UpdateFunc: func(old, cur interface{}) {
- c.enqueue()
+ c.enqueue(cur)
},
DeleteFunc: func(uncast interface{}) {
- c.enqueue()
+ c.enqueue(uncast)
},
})
return c
}
func (c *ClusterRoleAggregationController) syncClusterRole(ctx context.Context, key string) error {
- _, name, err := cache.SplitMetaNamespaceKey(key)
+ clusterName, _, name, err := kcpcache.SplitMetaClusterNamespaceKey(key)
if err != nil {
return err
}
- sharedClusterRole, err := c.clusterRoleLister.Get(name)
+ sharedClusterRole, err := c.clusterRoleLister.Cluster(clusterName).Get(name)
if errors.IsNotFound(err) {
return nil
}
@@ -104,7 +104,7 @@ func (c *ClusterRoleAggregationController) syncClusterRole(ctx context.Context,
if err != nil {
return err
}
- clusterRoles, err := c.clusterRoleLister.List(runtimeLabelSelector)
+ clusterRoles, err := c.clusterRoleLister.Cluster(clusterName).List(runtimeLabelSelector)
if err != nil {
return err
}
@@ -128,7 +128,7 @@ func (c *ClusterRoleAggregationController) syncClusterRole(ctx context.Context,
return nil
}
- err = c.applyClusterRoles(ctx, sharedClusterRole.Name, newPolicyRules)
+ err = c.applyClusterRoles(ctx, sharedClusterRole, newPolicyRules)
if errors.IsUnsupportedMediaType(err) { // TODO: Remove this fallback at least one release after ServerSideApply GA
// When Server Side Apply is not enabled, fallback to Update. This is required when running
// 1.21 since api-server can be 1.20 during the upgrade/downgrade.
@@ -139,12 +139,12 @@ func (c *ClusterRoleAggregationController) syncClusterRole(ctx context.Context,
return err
}
-func (c *ClusterRoleAggregationController) applyClusterRoles(ctx context.Context, name string, newPolicyRules []rbacv1.PolicyRule) error {
- clusterRoleApply := rbacv1ac.ClusterRole(name).
+func (c *ClusterRoleAggregationController) applyClusterRoles(ctx context.Context, sharedClusterRole *rbacv1.ClusterRole, newPolicyRules []rbacv1.PolicyRule) error {
+ clusterRoleApply := rbacv1ac.ClusterRole(sharedClusterRole.Name).
WithRules(toApplyPolicyRules(newPolicyRules)...)
opts := metav1.ApplyOptions{FieldManager: "clusterrole-aggregation-controller", Force: true}
- _, err := c.clusterRoleClient.ClusterRoles().Apply(ctx, clusterRoleApply, opts)
+ _, err := c.clusterRoleClient.ClusterRoles().Cluster(logicalcluster.From(sharedClusterRole).Path()).Apply(ctx, clusterRoleApply, opts)
return err
}
@@ -154,7 +154,7 @@ func (c *ClusterRoleAggregationController) updateClusterRoles(ctx context.Contex
for _, rule := range newPolicyRules {
clusterRole.Rules = append(clusterRole.Rules, *rule.DeepCopy())
}
- _, err := c.clusterRoleClient.ClusterRoles().Update(ctx, clusterRole, metav1.UpdateOptions{})
+ _, err := c.clusterRoleClient.ClusterRoles().Cluster(logicalcluster.From(sharedClusterRole).Path()).Update(ctx, clusterRole, metav1.UpdateOptions{})
return err
}
@@ -229,11 +229,22 @@ func (c *ClusterRoleAggregationController) processNextWorkItem(ctx context.Conte
return true
}
-func (c *ClusterRoleAggregationController) enqueue() {
+func (c *ClusterRoleAggregationController) enqueue(obj interface{}) {
+ key, err := kcpcache.DeletionHandlingMetaClusterNamespaceKeyFunc(obj)
+ if err != nil {
+ utilruntime.HandleError(err)
+ return
+ }
+ clusterName, _, _, err := kcpcache.SplitMetaClusterNamespaceKey(key)
+ if err != nil {
+ utilruntime.HandleError(err)
+ return
+ }
+
// this is unusual, but since the set of all clusterroles is small and we don't know the dependency
// graph, just queue up every thing each time. This allows errors to be selectively retried if there
// is a problem updating a single role
- allClusterRoles, err := c.clusterRoleLister.List(labels.Everything())
+ allClusterRoles, err := c.clusterRoleLister.Cluster(clusterName).List(labels.Everything())
if err != nil {
utilruntime.HandleError(fmt.Errorf("Couldn't list all objects %v", err))
return
@@ -243,7 +254,7 @@ func (c *ClusterRoleAggregationController) enqueue() {
if clusterRole.AggregationRule == nil {
continue
}
- key, err := controller.KeyFunc(clusterRole)
+ key, err := kcpcache.DeletionHandlingMetaClusterNamespaceKeyFunc(clusterRole)
if err != nil {
utilruntime.HandleError(fmt.Errorf("Couldn't get key for object %#v: %v", clusterRole, err))
return
diff --git a/pkg/controller/garbagecollector/garbagecollector.go b/pkg/controller/garbagecollector/garbagecollector.go
index d945b5769776b..39d18048b2366 100644
--- a/pkg/controller/garbagecollector/garbagecollector.go
+++ b/pkg/controller/garbagecollector/garbagecollector.go
@@ -74,6 +74,15 @@ type GarbageCollector struct {
kubeClient clientset.Interface
eventBroadcaster record.EventBroadcaster
+
+ // kcp: Partially reverting e8b1d7dc24713db99808028e0d02bacf6d48e01f.
+ // kcp's GC controller is event-based for now, and without locks we
+ // may miss events during monitor syncs.
+ // TODO(gman0): remove once we move our GC to poll-based.
+ // There are known issues caused by this locking:
+ // * https://github.com/kubernetes/kubernetes/issues/101078
+ // * https://github.com/kubernetes/kubernetes/issues/127105
+ workerLock sync.RWMutex
}
var _ controller.Interface = (*GarbageCollector)(nil)
@@ -200,48 +209,85 @@ func (gc *GarbageCollector) Sync(ctx context.Context, discoveryClient discovery.
return
}
- logger.V(2).Info(
- "syncing garbage collector with updated resources from discovery",
- "diff", printDiff(oldResources, newResources),
- )
+ // Ensure workers are paused to avoid processing events before informers
+ // have resynced.
+ gc.workerLock.Lock()
+ defer gc.workerLock.Unlock()
- // Resetting the REST mapper will also invalidate the underlying discovery
- // client. This is a leaky abstraction and assumes behavior about the REST
- // mapper, but we'll deal with it for now.
- gc.restMapper.Reset()
- logger.V(4).Info("reset restmapper")
-
- // Perform the monitor resync and wait for controllers to report cache sync.
- //
- // NOTE: It's possible that newResources will diverge from the resources
- // discovered by restMapper during the call to Reset, since they are
- // distinct discovery clients invalidated at different times. For example,
- // newResources may contain resources not returned in the restMapper's
- // discovery call if the resources appeared in-between the calls. In that
- // case, the restMapper will fail to map some of newResources until the next
- // attempt.
- if err := gc.resyncMonitors(logger, newResources); err != nil {
- utilruntime.HandleError(fmt.Errorf("failed to sync resource monitors: %w", err))
- metrics.GarbageCollectorResourcesSyncError.Inc()
- return
- }
- logger.V(4).Info("resynced monitors")
+ // Once we get here, we should not unpause workers until we've successfully synced
+ attempt := 0
+ wait.PollUntilContextCancel(ctx, 100*time.Millisecond, true, func(ctx context.Context) (bool, error) {
+ attempt++
+
+ // On a reattempt, check if available resources have changed
+ if attempt > 1 {
+ newResources, err = GetDeletableResources(logger, discoveryClient)
+
+ if len(newResources) == 0 {
+ logger.V(2).Info("no resources reported by discovery", "attempt", attempt)
+ metrics.GarbageCollectorResourcesSyncError.Inc()
+ return false, nil
+ }
+ if groupLookupFailures, isLookupFailure := discovery.GroupDiscoveryFailedErrorGroups(err); isLookupFailure {
+ // In partial discovery cases, preserve existing synced informers for resources in the failed groups, so resyncMonitors will only add informers for newly seen resources
+ for k, v := range oldResources {
+ if _, failed := groupLookupFailures[k.GroupVersion()]; failed && gc.dependencyGraphBuilder.IsResourceSynced(k) {
+ newResources[k] = v
+ }
+ }
+ }
+ }
+
+ logger.V(2).Info(
+ "syncing garbage collector with updated resources from discovery",
+ "attempt", attempt,
+ "diff", printDiff(oldResources, newResources),
+ )
- // gc worker no longer waits for cache to be synced, but we will keep the periodical check to provide logs & metrics
- cacheSynced := cache.WaitForNamedCacheSync("garbage collector", waitForStopOrTimeout(ctx.Done(), period), func() bool {
- return gc.dependencyGraphBuilder.IsSynced(logger)
+ // Resetting the REST mapper will also invalidate the underlying discovery
+ // client. This is a leaky abstraction and assumes behavior about the REST
+ // mapper, but we'll deal with it for now.
+ gc.restMapper.Reset()
+ logger.V(4).Info("reset restmapper")
+
+ // Perform the monitor resync and wait for controllers to report cache sync.
+ //
+ // NOTE: It's possible that newResources will diverge from the resources
+ // discovered by restMapper during the call to Reset, since they are
+ // distinct discovery clients invalidated at different times. For example,
+ // newResources may contain resources not returned in the restMapper's
+ // discovery call if the resources appeared in-between the calls. In that
+ // case, the restMapper will fail to map some of newResources until the next
+ // attempt.
+ if err := gc.resyncMonitors(logger, newResources); err != nil {
+ utilruntime.HandleError(fmt.Errorf("failed to sync resource monitors: %w", err))
+ metrics.GarbageCollectorResourcesSyncError.Inc()
+ return false, nil
+ }
+ logger.V(4).Info("resynced monitors")
+
+ // wait for caches to fill for a while (our sync period) before attempting to rediscover resources and retry syncing.
+ // this protects us from deadlocks where available resources changed and one of our informer caches will never fill.
+ // informers keep attempting to sync in the background, so retrying doesn't interrupt them.
+ // the call to resyncMonitors on the reattempt will no-op for resources that still exist.
+ // note that workers stay paused until we successfully resync.
+ if !cache.WaitForNamedCacheSync("garbage collector", waitForStopOrTimeout(ctx.Done(), period), func() bool {
+ return gc.dependencyGraphBuilder.IsSynced(logger)
+ }) {
+ utilruntime.HandleError(fmt.Errorf("timed out waiting for dependency graph builder sync during GC sync (attempt %d)", attempt))
+ metrics.GarbageCollectorResourcesSyncError.Inc()
+ return false, nil
+ }
+
+ // success, break out of the loop
+ return true, nil
})
- if cacheSynced {
- logger.V(2).Info("synced garbage collector")
- } else {
- utilruntime.HandleError(fmt.Errorf("timed out waiting for dependency graph builder sync during GC sync"))
- metrics.GarbageCollectorResourcesSyncError.Inc()
- }
// Finally, keep track of our new resource monitor state.
// Monitors where the cache sync times out are still tracked here as
// subsequent runs should stop them if their resources were removed.
oldResources = newResources
+ logger.V(2).Info("synced garbage collector")
}, period)
}
@@ -291,6 +337,8 @@ var namespacedOwnerOfClusterScopedObjectErr = goerrors.New("cluster-scoped objec
func (gc *GarbageCollector) processAttemptToDeleteWorker(ctx context.Context) bool {
item, quit := gc.attemptToDelete.Get()
+ gc.workerLock.RLock()
+ defer gc.workerLock.RUnlock()
if quit {
return false
}
@@ -715,6 +763,8 @@ func (gc *GarbageCollector) runAttemptToOrphanWorker(logger klog.Logger) {
// these steps fail.
func (gc *GarbageCollector) processAttemptToOrphanWorker(logger klog.Logger) bool {
item, quit := gc.attemptToOrphan.Get()
+ gc.workerLock.RLock()
+ defer gc.workerLock.RUnlock()
if quit {
return false
}
diff --git a/pkg/controller/garbagecollector/garbagecollector_kcp.go b/pkg/controller/garbagecollector/garbagecollector_kcp.go
new file mode 100644
index 0000000000000..2a87ca04664a7
--- /dev/null
+++ b/pkg/controller/garbagecollector/garbagecollector_kcp.go
@@ -0,0 +1,134 @@
+/*
+Copyright 2022 The KCP Authors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package garbagecollector
+
+import (
+ "context"
+ "fmt"
+ "reflect"
+ "time"
+
+ "k8s.io/apimachinery/pkg/runtime/schema"
+ utilruntime "k8s.io/apimachinery/pkg/util/runtime"
+ "k8s.io/apimachinery/pkg/util/wait"
+ "k8s.io/client-go/discovery"
+ "k8s.io/client-go/tools/cache"
+ "k8s.io/klog/v2"
+
+ "k8s.io/kubernetes/pkg/controller/garbagecollector/metrics"
+)
+
+func (gc *GarbageCollector) ResyncMonitors(ctx context.Context, discoveryClient discovery.ServerResourcesInterface) error {
+ oldResources := make(map[schema.GroupVersionResource]struct{})
+ return func() error {
+ logger := klog.FromContext(ctx)
+
+ // Get the current resource list from discovery.
+ newResources, err := GetDeletableResources(logger, discoveryClient)
+ if err != nil {
+ return err
+ }
+
+ if len(newResources) == 0 {
+ logger.V(2).Info("no resources reported by discovery, skipping garbage collector sync")
+ metrics.GarbageCollectorResourcesSyncError.Inc()
+ return nil
+ }
+
+ // Decide whether discovery has reported a change.
+ if reflect.DeepEqual(oldResources, newResources) {
+ logger.V(5).Info("no resource updates from discovery, skipping garbage collector sync")
+ return nil
+ }
+
+ // Ensure workers are paused to avoid processing events before informers
+ // have resynced.
+ gc.workerLock.Lock()
+ defer gc.workerLock.Unlock()
+
+ // Once we get here, we should not unpause workers until we've successfully synced
+ attempt := 0
+ wait.PollUntilContextCancel(ctx, 100*time.Millisecond, true, func(ctx context.Context) (bool, error) {
+ attempt++
+
+ // On a reattempt, check if available resources have changed
+ if attempt > 1 {
+ newResources, err = GetDeletableResources(logger, discoveryClient)
+ if err != nil {
+ return false, err
+ }
+ if len(newResources) == 0 {
+ logger.V(2).Info("no resources reported by discovery", "attempt", attempt)
+ metrics.GarbageCollectorResourcesSyncError.Inc()
+ return false, nil
+ }
+ }
+
+ logger.V(2).Info(
+ "syncing garbage collector with updated resources from discovery",
+ "attempt", attempt,
+ "diff", printDiff(oldResources, newResources),
+ )
+
+ // Resetting the REST mapper will also invalidate the underlying discovery
+ // client. This is a leaky abstraction and assumes behavior about the REST
+ // mapper, but we'll deal with it for now.
+ gc.restMapper.Reset()
+ logger.V(4).Info("reset restmapper")
+
+ // Perform the monitor resync and wait for controllers to report cache sync.
+ //
+ // NOTE: It's possible that newResources will diverge from the resources
+ // discovered by restMapper during the call to Reset, since they are
+ // distinct discovery clients invalidated at different times. For example,
+ // newResources may contain resources not returned in the restMapper's
+ // discovery call if the resources appeared in-between the calls. In that
+ // case, the restMapper will fail to map some of newResources until the next
+ // attempt.
+ if err := gc.resyncMonitors(logger, newResources); err != nil {
+ utilruntime.HandleError(fmt.Errorf("failed to sync resource monitors: %w", err))
+ metrics.GarbageCollectorResourcesSyncError.Inc()
+ return false, nil
+ }
+ logger.V(4).Info("resynced monitors")
+
+ // wait for caches to fill for a while (our sync period) before attempting to rediscover resources and retry syncing.
+ // this protects us from deadlocks where available resources changed and one of our informer caches will never fill.
+ // informers keep attempting to sync in the background, so retrying doesn't interrupt them.
+ // the call to resyncMonitors on the reattempt will no-op for resources that still exist.
+ // note that workers stay paused until we successfully resync.
+ if !cache.WaitForNamedCacheSync("garbage collector", ctx.Done(), func() bool {
+ return gc.dependencyGraphBuilder.IsSynced(logger)
+ }) {
+ utilruntime.HandleError(fmt.Errorf("timed out waiting for dependency graph builder sync during GC sync (attempt %d)", attempt))
+ metrics.GarbageCollectorResourcesSyncError.Inc()
+ return false, nil
+ }
+
+ // success, break out of the loop
+ return true, nil
+ })
+
+ // Finally, keep track of our new resource monitor state.
+ // Monitors where the cache sync times out are still tracked here as
+ // subsequent runs should stop them if their resources were removed.
+ oldResources = newResources
+ logger.V(2).Info("synced garbage collector")
+
+ return nil
+ }()
+}
diff --git a/pkg/controller/garbagecollector/garbagecollector_test.go b/pkg/controller/garbagecollector/garbagecollector_test.go
index ba101ec9e25b9..01cfd4ad35aad 100644
--- a/pkg/controller/garbagecollector/garbagecollector_test.go
+++ b/pkg/controller/garbagecollector/garbagecollector_test.go
@@ -818,8 +818,7 @@ func TestGetDeletableResources(t *testing.T) {
}
// TestGarbageCollectorSync ensures that a discovery client error
-// or an informer sync error will not cause the garbage collector
-// to block infinitely.
+// will not cause the garbage collector to block infinitely.
func TestGarbageCollectorSync(t *testing.T) {
serverResources := []*metav1.APIResourceList{
{
@@ -850,6 +849,7 @@ func TestGarbageCollectorSync(t *testing.T) {
PreferredResources: serverResources,
Error: nil,
Lock: sync.Mutex{},
+ InterfaceUsedCount: 0,
}
testHandler := &fakeActionHandler{
@@ -935,9 +935,9 @@ func TestGarbageCollectorSync(t *testing.T) {
// Wait until the sync discovers the initial resources
time.Sleep(1 * time.Second)
- err = expectSyncNotBlocked(fakeDiscoveryClient)
+ err = expectSyncNotBlocked(fakeDiscoveryClient, &gc.workerLock)
if err != nil {
- t.Fatalf("Expected garbagecollector.Sync to still be running but it is blocked: %v", err)
+ t.Fatalf("Expected garbagecollector.Sync to be running but it is blocked: %v", err)
}
assertMonitors(t, gc, "pods", "deployments")
@@ -952,7 +952,7 @@ func TestGarbageCollectorSync(t *testing.T) {
// Remove the error from being returned and see if the garbage collector sync is still working
fakeDiscoveryClient.setPreferredResources(serverResources, nil)
- err = expectSyncNotBlocked(fakeDiscoveryClient)
+ err = expectSyncNotBlocked(fakeDiscoveryClient, &gc.workerLock)
if err != nil {
t.Fatalf("Expected garbagecollector.Sync to still be running but it is blocked: %v", err)
}
@@ -968,7 +968,7 @@ func TestGarbageCollectorSync(t *testing.T) {
// Put the resources back to normal and ensure garbage collector sync recovers
fakeDiscoveryClient.setPreferredResources(serverResources, nil)
- err = expectSyncNotBlocked(fakeDiscoveryClient)
+ err = expectSyncNotBlocked(fakeDiscoveryClient, &gc.workerLock)
if err != nil {
t.Fatalf("Expected garbagecollector.Sync to still be running but it is blocked: %v", err)
}
@@ -985,7 +985,7 @@ func TestGarbageCollectorSync(t *testing.T) {
fakeDiscoveryClient.setPreferredResources(serverResources, nil)
// Wait until sync discovers the change
time.Sleep(1 * time.Second)
- err = expectSyncNotBlocked(fakeDiscoveryClient)
+ err = expectSyncNotBlocked(fakeDiscoveryClient, &gc.workerLock)
if err != nil {
t.Fatalf("Expected garbagecollector.Sync to still be running but it is blocked: %v", err)
}
@@ -1026,15 +1026,27 @@ func assertMonitors(t *testing.T, gc *GarbageCollector, resources ...string) {
}
}
-func expectSyncNotBlocked(fakeDiscoveryClient *fakeServerResources) error {
+func expectSyncNotBlocked(fakeDiscoveryClient *fakeServerResources, workerLock *sync.RWMutex) error {
before := fakeDiscoveryClient.getInterfaceUsedCount()
t := 1 * time.Second
time.Sleep(t)
after := fakeDiscoveryClient.getInterfaceUsedCount()
if before == after {
- return fmt.Errorf("discoveryClient.ServerPreferredResources() not called over %v", t)
- return fmt.Errorf("discoveryClient.ServerPreferredResources() not called over %v", t)
+ return fmt.Errorf("discoveryClient.ServerPreferredResources() not called within %v", t)
+ }
+
+ workerLockAcquired := make(chan struct{})
+ go func() {
+ workerLock.Lock()
+ defer workerLock.Unlock()
+ close(workerLockAcquired)
+ }()
+ select {
+ case <-workerLockAcquired:
+ return nil
+ case <-time.After(t):
+ return fmt.Errorf("workerLock blocked for at least %v", t)
}
- return nil
}
type fakeServerResources struct {
diff --git a/pkg/controller/namespace/deletion/namespaced_resources_deleter.go b/pkg/controller/namespace/deletion/namespaced_resources_deleter.go
index 8d9af4a235764..6de687d081e2e 100644
--- a/pkg/controller/namespace/deletion/namespaced_resources_deleter.go
+++ b/pkg/controller/namespace/deletion/namespaced_resources_deleter.go
@@ -23,6 +23,9 @@ import (
"sync"
"time"
+ kcpcorev1client "github.com/kcp-dev/client-go/kubernetes/typed/core/v1"
+ kcpmetadata "github.com/kcp-dev/client-go/metadata"
+ "github.com/kcp-dev/logicalcluster/v3"
"k8s.io/klog/v2"
v1 "k8s.io/api/core/v1"
@@ -34,32 +37,27 @@ import (
"k8s.io/apimachinery/pkg/util/sets"
utilfeature "k8s.io/apiserver/pkg/util/feature"
"k8s.io/client-go/discovery"
- v1clientset "k8s.io/client-go/kubernetes/typed/core/v1"
- "k8s.io/client-go/metadata"
"k8s.io/kubernetes/pkg/features"
)
// NamespacedResourcesDeleterInterface is the interface to delete a namespace with all resources in it.
type NamespacedResourcesDeleterInterface interface {
- Delete(ctx context.Context, nsName string) error
+ Delete(ctx context.Context, clusterName logicalcluster.Name, nsName string) error
}
// NewNamespacedResourcesDeleter returns a new NamespacedResourcesDeleter.
-func NewNamespacedResourcesDeleter(ctx context.Context, nsClient v1clientset.NamespaceInterface,
- metadataClient metadata.Interface, podsGetter v1clientset.PodsGetter,
- discoverResourcesFn func() ([]*metav1.APIResourceList, error),
+func NewNamespacedResourcesDeleter(ctx context.Context, nsClient kcpcorev1client.NamespaceClusterInterface,
+ metadataClient kcpmetadata.ClusterInterface, podsGetter kcpcorev1client.PodsClusterGetter,
+ discoverResourcesFn func(clusterName logicalcluster.Path) ([]*metav1.APIResourceList, error),
finalizerToken v1.FinalizerName) NamespacedResourcesDeleterInterface {
d := &namespacedResourcesDeleter{
- nsClient: nsClient,
- metadataClient: metadataClient,
- podsGetter: podsGetter,
- opCache: &operationNotSupportedCache{
- m: make(map[operationKey]bool),
- },
+ nsClient: nsClient,
+ metadataClient: metadataClient,
+ podsGetter: podsGetter,
+ opCaches: map[logicalcluster.Name]*operationNotSupportedCache{},
discoverResourcesFn: discoverResourcesFn,
finalizerToken: finalizerToken,
}
- d.initOpCache(ctx)
return d
}
@@ -68,14 +66,16 @@ var _ NamespacedResourcesDeleterInterface = &namespacedResourcesDeleter{}
// namespacedResourcesDeleter is used to delete all resources in a given namespace.
type namespacedResourcesDeleter struct {
// Client to manipulate the namespace.
- nsClient v1clientset.NamespaceInterface
+ nsClient kcpcorev1client.NamespaceClusterInterface
// Dynamic client to list and delete all namespaced resources.
- metadataClient metadata.Interface
+ metadataClient kcpmetadata.ClusterInterface
// Interface to get PodInterface.
- podsGetter v1clientset.PodsGetter
+ podsGetter kcpcorev1client.PodsClusterGetter
// Cache of what operations are not supported on each group version resource.
- opCache *operationNotSupportedCache
- discoverResourcesFn func() ([]*metav1.APIResourceList, error)
+ opCaches map[logicalcluster.Name]*operationNotSupportedCache
+ opCachesMutex sync.RWMutex
+
+ discoverResourcesFn func(clusterName logicalcluster.Path) ([]*metav1.APIResourceList, error)
// The finalizer token that should be removed from the namespace
// when all resources in that namespace have been deleted.
finalizerToken v1.FinalizerName
@@ -95,11 +95,11 @@ type namespacedResourcesDeleter struct {
// Returns ResourcesRemainingError if it deleted some resources but needs
// to wait for them to go away.
// Caller is expected to keep calling this until it succeeds.
-func (d *namespacedResourcesDeleter) Delete(ctx context.Context, nsName string) error {
+func (d *namespacedResourcesDeleter) Delete(ctx context.Context, clusterName logicalcluster.Name, nsName string) error {
// Multiple controllers may edit a namespace during termination
// first get the latest state of the namespace before proceeding
// if the namespace was deleted already, don't do anything
- namespace, err := d.nsClient.Get(ctx, nsName, metav1.GetOptions{})
+ namespace, err := d.nsClient.Cluster(clusterName.Path()).Get(ctx, nsName, metav1.GetOptions{})
if err != nil {
if errors.IsNotFound(err) {
return nil
@@ -155,11 +155,11 @@ func (d *namespacedResourcesDeleter) Delete(ctx context.Context, nsName string)
return nil
}
-func (d *namespacedResourcesDeleter) initOpCache(ctx context.Context) {
+func (d *namespacedResourcesDeleter) initOpCache(ctx context.Context, clusterName logicalcluster.Name) {
// pre-fill opCache with the discovery info
//
// TODO(sttts): get rid of opCache and http 405 logic around it and trust discovery info
- resources, err := d.discoverResourcesFn()
+ resources, err := d.discoverResourcesFn(clusterName.Path())
if err != nil {
utilruntime.HandleError(fmt.Errorf("unable to get all supported resources from server: %v", err))
}
@@ -186,7 +186,7 @@ func (d *namespacedResourcesDeleter) initOpCache(ctx context.Context) {
for _, op := range []operation{operationList, operationDeleteCollection} {
if !verbs.Has(string(op)) {
- d.opCache.setNotSupported(operationKey{operation: op, gvr: gvr})
+ d.opCaches[clusterName].setNotSupported(operationKey{operation: op, gvr: gvr})
}
}
}
@@ -238,6 +238,39 @@ func (o *operationNotSupportedCache) setNotSupported(key operationKey) {
o.m[key] = true
}
+// isSupported returns true if the operation is supported
+func (d *namespacedResourcesDeleter) isSupported(ctx context.Context, clusterName logicalcluster.Name, key operationKey) (bool, error) {
+ // Quick read-only check to see if the cache already exists
+ d.opCachesMutex.RLock()
+ cache, exists := d.opCaches[clusterName]
+ d.opCachesMutex.RUnlock()
+
+ if exists {
+ return cache.isSupported(key), nil
+ }
+
+ // Doesn't exist - may need to create
+ d.opCachesMutex.Lock()
+ defer d.opCachesMutex.Unlock()
+
+ // Check again, with the write lock held, to see if it exists. It's possible another goroutine set it in between
+ // when we checked with the read lock held, and now.
+ cache, exists = d.opCaches[clusterName]
+ if exists {
+ return cache.isSupported(key), nil
+ }
+
+ // Definitely doesn't exist - need to create it.
+ cache = &operationNotSupportedCache{
+ m: make(map[operationKey]bool),
+ }
+ d.opCaches[clusterName] = cache
+
+ d.initOpCache(ctx, clusterName)
+
+ return cache.isSupported(key), nil
+}
+
// updateNamespaceFunc is a function that makes an update to a namespace
type updateNamespaceFunc func(ctx context.Context, namespace *v1.Namespace) (*v1.Namespace, error)
@@ -255,7 +288,7 @@ func (d *namespacedResourcesDeleter) retryOnConflictError(ctx context.Context, n
return nil, err
}
prevNamespace := latestNamespace
- latestNamespace, err = d.nsClient.Get(ctx, latestNamespace.Name, metav1.GetOptions{})
+ latestNamespace, err = d.nsClient.Cluster(logicalcluster.From(latestNamespace).Path()).Get(ctx, latestNamespace.Name, metav1.GetOptions{})
if err != nil {
return nil, err
}
@@ -272,7 +305,7 @@ func (d *namespacedResourcesDeleter) updateNamespaceStatusFunc(ctx context.Conte
}
newNamespace := namespace.DeepCopy()
newNamespace.Status.Phase = v1.NamespaceTerminating
- return d.nsClient.UpdateStatus(ctx, newNamespace, metav1.UpdateOptions{})
+ return d.nsClient.Cluster(logicalcluster.From(namespace).Path()).UpdateStatus(ctx, newNamespace, metav1.UpdateOptions{})
}
// finalized returns true if the namespace.Spec.Finalizers is an empty list
@@ -295,7 +328,7 @@ func (d *namespacedResourcesDeleter) finalizeNamespace(ctx context.Context, name
for _, value := range finalizerSet.List() {
namespaceFinalize.Spec.Finalizers = append(namespaceFinalize.Spec.Finalizers, v1.FinalizerName(value))
}
- namespace, err := d.nsClient.Finalize(ctx, &namespaceFinalize, metav1.UpdateOptions{})
+ namespace, err := d.nsClient.Cluster(logicalcluster.From(namespace).Path()).Finalize(ctx, &namespaceFinalize, metav1.UpdateOptions{})
if err != nil {
// it was removed already, so life is good
if errors.IsNotFound(err) {
@@ -308,13 +341,15 @@ func (d *namespacedResourcesDeleter) finalizeNamespace(ctx context.Context, name
// deleteCollection is a helper function that will delete the collection of resources
// it returns true if the operation was supported on the server.
// it returns an error if the operation was supported on the server but was unable to complete.
-func (d *namespacedResourcesDeleter) deleteCollection(ctx context.Context, gvr schema.GroupVersionResource, namespace string) (bool, error) {
+func (d *namespacedResourcesDeleter) deleteCollection(ctx context.Context, clusterName logicalcluster.Name, gvr schema.GroupVersionResource, namespace string) (bool, error) {
logger := klog.FromContext(ctx)
logger.V(5).Info("Namespace controller - deleteCollection", "namespace", namespace, "resource", gvr)
key := operationKey{operation: operationDeleteCollection, gvr: gvr}
- if !d.opCache.isSupported(key) {
- logger.V(5).Info("Namespace controller - deleteCollection ignored since not supported", "namespace", namespace, "resource", gvr)
+ if supported, err := d.isSupported(ctx, clusterName, key); err != nil {
+ return false, err
+ } else if !supported {
+ logger.V(5).Info("namespace controller - deleteCollection ignored since not supported - namespace", "namespace", namespace, "resource", gvr)
return false, nil
}
@@ -323,7 +358,8 @@ func (d *namespacedResourcesDeleter) deleteCollection(ctx context.Context, gvr s
// namespace itself.
background := metav1.DeletePropagationBackground
opts := metav1.DeleteOptions{PropagationPolicy: &background}
- err := d.metadataClient.Resource(gvr).Namespace(namespace).DeleteCollection(ctx, opts, metav1.ListOptions{})
+ err := d.metadataClient.Cluster(clusterName.Path()).Resource(gvr).Namespace(namespace).DeleteCollection(ctx, opts, metav1.ListOptions{})
+
if err == nil {
return true, nil
}
@@ -348,17 +384,20 @@ func (d *namespacedResourcesDeleter) deleteCollection(ctx context.Context, gvr s
// the list of items in the collection (if found)
// a boolean if the operation is supported
// an error if the operation is supported but could not be completed.
-func (d *namespacedResourcesDeleter) listCollection(ctx context.Context, gvr schema.GroupVersionResource, namespace string) (*metav1.PartialObjectMetadataList, bool, error) {
+func (d *namespacedResourcesDeleter) listCollection(ctx context.Context, clusterName logicalcluster.Name, gvr schema.GroupVersionResource, namespace string) (*metav1.PartialObjectMetadataList, bool, error) {
logger := klog.FromContext(ctx)
logger.V(5).Info("Namespace controller - listCollection", "namespace", namespace, "resource", gvr)
key := operationKey{operation: operationList, gvr: gvr}
- if !d.opCache.isSupported(key) {
+	if supported, err := d.isSupported(ctx, clusterName, key); err != nil {
+		return nil, false, err
+	} else if !supported {
 		logger.V(5).Info("Namespace controller - listCollection ignored since not supported", "namespace", namespace, "resource", gvr)
return nil, false, nil
}
- partialList, err := d.metadataClient.Resource(gvr).Namespace(namespace).List(ctx, metav1.ListOptions{})
+ partialList, err := d.metadataClient.Cluster(clusterName.Path()).Resource(gvr).Namespace(namespace).List(ctx, metav1.ListOptions{})
if err == nil {
return partialList, true, nil
}
@@ -377,10 +416,10 @@ func (d *namespacedResourcesDeleter) listCollection(ctx context.Context, gvr sch
}
// deleteEachItem is a helper function that will list the collection of resources and delete each item 1 by 1.
-func (d *namespacedResourcesDeleter) deleteEachItem(ctx context.Context, gvr schema.GroupVersionResource, namespace string) error {
+func (d *namespacedResourcesDeleter) deleteEachItem(ctx context.Context, clusterName logicalcluster.Name, gvr schema.GroupVersionResource, namespace string) error {
klog.FromContext(ctx).V(5).Info("Namespace controller - deleteEachItem", "namespace", namespace, "resource", gvr)
- partialList, listSupported, err := d.listCollection(ctx, gvr, namespace)
+ partialList, listSupported, err := d.listCollection(ctx, clusterName, gvr, namespace)
if err != nil {
return err
}
@@ -390,7 +429,7 @@ func (d *namespacedResourcesDeleter) deleteEachItem(ctx context.Context, gvr sch
for _, item := range partialList.Items {
background := metav1.DeletePropagationBackground
opts := metav1.DeleteOptions{PropagationPolicy: &background}
- if err = d.metadataClient.Resource(gvr).Namespace(namespace).Delete(ctx, item.GetName(), opts); err != nil && !errors.IsNotFound(err) && !errors.IsMethodNotSupported(err) {
+ if err = d.metadataClient.Cluster(clusterName.Path()).Resource(gvr).Namespace(namespace).Delete(ctx, item.GetName(), opts); err != nil && !errors.IsNotFound(err) && !errors.IsMethodNotSupported(err) {
return err
}
}
@@ -412,13 +451,14 @@ type gvrDeletionMetadata struct {
// If estimate > 0, not all resources are guaranteed to be gone.
func (d *namespacedResourcesDeleter) deleteAllContentForGroupVersionResource(
ctx context.Context,
+ clusterName logicalcluster.Name,
gvr schema.GroupVersionResource, namespace string,
namespaceDeletedAt metav1.Time) (gvrDeletionMetadata, error) {
logger := klog.FromContext(ctx)
logger.V(5).Info("Namespace controller - deleteAllContentForGroupVersionResource", "namespace", namespace, "resource", gvr)
// estimate how long it will take for the resource to be deleted (needed for objects that support graceful delete)
- estimate, err := d.estimateGracefulTermination(ctx, gvr, namespace, namespaceDeletedAt)
+ estimate, err := d.estimateGracefulTermination(ctx, clusterName, gvr, namespace, namespaceDeletedAt)
if err != nil {
logger.V(5).Info("Namespace controller - deleteAllContentForGroupVersionResource - unable to estimate", "namespace", namespace, "resource", gvr, "err", err)
return gvrDeletionMetadata{}, err
@@ -426,14 +466,14 @@ func (d *namespacedResourcesDeleter) deleteAllContentForGroupVersionResource(
logger.V(5).Info("Namespace controller - deleteAllContentForGroupVersionResource - estimate", "namespace", namespace, "resource", gvr, "estimate", estimate)
// first try to delete the entire collection
- deleteCollectionSupported, err := d.deleteCollection(ctx, gvr, namespace)
+ deleteCollectionSupported, err := d.deleteCollection(ctx, clusterName, gvr, namespace)
if err != nil {
return gvrDeletionMetadata{finalizerEstimateSeconds: estimate}, err
}
// delete collection was not supported, so we list and delete each item...
if !deleteCollectionSupported {
- err = d.deleteEachItem(ctx, gvr, namespace)
+ err = d.deleteEachItem(ctx, clusterName, gvr, namespace)
if err != nil {
return gvrDeletionMetadata{finalizerEstimateSeconds: estimate}, err
}
@@ -442,7 +482,7 @@ func (d *namespacedResourcesDeleter) deleteAllContentForGroupVersionResource(
// verify there are no more remaining items
// it is not an error condition for there to be remaining items if local estimate is non-zero
logger.V(5).Info("Namespace controller - deleteAllContentForGroupVersionResource - checking for no more items in namespace", "namespace", namespace, "resource", gvr)
- unstructuredList, listSupported, err := d.listCollection(ctx, gvr, namespace)
+ unstructuredList, listSupported, err := d.listCollection(ctx, clusterName, gvr, namespace)
if err != nil {
logger.V(5).Info("Namespace controller - deleteAllContentForGroupVersionResource - error verifying no items in namespace", "namespace", namespace, "resource", gvr, "err", err)
return gvrDeletionMetadata{finalizerEstimateSeconds: estimate}, err
@@ -509,7 +549,7 @@ func (d *namespacedResourcesDeleter) deleteAllContent(ctx context.Context, ns *v
logger := klog.FromContext(ctx)
logger.V(4).Info("namespace controller - deleteAllContent", "namespace", namespace)
- resources, err := d.discoverResourcesFn()
+ resources, err := d.discoverResourcesFn(logicalcluster.From(ns).Path())
if err != nil {
// discovery errors are not fatal. We often have some set of resources we can operate against even if we don't have a complete list
errs = append(errs, err)
@@ -532,7 +572,7 @@ func (d *namespacedResourcesDeleter) deleteAllContent(ctx context.Context, ns *v
if _, hasPods := groupVersionResources[podsGVR]; hasPods && utilfeature.DefaultFeatureGate.Enabled(features.OrderedNamespaceDeletion) {
// Ensure all pods in the namespace are deleted first
- gvrDeletionMetadata, err := d.deleteAllContentForGroupVersionResource(ctx, podsGVR, namespace, namespaceDeletedAt)
+ gvrDeletionMetadata, err := d.deleteAllContentForGroupVersionResource(ctx, logicalcluster.From(ns), podsGVR, namespace, namespaceDeletedAt)
if err != nil {
errs = append(errs, fmt.Errorf("failed to delete pods for namespace: %s, err: %w", namespace, err))
conditionUpdater.ProcessDeleteContentErr(err)
@@ -554,7 +594,7 @@ func (d *namespacedResourcesDeleter) deleteAllContent(ctx context.Context, ns *v
if numRemainingTotals.gvrToNumRemaining[podsGVR] > 0 {
logger.V(5).Info("Namespace controller - pods still remain, delaying deletion of other resources", "namespace", namespace)
if hasChanged := conditionUpdater.Update(ns); hasChanged {
- if _, err = d.nsClient.UpdateStatus(ctx, ns, metav1.UpdateOptions{}); err != nil {
+ if _, err = d.nsClient.Cluster(logicalcluster.From(ns).Path()).UpdateStatus(ctx, ns, metav1.UpdateOptions{}); err != nil {
utilruntime.HandleError(fmt.Errorf("couldn't update status condition for namespace %q: %w", namespace, err))
}
}
@@ -568,7 +608,7 @@ func (d *namespacedResourcesDeleter) deleteAllContent(ctx context.Context, ns *v
gvr.Version == podsGVR.Version && gvr.Resource == podsGVR.Resource {
continue
}
- gvrDeletionMetadata, err := d.deleteAllContentForGroupVersionResource(ctx, gvr, namespace, namespaceDeletedAt)
+ gvrDeletionMetadata, err := d.deleteAllContentForGroupVersionResource(ctx, logicalcluster.From(ns), gvr, namespace, namespaceDeletedAt)
if err != nil {
// If there is an error, hold on to it but proceed with all the remaining
// groupVersionResources.
@@ -594,7 +634,7 @@ func (d *namespacedResourcesDeleter) deleteAllContent(ctx context.Context, ns *v
// we need to reflect that information. Recall that additional finalizers can be set on namespaces, so this finalizer may clear itself and
// NOT remove the resource instance.
if hasChanged := conditionUpdater.Update(ns); hasChanged {
- if _, err = d.nsClient.UpdateStatus(ctx, ns, metav1.UpdateOptions{}); err != nil {
+ if _, err = d.nsClient.Cluster(logicalcluster.From(ns).Path()).UpdateStatus(ctx, ns, metav1.UpdateOptions{}); err != nil {
utilruntime.HandleError(fmt.Errorf("couldn't update status condition for namespace %q: %v", namespace, err))
}
}
@@ -606,14 +646,14 @@ func (d *namespacedResourcesDeleter) deleteAllContent(ctx context.Context, ns *v
}
// estimateGracefulTermination will estimate the graceful termination required for the specific entity in the namespace
-func (d *namespacedResourcesDeleter) estimateGracefulTermination(ctx context.Context, gvr schema.GroupVersionResource, ns string, namespaceDeletedAt metav1.Time) (int64, error) {
+func (d *namespacedResourcesDeleter) estimateGracefulTermination(ctx context.Context, clusterName logicalcluster.Name, gvr schema.GroupVersionResource, ns string, namespaceDeletedAt metav1.Time) (int64, error) {
groupResource := gvr.GroupResource()
klog.FromContext(ctx).V(5).Info("Namespace controller - estimateGracefulTermination", "group", groupResource.Group, "resource", groupResource.Resource)
estimate := int64(0)
var err error
switch groupResource {
case schema.GroupResource{Group: "", Resource: "pods"}:
- estimate, err = d.estimateGracefulTerminationForPods(ctx, ns)
+ estimate, err = d.estimateGracefulTerminationForPods(ctx, clusterName, ns)
}
if err != nil {
return 0, err
@@ -628,14 +668,14 @@ func (d *namespacedResourcesDeleter) estimateGracefulTermination(ctx context.Con
}
// estimateGracefulTerminationForPods determines the graceful termination period for pods in the namespace
-func (d *namespacedResourcesDeleter) estimateGracefulTerminationForPods(ctx context.Context, ns string) (int64, error) {
+func (d *namespacedResourcesDeleter) estimateGracefulTerminationForPods(ctx context.Context, clusterName logicalcluster.Name, ns string) (int64, error) {
klog.FromContext(ctx).V(5).Info("Namespace controller - estimateGracefulTerminationForPods", "namespace", ns)
estimate := int64(0)
podsGetter := d.podsGetter
if podsGetter == nil || reflect.ValueOf(podsGetter).IsNil() {
return 0, fmt.Errorf("unexpected: podsGetter is nil. Cannot estimate grace period seconds for pods")
}
- items, err := podsGetter.Pods(ns).List(ctx, metav1.ListOptions{})
+ items, err := podsGetter.Pods().Cluster(clusterName.Path()).Namespace(ns).List(ctx, metav1.ListOptions{})
if err != nil {
return 0, err
}
diff --git a/pkg/controller/namespace/deletion/namespaced_resources_deleter_test.go b/pkg/controller/namespace/deletion/namespaced_resources_deleter_test.go
index a320bf1b8567a..304c0c774b7be 100644
--- a/pkg/controller/namespace/deletion/namespaced_resources_deleter_test.go
+++ b/pkg/controller/namespace/deletion/namespaced_resources_deleter_test.go
@@ -26,6 +26,7 @@ import (
"sync"
"testing"
+ "github.com/kcp-dev/logicalcluster/v3"
v1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
@@ -199,12 +200,12 @@ func testSyncNamespaceThatIsTerminating(t *testing.T, versions *metav1.APIVersio
t.Fatal(err)
}
- fn := func() ([]*metav1.APIResourceList, error) {
+	fn := func(clusterName logicalcluster.Path) ([]*metav1.APIResourceList, error) {
return resources, testInput.gvrError
}
_, ctx := ktesting.NewTestContext(t)
d := NewNamespacedResourcesDeleter(ctx, mockClient.CoreV1().Namespaces(), metadataClient, mockClient.CoreV1(), fn, v1.FinalizerKubernetes)
- if err := d.Delete(ctx, testInput.testNamespace.Name); !matchErrors(err, testInput.expectErrorOnDelete) {
+ if err := d.Delete(ctx, logicalcluster.New(""), testInput.testNamespace.Name); !matchErrors(err, testInput.expectErrorOnDelete) {
t.Errorf("expected error %q when syncing namespace, got %q, %v", testInput.expectErrorOnDelete, err, testInput.expectErrorOnDelete == err)
}
@@ -297,13 +298,13 @@ func TestSyncNamespaceThatIsActive(t *testing.T) {
Phase: v1.NamespaceActive,
},
}
- fn := func() ([]*metav1.APIResourceList, error) {
+	fn := func(clusterName logicalcluster.Path) ([]*metav1.APIResourceList, error) {
return testResources(), nil
}
_, ctx := ktesting.NewTestContext(t)
d := NewNamespacedResourcesDeleter(ctx, mockClient.CoreV1().Namespaces(), nil, mockClient.CoreV1(),
fn, v1.FinalizerKubernetes)
- err := d.Delete(ctx, testNamespace.Name)
+ err := d.Delete(ctx, logicalcluster.New(""), testNamespace.Name)
if err != nil {
t.Errorf("Unexpected error when synching namespace %v", err)
}
@@ -427,7 +428,7 @@ func TestDeleteEncounters404(t *testing.T) {
mockMetadataClient.PrependReactor("delete-collection", "flakes", ns1FlakesNotFound)
mockMetadataClient.PrependReactor("list", "flakes", ns1FlakesNotFound)
- resourcesFn := func() ([]*metav1.APIResourceList, error) {
+	resourcesFn := func(clusterName logicalcluster.Path) ([]*metav1.APIResourceList, error) {
return []*metav1.APIResourceList{{
GroupVersion: "example.com/v1",
APIResources: []metav1.APIResource{{Name: "flakes", Namespaced: true, Kind: "Flake", Verbs: []string{"get", "list", "delete", "deletecollection", "create", "update"}}},
@@ -438,7 +439,7 @@ func TestDeleteEncounters404(t *testing.T) {
// Delete ns1 and get NotFound errors for the flakes resource
mockMetadataClient.ClearActions()
- if err := d.Delete(ctx, ns1.Name); err != nil {
+ if err := d.Delete(ctx, logicalcluster.New(""), ns1.Name); err != nil {
t.Fatal(err)
}
if len(mockMetadataClient.Actions()) != 3 ||
@@ -453,7 +454,7 @@ func TestDeleteEncounters404(t *testing.T) {
// Delete ns2
mockMetadataClient.ClearActions()
- if err := d.Delete(ctx, ns2.Name); err != nil {
+ if err := d.Delete(ctx, logicalcluster.New(""), ns2.Name); err != nil {
t.Fatal(err)
}
if len(mockMetadataClient.Actions()) != 2 ||
diff --git a/pkg/controller/namespace/namespace_controller.go b/pkg/controller/namespace/namespace_controller.go
index 9f2c0d85e972f..2feeeda9cfbfa 100644
--- a/pkg/controller/namespace/namespace_controller.go
+++ b/pkg/controller/namespace/namespace_controller.go
@@ -21,6 +21,12 @@ import (
"fmt"
"time"
+ kcpcache "github.com/kcp-dev/apimachinery/v2/pkg/cache"
+ kcpcorev1informers "github.com/kcp-dev/client-go/informers/core/v1"
+ kcpkubernetesclientset "github.com/kcp-dev/client-go/kubernetes"
+ kcpcorev1listers "github.com/kcp-dev/client-go/listers/core/v1"
+ kcpmetadata "github.com/kcp-dev/client-go/metadata"
+ "github.com/kcp-dev/logicalcluster/v3"
"golang.org/x/time/rate"
v1 "k8s.io/api/core/v1"
@@ -28,13 +34,8 @@ import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
utilruntime "k8s.io/apimachinery/pkg/util/runtime"
"k8s.io/apimachinery/pkg/util/wait"
- coreinformers "k8s.io/client-go/informers/core/v1"
- clientset "k8s.io/client-go/kubernetes"
- corelisters "k8s.io/client-go/listers/core/v1"
- "k8s.io/client-go/metadata"
"k8s.io/client-go/tools/cache"
"k8s.io/client-go/util/workqueue"
- "k8s.io/kubernetes/pkg/controller"
"k8s.io/kubernetes/pkg/controller/namespace/deletion"
"k8s.io/klog/v2"
@@ -53,7 +54,7 @@ const (
// NamespaceController is responsible for performing actions dependent upon a namespace phase
type NamespaceController struct {
// lister that can list namespaces from a shared cache
- lister corelisters.NamespaceLister
+ lister kcpcorev1listers.NamespaceClusterLister
// returns true when the namespace cache is ready
listerSynced cache.InformerSynced
// namespaces that have been queued up for processing by workers
@@ -65,10 +66,10 @@ type NamespaceController struct {
// NewNamespaceController creates a new NamespaceController
func NewNamespaceController(
ctx context.Context,
- kubeClient clientset.Interface,
- metadataClient metadata.Interface,
- discoverResourcesFn func() ([]*metav1.APIResourceList, error),
- namespaceInformer coreinformers.NamespaceInformer,
+ kubeClient kcpkubernetesclientset.ClusterInterface,
+ metadataClient kcpmetadata.ClusterInterface,
+ discoverResourcesFn func(clusterName logicalcluster.Path) ([]*metav1.APIResourceList, error),
+ namespaceInformer kcpcorev1informers.NamespaceClusterInformer,
resyncPeriod time.Duration,
finalizerToken v1.FinalizerName) *NamespaceController {
@@ -118,7 +119,7 @@ func nsControllerRateLimiter() workqueue.TypedRateLimiter[string] {
// enqueueNamespace adds an object to the controller work queue
// obj could be an *v1.Namespace, or a DeletionFinalStateUnknown item.
func (nm *NamespaceController) enqueueNamespace(obj interface{}) {
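+	// kcp: queue keys are cluster-aware (roughly "<cluster>|<name>") so a single
+	// work queue can track namespaces across all logical clusters.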
- key, err := controller.KeyFunc(obj)
+ key, err := kcpcache.MetaClusterNamespaceKeyFunc(obj)
if err != nil {
utilruntime.HandleError(fmt.Errorf("Couldn't get key for object %+v: %v", obj, err))
return
@@ -182,7 +183,13 @@ func (nm *NamespaceController) syncNamespaceFromKey(ctx context.Context, key str
logger.V(4).Info("Finished syncing namespace", "namespace", key, "duration", time.Since(startTime))
}()
- namespace, err := nm.lister.Get(key)
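+	// kcp: split the cluster-aware key back into its logical cluster and
+	// namespace name, then read from the lister scoped to that cluster.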
+ clusterName, _, namespaceName, err := kcpcache.SplitMetaClusterNamespaceKey(key)
+ if err != nil {
+ utilruntime.HandleError(err)
+ return err
+ }
+
+ namespace, err := nm.lister.Cluster(clusterName).Get(namespaceName)
if errors.IsNotFound(err) {
logger.Info("Namespace has been deleted", "namespace", key)
return nil
@@ -191,7 +198,7 @@ func (nm *NamespaceController) syncNamespaceFromKey(ctx context.Context, key str
utilruntime.HandleError(fmt.Errorf("Unable to retrieve namespace %v from store: %v", key, err))
return err
}
- return nm.namespacedResourcesDeleter.Delete(ctx, namespace.Name)
+ return nm.namespacedResourcesDeleter.Delete(ctx, clusterName, namespace.Name)
}
// Run starts observing the system with the specified number of workers.
diff --git a/pkg/controller/resourcequota/resource_quota_controller.go b/pkg/controller/resourcequota/resource_quota_controller.go
index dcfd6c6a88ef0..41ca51dc2f5cb 100644
--- a/pkg/controller/resourcequota/resource_quota_controller.go
+++ b/pkg/controller/resourcequota/resource_quota_controller.go
@@ -20,6 +20,7 @@ import (
"context"
"fmt"
"reflect"
+ "strconv"
"sync"
"time"
@@ -349,6 +350,11 @@ func (rq *Controller) syncResourceQuotaFromKey(ctx context.Context, key string)
return rq.syncResourceQuota(ctx, resourceQuota)
}
+const (
+ kcpClusterScopedQuotaNamespace = "admin"
+ kcpExperimentalClusterScopedQuotaAnnotationKey = "experimental.quota.kcp.io/cluster-scoped"
+)
+
// syncResourceQuota runs a complete sync of resource quota status across all known kinds
func (rq *Controller) syncResourceQuota(ctx context.Context, resourceQuota *v1.ResourceQuota) (err error) {
// quota is dirty if any part of spec hard limits differs from the status hard limits
@@ -367,7 +373,15 @@ func (rq *Controller) syncResourceQuota(ctx context.Context, resourceQuota *v1.R
var errs []error
- newUsage, err := quota.CalculateUsage(resourceQuota.Namespace, resourceQuota.Spec.Scopes, hardLimits, rq.registry, resourceQuota.Spec.ScopeSelector)
+ // kcp edits for cluster scoped quota
+ clusterScoped, _ := strconv.ParseBool(resourceQuota.Annotations[kcpExperimentalClusterScopedQuotaAnnotationKey])
+
+ namespaceToCheck := resourceQuota.Namespace
+ if namespaceToCheck == kcpClusterScopedQuotaNamespace && clusterScoped {
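+		// An empty namespace makes CalculateUsage aggregate usage across all
+		// namespaces in the workspace.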
+ namespaceToCheck = ""
+ }
+
+ newUsage, err := quota.CalculateUsage(namespaceToCheck, resourceQuota.Spec.Scopes, hardLimits, rq.registry, resourceQuota.Spec.ScopeSelector)
if err != nil {
// if err is non-nil, remember it to return, but continue updating status with any resources in newUsage
errs = append(errs, err)
diff --git a/pkg/controller/resourcequota/resource_quota_controller_kcp.go b/pkg/controller/resourcequota/resource_quota_controller_kcp.go
new file mode 100644
index 0000000000000..04d86e925c291
--- /dev/null
+++ b/pkg/controller/resourcequota/resource_quota_controller_kcp.go
@@ -0,0 +1,113 @@
+/*
+Copyright 2022 The KCP Authors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package resourcequota
+
+import (
+ "context"
+ "fmt"
+ "reflect"
+
+ "k8s.io/apimachinery/pkg/labels"
+ "k8s.io/apimachinery/pkg/runtime/schema"
+ utilruntime "k8s.io/apimachinery/pkg/util/runtime"
+ "k8s.io/client-go/discovery"
+ "k8s.io/client-go/tools/cache"
+ "k8s.io/klog/v2"
+)
+
+// UpdateMonitors determines if there are any newly available or removed API resources, and if so, starts/stops monitors
+// for them. This is similar to Sync, but instead of polling discovery every 30 seconds, this method is invoked by kcp
+// whenever the set of APIs is known to change (CRDs added or removed).
+func (rq *Controller) UpdateMonitors(ctx context.Context, discoveryFunc NamespacedResourcesFunc) {
+ logger := klog.FromContext(ctx)
+
+	// Track the resource set seen so far so it can be diffed against fresh discovery results.
+ oldResources := make(map[schema.GroupVersionResource]struct{})
+ func() {
+ // Get the current resource list from discovery.
+ newResources, err := GetQuotableResources(discoveryFunc)
+ if err != nil {
+ utilruntime.HandleError(err)
+
+ if groupLookupFailures, isLookupFailure := discovery.GroupDiscoveryFailedErrorGroups(err); isLookupFailure && len(newResources) > 0 {
+ // In partial discovery cases, preserve existing informers for resources in the failed groups, so resyncMonitors will only add informers for newly seen resources
+ for k, v := range oldResources {
+ if _, failed := groupLookupFailures[k.GroupVersion()]; failed {
+ newResources[k] = v
+ }
+ }
+ } else {
+ // short circuit in non-discovery error cases or if discovery returned zero resources
+ return
+ }
+ }
+
+ // Decide whether discovery has reported a change.
+ if reflect.DeepEqual(oldResources, newResources) {
+ logger.V(4).Info("no resource updates from discovery, skipping resource quota sync")
+ return
+ }
+
+ // Ensure workers are paused to avoid processing events before informers
+ // have resynced.
+ rq.workerLock.Lock()
+ defer rq.workerLock.Unlock()
+
+ // Something has changed, so track the new state and perform a sync.
+ if loggerV := logger.V(2); loggerV.Enabled() {
+ loggerV.Info("syncing resource quota controller with updated resources from discovery", "diff", printDiff(oldResources, newResources))
+ }
+
+ // Perform the monitor resync and wait for controllers to report cache sync.
+ if err := rq.resyncMonitors(ctx, newResources); err != nil {
+ utilruntime.HandleError(fmt.Errorf("failed to sync resource monitors: %v", err))
+ return
+ }
+
+ // at this point, we've synced the new resources to our monitors, so record that fact.
+ oldResources = newResources
+
+ // wait for caches to fill for a while (our sync period).
+ // this protects us from deadlocks where available resources changed and one of our informer caches will never fill.
+ // informers keep attempting to sync in the background, so retrying doesn't interrupt them.
+ // the call to resyncMonitors on the reattempt will no-op for resources that still exist.
+ if rq.quotaMonitor != nil &&
+ !cache.WaitForNamedCacheSync(
+ "resource quota",
+ ctx.Done(),
+ func() bool { return rq.quotaMonitor.IsSynced(ctx) },
+ ) {
+ utilruntime.HandleError(fmt.Errorf("timed out waiting for quota monitor sync"))
+ return
+ }
+
+ logger.V(2).Info("synced quota controller")
+ }()
+
+ // List all the quotas (this is scoped to the workspace)
+ quotas, err := rq.rqLister.List(labels.Everything())
+ if err != nil {
+ utilruntime.HandleError(fmt.Errorf("error listing all resourcequotas: %w", err))
+ }
+
+ // Requeue all quotas in the workspace
+ for i := range quotas {
+ quota := quotas[i]
+ logger.V(2).Info("enqueuing resourcequota %s/%s because the list of available APIs changed", quota.Namespace, quota.Name)
+ rq.addQuota(logger, quota)
+ }
+}
diff --git a/pkg/controller/resourcequota/resource_quota_monitor.go b/pkg/controller/resourcequota/resource_quota_monitor.go
index d0d0f30b97551..c20868ccfffe7 100644
--- a/pkg/controller/resourcequota/resource_quota_monitor.go
+++ b/pkg/controller/resourcequota/resource_quota_monitor.go
@@ -236,6 +236,7 @@ func (qm *QuotaMonitor) SyncMonitors(ctx context.Context, resources map[schema.G
for _, monitor := range toRemove {
if monitor.stopCh != nil {
close(monitor.stopCh)
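+			// kcp: drop the reference so a later sync pass cannot close this channel a second time.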
+ monitor.stopCh = nil
}
}
@@ -337,6 +338,7 @@ func (qm *QuotaMonitor) Run(ctx context.Context) {
if monitor.stopCh != nil {
stopped++
close(monitor.stopCh)
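+			// kcp: drop the reference so shutdown cannot close this channel a second time.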
+ monitor.stopCh = nil
}
}
logger.Info("QuotaMonitor stopped monitors", "stopped", stopped, "total", len(monitors))
diff --git a/pkg/controller/serviceaccount/serviceaccounts_controller.go b/pkg/controller/serviceaccount/serviceaccounts_controller.go
index 4395136d066dd..8a18620ca6767 100644
--- a/pkg/controller/serviceaccount/serviceaccounts_controller.go
+++ b/pkg/controller/serviceaccount/serviceaccounts_controller.go
@@ -21,15 +21,16 @@ import (
"fmt"
"time"
+ kcpcache "github.com/kcp-dev/apimachinery/v2/pkg/cache"
+ kcpcorev1informers "github.com/kcp-dev/client-go/informers/core/v1"
+ kcpkubernetesclientset "github.com/kcp-dev/client-go/kubernetes"
+ kcpcorev1listers "github.com/kcp-dev/client-go/listers/core/v1"
v1 "k8s.io/api/core/v1"
apierrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
utilerrors "k8s.io/apimachinery/pkg/util/errors"
utilruntime "k8s.io/apimachinery/pkg/util/runtime"
"k8s.io/apimachinery/pkg/util/wait"
- coreinformers "k8s.io/client-go/informers/core/v1"
- clientset "k8s.io/client-go/kubernetes"
- corelisters "k8s.io/client-go/listers/core/v1"
"k8s.io/client-go/tools/cache"
"k8s.io/client-go/util/workqueue"
"k8s.io/klog/v2"
@@ -61,7 +62,7 @@ func DefaultServiceAccountsControllerOptions() ServiceAccountsControllerOptions
}
// NewServiceAccountsController returns a new *ServiceAccountsController.
-func NewServiceAccountsController(saInformer coreinformers.ServiceAccountInformer, nsInformer coreinformers.NamespaceInformer, cl clientset.Interface, options ServiceAccountsControllerOptions) (*ServiceAccountsController, error) {
+func NewServiceAccountsController(saInformer kcpcorev1informers.ServiceAccountClusterInformer, nsInformer kcpcorev1informers.NamespaceClusterInformer, cl kcpkubernetesclientset.ClusterInterface, options ServiceAccountsControllerOptions) (*ServiceAccountsController, error) {
e := &ServiceAccountsController{
client: cl,
serviceAccountsToEnsure: options.ServiceAccounts,
@@ -91,16 +92,16 @@ func NewServiceAccountsController(saInformer coreinformers.ServiceAccountInforme
// ServiceAccountsController manages ServiceAccount objects inside Namespaces
type ServiceAccountsController struct {
- client clientset.Interface
+ client kcpkubernetesclientset.ClusterInterface
serviceAccountsToEnsure []v1.ServiceAccount
// To allow injection for testing.
syncHandler func(ctx context.Context, key string) error
- saLister corelisters.ServiceAccountLister
+ saLister kcpcorev1listers.ServiceAccountClusterLister
saListerSynced cache.InformerSynced
- nsLister corelisters.NamespaceLister
+ nsLister kcpcorev1listers.NamespaceClusterLister
nsListerSynced cache.InformerSynced
queue workqueue.TypedRateLimitingInterface[string]
@@ -140,19 +141,19 @@ func (c *ServiceAccountsController) serviceAccountDeleted(obj interface{}) {
return
}
}
- c.queue.Add(sa.Namespace)
+ c.enqueueNamespace(sa)
}
// namespaceAdded reacts to a Namespace creation by creating a default ServiceAccount object
func (c *ServiceAccountsController) namespaceAdded(obj interface{}) {
namespace := obj.(*v1.Namespace)
- c.queue.Add(namespace.Name)
+ c.enqueueNamespace(namespace)
}
// namespaceUpdated reacts to a Namespace update (or re-list) by creating a default ServiceAccount in the namespace if needed
func (c *ServiceAccountsController) namespaceUpdated(oldObj interface{}, newObj interface{}) {
newNamespace := newObj.(*v1.Namespace)
- c.queue.Add(newNamespace.Name)
+ c.enqueueNamespace(newNamespace)
}
func (c *ServiceAccountsController) runWorker(ctx context.Context) {
@@ -185,7 +186,13 @@ func (c *ServiceAccountsController) syncNamespace(ctx context.Context, key strin
klog.FromContext(ctx).V(4).Info("Finished syncing namespace", "namespace", key, "duration", time.Since(startTime))
}()
- ns, err := c.nsLister.Get(key)
+ clusterName, _, namespaceName, err := kcpcache.SplitMetaClusterNamespaceKey(key)
+ if err != nil {
+ utilruntime.HandleError(err)
+ return err
+ }
+
+ ns, err := c.nsLister.Cluster(clusterName).Get(namespaceName)
if apierrors.IsNotFound(err) {
return nil
}
@@ -199,7 +206,7 @@ func (c *ServiceAccountsController) syncNamespace(ctx context.Context, key strin
createFailures := []error{}
for _, sa := range c.serviceAccountsToEnsure {
- switch _, err := c.saLister.ServiceAccounts(ns.Name).Get(sa.Name); {
+ switch _, err := c.saLister.Cluster(clusterName).ServiceAccounts(ns.Name).Get(sa.Name); {
case err == nil:
continue
case apierrors.IsNotFound(err):
@@ -210,7 +217,7 @@ func (c *ServiceAccountsController) syncNamespace(ctx context.Context, key strin
// TODO eliminate this once the fake client can handle creation without NS
sa.Namespace = ns.Name
- if _, err := c.client.CoreV1().ServiceAccounts(ns.Name).Create(ctx, &sa, metav1.CreateOptions{}); err != nil && !apierrors.IsAlreadyExists(err) {
+ if _, err := c.client.Cluster(clusterName.Path()).CoreV1().ServiceAccounts(ns.Name).Create(ctx, &sa, metav1.CreateOptions{}); err != nil && !apierrors.IsAlreadyExists(err) {
// we can safely ignore terminating namespace errors
if !apierrors.HasStatusCause(err, v1.NamespaceTerminatingCause) {
createFailures = append(createFailures, err)
@@ -220,3 +227,12 @@ func (c *ServiceAccountsController) syncNamespace(ctx context.Context, key strin
return utilerrors.Flatten(utilerrors.NewAggregate(createFailures))
}
+
+func (c *ServiceAccountsController) enqueueNamespace(obj metav1.Object) {
+ key, err := kcpcache.MetaClusterNamespaceKeyFunc(obj)
+ if err != nil {
+		utilruntime.HandleError(err)
+		return
+	}
+
+ c.queue.Add(key)
+}
diff --git a/pkg/controller/serviceaccount/tokengetter.go b/pkg/controller/serviceaccount/tokengetter.go
index 98afde0b727e4..7388727fa5b3c 100644
--- a/pkg/controller/serviceaccount/tokengetter.go
+++ b/pkg/controller/serviceaccount/tokengetter.go
@@ -14,11 +14,18 @@ See the License for the specific language governing permissions and
limitations under the License.
*/
+// +kcp-code-generator:skip
+
package serviceaccount
import (
"context"
- "k8s.io/api/core/v1"
+
+ kcpkubernetesclientset "github.com/kcp-dev/client-go/kubernetes"
+ kcpcorev1listers "github.com/kcp-dev/client-go/listers/core/v1"
+ "github.com/kcp-dev/logicalcluster/v3"
+
+ v1 "k8s.io/api/core/v1"
apierrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
clientset "k8s.io/client-go/kubernetes"
@@ -26,6 +33,29 @@ import (
"k8s.io/kubernetes/pkg/serviceaccount"
)
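+
+// NewClusterGetterFromClient returns a cluster-aware token getter: Cluster(name)
+// binds the wrapped client and listers to one logical cluster and yields a
+// plain ServiceAccountTokenGetter scoped to it.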
+func NewClusterGetterFromClient(client kcpkubernetesclientset.ClusterInterface, secretLister kcpcorev1listers.SecretClusterLister, serviceAccountLister kcpcorev1listers.ServiceAccountClusterLister /*podLister kcpcorev1listers.PodClusterLister*/) serviceaccount.ServiceAccountTokenClusterGetter {
+ return &serviceAccountTokenClusterGetter{
+ client: client,
+ secretLister: secretLister,
+ serviceAccountLister: serviceAccountLister,
+ }
+}
+
+type serviceAccountTokenClusterGetter struct {
+ client kcpkubernetesclientset.ClusterInterface
+ secretLister kcpcorev1listers.SecretClusterLister
+ serviceAccountLister kcpcorev1listers.ServiceAccountClusterLister
+ podLister kcpcorev1listers.PodClusterLister
+}
+
+func (s *serviceAccountTokenClusterGetter) Cluster(name logicalcluster.Name) serviceaccount.ServiceAccountTokenGetter {
+ return NewGetterFromClient(
+ s.client.Cluster(name.Path()),
+ s.secretLister.Cluster(name),
+ s.serviceAccountLister.Cluster(name),
+ )
+}
+
// clientGetter implements ServiceAccountTokenGetter using a clientset.Interface
type clientGetter struct {
client clientset.Interface
@@ -39,8 +69,12 @@ type clientGetter struct {
// uses the specified client to retrieve service accounts, pods, secrets and nodes.
// The client should NOT authenticate using a service account token
// the returned getter will be used to retrieve, or recursion will result.
-func NewGetterFromClient(c clientset.Interface, secretLister v1listers.SecretLister, serviceAccountLister v1listers.ServiceAccountLister, podLister v1listers.PodLister, nodeLister v1listers.NodeLister) serviceaccount.ServiceAccountTokenGetter {
- return clientGetter{c, secretLister, serviceAccountLister, podLister, nodeLister}
+func NewGetterFromClient(c clientset.Interface, secretLister v1listers.SecretLister, serviceAccountLister v1listers.ServiceAccountLister) serviceaccount.ServiceAccountTokenGetter {
+ return clientGetter{
+ client: c,
+ secretLister: secretLister,
+ serviceAccountLister: serviceAccountLister,
+ }
}
func (c clientGetter) GetServiceAccount(namespace, name string) (*v1.ServiceAccount, error) {
@@ -51,8 +85,8 @@ func (c clientGetter) GetServiceAccount(namespace, name string) (*v1.ServiceAcco
}
func (c clientGetter) GetPod(namespace, name string) (*v1.Pod, error) {
- if pod, err := c.podLister.Pods(namespace).Get(name); err == nil {
- return pod, nil
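+	// kcp: no pod lister is wired into this getter, so report NotFound instead
+	// of consulting a cache that does not exist.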
+ if c.podLister == nil {
+ return nil, apierrors.NewNotFound(v1.Resource("pods"), name)
}
return c.client.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
}
diff --git a/pkg/controller/serviceaccount/tokens_controller.go b/pkg/controller/serviceaccount/tokens_controller.go
index 8d2acbd7dac08..db6f6ae14ea61 100644
--- a/pkg/controller/serviceaccount/tokens_controller.go
+++ b/pkg/controller/serviceaccount/tokens_controller.go
@@ -22,6 +22,12 @@ import (
"fmt"
"time"
+ kcpcache "github.com/kcp-dev/apimachinery/v2/pkg/cache"
+ kcpcorev1informers "github.com/kcp-dev/client-go/informers/core/v1"
+ kcpkubernetesclientset "github.com/kcp-dev/client-go/kubernetes"
+ kcpcorev1listers "github.com/kcp-dev/client-go/listers/core/v1"
+ kcpthirdpartycache "github.com/kcp-dev/client-go/third_party/k8s.io/client-go/tools/cache"
+ "github.com/kcp-dev/logicalcluster/v3"
v1 "k8s.io/api/core/v1"
apierrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
@@ -31,13 +37,11 @@ import (
"k8s.io/apimachinery/pkg/util/sets"
"k8s.io/apimachinery/pkg/util/wait"
apiserverserviceaccount "k8s.io/apiserver/pkg/authentication/serviceaccount"
- informers "k8s.io/client-go/informers/core/v1"
- clientset "k8s.io/client-go/kubernetes"
- listersv1 "k8s.io/client-go/listers/core/v1"
"k8s.io/client-go/tools/cache"
clientretry "k8s.io/client-go/util/retry"
"k8s.io/client-go/util/workqueue"
"k8s.io/klog/v2"
+ "k8s.io/kubernetes/pkg/registry/core/secret"
"k8s.io/kubernetes/pkg/serviceaccount"
)
@@ -69,7 +73,7 @@ type TokensControllerOptions struct {
}
// NewTokensController returns a new *TokensController.
-func NewTokensController(logger klog.Logger, serviceAccounts informers.ServiceAccountInformer, secrets informers.SecretInformer, cl clientset.Interface, options TokensControllerOptions) (*TokensController, error) {
+func NewTokensController(logger klog.Logger, serviceAccounts kcpcorev1informers.ServiceAccountClusterInformer, secrets kcpcorev1informers.SecretClusterInformer, cl kcpkubernetesclientset.ClusterInterface, options TokensControllerOptions) (*TokensController, error) {
maxRetries := options.MaxRetries
if maxRetries == 0 {
maxRetries = 10
@@ -104,7 +108,7 @@ func NewTokensController(logger klog.Logger, serviceAccounts informers.ServiceAc
)
secretCache := secrets.Informer().GetIndexer()
- e.updatedSecrets = cache.NewIntegerResourceVersionMutationCache(logger, secretCache, secretCache, 60*time.Second, true)
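+	// kcp: the mutation cache must key and index secrets by cluster-aware keys
+	// so locally recorded writes are found again per logical cluster.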
+ e.updatedSecrets = kcpthirdpartycache.NewIntegerResourceVersionMutationCache(logger, kcpcache.DeletionHandlingMetaClusterNamespaceKeyFunc, secretCache, secretCache, 60*time.Second, true)
e.secretSynced = secrets.Informer().HasSynced
secrets.Informer().AddEventHandlerWithOptions(
cache.FilteringResourceEventHandler{
@@ -134,16 +138,16 @@ func NewTokensController(logger klog.Logger, serviceAccounts informers.ServiceAc
// TokensController manages ServiceAccountToken secrets for ServiceAccount objects
type TokensController struct {
- client clientset.Interface
+ client kcpkubernetesclientset.ClusterInterface
token serviceaccount.TokenGenerator
rootCA []byte
- serviceAccounts listersv1.ServiceAccountLister
+ serviceAccounts kcpcorev1listers.ServiceAccountClusterLister
// updatedSecrets is a wrapper around the shared cache which allows us to record
// and return our local mutations (since we're very likely to act on an updated
// secret before the watch reports it).
- updatedSecrets cache.MutationCache
+ updatedSecrets kcpthirdpartycache.MutationCache
// Since we join two objects, we'll watch both of them with controllers.
serviceAccountSynced cache.InformerSynced
@@ -245,7 +249,7 @@ func (e *TokensController) syncServiceAccount(ctx context.Context) {
return
}
- sa, err := e.getServiceAccount(saInfo.namespace, saInfo.name, saInfo.uid, false)
+ sa, err := e.getServiceAccount(saInfo.clusterName, saInfo.namespace, saInfo.name, saInfo.uid, false)
switch {
case err != nil:
logger.Error(err, "Getting service account")
@@ -253,7 +257,7 @@ func (e *TokensController) syncServiceAccount(ctx context.Context) {
case sa == nil:
// service account no longer exists, so delete related tokens
logger.V(4).Info("Service account deleted, removing tokens", "namespace", saInfo.namespace, "serviceaccount", saInfo.name)
- sa = &v1.ServiceAccount{ObjectMeta: metav1.ObjectMeta{Namespace: saInfo.namespace, Name: saInfo.name, UID: saInfo.uid}}
+ sa = &v1.ServiceAccount{ObjectMeta: metav1.ObjectMeta{Annotations: map[string]string{logicalcluster.AnnotationKey: saInfo.clusterName.String()}, Namespace: saInfo.namespace, Name: saInfo.name, UID: saInfo.uid}}
retry, err = e.deleteTokens(sa)
if err != nil {
logger.Error(err, "Error deleting serviceaccount tokens", "namespace", saInfo.namespace, "serviceaccount", saInfo.name)
@@ -281,24 +285,24 @@ func (e *TokensController) syncSecret(ctx context.Context) {
return
}
- secret, err := e.getSecret(secretInfo.namespace, secretInfo.name, secretInfo.uid, false)
+ secret, err := e.getSecret(secretInfo.clusterName, secretInfo.namespace, secretInfo.name, secretInfo.uid, false)
switch {
case err != nil:
logger.Error(err, "Getting secret")
retry = true
case secret == nil:
// If the service account exists
- if sa, saErr := e.getServiceAccount(secretInfo.namespace, secretInfo.saName, secretInfo.saUID, false); saErr == nil && sa != nil {
+ if sa, saErr := e.getServiceAccount(secretInfo.clusterName, secretInfo.namespace, secretInfo.saName, secretInfo.saUID, false); saErr == nil && sa != nil {
// secret no longer exists, so delete references to this secret from the service account
if err := clientretry.RetryOnConflict(RemoveTokenBackoff, func() error {
- return e.removeSecretReference(secretInfo.namespace, secretInfo.saName, secretInfo.saUID, secretInfo.name)
+ return e.removeSecretReference(secretInfo.clusterName, secretInfo.namespace, secretInfo.saName, secretInfo.saUID, secretInfo.name)
}); err != nil {
logger.Error(err, "Removing secret reference")
}
}
default:
// Ensure service account exists
- sa, saErr := e.getServiceAccount(secretInfo.namespace, secretInfo.saName, secretInfo.saUID, true)
+ sa, saErr := e.getServiceAccount(secretInfo.clusterName, secretInfo.namespace, secretInfo.saName, secretInfo.saUID, true)
switch {
case saErr != nil:
logger.Error(saErr, "Getting service account")
@@ -306,14 +310,14 @@ func (e *TokensController) syncSecret(ctx context.Context) {
case sa == nil:
// Delete token
logger.V(4).Info("Service account does not exist, deleting token", "secret", klog.KRef(secretInfo.namespace, secretInfo.name))
- if retriable, err := e.deleteToken(secretInfo.namespace, secretInfo.name, secretInfo.uid); err != nil {
- logger.Error(err, "Deleting serviceaccount token", "secret", klog.KRef(secretInfo.namespace, secretInfo.name), "serviceAccount", klog.KRef(secretInfo.namespace, secretInfo.saName))
+ if retriable, err := e.deleteToken(secretInfo.clusterName, secretInfo.namespace, secretInfo.name, secretInfo.uid); err != nil {
+ logger.Error(err, "Deleting serviceaccount token", "secret", klog.KRef(secretInfo.namespace, secretInfo.name), "serviceAccount", klog.KRef(secretInfo.namespace, secretInfo.saName), "cluster", secretInfo.clusterName)
retry = retriable
}
default:
// Update token if needed
if retriable, err := e.generateTokenIfNeeded(logger, sa, secret); err != nil {
- logger.Error(err, "Populating serviceaccount token", "secret", klog.KRef(secretInfo.namespace, secretInfo.name), "serviceAccount", klog.KRef(secretInfo.namespace, secretInfo.saName))
+ logger.Error(err, "Populating serviceaccount token", "secret", klog.KRef(secretInfo.namespace, secretInfo.name), "serviceAccount", klog.KRef(secretInfo.namespace, secretInfo.saName), "cluster", secretInfo.clusterName)
retry = retriable
}
}
@@ -329,7 +333,7 @@ func (e *TokensController) deleteTokens(serviceAccount *v1.ServiceAccount) ( /*r
retry := false
errs := []error{}
for _, token := range tokens {
- r, err := e.deleteToken(token.Namespace, token.Name, token.UID)
+ r, err := e.deleteToken(logicalcluster.From(token), token.Namespace, token.Name, token.UID)
if err != nil {
errs = append(errs, err)
}
@@ -340,12 +344,13 @@ func (e *TokensController) deleteTokens(serviceAccount *v1.ServiceAccount) ( /*r
return retry, utilerrors.NewAggregate(errs)
}
-func (e *TokensController) deleteToken(ns, name string, uid types.UID) ( /*retry*/ bool, error) {
+func (e *TokensController) deleteToken(clusterName logicalcluster.Name, ns, name string, uid types.UID) ( /*retry*/ bool, error) {
var opts metav1.DeleteOptions
if len(uid) > 0 {
opts.Preconditions = &metav1.Preconditions{UID: &uid}
}
- err := e.client.CoreV1().Secrets(ns).Delete(context.TODO(), name, opts)
+
+ err := e.client.Cluster(clusterName.Path()).CoreV1().Secrets(ns).Delete(context.TODO(), name, opts)
// NotFound doesn't need a retry (it's already been deleted)
// Conflict doesn't need a retry (the UID precondition failed)
if err == nil || apierrors.IsNotFound(err) || apierrors.IsConflict(err) {
@@ -355,6 +360,152 @@ func (e *TokensController) deleteToken(ns, name string, uid types.UID) ( /*retry
return true, err
}
+// ensureReferencedToken makes sure at least one ServiceAccountToken secret exists, and is included in the serviceAccount's Secrets list
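+// The flow: re-read the live ServiceAccount from its logical cluster, generate
+// and create a legacy token Secret there, prime the local mutation cache with
+// it, and then retry attaching the secret reference, deleting the Secret again
+// if the reference can never be added.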
+func (e *TokensController) ensureReferencedToken(serviceAccount *v1.ServiceAccount) ( /* retry */ bool, error) {
+ if hasToken, err := e.hasReferencedToken(serviceAccount); err != nil {
+ // Don't retry cache lookup errors
+ return false, err
+ } else if hasToken {
+ // A service account token already exists, and is referenced, short-circuit
+ return false, nil
+ }
+
+ clusterName := logicalcluster.From(serviceAccount)
+
+ // We don't want to update the cache's copy of the service account
+ // so add the secret to a freshly retrieved copy of the service account
+ serviceAccounts := e.client.Cluster(clusterName.Path()).CoreV1().ServiceAccounts(serviceAccount.Namespace)
+ liveServiceAccount, err := serviceAccounts.Get(context.TODO(), serviceAccount.Name, metav1.GetOptions{})
+ if err != nil {
+ // Retry if we cannot fetch the live service account (for a NotFound error, either the live lookup or our cache are stale)
+ return true, err
+ }
+ if liveServiceAccount.ResourceVersion != serviceAccount.ResourceVersion {
+ // Retry if our liveServiceAccount doesn't match our cache's resourceVersion (either the live lookup or our cache are stale)
+ klog.V(4).Infof("liveServiceAccount.ResourceVersion (%s) does not match cache (%s), retrying", liveServiceAccount.ResourceVersion, serviceAccount.ResourceVersion)
+ return true, nil
+ }
+
+ // Build the secret
+ secret := &v1.Secret{
+ ObjectMeta: metav1.ObjectMeta{
+ Name: secret.Strategy.GenerateName(fmt.Sprintf("%s-token-", serviceAccount.Name)),
+ Namespace: serviceAccount.Namespace,
+ Annotations: map[string]string{
+ v1.ServiceAccountNameKey: serviceAccount.Name,
+ v1.ServiceAccountUIDKey: string(serviceAccount.UID),
+ },
+ },
+ Type: v1.SecretTypeServiceAccountToken,
+ Data: map[string][]byte{},
+ }
+
+ // Generate the token
+ c, pc := serviceaccount.LegacyClaims(*serviceAccount, *secret)
+	// TODO: need to plumb context if using external signer ever becomes a possibility.
+ token, err := e.token.GenerateToken(context.TODO(), c, pc)
+ if err != nil {
+ // retriable error
+ return true, err
+ }
+ secret.Data[v1.ServiceAccountTokenKey] = []byte(token)
+ secret.Data[v1.ServiceAccountNamespaceKey] = []byte(serviceAccount.Namespace)
+ if e.rootCA != nil && len(e.rootCA) > 0 {
+ secret.Data[v1.ServiceAccountRootCAKey] = e.rootCA
+ }
+
+ // Save the secret
+ createdToken, err := e.client.Cluster(clusterName.Path()).CoreV1().Secrets(serviceAccount.Namespace).Create(context.TODO(), secret, metav1.CreateOptions{})
+ if err != nil {
+ // if the namespace is being terminated, create will fail no matter what
+ if apierrors.HasStatusCause(err, v1.NamespaceTerminatingCause) {
+ return false, err
+ }
+ // retriable error
+ return true, err
+ }
+ // Manually add the new token to the cache store.
+ // This prevents the service account update (below) triggering another token creation, if the referenced token couldn't be found in the store
+ e.updatedSecrets.Mutation(createdToken)
+
+ // Try to add a reference to the newly created token to the service account
+ addedReference := false
+ err = clientretry.RetryOnConflict(clientretry.DefaultRetry, func() error {
+ // refresh liveServiceAccount on every retry
+ defer func() { liveServiceAccount = nil }()
+
+ // fetch the live service account if needed, and verify the UID matches and that we still need a token
+ if liveServiceAccount == nil {
+ liveServiceAccount, err = serviceAccounts.Get(context.TODO(), serviceAccount.Name, metav1.GetOptions{})
+ if err != nil {
+ return err
+ }
+
+ if liveServiceAccount.UID != serviceAccount.UID {
+ // If we don't have the same service account, stop trying to add a reference to the token made for the old service account.
+ return nil
+ }
+
+ if hasToken, err := e.hasReferencedToken(liveServiceAccount); err != nil {
+ // Don't retry cache lookup errors
+ return nil
+ } else if hasToken {
+ // A service account token already exists, and is referenced, short-circuit
+ return nil
+ }
+ }
+
+ // Try to add a reference to the token
+ liveServiceAccount.Secrets = append(liveServiceAccount.Secrets, v1.ObjectReference{Name: secret.Name})
+ if _, err := serviceAccounts.Update(context.TODO(), liveServiceAccount, metav1.UpdateOptions{}); err != nil {
+ return err
+ }
+
+ addedReference = true
+ return nil
+ })
+
+ if !addedReference {
+ // we weren't able to use the token, try to clean it up.
+ klog.V(2).Infof("deleting secret %s|%s/%s because reference couldn't be added (%v)", clusterName, secret.Namespace, secret.Name, err)
+ deleteOpts := metav1.DeleteOptions{Preconditions: &metav1.Preconditions{UID: &createdToken.UID}}
+ if err := e.client.Cluster(clusterName.Path()).CoreV1().Secrets(createdToken.Namespace).Delete(context.TODO(), createdToken.Name, deleteOpts); err != nil {
+ klog.Error(err) // if we fail, just log it
+ }
+ }
+
+ if err != nil {
+ if apierrors.IsConflict(err) || apierrors.IsNotFound(err) {
+ // if we got a Conflict error, the service account was updated by someone else, and we'll get an update notification later
+ // if we got a NotFound error, the service account no longer exists, and we don't need to create a token for it
+ return false, nil
+ }
+ // retry in all other cases
+ return true, err
+ }
+
+ // success!
+ return false, nil
+}
+
+// hasReferencedToken returns true if the serviceAccount references a service account token secret
+func (e *TokensController) hasReferencedToken(serviceAccount *v1.ServiceAccount) (bool, error) {
+ if len(serviceAccount.Secrets) == 0 {
+ return false, nil
+ }
+ allSecrets, err := e.listTokenSecrets(serviceAccount)
+ if err != nil {
+ return false, err
+ }
+ referencedSecrets := getSecretReferences(serviceAccount)
+ for _, secret := range allSecrets {
+ if referencedSecrets.Has(secret.Name) {
+ return true, nil
+ }
+ }
+ return false, nil
+}
+
func (e *TokensController) secretUpdateNeeded(secret *v1.Secret) (bool, bool, bool) {
caData := secret.Data[v1.ServiceAccountRootCAKey]
needsCA := len(e.rootCA) > 0 && !bytes.Equal(caData, e.rootCA)
@@ -374,9 +525,11 @@ func (e *TokensController) generateTokenIfNeeded(logger klog.Logger, serviceAcco
return false, nil
}
+ clusterName := logicalcluster.From(serviceAccount)
+
// We don't want to update the cache's copy of the secret
// so add the token to a freshly retrieved copy of the secret
- secrets := e.client.CoreV1().Secrets(cachedSecret.Namespace)
+ secrets := e.client.Cluster(clusterName.Path()).CoreV1().Secrets(cachedSecret.Namespace)
liveSecret, err := secrets.Get(context.TODO(), cachedSecret.Name, metav1.GetOptions{})
if err != nil {
// Retry for any error other than a NotFound
@@ -385,7 +538,7 @@ func (e *TokensController) generateTokenIfNeeded(logger klog.Logger, serviceAcco
if liveSecret.ResourceVersion != cachedSecret.ResourceVersion {
// our view of the secret is not up to date
// we'll get notified of an update event later and get to try again
- logger.V(2).Info("Secret is not up to date, skipping token population", "secret", klog.KRef(liveSecret.Namespace, liveSecret.Name))
+ logger.V(2).Info("Secret is not up to date, skipping token population", "secret", klog.KRef(liveSecret.Namespace, liveSecret.Name), "cluster", clusterName)
return false, nil
}
@@ -439,10 +592,10 @@ func (e *TokensController) generateTokenIfNeeded(logger klog.Logger, serviceAcco
}
// removeSecretReference updates the given ServiceAccount to remove a reference to the given secretName if needed.
-func (e *TokensController) removeSecretReference(saNamespace string, saName string, saUID types.UID, secretName string) error {
+func (e *TokensController) removeSecretReference(saClusterName logicalcluster.Name, saNamespace string, saName string, saUID types.UID, secretName string) error {
// We don't want to update the cache's copy of the service account
// so remove the secret from a freshly retrieved copy of the service account
- serviceAccounts := e.client.CoreV1().ServiceAccounts(saNamespace)
+ serviceAccounts := e.client.Cluster(saClusterName.Path()).CoreV1().ServiceAccounts(saNamespace)
serviceAccount, err := serviceAccounts.Get(context.TODO(), saName, metav1.GetOptions{})
// Ignore NotFound errors when attempting to remove a reference
if apierrors.IsNotFound(err) {
@@ -478,9 +631,9 @@ func (e *TokensController) removeSecretReference(saNamespace string, saName stri
return err
}
-func (e *TokensController) getServiceAccount(ns string, name string, uid types.UID, fetchOnCacheMiss bool) (*v1.ServiceAccount, error) {
+func (e *TokensController) getServiceAccount(clusterName logicalcluster.Name, ns string, name string, uid types.UID, fetchOnCacheMiss bool) (*v1.ServiceAccount, error) {
// Look up in cache
- sa, err := e.serviceAccounts.ServiceAccounts(ns).Get(name)
+ sa, err := e.serviceAccounts.Cluster(clusterName).ServiceAccounts(ns).Get(name)
if err != nil && !apierrors.IsNotFound(err) {
return nil, err
}
@@ -496,7 +649,7 @@ func (e *TokensController) getServiceAccount(ns string, name string, uid types.U
}
// Live lookup
- sa, err = e.client.CoreV1().ServiceAccounts(ns).Get(context.TODO(), name, metav1.GetOptions{})
+ sa, err = e.client.Cluster(clusterName.Path()).CoreV1().ServiceAccounts(ns).Get(context.TODO(), name, metav1.GetOptions{})
if apierrors.IsNotFound(err) {
return nil, nil
}
@@ -510,9 +663,9 @@ func (e *TokensController) getServiceAccount(ns string, name string, uid types.U
return nil, nil
}
-func (e *TokensController) getSecret(ns string, name string, uid types.UID, fetchOnCacheMiss bool) (*v1.Secret, error) {
+func (e *TokensController) getSecret(clusterName logicalcluster.Name, ns string, name string, uid types.UID, fetchOnCacheMiss bool) (*v1.Secret, error) {
// Look up in cache
- obj, exists, err := e.updatedSecrets.GetByKey(makeCacheKey(ns, name))
+ obj, exists, err := e.updatedSecrets.GetByKey(kcpcache.ToClusterAwareKey(clusterName.String(), ns, name))
if err != nil {
return nil, err
}
@@ -532,7 +685,7 @@ func (e *TokensController) getSecret(ns string, name string, uid types.UID, fetc
}
// Live lookup
- secret, err := e.client.CoreV1().Secrets(ns).Get(context.TODO(), name, metav1.GetOptions{})
+ secret, err := e.client.Cluster(clusterName.Path()).CoreV1().Secrets(ns).Get(context.TODO(), name, metav1.GetOptions{})
if apierrors.IsNotFound(err) {
return nil, nil
}
@@ -549,7 +702,7 @@ func (e *TokensController) getSecret(ns string, name string, uid types.UID, fetc
// listTokenSecrets returns a list of all of the ServiceAccountToken secrets that
// reference the given service account's name and uid
func (e *TokensController) listTokenSecrets(serviceAccount *v1.ServiceAccount) ([]*v1.Secret, error) {
- namespaceSecrets, err := e.updatedSecrets.ByIndex("namespace", serviceAccount.Namespace)
+ namespaceSecrets, err := e.updatedSecrets.ByIndex(kcpcache.ClusterAndNamespaceIndexName, kcpcache.ClusterAndNamespaceIndexKey(logicalcluster.From(serviceAccount), serviceAccount.Namespace))
if err != nil {
return nil, err
}
@@ -577,6 +730,8 @@ func getSecretReferences(serviceAccount *v1.ServiceAccount) sets.String {
// It contains enough information to look up the cached service account,
// or delete owned tokens if the service account no longer exists.
type serviceAccountQueueKey struct {
+ clusterName logicalcluster.Name // Required for kcp
+
namespace string
name string
uid types.UID
@@ -584,6 +739,8 @@ type serviceAccountQueueKey struct {
func makeServiceAccountKey(sa *v1.ServiceAccount) serviceAccountQueueKey {
return serviceAccountQueueKey{
+ clusterName: logicalcluster.From(sa),
+
namespace: sa.Namespace,
name: sa.Name,
uid: sa.UID,
@@ -602,6 +759,8 @@ func parseServiceAccountKey(key interface{}) (serviceAccountQueueKey, error) {
// It contains enough information to look up the cached service account,
// or delete the secret reference if the secret no longer exists.
type secretQueueKey struct {
+ clusterName logicalcluster.Name // Required for kcp
+
namespace string
name string
uid types.UID
@@ -612,6 +771,8 @@ type secretQueueKey struct {
func makeSecretQueueKey(secret *v1.Secret) secretQueueKey {
return secretQueueKey{
+ clusterName: logicalcluster.From(secret), // Required for kcp
+
namespace: secret.Namespace,
name: secret.Name,
uid: secret.UID,
diff --git a/pkg/controller/validatingadmissionpolicystatus/controller.go b/pkg/controller/validatingadmissionpolicystatus/controller.go
index 4e9bf280c387b..2713ad364410b 100644
--- a/pkg/controller/validatingadmissionpolicystatus/controller.go
+++ b/pkg/controller/validatingadmissionpolicystatus/controller.go
@@ -28,10 +28,13 @@ import (
"k8s.io/apimachinery/pkg/util/wait"
validatingadmissionpolicy "k8s.io/apiserver/pkg/admission/plugin/policy/validating"
admissionregistrationv1apply "k8s.io/client-go/applyconfigurations/admissionregistration/v1"
- informerv1 "k8s.io/client-go/informers/admissionregistration/v1"
- admissionregistrationv1 "k8s.io/client-go/kubernetes/typed/admissionregistration/v1"
"k8s.io/client-go/tools/cache"
"k8s.io/client-go/util/workqueue"
+
+ kcpcache "github.com/kcp-dev/apimachinery/v2/pkg/cache"
+ kcpinformerv1 "github.com/kcp-dev/client-go/informers/admissionregistration/v1"
+ kcpadmissionregistrationv1 "github.com/kcp-dev/client-go/kubernetes/typed/admissionregistration/v1"
+ "github.com/kcp-dev/logicalcluster/v3"
)
 // ControllerName has "Status" in it to differentiate this controller from the one that runs in the API server.
@@ -40,15 +43,16 @@ const ControllerName = "validatingadmissionpolicy-status"
// Controller is the ValidatingAdmissionPolicy Status controller that reconciles the Status field of each policy object.
// This controller runs type checks against referred types for each policy definition.
type Controller struct {
- policyInformer informerv1.ValidatingAdmissionPolicyInformer
+ policyInformer kcpinformerv1.ValidatingAdmissionPolicyClusterInformer
policyQueue workqueue.TypedRateLimitingInterface[string]
- policySynced cache.InformerSynced
- policyClient admissionregistrationv1.ValidatingAdmissionPolicyInterface
+
+ policySynced cache.InformerSynced
+ policyClient kcpadmissionregistrationv1.ValidatingAdmissionPolicyClusterInterface
// typeChecker checks the policy's expressions for type errors.
// Type of params is defined in policy.Spec.ParamsKind
// Types of object are calculated from policy.Spec.MatchingConstraints
- typeChecker *validatingadmissionpolicy.TypeChecker
+ typeCheckerFn func(clusterName logicalcluster.Path) (*validatingadmissionpolicy.TypeChecker, error)
}
func (c *Controller) Run(ctx context.Context, workers int) {
@@ -66,15 +70,15 @@ func (c *Controller) Run(ctx context.Context, workers int) {
<-ctx.Done()
}
-func NewController(policyInformer informerv1.ValidatingAdmissionPolicyInformer, policyClient admissionregistrationv1.ValidatingAdmissionPolicyInterface, typeChecker *validatingadmissionpolicy.TypeChecker) (*Controller, error) {
+func NewController(policyInformer kcpinformerv1.ValidatingAdmissionPolicyClusterInformer, policyClient kcpadmissionregistrationv1.ValidatingAdmissionPolicyClusterInterface, typeCheckerFn func(clusterName logicalcluster.Path) (*validatingadmissionpolicy.TypeChecker, error)) (*Controller, error) {
c := &Controller{
policyInformer: policyInformer,
policyQueue: workqueue.NewTypedRateLimitingQueueWithConfig(
workqueue.DefaultTypedControllerRateLimiter[string](),
workqueue.TypedRateLimitingQueueConfig[string]{Name: ControllerName},
),
- policyClient: policyClient,
- typeChecker: typeChecker,
+ policyClient: policyClient,
+ typeCheckerFn: typeCheckerFn,
}
reg, err := policyInformer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
AddFunc: func(obj interface{}) {
@@ -94,7 +98,10 @@ func NewController(policyInformer informerv1.ValidatingAdmissionPolicyInformer,
func (c *Controller) enqueuePolicy(policy any) {
if policy, ok := policy.(*v1.ValidatingAdmissionPolicy); ok {
 		// policy objects are cluster-scoped, no point including the namespace.
- key := policy.ObjectMeta.Name
+		key, err := kcpcache.MetaClusterNamespaceKeyFunc(policy)
+		if err != nil {
+			utilruntime.HandleError(fmt.Errorf("cannot get cluster namespace key from policy: %w", err))
+			return
+		}
if key == "" {
utilruntime.HandleError(fmt.Errorf("cannot get name of object %v", policy))
}
@@ -115,7 +122,12 @@ func (c *Controller) processNextWorkItem(ctx context.Context) bool {
defer c.policyQueue.Done(key)
err := func() error {
- policy, err := c.policyInformer.Lister().Get(key)
+ clusterName, _, policyName, err := kcpcache.SplitMetaClusterNamespaceKey(key)
+ if err != nil {
+ return fmt.Errorf("failed to split key: %w", err)
+ }
+
+ policy, err := c.policyInformer.Lister().Cluster(clusterName).Get(policyName)
if err != nil {
if kerrors.IsNotFound(err) {
 			// If not found, the policy is being deleted; do nothing.
@@ -144,7 +156,14 @@ func (c *Controller) reconcile(ctx context.Context, policy *v1.ValidatingAdmissi
if policy.Generation <= policy.Status.ObservedGeneration {
return nil
}
- warnings := c.typeChecker.Check(policy)
+
+ cluster := logicalcluster.From(policy)
+ typeChecker, err := c.typeCheckerFn(cluster.Path())
+ if err != nil {
+ return err
+ }
+
+ warnings := typeChecker.Check(policy)
warningsConfig := make([]*admissionregistrationv1apply.ExpressionWarningApplyConfiguration, 0, len(warnings))
for _, warning := range warnings {
warningsConfig = append(warningsConfig, admissionregistrationv1apply.ExpressionWarning().
@@ -156,6 +175,6 @@ func (c *Controller) reconcile(ctx context.Context, policy *v1.ValidatingAdmissi
WithObservedGeneration(policy.Generation).
WithTypeChecking(admissionregistrationv1apply.TypeChecking().
WithExpressionWarnings(warningsConfig...)))
- _, err := c.policyClient.ApplyStatus(ctx, applyConfig, metav1.ApplyOptions{FieldManager: ControllerName, Force: true})
+ _, err = c.policyClient.Cluster(cluster.Path()).ApplyStatus(ctx, applyConfig, metav1.ApplyOptions{FieldManager: ControllerName, Force: true})
return err
}
diff --git a/pkg/controlplane/apiserver/aggregator.go b/pkg/controlplane/apiserver/aggregator.go
index 37d5265d59dbf..e494829e6b5e2 100644
--- a/pkg/controlplane/apiserver/aggregator.go
+++ b/pkg/controlplane/apiserver/aggregator.go
@@ -28,6 +28,7 @@ import (
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apimachinery/pkg/util/sets"
"k8s.io/apiserver/pkg/admission"
+ "k8s.io/apiserver/pkg/endpoints/request"
genericfeatures "k8s.io/apiserver/pkg/features"
genericapiserver "k8s.io/apiserver/pkg/server"
"k8s.io/apiserver/pkg/server/healthz"
@@ -306,7 +307,7 @@ func DefaultGenericAPIServicePriorities() map[schema.GroupVersion]APIServicePrio
func apiServicesToRegister(delegateAPIServer genericapiserver.DelegationTarget, registration autoregister.AutoAPIServiceRegistration, apiVersionPriorities map[schema.GroupVersion]APIServicePriority) []*v1.APIService {
apiServices := []*v1.APIService{}
- for _, curr := range delegateAPIServer.ListedPaths() {
+ for _, curr := range delegateAPIServer.ListedPaths(&request.Cluster{}) {
if curr == "/api/v1" {
apiService := makeAPIService(schema.GroupVersion{Group: "", Version: "v1"}, apiVersionPriorities)
registration.AddAPIServiceToSyncOnStart(apiService)
diff --git a/pkg/controlplane/apiserver/apiextensions.go b/pkg/controlplane/apiserver/apiextensions.go
index 1a0b8910c297e..2221d6d6e05d8 100644
--- a/pkg/controlplane/apiserver/apiextensions.go
+++ b/pkg/controlplane/apiserver/apiextensions.go
@@ -19,12 +19,12 @@ package apiserver
import (
"k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1"
apiextensionsapiserver "k8s.io/apiextensions-apiserver/pkg/apiserver"
+ "k8s.io/apiextensions-apiserver/pkg/apiserver/conversion"
apiextensionsoptions "k8s.io/apiextensions-apiserver/pkg/cmd/server/options"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apiserver/pkg/admission"
"k8s.io/apiserver/pkg/server"
- "k8s.io/apiserver/pkg/util/webhook"
"k8s.io/client-go/informers"
v1 "k8s.io/kube-aggregator/pkg/apis/apiregistration/v1"
@@ -37,8 +37,7 @@ func CreateAPIExtensionsConfig(
pluginInitializers []admission.PluginInitializer,
commandOptions options.CompletedOptions,
masterCount int,
- serviceResolver webhook.ServiceResolver,
- authResolverWrapper webhook.AuthenticationInfoResolverWrapper,
+ conversionFactory conversion.Factory,
) (*apiextensionsapiserver.Config, error) {
// make a shallow copy to let us twiddle a few things
// most of the config actually remains the same. We only need to mess with a couple items related to the particulars of the apiextensions
@@ -74,8 +73,7 @@ func CreateAPIExtensionsConfig(
ExtraConfig: apiextensionsapiserver.ExtraConfig{
CRDRESTOptionsGetter: apiextensionsoptions.NewCRDRESTOptionsGetter(etcdOptions, genericConfig.ResourceTransformers, genericConfig.StorageObjectCountTracker),
MasterCount: masterCount,
- AuthResolverWrapper: authResolverWrapper,
- ServiceResolver: serviceResolver,
+ ConversionFactory: conversionFactory,
},
}
diff --git a/pkg/controlplane/apiserver/apis.go b/pkg/controlplane/apiserver/apis.go
index b164feffa5915..d1fed286cdf11 100644
--- a/pkg/controlplane/apiserver/apis.go
+++ b/pkg/controlplane/apiserver/apis.go
@@ -19,6 +19,7 @@ package apiserver
import (
"fmt"
+ "k8s.io/apiserver/pkg/informerfactoryhack"
"k8s.io/apiserver/pkg/registry/generic"
genericapiserver "k8s.io/apiserver/pkg/server"
serverstorage "k8s.io/apiserver/pkg/server/storage"
@@ -55,7 +56,7 @@ func (c *CompletedConfig) NewCoreGenericConfig() *corerest.GenericConfig {
ServiceAccountMaxExpiration: c.Extra.ServiceAccountMaxExpiration,
MaxExtendedExpiration: c.Extra.ServiceAccountExtendedMaxExpiration,
APIAudiences: c.Generic.Authentication.APIAudiences,
- Informers: c.Extra.VersionedInformers,
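+		// informerfactoryhack.Wrap adapts the kcp cluster-aware informer
+		// factory to the upstream SharedInformerFactory interface expected
+		// here; informerfactoryhack.Unwrap is the reverse operation.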
+ Informers: informerfactoryhack.Wrap(c.Extra.VersionedInformers),
}
}
diff --git a/pkg/controlplane/apiserver/completion.go b/pkg/controlplane/apiserver/completion.go
index 7b44c3ae295f8..800968a09ca99 100644
--- a/pkg/controlplane/apiserver/completion.go
+++ b/pkg/controlplane/apiserver/completion.go
@@ -18,6 +18,7 @@ package apiserver
import (
"k8s.io/apiserver/pkg/endpoints/discovery"
+ "k8s.io/apiserver/pkg/informerfactoryhack"
genericapiserver "k8s.io/apiserver/pkg/server"
)
@@ -33,7 +34,7 @@ type CompletedConfig struct {
func (c *Config) Complete() CompletedConfig {
cfg := completedConfig{
- c.Generic.Complete(c.VersionedInformers),
+ c.Generic.Complete(informerfactoryhack.Wrap(c.VersionedInformers)),
&c.Extra,
}
diff --git a/pkg/controlplane/apiserver/config.go b/pkg/controlplane/apiserver/config.go
index 1d48ad02bd62a..2ee148a6141e9 100644
--- a/pkg/controlplane/apiserver/config.go
+++ b/pkg/controlplane/apiserver/config.go
@@ -23,18 +23,24 @@ import (
"net/http"
"time"
+ "github.com/kcp-dev/client-go/dynamic"
+ kcpinformers "github.com/kcp-dev/client-go/informers"
+ kcpclient "github.com/kcp-dev/client-go/kubernetes"
+ "github.com/kcp-dev/logicalcluster/v3"
noopoteltrace "go.opentelemetry.io/otel/trace/noop"
- "k8s.io/apimachinery/pkg/api/meta"
"k8s.io/apimachinery/pkg/runtime"
utilnet "k8s.io/apimachinery/pkg/util/net"
"k8s.io/apimachinery/pkg/util/sets"
"k8s.io/apimachinery/pkg/util/wait"
"k8s.io/apiserver/pkg/admission"
"k8s.io/apiserver/pkg/authorization/authorizer"
+ "k8s.io/apiserver/pkg/clientsethack"
+ "k8s.io/apiserver/pkg/dynamichack"
"k8s.io/apiserver/pkg/endpoints/discovery/aggregated"
openapinamer "k8s.io/apiserver/pkg/endpoints/openapi"
genericfeatures "k8s.io/apiserver/pkg/features"
+ "k8s.io/apiserver/pkg/informerfactoryhack"
peerreconcilers "k8s.io/apiserver/pkg/reconcilers"
genericapiserver "k8s.io/apiserver/pkg/server"
"k8s.io/apiserver/pkg/server/egressselector"
@@ -43,9 +49,7 @@ import (
utilfeature "k8s.io/apiserver/pkg/util/feature"
"k8s.io/apiserver/pkg/util/openapi"
utilpeerproxy "k8s.io/apiserver/pkg/util/peerproxy"
- "k8s.io/client-go/dynamic"
clientgoinformers "k8s.io/client-go/informers"
- clientgoclientset "k8s.io/client-go/kubernetes"
"k8s.io/client-go/util/keyutil"
aggregatorapiserver "k8s.io/kube-aggregator/pkg/apiserver"
openapicommon "k8s.io/kube-openapi/pkg/common"
@@ -61,6 +65,10 @@ import (
"k8s.io/kubernetes/pkg/serviceaccount"
)
+// LocalAdminCluster is the default logical cluster that kube-apiserver's
+// objects, e.g. the RBAC bootstrap policy, land in.
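+//
+// Server-local controllers (system namespaces, lease GC, bootstrap policy)
+// reach this cluster via client.Cluster(LocalAdminCluster.Path()).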
+var LocalAdminCluster = logicalcluster.Name("system:admin")
+
// Config defines configuration for the master
type Config struct {
Generic *genericapiserver.Config
@@ -101,7 +109,7 @@ type Extra struct {
SystemNamespaces []string
- VersionedInformers clientgoinformers.SharedInformerFactory
+ VersionedInformers kcpinformers.SharedInformerFactory
}
// BuildGenericConfig takes the generic controlplane apiserver options and produces
@@ -114,7 +122,7 @@ func BuildGenericConfig(
getOpenAPIDefinitions func(ref openapicommon.ReferenceCallback) map[string]openapicommon.OpenAPIDefinition,
) (
genericConfig *genericapiserver.Config,
- versionedInformers clientgoinformers.SharedInformerFactory,
+ versionedInformers kcpinformers.SharedInformerFactory,
storageFactory *serverstorage.DefaultStorageFactory,
lastErr error,
) {
@@ -130,30 +138,20 @@ func BuildGenericConfig(
return
}
- // Use protobufs for self-communication.
- // Since not every generic apiserver has to support protobufs, we
- // cannot default to it in generic apiserver and need to explicitly
- // set it in kube-apiserver.
- genericConfig.LoopbackClientConfig.ContentConfig.ContentType = "application/vnd.kubernetes.protobuf"
// Disable compression for self-communication, since we are going to be
// on a fast local network
genericConfig.LoopbackClientConfig.DisableCompression = true
kubeClientConfig := genericConfig.LoopbackClientConfig
- clientgoExternalClient, err := clientgoclientset.NewForConfig(kubeClientConfig)
+ clusterClient, err := kcpclient.NewForConfig(kubeClientConfig)
if err != nil {
- lastErr = fmt.Errorf("failed to create real external clientset: %w", err)
+		lastErr = fmt.Errorf("failed to create cluster clientset: %w", err)
return
}
- trim := func(obj interface{}) (interface{}, error) {
- if accessor, err := meta.Accessor(obj); err == nil && accessor.GetManagedFields() != nil {
- accessor.SetManagedFields(nil)
- }
- return obj, nil
- }
- versionedInformers = clientgoinformers.NewSharedInformerFactoryWithOptions(clientgoExternalClient, 10*time.Minute, clientgoinformers.WithTransform(trim))
+ versionedInformers = kcpinformers.NewSharedInformerFactory(clusterClient, 10*time.Minute)
- if lastErr = s.Features.ApplyTo(genericConfig, clientgoExternalClient, versionedInformers); lastErr != nil {
+ // TODO(embik): this creates flowcontrol for system:admin, but that's probably wrong.
+ if lastErr = s.Features.ApplyTo(genericConfig, clusterClient.Cluster(LocalAdminCluster.Path()), clientgoinformers.NewSharedInformerFactory(clusterClient.Cluster(LocalAdminCluster.Path()), 10*time.Minute)); lastErr != nil {
return
}
if lastErr = s.APIEnablement.ApplyTo(genericConfig, resourceConfig, legacyscheme.Scheme); lastErr != nil {
@@ -206,7 +204,7 @@ func BuildGenericConfig(
ctx := wait.ContextForChannel(genericConfig.DrainedNotify())
// Authentication.ApplyTo requires already applied OpenAPIConfig and EgressSelector if present
- if lastErr = s.Authentication.ApplyTo(ctx, &genericConfig.Authentication, genericConfig.SecureServing, genericConfig.EgressSelector, genericConfig.OpenAPIConfig, genericConfig.OpenAPIV3Config, clientgoExternalClient, versionedInformers, genericConfig.APIServerID); lastErr != nil {
+ if lastErr = s.Authentication.ApplyTo(ctx, &genericConfig.Authentication, genericConfig.SecureServing, genericConfig.EgressSelector, genericConfig.OpenAPIConfig, genericConfig.OpenAPIV3Config, clusterClient, versionedInformers, genericConfig.APIServerID); lastErr != nil {
return
}
@@ -216,7 +214,7 @@ func BuildGenericConfig(
s,
genericConfig.EgressSelector,
genericConfig.APIServerID,
- versionedInformers,
+ informerfactoryhack.Wrap(versionedInformers),
)
if err != nil {
lastErr = fmt.Errorf("invalid authorization config: %w", err)
@@ -231,6 +229,11 @@ func BuildGenericConfig(
return
}
+ // TODO(ntnn): Find out what happened to APIPriorityAndFairness
+ // if utilfeature.DefaultFeatureGate.Enabled(genericfeatures.APIPriorityAndFairness) && s.GenericServerRunOptions.EnablePriorityAndFairness {
+ // genericConfig.FlowControl, lastErr = BuildPriorityAndFairness(s, clusterClient.Cluster(LocalAdminCluster.Path()), informerfactoryhack.Wrap(versionedInformers))
+ // }
+
genericConfig.AggregatedDiscoveryGroupManager = aggregated.NewResourceManager("apis")
return
@@ -272,7 +275,7 @@ func BuildAuthorizer(ctx context.Context, s options.CompletedOptions, egressSele
func CreateConfig(
opts options.CompletedOptions,
genericConfig *genericapiserver.Config,
- versionedInformers clientgoinformers.SharedInformerFactory,
+ versionedInformers kcpinformers.SharedInformerFactory,
storageFactory *serverstorage.DefaultStorageFactory,
serviceResolver aggregatorapiserver.ServiceResolver,
additionalInitializers []admission.PluginInitializer,
@@ -312,7 +315,7 @@ func CreateConfig(
return nil, nil, err
}
if opts.PeerCAFile != "" {
- leaseInformer := versionedInformers.Coordination().V1().Leases()
+ leaseInformer := informerfactoryhack.Wrap(versionedInformers).Coordination().V1().Leases()
config.PeerProxy, err = BuildPeerProxy(
leaseInformer,
genericConfig.LoopbackClientConfig,
@@ -349,14 +352,14 @@ func CreateConfig(
// setup admission
genericAdmissionConfig := controlplaneadmission.Config{
- ExternalInformers: versionedInformers,
+ ExternalInformers: informerfactoryhack.Wrap(versionedInformers),
LoopbackClientConfig: genericConfig.LoopbackClientConfig,
}
genericInitializers, err := genericAdmissionConfig.New(proxyTransport, genericConfig.EgressSelector, serviceResolver, genericConfig.TracerProvider)
if err != nil {
return nil, nil, fmt.Errorf("failed to create admission plugin initializer: %w", err)
}
- clientgoExternalClient, err := clientgoclientset.NewForConfig(genericConfig.LoopbackClientConfig)
+ clientgoExternalClient, err := kcpclient.NewForConfig(genericConfig.LoopbackClientConfig)
if err != nil {
return nil, nil, fmt.Errorf("failed to create real client-go external client: %w", err)
}
@@ -366,9 +369,9 @@ func CreateConfig(
}
err = opts.Admission.ApplyTo(
genericConfig,
- versionedInformers,
- clientgoExternalClient,
- dynamicExternalClient,
+ informerfactoryhack.Wrap(versionedInformers),
+ clientsethack.Wrap(clientgoExternalClient),
+ dynamichack.Wrap(dynamicExternalClient),
utilfeature.DefaultFeatureGate,
append(genericInitializers, additionalInitializers...)...,
)
diff --git a/pkg/controlplane/apiserver/miniaggregator/aggregator_kcp.go b/pkg/controlplane/apiserver/miniaggregator/aggregator_kcp.go
new file mode 100644
index 0000000000000..1061f02e92b9f
--- /dev/null
+++ b/pkg/controlplane/apiserver/miniaggregator/aggregator_kcp.go
@@ -0,0 +1,231 @@
+/*
+Copyright 2017 The Kubernetes Authors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+// Package miniaggregator contains a server that aggregates content from a generic control
+// plane server, apiextensions server, and CustomResourceDefinitions.
+package miniaggregator
+
+import (
+ "fmt"
+ "net/http"
+
+ "github.com/emicklei/go-restful/v3"
+
+ apiextensionsapiserver "k8s.io/apiextensions-apiserver/pkg/apiserver"
+ "k8s.io/apiextensions-apiserver/pkg/controller/openapi/builder"
+ metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+ "k8s.io/apimachinery/pkg/runtime"
+ "k8s.io/apimachinery/pkg/runtime/schema"
+ "k8s.io/apimachinery/pkg/runtime/serializer"
+ utilruntime "k8s.io/apimachinery/pkg/util/runtime"
+ "k8s.io/apiserver/pkg/endpoints/handlers/negotiation"
+ "k8s.io/apiserver/pkg/endpoints/handlers/responsewriters"
+ genericapirequest "k8s.io/apiserver/pkg/endpoints/request"
+ genericapiserver "k8s.io/apiserver/pkg/server"
+ "k8s.io/kube-aggregator/pkg/controllers/openapi/aggregator"
+ "k8s.io/kube-openapi/pkg/handler"
+
+ controlplaneapiserver "k8s.io/kubernetes/pkg/controlplane/apiserver"
+)
+
+var (
+ // DiscoveryScheme defines methods for serializing and deserializing API objects.
+ DiscoveryScheme = runtime.NewScheme()
+
+ // DiscoveryCodecs provides methods for retrieving codecs and serializers for specific
+ // versions and content types.
+ DiscoveryCodecs = serializer.NewCodecFactory(DiscoveryScheme)
+)
+
+func init() {
+ // we need to add the options to empty v1
+ // TODO fix the server code to avoid this
+ metav1.AddToGroupVersion(DiscoveryScheme, schema.GroupVersion{Version: "v1"})
+
+ // TODO: keep the generic API server from wanting this
+ unversioned := schema.GroupVersion{Group: "", Version: "v1"}
+ DiscoveryScheme.AddUnversionedTypes(unversioned,
+ &metav1.Status{},
+ &metav1.APIVersions{},
+ &metav1.APIGroupList{},
+ &metav1.APIGroup{},
+ &metav1.APIResourceList{},
+ )
+}
+
+// MiniAggregatorConfig contains configuration settings for the mini aggregator.
+type MiniAggregatorConfig struct {
+ GenericConfig genericapiserver.Config
+}
+
+// completedMiniAggregatorConfig contains completed configuration settings for
+// the mini aggregator. Any fields not filled in by the user that are required
+// to have valid data are defaulted. This struct is private and ultimately
+// embedded in CompletedMiniAggregatorConfig to require the user to invoke
+// Complete() prior to being able to instantiate a MiniAggregatorServer.
+type completedMiniAggregatorConfig struct {
+ GenericConfig genericapiserver.CompletedConfig
+}
+
+// CompletedMiniAggregatorConfig contains completed configuration settings for
+// the mini aggregator. Any fields not filled in by the user that are required
+// to have valid data are defaulted.
+type CompletedMiniAggregatorConfig struct {
+ *completedMiniAggregatorConfig
+}
+
+// MiniAggregatorServer sits in front of the Apis and
+// ApiExtensions servers and aggregates them.
+type MiniAggregatorServer struct {
+ // GenericAPIServer is the aggregator's server.
+ GenericAPIServer *genericapiserver.GenericAPIServer
+ // Apis is the server for the minimal control plane. It serves
+ // APIs such as core v1, certificates.k8s.io, RBAC, etc.
+ Apis *controlplaneapiserver.Server
+ // ApiExtensions is the server for API extensions.
+ ApiExtensions *apiextensionsapiserver.CustomResourceDefinitions
+}
+
+// Complete fills in any fields not set that are required to have valid data.
+// It mutates the receiver.
+func (cfg *MiniAggregatorConfig) Complete() CompletedMiniAggregatorConfig {
+ // CRITICAL: to be able to provide our own /openapi/v2 implementation that aggregates
+ // content from multiple servers, we *must* skip OpenAPI installation. Otherwise,
+ // when PrepareRun() is invoked, it will register a handler for /openapi/v2,
+ // replacing the aggregator's handler.
+ cfg.GenericConfig.SkipOpenAPIInstallation = true
+
+ return CompletedMiniAggregatorConfig{
+ completedMiniAggregatorConfig: &completedMiniAggregatorConfig{
+ GenericConfig: cfg.GenericConfig.Complete(nil),
+ },
+ }
+}
+
+// New creates a new MiniAggregatorServer.
+func (c completedMiniAggregatorConfig) New(
+ delegationTarget genericapiserver.DelegationTarget,
+ apis *controlplaneapiserver.Server,
+ crds *apiextensionsapiserver.CustomResourceDefinitions,
+) (*MiniAggregatorServer, error) {
+ genericServer, err := c.GenericConfig.New("mini-aggregator", delegationTarget)
+ if err != nil {
+ return nil, err
+ }
+
+ s := &MiniAggregatorServer{
+ GenericAPIServer: genericServer,
+ Apis: apis,
+ ApiExtensions: crds,
+ }
+
+ // Have to do this as a filter because of how the APIServerHandler.Director serves requests.
+ s.GenericAPIServer.Handler.GoRestfulContainer.Filter(s.filterAPIsRequest)
+
+ s.GenericAPIServer.Handler.NonGoRestfulMux.HandleFunc("/openapi/v2", s.serveOpenAPI)
+
+ return s, nil
+}
+
+// filterAPIsRequest checks if the request is for /apis, and if so, it aggregates group discovery
+// for the generic control plane server, apiextensions server (which provides the apiextensions.k8s.io group),
+// and the CRDs themselves.
+func (s *MiniAggregatorServer) filterAPIsRequest(req *restful.Request, resp *restful.Response, chain *restful.FilterChain) {
+ if req.Request.URL.Path != "/apis" && req.Request.URL.Path != "/apis/" {
+ chain.ProcessFilter(req, resp)
+ return
+ }
+
+ // Discovery for things like core, authentication, authorization, certificates, ...
+ gcpGroups, err := s.Apis.GenericAPIServer.DiscoveryGroupManager.Groups(req.Request.Context(), req.Request)
+ if err != nil {
+		http.Error(resp.ResponseWriter, fmt.Sprintf("error retrieving generic control plane discovery groups: %v", err), http.StatusInternalServerError)
+		return
+	}
+
+ // Discovery for the apiextensions group itself
+ apiextensionsGroups, err := s.ApiExtensions.GenericAPIServer.DiscoveryGroupManager.Groups(req.Request.Context(), req.Request)
+ if err != nil {
+		http.Error(resp.ResponseWriter, fmt.Sprintf("error retrieving apiextensions discovery groups: %v", err), http.StatusInternalServerError)
+		return
+	}
+
+ // Discovery for all the groups contributed by CRDs
+ crdGroups, err := s.ApiExtensions.DiscoveryGroupLister.Groups(req.Request.Context(), req.Request)
+ if err != nil {
+		http.Error(resp.ResponseWriter, fmt.Sprintf("error retrieving custom resource discovery groups: %v", err), http.StatusInternalServerError)
+		return
+	}
+
+	// Combine the slices into a single preallocated slice.
+ combined := make([]metav1.APIGroup, len(gcpGroups)+len(apiextensionsGroups)+len(crdGroups))
+ var i int
+ i += copy(combined[i:], gcpGroups)
+ i += copy(combined[i:], apiextensionsGroups)
+ i += copy(combined[i:], crdGroups)
+
+ responsewriters.WriteObjectNegotiated(DiscoveryCodecs, negotiation.DefaultEndpointRestrictions, schema.GroupVersion{}, resp.ResponseWriter, req.Request, http.StatusOK, &metav1.APIGroupList{Groups: combined}, false)
+}
+
+// serveOpenAPI aggregates OpenAPI specs from the generic control plane and apiextensions servers.
+func (s *MiniAggregatorServer) serveOpenAPI(w http.ResponseWriter, req *http.Request) {
+ downloader := aggregator.NewDownloader()
+
+ cluster := genericapirequest.ClusterFrom(req.Context())
+
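+	// withCluster clones the request and re-injects the caller's logical
+	// cluster into its context, so the wrapped handler serves the OpenAPI
+	// spec for that cluster.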
+ withCluster := func(handler http.Handler) http.HandlerFunc {
+ return func(res http.ResponseWriter, req *http.Request) {
+ if cluster != nil {
+ req = req.Clone(genericapirequest.WithCluster(req.Context(), *cluster))
+ }
+ handler.ServeHTTP(res, req)
+ }
+ }
+
+ // Can't use withCluster here because the GenericControlPlane doesn't have APIs coming from multiple logical clusters at this time.
+	controlPlaneSpec, _, _, err := downloader.Download(s.Apis.GenericAPIServer.Handler.Director, "")
+	if err != nil {
+		utilruntime.HandleError(err)
+	}
+
+	// Use withCluster here because each logical cluster can have a distinct set of APIs coming from its CRDs.
+	crdSpecs, _, _, err := downloader.Download(withCluster(s.ApiExtensions.GenericAPIServer.Handler.Director), "")
+	if err != nil {
+		utilruntime.HandleError(err)
+	}
+
+ // TODO(ncdc): merging on the fly is expensive. We may need to optimize this (e.g. caching).
+ mergedSpecs, err := builder.MergeSpecs(controlPlaneSpec, crdSpecs)
+ if err != nil {
+ utilruntime.HandleError(err)
+ }
+
+ h := &singlePathHandler{}
+
+ // In order to reuse the kube-openapi API as much as possible, we
+ // register the OpenAPI service in the singlePathHandler
+ handler.NewOpenAPIService(mergedSpecs).RegisterOpenAPIVersionedService("/openapi/v2", h)
+
+ h.ServeHTTP(w, req)
+}
+
+// singlePathHandler is a minimal PathHandler that captures the single
+// http.Handler registered on it, so that handler can later be used directly
+// to serve a request.
+type singlePathHandler struct {
+ handler [1]http.Handler
+}
+
+func (sph *singlePathHandler) Handle(path string, handler http.Handler) {
+ sph.handler[0] = handler
+}
+func (sph *singlePathHandler) ServeHTTP(res http.ResponseWriter, req *http.Request) {
+ if sph.handler[0] == nil {
+		res.WriteHeader(http.StatusNotFound)
+		return
+	}
+ sph.handler[0].ServeHTTP(res, req)
+}
diff --git a/pkg/controlplane/apiserver/options/options.go b/pkg/controlplane/apiserver/options/options.go
index e5c4daf376f23..bafadde3a3d56 100644
--- a/pkg/controlplane/apiserver/options/options.go
+++ b/pkg/controlplane/apiserver/options/options.go
@@ -250,7 +250,9 @@ func (o *Options) Complete(ctx context.Context, alternateDNS []string, alternate
}
// put authorization options in final state
- completed.Authorization.Complete()
+ if completed.Authorization != nil {
+ completed.Authorization.Complete()
+ }
// adjust authentication for completed authorization
completed.Authentication.ApplyAuthorization(completed.Authorization)
diff --git a/pkg/controlplane/apiserver/samples/generic/server/server.go b/pkg/controlplane/apiserver/samples/generic/server/server.go
index be6af3f94d471..f4b74d5143ed6 100644
--- a/pkg/controlplane/apiserver/samples/generic/server/server.go
+++ b/pkg/controlplane/apiserver/samples/generic/server/server.go
@@ -133,7 +133,7 @@ func NewOptions() *options.Options {
s.SecureServing.ServerCert.CertDirectory = filepath.Join(wd, ".sample-minimal-controlplane")
// Wire ServiceAccount authentication without relying on pods and nodes.
- s.Authentication.ServiceAccounts.OptionalTokenGetter = genericTokenGetter
+ s.Authentication.ServiceAccounts.OptionalTokenGetter = genericTokenClusterGetter
return s
}
diff --git a/pkg/controlplane/apiserver/samples/generic/server/serviceaccounts_kcp.go b/pkg/controlplane/apiserver/samples/generic/server/serviceaccounts_kcp.go
new file mode 100644
index 0000000000000..ce16964c00bc8
--- /dev/null
+++ b/pkg/controlplane/apiserver/samples/generic/server/serviceaccounts_kcp.go
@@ -0,0 +1,60 @@
+/*
+Copyright 2024 The Kubernetes Authors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package server
+
+import (
+ "github.com/kcp-dev/client-go/informers"
+ kcpv1listers "github.com/kcp-dev/client-go/listers/core/v1"
+ "github.com/kcp-dev/logicalcluster/v3"
+ v1 "k8s.io/api/core/v1"
+ apierrors "k8s.io/apimachinery/pkg/api/errors"
+ "k8s.io/kubernetes/pkg/serviceaccount"
+)
+
+// clientClusterGetter implements serviceaccount.ServiceAccountTokenClusterGetter
+// using cluster-aware listers.
+type clientClusterGetter struct {
+ secretLister kcpv1listers.SecretClusterLister
+ serviceAccountLister kcpv1listers.ServiceAccountClusterLister
+ clusterName logicalcluster.Name
+}
+
+// genericTokenClusterGetter returns a ServiceAccountTokenClusterGetter that
+// does not depend on pods and nodes.
+func genericTokenClusterGetter(factory informers.SharedInformerFactory) serviceaccount.ServiceAccountTokenClusterGetter {
+ return clientClusterGetter{secretLister: factory.Core().V1().Secrets().Lister(), serviceAccountLister: factory.Core().V1().ServiceAccounts().Lister()}
+}
+
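+// Cluster returns a copy of the getter bound to the given logical cluster;
+// the value receiver means each call yields an independent, scoped getter.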
+func (c clientClusterGetter) Cluster(name logicalcluster.Name) serviceaccount.ServiceAccountTokenGetter {
+ c.clusterName = name
+ return c
+}
+
+func (c clientClusterGetter) GetServiceAccount(namespace, name string) (*v1.ServiceAccount, error) {
+ return c.serviceAccountLister.Cluster(c.clusterName).ServiceAccounts(namespace).Get(name)
+}
+
+func (c clientClusterGetter) GetPod(namespace, name string) (*v1.Pod, error) {
+ return nil, apierrors.NewNotFound(v1.Resource("pods"), name)
+}
+
+func (c clientClusterGetter) GetSecret(namespace, name string) (*v1.Secret, error) {
+ return c.secretLister.Cluster(c.clusterName).Secrets(namespace).Get(name)
+}
+
+func (c clientClusterGetter) GetNode(name string) (*v1.Node, error) {
+ return nil, apierrors.NewNotFound(v1.Resource("nodes"), name)
+}
diff --git a/pkg/controlplane/apiserver/server.go b/pkg/controlplane/apiserver/server.go
index c939850e98d15..c45fe1f90c3f7 100644
--- a/pkg/controlplane/apiserver/server.go
+++ b/pkg/controlplane/apiserver/server.go
@@ -22,12 +22,15 @@ import (
"os"
"time"
+ "github.com/kcp-dev/client-go/kubernetes"
+
coordinationapiv1 "k8s.io/api/coordination/v1"
apiv1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/util/runtime"
"k8s.io/apimachinery/pkg/util/uuid"
apiserverfeatures "k8s.io/apiserver/pkg/features"
+ "k8s.io/apiserver/pkg/informerfactoryhack"
peerreconcilers "k8s.io/apiserver/pkg/reconcilers"
genericregistry "k8s.io/apiserver/pkg/registry/generic"
genericapiserver "k8s.io/apiserver/pkg/server"
@@ -35,7 +38,6 @@ import (
serverstorage "k8s.io/apiserver/pkg/server/storage"
utilfeature "k8s.io/apiserver/pkg/util/feature"
clientgoinformers "k8s.io/client-go/informers"
- "k8s.io/client-go/kubernetes"
zpagesfeatures "k8s.io/component-base/zpages/features"
"k8s.io/component-base/zpages/flagz"
"k8s.io/component-base/zpages/statusz"
@@ -137,7 +139,7 @@ func (c completedConfig) New(name string, delegationTarget genericapiserver.Dele
APIResourceConfigSource: c.APIResourceConfigSource,
RESTOptionsGetter: c.Generic.RESTOptionsGetter,
ClusterAuthenticationInfo: c.ClusterAuthenticationInfo,
- VersionedInformers: c.VersionedInformers,
+ VersionedInformers: informerfactoryhack.Wrap(c.VersionedInformers),
}
client, err := kubernetes.NewForConfig(s.GenericAPIServer.LoopbackClientConfig)
@@ -146,7 +148,7 @@ func (c completedConfig) New(name string, delegationTarget genericapiserver.Dele
}
if len(c.SystemNamespaces) > 0 {
s.GenericAPIServer.AddPostStartHookOrDie("start-system-namespaces-controller", func(hookContext genericapiserver.PostStartHookContext) error {
- go systemnamespaces.NewController(c.SystemNamespaces, client, s.VersionedInformers.Core().V1().Namespaces()).Run(hookContext.Done())
+ go systemnamespaces.NewController(c.SystemNamespaces, client.Cluster(LocalAdminCluster.Path()), informerfactoryhack.Unwrap(s.VersionedInformers).Core().V1().Namespaces().Cluster(LocalAdminCluster)).Run(hookContext.Done())
return nil
})
}
@@ -178,11 +180,11 @@ func (c completedConfig) New(name string, delegationTarget genericapiserver.Dele
controller, err := leaderelection.NewController(
leaseInformer,
lcInformer,
- client.CoordinationV1(),
- client.CoordinationV1beta1(),
+ client.Cluster(LocalAdminCluster.Path()).CoordinationV1(),
+ client.Cluster(LocalAdminCluster.Path()).CoordinationV1beta1(),
)
gccontroller := leaderelection.NewLeaseCandidateGC(
- client,
+ client.Cluster(LocalAdminCluster.Path()),
LeaseCandidateGCPeriod,
lcInformer,
)
@@ -202,7 +204,11 @@ func (c completedConfig) New(name string, delegationTarget genericapiserver.Dele
peeraddress,
c.Extra.PeerEndpointLeaseReconciler,
c.Extra.PeerEndpointReconcileInterval,
- client)
+ client.Cluster(LocalAdminCluster.Path()))
+ if err != nil {
+ return nil, fmt.Errorf("failed to create peer endpoint lease controller: %w", err)
+ }
+
s.GenericAPIServer.AddPostStartHookOrDie("peer-endpoint-reconciler-controller",
func(hookContext genericapiserver.PostStartHookContext) error {
peerEndpointCtrl.Start(hookContext.Done())
@@ -235,7 +241,8 @@ func (c completedConfig) New(name string, delegationTarget genericapiserver.Dele
}
s.GenericAPIServer.AddPostStartHookOrDie("start-cluster-authentication-info-controller", func(hookContext genericapiserver.PostStartHookContext) error {
- controller := clusterauthenticationtrust.NewClusterAuthenticationTrustController(s.ClusterAuthenticationInfo, client)
+ controller := clusterauthenticationtrust.NewClusterAuthenticationTrustController(s.ClusterAuthenticationInfo, client.Cluster(LocalAdminCluster.Path()))
+
// prime values and start listeners
if s.ClusterAuthenticationInfo.ClientCA != nil {
s.ClusterAuthenticationInfo.ClientCA.AddListener(controller)
@@ -271,7 +278,7 @@ func (c completedConfig) New(name string, delegationTarget genericapiserver.Dele
// must replace ':,[]' in [ip:port] to be able to store this as a valid label value
controller := lease.NewController(
clock.RealClock{},
- client,
+ client.Cluster(LocalAdminCluster.Path()),
holderIdentity,
int32(IdentityLeaseDurationSeconds),
nil,
@@ -286,7 +293,7 @@ func (c completedConfig) New(name string, delegationTarget genericapiserver.Dele
// TODO: move this into generic apiserver and make the lease identity value configurable
s.GenericAPIServer.AddPostStartHookOrDie("start-kube-apiserver-identity-lease-garbage-collector", func(hookContext genericapiserver.PostStartHookContext) error {
go apiserverleasegc.NewAPIServerLeaseGC(
- client,
+ client.Cluster(LocalAdminCluster.Path()),
IdentityLeaseGCPeriod,
metav1.NamespaceSystem,
IdentityLeaseComponentLabelKey+"="+name,
@@ -300,7 +307,7 @@ func (c completedConfig) New(name string, delegationTarget genericapiserver.Dele
}
s.GenericAPIServer.AddPostStartHookOrDie("start-legacy-token-tracking-controller", func(hookContext genericapiserver.PostStartHookContext) error {
- go legacytokentracking.NewController(client).Run(hookContext.Done())
+ go legacytokentracking.NewController(client.Cluster(LocalAdminCluster.Path())).Run(hookContext.Done())
return nil
})
diff --git a/pkg/controlplane/instance.go b/pkg/controlplane/instance.go
index 83a8cf61db40e..6347320b7051f 100644
--- a/pkg/controlplane/instance.go
+++ b/pkg/controlplane/instance.go
@@ -349,7 +349,7 @@ func (c CompletedConfig) New(delegationTarget genericapiserver.DelegationTarget)
ServicePort: c.Extra.APIServerServicePort,
PublicServicePort: publicServicePort,
KubernetesServiceNodePort: c.Extra.KubernetesServiceNodePort,
- }, client, c.ControlPlane.Extra.VersionedInformers.Core().V1().Services())
+ }, client, c.ControlPlane.Extra.VersionedInformers.Core().V1().Services().Cluster(controlplaneapiserver.LocalAdminCluster))
s.ControlPlane.GenericAPIServer.AddPostStartHookOrDie("bootstrap-controller", func(hookContext genericapiserver.PostStartHookContext) error {
kubernetesServiceCtrl.Start(hookContext.Done())
return nil
diff --git a/pkg/features/kube_features.go b/pkg/features/kube_features.go
index c942dfea9a10f..ad41a6252dec2 100644
--- a/pkg/features/kube_features.go
+++ b/pkg/features/kube_features.go
@@ -990,6 +990,13 @@ const (
// operation when scheduling a Pod by setting the `metadata.labels` field on the submitted Binding,
// similar to how `metadata.annotations` behaves.
PodTopologyLabelsAdmission featuregate.Feature = "PodTopologyLabelsAdmission"
+
+ // TODO(cnvergence): Remove when not applicable
+ // owner: @cnvergence
+ // alpha: v1.31
+ //
+ // GlobalServiceAccount is a feature gate that enables the cross-workspace service accounts feature.
+ GlobalServiceAccount featuregate.Feature = "GlobalServiceAccount"
)
// defaultVersionedKubernetesFeatureGates consists of all known Kubernetes-specific feature keys with VersionedSpecs.
@@ -1877,6 +1884,10 @@ var defaultVersionedKubernetesFeatureGates = map[featuregate.Feature]featuregate
DisableCPUQuotaWithExclusiveCPUs: {
{Version: version.MustParse("1.33"), Default: true, PreRelease: featuregate.Beta},
},
+
+ GlobalServiceAccount: {
+ {Version: version.MustParse("1.31"), Default: false, PreRelease: featuregate.Alpha},
+ },
}
func init() {
diff --git a/pkg/kubeapiserver/authenticator/config.go b/pkg/kubeapiserver/authenticator/config.go
index 2e1b78b80bdfc..5913036804843 100644
--- a/pkg/kubeapiserver/authenticator/config.go
+++ b/pkg/kubeapiserver/authenticator/config.go
@@ -23,6 +23,8 @@ import (
"sync/atomic"
"time"
+ typedv1core "github.com/kcp-dev/client-go/kubernetes/typed/core/v1"
+
utilerrors "k8s.io/apimachinery/pkg/util/errors"
utilnet "k8s.io/apimachinery/pkg/util/net"
"k8s.io/apimachinery/pkg/util/wait"
@@ -43,7 +45,6 @@ import (
webhookutil "k8s.io/apiserver/pkg/util/webhook"
"k8s.io/apiserver/plugin/pkg/authenticator/token/oidc"
"k8s.io/apiserver/plugin/pkg/authenticator/token/webhook"
- typedv1core "k8s.io/client-go/kubernetes/typed/core/v1"
"k8s.io/kube-openapi/pkg/spec3"
"k8s.io/kube-openapi/pkg/validation/spec"
@@ -83,8 +84,8 @@ type Config struct {
// ServiceAccountPublicKeysGetter returns public keys for verifying service account tokens.
ServiceAccountPublicKeysGetter serviceaccount.PublicKeysGetter
// ServiceAccountTokenGetter fetches API objects used to verify bound objects in service account token claims.
- ServiceAccountTokenGetter serviceaccount.ServiceAccountTokenGetter
- SecretsWriter typedv1core.SecretsGetter
+ ServiceAccountTokenGetter serviceaccount.ServiceAccountTokenClusterGetter
+ SecretsWriter typedv1core.SecretClusterInterface
BootstrapTokenAuthenticator authenticator.Token
// ClientCAContentProvider are the options for verifying incoming connections using mTLS and directly assigning to users.
// Generally this is the CA bundle file used to authenticate client certificates
@@ -336,7 +337,7 @@ func newAuthenticatorFromTokenFile(tokenAuthFile string) (authenticator.Token, e
}
// newLegacyServiceAccountAuthenticator returns an authenticator.Token or an error
-func newLegacyServiceAccountAuthenticator(publicKeysGetter serviceaccount.PublicKeysGetter, lookup bool, apiAudiences authenticator.Audiences, serviceAccountGetter serviceaccount.ServiceAccountTokenGetter, secretsWriter typedv1core.SecretsGetter) (authenticator.Token, error) {
+func newLegacyServiceAccountAuthenticator(publicKeysGetter serviceaccount.PublicKeysGetter, lookup bool, apiAudiences authenticator.Audiences, serviceAccountGetter serviceaccount.ServiceAccountTokenClusterGetter, secretsWriter typedv1core.SecretClusterInterface) (authenticator.Token, error) {
if publicKeysGetter == nil {
return nil, fmt.Errorf("no public key getter provided")
}
@@ -350,7 +351,7 @@ func newLegacyServiceAccountAuthenticator(publicKeysGetter serviceaccount.Public
}
// newServiceAccountAuthenticator returns an authenticator.Token or an error
-func newServiceAccountAuthenticator(issuers []string, publicKeysGetter serviceaccount.PublicKeysGetter, apiAudiences authenticator.Audiences, serviceAccountGetter serviceaccount.ServiceAccountTokenGetter) (authenticator.Token, error) {
+func newServiceAccountAuthenticator(issuers []string, publicKeysGetter serviceaccount.PublicKeysGetter, apiAudiences authenticator.Audiences, serviceAccountGetter serviceaccount.ServiceAccountTokenClusterGetter) (authenticator.Token, error) {
if publicKeysGetter == nil {
return nil, fmt.Errorf("no public key getter provided")
}
diff --git a/pkg/kubeapiserver/options/authentication.go b/pkg/kubeapiserver/options/authentication.go
index 72bfdd8bd1182..d2b43c17ed248 100644
--- a/pkg/kubeapiserver/options/authentication.go
+++ b/pkg/kubeapiserver/options/authentication.go
@@ -27,6 +27,8 @@ import (
"sync"
"time"
+ kcpinformers "github.com/kcp-dev/client-go/informers"
+ kcpkubernetesclientset "github.com/kcp-dev/client-go/kubernetes"
"github.com/spf13/pflag"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
@@ -47,9 +49,6 @@ import (
authenticationconfigmetrics "k8s.io/apiserver/pkg/server/options/authenticationconfig/metrics"
utilfeature "k8s.io/apiserver/pkg/util/feature"
"k8s.io/apiserver/plugin/pkg/authenticator/token/oidc"
- "k8s.io/client-go/informers"
- "k8s.io/client-go/kubernetes"
- v1listers "k8s.io/client-go/listers/core/v1"
"k8s.io/client-go/util/keyutil"
cliflag "k8s.io/component-base/cli/flag"
"k8s.io/klog/v2"
@@ -137,7 +136,7 @@ type ServiceAccountAuthenticationOptions struct {
MaxExtendedExpiration time.Duration
// OptionalTokenGetter is a function that returns a service account token getter.
// If not set, the default token getter will be used.
- OptionalTokenGetter func(factory informers.SharedInformerFactory) serviceaccount.ServiceAccountTokenGetter
+ OptionalTokenGetter func(factory kcpinformers.SharedInformerFactory) serviceaccount.ServiceAccountTokenClusterGetter
// ExternalPublicKeysGetter gets set if `--service-account-signing-endpoint` is passed.
// ExternalPublicKeysGetter is mutually exclusive with KeyFiles.
ExternalPublicKeysGetter serviceaccount.PublicKeysGetter
@@ -652,8 +651,8 @@ func (o *BuiltInAuthenticationOptions) ApplyTo(
egressSelector *egressselector.EgressSelector,
openAPIConfig *openapicommon.Config,
openAPIV3Config *openapicommon.OpenAPIV3Config,
- extclient kubernetes.Interface,
- versionedInformer informers.SharedInformerFactory,
+ extclient kcpkubernetesclientset.ClusterInterface,
+ versionedInformer kcpinformers.SharedInformerFactory,
apiServerID string) error {
if o == nil {
return nil
@@ -689,24 +688,18 @@ func (o *BuiltInAuthenticationOptions) ApplyTo(
if o.ServiceAccounts != nil && o.ServiceAccounts.OptionalTokenGetter != nil {
authenticatorConfig.ServiceAccountTokenGetter = o.ServiceAccounts.OptionalTokenGetter(versionedInformer)
} else {
- var nodeLister v1listers.NodeLister
- if utilfeature.DefaultFeatureGate.Enabled(features.ServiceAccountTokenNodeBindingValidation) {
- nodeLister = versionedInformer.Core().V1().Nodes().Lister()
- }
-
- authenticatorConfig.ServiceAccountTokenGetter = serviceaccountcontroller.NewGetterFromClient(
+ authenticatorConfig.ServiceAccountTokenGetter = serviceaccountcontroller.NewClusterGetterFromClient(
extclient,
versionedInformer.Core().V1().Secrets().Lister(),
versionedInformer.Core().V1().ServiceAccounts().Lister(),
- versionedInformer.Core().V1().Pods().Lister(),
- nodeLister,
)
}
- authenticatorConfig.SecretsWriter = extclient.CoreV1()
+ authenticatorConfig.SecretsWriter = extclient.CoreV1().Secrets()
if authenticatorConfig.BootstrapToken {
authenticatorConfig.BootstrapTokenAuthenticator = bootstrap.NewTokenAuthenticator(
- versionedInformer.Core().V1().Secrets().Lister().Secrets(metav1.NamespaceSystem),
+ // TODO(sttts): make it possible to reference LocalAdminCluster here without import cycle
+			versionedInformer.Core().V1().Secrets().Lister().Cluster("system:admin").Secrets(metav1.NamespaceSystem), // TODO(kcp): should this be a cluster-scoped lister?
)
}
diff --git a/pkg/registry/rbac/rest/storage_rbac.go b/pkg/registry/rbac/rest/storage_rbac.go
index 16502c476c27f..726c83226ecbb 100644
--- a/pkg/registry/rbac/rest/storage_rbac.go
+++ b/pkg/registry/rbac/rest/storage_rbac.go
@@ -22,6 +22,8 @@ import (
"time"
"k8s.io/klog/v2"
+ kcpkubernetesclientset "github.com/kcp-dev/client-go/kubernetes"
+ "github.com/kcp-dev/logicalcluster/v3"
rbacapiv1 "k8s.io/api/rbac/v1"
apierrors "k8s.io/apimachinery/pkg/api/errors"
@@ -164,11 +166,13 @@ func (p *PolicyData) EnsureRBACPolicy() genericapiserver.PostStartHookFunc {
// initializing roles is really important. On some e2e runs, we've seen cases where etcd is down when the server
// starts, the roles don't initialize, and nothing works.
err := wait.Poll(1*time.Second, 30*time.Second, func() (done bool, err error) {
- client, err := clientset.NewForConfig(hookContext.LoopbackClientConfig)
+ clientClusterGetter, err := kcpkubernetesclientset.NewForConfig(hookContext.LoopbackClientConfig)
if err != nil {
utilruntime.HandleError(fmt.Errorf("unable to initialize client set: %v", err))
return false, nil
}
+ // TODO(sttts): make it possible to reference LocalAdminCluster here without import cycle
+ client := clientClusterGetter.Cluster(logicalcluster.Name("system:admin").Path())
return ensureRBACPolicy(p, client)
})
// if we're never able to make it through initialization, kill the API server
diff --git a/pkg/registry/rbac/validation/kcp.go b/pkg/registry/rbac/validation/kcp.go
new file mode 100644
index 0000000000000..967f671c90b80
--- /dev/null
+++ b/pkg/registry/rbac/validation/kcp.go
@@ -0,0 +1,281 @@
+package validation
+
+import (
+ "context"
+ "fmt"
+ "strings"
+
+ "github.com/kcp-dev/logicalcluster/v3"
+ rbacv1 "k8s.io/api/rbac/v1"
+ "k8s.io/apimachinery/pkg/util/json"
+ "k8s.io/apimachinery/pkg/util/sets"
+ authserviceaccount "k8s.io/apiserver/pkg/authentication/serviceaccount"
+ "k8s.io/apiserver/pkg/authentication/user"
+ genericapirequest "k8s.io/apiserver/pkg/endpoints/request"
+ utilfeature "k8s.io/apiserver/pkg/util/feature"
+ "k8s.io/kubernetes/pkg/features"
+)
+
+const (
+ // WarrantExtraKey is the key used in a user's "extra" to specify
+ // JSON-encoded user infos for attached extra permissions for that user
+ // evaluated by the authorizer.
+ WarrantExtraKey = "authorization.kcp.io/warrant"
+
+ // ScopeExtraKey is the key used in a user's "extra" to specify
+ // that the user is restricted to a given scope. Valid values for
+ // one extra value are:
+	// - "cluster:<name>"
+	// - "cluster:<name1>,cluster:<name2>"
+ // - etc.
+ // The clusters in one extra value are or'ed, multiple extra values
+ // are and'ed.
+ ScopeExtraKey = "authentication.kcp.io/scopes"
+
+	// clusterPrefix is the prefix for cluster scopes.
+ clusterPrefix = "cluster:"
+)
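+
+// Illustrative example of the scope semantics above (not an API guarantee):
+// a user whose "extra" carries
+//
+//	authentication.kcp.io/scopes = ["cluster:a,cluster:b", "cluster:a"]
+//
+// is in scope for cluster "a" (every value matches) but out of scope for
+// cluster "b" (the second value does not match).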
+
+// Warrant is serialized into the user's "extra" field authorization.kcp.io/warrant
+// to hold user information for extra permissions.
+type Warrant struct {
+ // User is the user you're testing for.
+	// If you specify "User" but not "Groups", then it is interpreted as
+	// "What if User were not a member of any groups?".
+ // +optional
+ User string `json:"user,omitempty"`
+ // Groups is the groups you're testing for.
+ // +optional
+ // +listType=atomic
+ Groups []string `json:"groups,omitempty"`
+ // Extra corresponds to the user.Info.GetExtra() method from the authenticator. Since that is input to the authorizer
+ // it needs a reflection here.
+ // +optional
+ Extra map[string][]string `json:"extra,omitempty"`
+ // UID information about the requesting user.
+ // +optional
+ UID string `json:"uid,omitempty"`
+}
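+
+// An illustrative warrant value, as it could appear under the
+// "authorization.kcp.io/warrant" extra key (the identity here is made up):
+//
+//	{"user":"bob","groups":["developers"]}
+//
+// Service-account warrants must additionally carry a logical cluster in
+// their extra field; otherwise they are skipped during evaluation.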
+
+type appliesToUserFunc func(user user.Info, subject rbacv1.Subject, namespace string) bool
+type appliesToUserFuncCtx func(ctx context.Context, user user.Info, subject rbacv1.Subject, namespace string) bool
+
+var appliesToUserWithScopedAndWarrants = withScopesAndWarrants(appliesToUser)
+
+// withScopesAndWarrants flattens the warrants, applies scopes and then applies the users to the subjects.
+func withScopesAndWarrants(appliesToUser appliesToUserFunc) appliesToUserFuncCtx {
+ return func(ctx context.Context, u user.Info, bindingSubject rbacv1.Subject, namespace string) bool {
+ var clusterName logicalcluster.Name
+ if cluster := genericapirequest.ClusterFrom(ctx); cluster != nil {
+ clusterName = cluster.Name
+ }
+
+ for _, eu := range EffectiveUsers(clusterName, u) {
+ if appliesToUser(eu, bindingSubject, namespace) {
+ return true
+ }
+ }
+
+ return false
+ }
+}
+
+var (
+ authenticated = &user.DefaultInfo{Name: user.Anonymous, Groups: []string{user.AllAuthenticated}}
+ unauthenticated = &user.DefaultInfo{Name: user.Anonymous, Groups: []string{user.AllUnauthenticated}}
+)
+
+// EffectiveUsers flattens the warrants and scopes each user to the given cluster.
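+//
+// Illustrative behavior: for a user scoped to cluster "a" that carries a
+// warrant for "bob" valid in cluster "b", evaluation in cluster "b" yields
+// "bob" plus a synthetic anonymous user standing in for the out-of-scope
+// top-level identity.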
+func EffectiveUsers(clusterName logicalcluster.Name, u user.Info) []user.Info {
+ ret := make([]user.Info, 0, 2)
+
+ var wantAuthenticated bool
+ var wantUnauthenticated bool
+ globalsa := utilfeature.DefaultFeatureGate.Enabled(features.GlobalServiceAccount)
+
+ var recursive func(u user.Info)
+ recursive = func(u user.Info) {
+ if IsInScope(u, clusterName) {
+ ret = append(ret, u)
+ } else {
+ found := false
+ for _, g := range u.GetGroups() {
+ if g == user.AllAuthenticated {
+ found = true
+ break
+ }
+ }
+ wantAuthenticated = wantAuthenticated || found
+ wantUnauthenticated = wantUnauthenticated || !found
+ }
+
+ if IsServiceAccount(u) && globalsa {
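+			// (kcp) For example, "system:serviceaccount:ns:sa" in logical
+			// cluster "root" is rewritten to the global form
+			// "system:kcp:serviceaccount:root:ns:sa".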
+ if clusters := u.GetExtra()[authserviceaccount.ClusterNameKey]; len(clusters) == 1 {
+ nsNameSuffix := strings.TrimPrefix(u.GetName(), "system:serviceaccount:")
+ rewritten := &user.DefaultInfo{
+ Name: fmt.Sprintf("system:kcp:serviceaccount:%s:%s", clusters[0], nsNameSuffix),
+ Extra: u.GetExtra(),
+ }
+ for _, g := range u.GetGroups() {
+ if g == user.AllAuthenticated {
+ rewritten.Groups = []string{user.AllAuthenticated}
+ break
+ }
+ }
+ ret = append(ret, rewritten)
+ }
+ }
+
+ for _, v := range u.GetExtra()[WarrantExtraKey] {
+ var w Warrant
+ if err := json.Unmarshal([]byte(v), &w); err != nil {
+ continue
+ }
+
+ wu := &user.DefaultInfo{
+ Name: w.User,
+ UID: w.UID,
+ Groups: w.Groups,
+ Extra: w.Extra,
+ }
+ if IsServiceAccount(wu) && len(w.Extra[authserviceaccount.ClusterNameKey]) == 0 {
+ // warrants must be scoped to a cluster
+ continue
+ }
+ recursive(wu)
+ }
+ }
+ recursive(u)
+
+ if wantAuthenticated {
+ ret = append(ret, authenticated)
+ }
+ if wantUnauthenticated {
+ ret = append(ret, unauthenticated)
+ }
+
+ return ret
+}
+
+// IsServiceAccount returns true if the user is a service account.
+func IsServiceAccount(attr user.Info) bool {
+ return strings.HasPrefix(attr.GetName(), "system:serviceaccount:")
+}
+
+// IsForeign returns true if the service account is not from the given cluster.
+func IsForeign(attr user.Info, cluster logicalcluster.Name) bool {
+ clusters := attr.GetExtra()[authserviceaccount.ClusterNameKey]
+ switch {
+ case len(clusters) == 0:
+ // an unqualified service account is considered local: think of some
+ // local SubjectAccessReview specifying a service account without the
+ // cluster scope.
+ return false
+ case len(clusters) != 1:
+ return true
+ default:
+ return !sets.New(clusters...).Has(string(cluster))
+ }
+}
+
+// IsInScope checks if the user is valid for the given cluster.
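+// For example, a service-account user whose extra carries a cluster name
+// different from the given cluster is out of scope even when no explicit
+// scopes are set.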
+func IsInScope(attr user.Info, cluster logicalcluster.Name) bool {
+ if IsServiceAccount(attr) && IsForeign(attr, cluster) {
+ return false
+ }
+
+ values := attr.GetExtra()[ScopeExtraKey]
+ for _, scopes := range values {
+ found := false
+ for _, scope := range strings.Split(scopes, ",") {
+ if strings.HasPrefix(scope, clusterPrefix) && scope[len(clusterPrefix):] == string(cluster) {
+ found = true
+ break
+ }
+ }
+ if !found {
+ return false
+ }
+ }
+
+ return true
+}
+
+// EffectiveGroups returns the effective groups of the user in the given context
+// taking scopes and warrants into account.
+func EffectiveGroups(ctx context.Context, u user.Info) sets.Set[string] {
+ var clusterName logicalcluster.Name
+ if cluster := genericapirequest.ClusterFrom(ctx); cluster != nil {
+ clusterName = cluster.Name
+ }
+
+ eus := EffectiveUsers(clusterName, u)
+ groups := sets.New[string]()
+ for _, eu := range eus {
+ groups.Insert(eu.GetGroups()...)
+ }
+
+ return groups
+}
+
+// PrefixUser returns a new user with the name and groups prefixed with the
+// given prefix, and all warrants recursively prefixed.
+//
+// If the user is a service account, the prefix is added to the global service
+// account name.
+//
+// Invalid warrants are skipped.
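+//
+// Illustrative example (the prefix is made up): prefixing user "alice" with
+// groups ["devs"] by "system:kcp:external:" yields name
+// "system:kcp:external:alice" and groups ["system:kcp:external:devs"], with
+// any warrants prefixed recursively.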
+func PrefixUser(u user.Info, prefix string) user.Info {
+ pu := &user.DefaultInfo{
+ Name: prefix + u.GetName(),
+ UID: u.GetUID(),
+ }
+ if IsServiceAccount(u) {
+ if clusters := u.GetExtra()[authserviceaccount.ClusterNameKey]; len(clusters) != 1 {
+			// This should not happen, but if it does, be defensive.
+ for _, g := range u.GetGroups() {
+ if g == user.AllAuthenticated {
+ return &user.DefaultInfo{Name: prefix + user.Anonymous, Groups: []string{prefix + user.AllAuthenticated}}
+ }
+ }
+ return &user.DefaultInfo{Name: prefix + user.Anonymous, Groups: []string{prefix + user.AllUnauthenticated}}
+ } else {
+ pu.Name = fmt.Sprintf("%ssystem:kcp:serviceaccount:%s:%s", prefix, clusters[0], strings.TrimPrefix(u.GetName(), "system:serviceaccount:"))
+ }
+ }
+
+ for _, g := range u.GetGroups() {
+ pu.Groups = append(pu.Groups, prefix+g)
+ }
+
+ for k, v := range u.GetExtra() {
+ if k == WarrantExtraKey {
+ continue
+ }
+ if pu.Extra == nil {
+ pu.Extra = map[string][]string{}
+ }
+ pu.Extra[k] = v
+ }
+
+ for _, w := range u.GetExtra()[WarrantExtraKey] {
+ var warrant Warrant
+ if err := json.Unmarshal([]byte(w), &warrant); err != nil {
+ continue // skip invalid warrant
+ }
+
+ wpu := PrefixUser(&user.DefaultInfo{Name: warrant.User, UID: warrant.UID, Groups: warrant.Groups, Extra: warrant.Extra}, prefix)
+ warrant = Warrant{User: wpu.GetName(), UID: wpu.GetUID(), Groups: wpu.GetGroups(), Extra: wpu.GetExtra()}
+
+ bs, err := json.Marshal(warrant)
+ if err != nil {
+ continue // skip invalid warrant
+ }
+
+ if pu.Extra == nil {
+ pu.Extra = map[string][]string{}
+ }
+ pu.Extra[WarrantExtraKey] = append(pu.Extra[WarrantExtraKey], string(bs))
+ }
+
+ return pu
+}
diff --git a/pkg/registry/rbac/validation/kcp_test.go b/pkg/registry/rbac/validation/kcp_test.go
new file mode 100644
index 0000000000000..7eaf73c200526
--- /dev/null
+++ b/pkg/registry/rbac/validation/kcp_test.go
@@ -0,0 +1,489 @@
+package validation
+
+import (
+ "context"
+ "testing"
+
+ "github.com/google/go-cmp/cmp"
+ "github.com/kcp-dev/logicalcluster/v3"
+ rbacv1 "k8s.io/api/rbac/v1"
+ "k8s.io/apimachinery/pkg/util/sets"
+ authserviceaccount "k8s.io/apiserver/pkg/authentication/serviceaccount"
+ "k8s.io/apiserver/pkg/authentication/user"
+ "k8s.io/apiserver/pkg/endpoints/request"
+)
+
+func TestIsInScope(t *testing.T) {
+ tests := []struct {
+ name string
+ info user.DefaultInfo
+ cluster logicalcluster.Name
+ want bool
+ }{
+ {name: "empty", cluster: logicalcluster.Name("cluster"), want: true},
+ {
+ name: "empty scope",
+ info: user.DefaultInfo{Extra: map[string][]string{"authentication.kcp.io/scopes": {""}}},
+ cluster: logicalcluster.Name("cluster"),
+ want: false,
+ },
+ {
+ name: "scoped user",
+ info: user.DefaultInfo{Extra: map[string][]string{"authentication.kcp.io/scopes": {"cluster:this"}}},
+ cluster: logicalcluster.Name("this"),
+ want: true,
+ },
+ {
+ name: "scoped user to a different cluster",
+ info: user.DefaultInfo{Extra: map[string][]string{"authentication.kcp.io/scopes": {"cluster:another"}}},
+ cluster: logicalcluster.Name("this"),
+ want: false,
+ },
+ {
+ name: "contradicting scopes",
+ info: user.DefaultInfo{Extra: map[string][]string{"authentication.kcp.io/scopes": {"cluster:this", "cluster:another"}}},
+ cluster: logicalcluster.Name("this"),
+ want: false,
+ },
+ {
+ name: "empty contradicting value",
+ info: user.DefaultInfo{Extra: map[string][]string{"authentication.kcp.io/scopes": {"", "cluster:this"}}},
+ cluster: logicalcluster.Name("cluster"),
+ want: false,
+ },
+ {
+ name: "unknown scope",
+ info: user.DefaultInfo{Extra: map[string][]string{"authentication.kcp.io/scopes": {"unknown:foo"}}},
+ cluster: logicalcluster.Name("this"),
+ want: false,
+ },
+ {
+ name: "another or'ed scope",
+ info: user.DefaultInfo{Extra: map[string][]string{"authentication.kcp.io/scopes": {"cluster:another,cluster:this"}}},
+ cluster: logicalcluster.Name("this"),
+ want: true,
+ },
+ {
+ name: "multiple or'ed scopes",
+ info: user.DefaultInfo{Extra: map[string][]string{"authentication.kcp.io/scopes": {"cluster:another,cluster:this", "cluster:this,cluster:other"}}},
+ cluster: logicalcluster.Name("this"),
+ want: true,
+ },
+ {
+ name: "multiple wrong or'ed scopes",
+ info: user.DefaultInfo{Extra: map[string][]string{"authentication.kcp.io/scopes": {"cluster:another,cluster:other"}}},
+ cluster: logicalcluster.Name("this"),
+ want: false,
+ },
+ {
+ name: "multiple or'ed scopes that contradict eachother",
+ info: user.DefaultInfo{Extra: map[string][]string{"authentication.kcp.io/scopes": {"cluster:this,cluster:other", "cluster:another,cluster:jungle"}}},
+ cluster: logicalcluster.Name("this"),
+ want: false,
+ },
+ {
+ name: "or'ed empty scope",
+ info: user.DefaultInfo{Extra: map[string][]string{"authentication.kcp.io/scopes": {",,cluster:this"}}},
+ cluster: logicalcluster.Name("this"),
+ want: true,
+ },
+ {
+ name: "serviceaccount from other cluster",
+ info: user.DefaultInfo{Name: "system:serviceaccount:default:foo", Extra: map[string][]string{"authentication.kubernetes.io/cluster-name": {"anotherws"}}},
+ cluster: logicalcluster.Name("this"),
+ want: false,
+ },
+ {
+ name: "serviceaccount from same cluster",
+ info: user.DefaultInfo{Name: "system:serviceaccount:default:foo", Extra: map[string][]string{"authentication.kubernetes.io/cluster-name": {"this"}}},
+ cluster: logicalcluster.Name("this"),
+ want: true,
+ },
+ {
+ name: "serviceaccount without a cluster",
+ info: user.DefaultInfo{Name: "system:serviceaccount:default:foo"},
+ cluster: logicalcluster.Name("this"),
+ // an unqualified service account is considered local: think of some
+ // local SubjectAccessReview specifying a service account without the
+ // cluster scope.
+ want: true,
+ },
+ {
+ name: "scoped service account",
+ info: user.DefaultInfo{Name: "system:serviceaccount:default:foo", Extra: map[string][]string{
+ "authentication.kubernetes.io/cluster-name": {"this"},
+ "authentication.kcp.io/scopes": {"cluster:this"},
+ }},
+ cluster: logicalcluster.Name("this"),
+ want: true,
+ },
+ {
+ name: "scoped foreign service account",
+ info: user.DefaultInfo{Name: "system:serviceaccount:default:foo", Extra: map[string][]string{
+ "authentication.kubernetes.io/cluster-name": {"another"},
+ "authentication.kcp.io/scopes": {"cluster:this"},
+ }},
+ cluster: logicalcluster.Name("this"),
+ want: false,
+ },
+ {
+ name: "scoped service account to another clusters",
+ info: user.DefaultInfo{Name: "system:serviceaccount:default:foo", Extra: map[string][]string{
+ "authentication.kubernetes.io/cluster-name": {"this"},
+ "authentication.kcp.io/scopes": {"cluster:another"},
+ }},
+ want: false,
+ },
+ }
+ for _, tt := range tests {
+ t.Run(tt.name, func(t *testing.T) {
+ if got := IsInScope(&tt.info, tt.cluster); got != tt.want {
+ t.Errorf("IsInScope() = %v, want %v", got, tt.want)
+ }
+ })
+ }
+}
+
+func TestAppliesToUserWithWarrantsAndScopes(t *testing.T) {
+ tests := []struct {
+ name string
+ user user.Info
+ sub rbacv1.Subject
+ want bool
+ }{
+ // base cases
+ {
+ name: "simple matching user",
+ user: &user.DefaultInfo{Name: "user-a"},
+ sub: rbacv1.Subject{Kind: "User", Name: "user-a"},
+ want: true,
+ },
+ {
+ name: "simple non-matching user",
+ user: &user.DefaultInfo{Name: "user-a"},
+ sub: rbacv1.Subject{Kind: "User", Name: "user-b"},
+ want: false,
+ },
+
+ // warrants
+ {
+ name: "simple matching user with warrants",
+ user: &user.DefaultInfo{Name: "user-a", Extra: map[string][]string{WarrantExtraKey: {`{"user":"user-b"}`}}},
+ sub: rbacv1.Subject{Kind: "User", Name: "user-a"},
+ want: true,
+ },
+ {
+ name: "simple non-matching user with matching warrants",
+ user: &user.DefaultInfo{Name: "user-b", Extra: map[string][]string{WarrantExtraKey: {`{"user":"user-a"}`}}},
+ sub: rbacv1.Subject{Kind: "User", Name: "user-a"},
+ want: true,
+ },
+ {
+ name: "simple non-matching user with non-matching warrants",
+ user: &user.DefaultInfo{Name: "user-b", Extra: map[string][]string{WarrantExtraKey: {`{"user":"user-b"}`}}},
+ sub: rbacv1.Subject{Kind: "User", Name: "user-a"},
+ want: false,
+ },
+ {
+ name: "simple non-matching user with multiple warrants",
+ user: &user.DefaultInfo{Name: "user-b", Extra: map[string][]string{WarrantExtraKey: {`{"user":"user-b"}`, `{"user":"user-a"}`, `{"user":"user-c"}`}}},
+ sub: rbacv1.Subject{Kind: "User", Name: "user-a"},
+ want: true,
+ },
+ {
+ name: "simple non-matching user with nested warrants",
+ user: &user.DefaultInfo{Name: "user-b", Extra: map[string][]string{WarrantExtraKey: {`{"user":"user-b","extra":{"authorization.kcp.io/warrant":["{\"user\":\"user-a\"}"]}}`}}},
+ sub: rbacv1.Subject{Kind: "User", Name: "user-a"},
+ want: true,
+ },
+
+ // non-cluster-aware service accounts
+ {
+ name: "non-cluster-aware service account",
+ user: &user.DefaultInfo{Name: "system:serviceaccount:ns:sa"},
+ sub: rbacv1.Subject{Kind: "ServiceAccount", Namespace: "ns", Name: "sa"},
+ want: true,
+ },
+ {
+ name: "non-cluster-aware service account with this scope",
+ user: &user.DefaultInfo{Name: "system:serviceaccount:ns:sa", Extra: map[string][]string{"authentication.kcp.io/scopes": {"cluster:this"}}},
+ sub: rbacv1.Subject{Kind: "ServiceAccount", Namespace: "ns", Name: "sa"},
+ want: true,
+ },
+ {
+ name: "non-cluster-aware service account with other scope",
+ user: &user.DefaultInfo{Name: "system:serviceaccount:ns:sa", Extra: map[string][]string{"authentication.kcp.io/scopes": {"cluster:other"}}},
+ sub: rbacv1.Subject{Kind: "ServiceAccount", Namespace: "ns", Name: "sa"},
+ want: false,
+ },
+ {
+ name: "non-cluster-aware service account as warrant",
+ user: &user.DefaultInfo{Name: "user-b", Extra: map[string][]string{WarrantExtraKey: {`{"user":"system:serviceaccount:ns:sa"}`}}},
+ sub: rbacv1.Subject{Kind: "ServiceAccount", Namespace: "ns", Name: "sa"},
+ want: false,
+ },
+
+ // service accounts with cluster
+ {
+ name: "local service account",
+ user: &user.DefaultInfo{Name: "system:serviceaccount:ns:sa", Extra: map[string][]string{"authentication.kubernetes.io/cluster-name": {"this"}}},
+ sub: rbacv1.Subject{Kind: "ServiceAccount", Namespace: "ns", Name: "sa"},
+ want: true,
+ },
+ {
+ name: "foreign service account",
+ user: &user.DefaultInfo{Name: "system:serviceaccount:ns:sa", Extra: map[string][]string{"authentication.kubernetes.io/cluster-name": {"other"}}},
+ sub: rbacv1.Subject{Kind: "ServiceAccount", Namespace: "ns", Name: "sa"},
+ want: false,
+ },
+ {
+ name: "foreign service account with local warrant",
+ user: &user.DefaultInfo{Name: "system:serviceaccount:ns:sa", Extra: map[string][]string{"authentication.kubernetes.io/cluster-name": {"other"}, WarrantExtraKey: {`{"user":"system:serviceaccount:ns:sa","extra":{"authentication.kubernetes.io/cluster-name":["this"]}}`}}},
+ sub: rbacv1.Subject{Kind: "ServiceAccount", Namespace: "ns", Name: "sa"},
+ want: true,
+ },
+ {
+ name: "foreign service account with foreign warrant",
+ user: &user.DefaultInfo{Name: "system:serviceaccount:ns:sa", Extra: map[string][]string{"authentication.kubernetes.io/cluster-name": {"other"}, WarrantExtraKey: {`{"user":"system:serviceaccount:ns:sa","extra":{"authentication.kubernetes.io/cluster-name":["other"]}}`}}},
+ sub: rbacv1.Subject{Kind: "ServiceAccount", Namespace: "ns", Name: "sa"},
+ want: false,
+ },
+ {
+ name: "local service account with multiple clusters",
+ user: &user.DefaultInfo{Name: "system:serviceaccount:ns:sa", Extra: map[string][]string{"authentication.kubernetes.io/cluster-name": {"this", "this"}}},
+ sub: rbacv1.Subject{Kind: "ServiceAccount", Namespace: "ns", Name: "sa"},
+ want: false,
+ },
+ {
+ name: "out-of-scope local service account",
+ user: &user.DefaultInfo{Name: "system:serviceaccount:ns:sa", Extra: map[string][]string{"authentication.kubernetes.io/cluster-name": {"this"}, "authentication.kcp.io/scopes": {"cluster:other"}}},
+ sub: rbacv1.Subject{Kind: "ServiceAccount", Namespace: "ns", Name: "sa"},
+ want: false,
+ },
+
+ // global service accounts
+ {
+ name: "local service account as global kcp service account",
+ user: &user.DefaultInfo{Name: "system:serviceaccount:ns:sa", Extra: map[string][]string{"authentication.kubernetes.io/cluster-name": {"this"}}},
+ sub: rbacv1.Subject{Kind: "User", Name: "system:kcp:serviceaccount:this:ns:sa"},
+ want: true,
+ },
+ {
+ name: "foreign service account as global kcp service account",
+ user: &user.DefaultInfo{Name: "system:serviceaccount:ns:sa", Extra: map[string][]string{"authentication.kubernetes.io/cluster-name": {"other"}}},
+ sub: rbacv1.Subject{Kind: "User", Name: "system:kcp:serviceaccount:this:ns:sa"},
+ want: false,
+ },
+ {
+ name: "non-cluster-aware service account as global kcp service account",
+ user: &user.DefaultInfo{Name: "system:serviceaccount:ns:sa"},
+ sub: rbacv1.Subject{Kind: "User", Name: "system:kcp:serviceaccount:this:ns:sa"},
+ want: false,
+ },
+
+ // scopes
+ {
+ name: "in-scope user",
+ user: &user.DefaultInfo{Name: "user-a", Extra: map[string][]string{"authentication.kcp.io/scopes": {"cluster:this"}}},
+ sub: rbacv1.Subject{Kind: "User", Name: "user-a"},
+ want: true,
+ },
+ {
+ name: "out-of-scope user",
+ user: &user.DefaultInfo{Name: "user-a", Extra: map[string][]string{"authentication.kcp.io/scopes": {"cluster:other"}}},
+ sub: rbacv1.Subject{Kind: "User", Name: "user-a"},
+ want: false,
+ },
+ {
+ name: "out-of-scope user with warrant",
+ user: &user.DefaultInfo{Name: "user-a", Extra: map[string][]string{"authentication.kcp.io/scopes": {"cluster:other"}, WarrantExtraKey: {`{"user":"user-a"}`}}},
+ sub: rbacv1.Subject{Kind: "User", Name: "user-a"},
+ want: true,
+ },
+ {
+ name: "out-of-scope warrant",
+ user: &user.DefaultInfo{Name: "user-b", Extra: map[string][]string{WarrantExtraKey: {`{"user":"user-a","extra":{"authentication.kcp.io/scopes":["cluster:other"]}}`}}},
+ sub: rbacv1.Subject{Kind: "User", Name: "user-a"},
+ want: false,
+ },
+ {
+ name: "in-scope warrant",
+ user: &user.DefaultInfo{Name: "user-b", Extra: map[string][]string{WarrantExtraKey: {`{"user":"user-a","extra":{"authentication.kcp.io/scopes":["cluster:this"]}}`}}},
+ sub: rbacv1.Subject{Kind: "User", Name: "user-a"},
+ want: true,
+ },
+ {
+ name: "in-scope scoped user matches itself",
+ user: &user.DefaultInfo{Name: "user-a", Extra: map[string][]string{"authentication.kcp.io/scopes": {"cluster:this"}}},
+ sub: rbacv1.Subject{Kind: "User", Name: "user-a"},
+ want: true,
+ },
+ {
+ name: "out-of-scope user does not match itself",
+ user: &user.DefaultInfo{Name: "user-a", Extra: map[string][]string{"authentication.kcp.io/scopes": {"cluster:other"}}},
+ sub: rbacv1.Subject{Kind: "User", Name: "user-a"},
+ want: false,
+ },
+
+ // authenticated and unauthenticated
+ {
+ name: "out-of-scope unauthenticated user does not match system:authenticated",
+ user: &user.DefaultInfo{Name: "user-a", Extra: map[string][]string{"authentication.kcp.io/scopes": {"cluster:other"}}},
+ sub: rbacv1.Subject{Kind: "Group", Name: "system:authenticated"},
+ want: false,
+ },
+ {
+ name: "out-of-scope unauthenticated user matches system:unauthenticated",
+ user: &user.DefaultInfo{Name: "user-a", Extra: map[string][]string{"authentication.kcp.io/scopes": {"cluster:other"}}},
+ sub: rbacv1.Subject{Kind: "Group", Name: "system:unauthenticated"},
+ want: true,
+ },
+ {
+ name: "out-of-scope authenticated user matches system:authenticated",
+ user: &user.DefaultInfo{Name: "user-a", Groups: []string{user.AllAuthenticated}, Extra: map[string][]string{"authentication.kcp.io/scopes": {"cluster:other"}}},
+ sub: rbacv1.Subject{Kind: "Group", Name: "system:authenticated"},
+ want: true,
+ },
+ {
+ name: "foreign service-account does not match itself",
+ user: &user.DefaultInfo{Name: "system:serviceaccount:ns:sa", Extra: map[string][]string{"authentication.kubernetes.io/cluster-name": {"other"}}},
+ sub: rbacv1.Subject{Kind: "ServiceAccount", Name: "system:serviceaccount:ns:sa"},
+ want: false,
+ },
+ {
+ name: "foreign unauthenticated service-account does not match system:authenticated",
+ user: &user.DefaultInfo{Name: "system:serviceaccount:ns:sa", Extra: map[string][]string{"authentication.kubernetes.io/cluster-name": {"other"}}},
+ sub: rbacv1.Subject{Kind: "Group", Name: "system:authenticated"},
+ want: false,
+ },
+ {
+ name: "foreign unauthenticated service-account matches system:unauthenticated",
+ user: &user.DefaultInfo{Name: "system:serviceaccount:ns:sa", Extra: map[string][]string{"authentication.kubernetes.io/cluster-name": {"other"}}},
+ sub: rbacv1.Subject{Kind: "Group", Name: "system:unauthenticated"},
+ want: true,
+ },
+ {
+ name: "foreign authenticated service-account matches system:authenticated",
+ user: &user.DefaultInfo{Name: "system:serviceaccount:ns:sa", Groups: []string{user.AllAuthenticated}, Extra: map[string][]string{"authentication.kubernetes.io/cluster-name": {"other"}}},
+ sub: rbacv1.Subject{Kind: "Group", Name: "system:authenticated"},
+ want: true,
+ },
+ }
+ for _, tt := range tests {
+ t.Run(tt.name, func(t *testing.T) {
+ ctx := request.WithCluster(context.Background(), request.Cluster{Name: "this"})
+ if got := appliesToUserWithScopedAndWarrants(ctx, tt.user, tt.sub, "ns"); got != tt.want {
+ t.Errorf("withWarrants(withScopes(base)) = %v, want %v", got, tt.want)
+ }
+ })
+ }
+}
+
+func TestEffectiveGroups(t *testing.T) {
+ tests := map[string]struct {
+ u user.Info
+ want sets.Set[string]
+ }{
+ "empty user": {
+ u: &user.DefaultInfo{},
+ want: sets.New[string](),
+ },
+ "authenticated user": {
+ u: &user.DefaultInfo{Name: user.Anonymous, Groups: []string{user.AllAuthenticated}},
+ want: sets.New(user.AllAuthenticated),
+ },
+ "multiple groups": {
+ u: &user.DefaultInfo{Name: user.Anonymous, Groups: []string{"a", "b"}},
+ want: sets.New("a", "b"),
+ },
+ "out of scope user": {
+ u: &user.DefaultInfo{Name: user.Anonymous, Groups: []string{"a", "b"}, Extra: map[string][]string{
+ ScopeExtraKey: {"cluster:other"},
+ }},
+ want: sets.New("system:unauthenticated"),
+ },
+ "out of scope authenticated user": {
+ u: &user.DefaultInfo{Name: user.Anonymous, Groups: []string{user.AllAuthenticated, "a", "b"}, Extra: map[string][]string{
+ ScopeExtraKey: {"cluster:other"},
+ }},
+ want: sets.New(user.AllAuthenticated),
+ },
+ "user with warrant": {
+ u: &user.DefaultInfo{Name: user.Anonymous, Groups: []string{"a", "b"}, Extra: map[string][]string{
+ WarrantExtraKey: {`{"user":"warrant","groups":["c","d"]}`},
+ }},
+ want: sets.New("a", "b", "c", "d"),
+ },
+ "user with warrant out of scope": {
+ u: &user.DefaultInfo{Name: user.Anonymous, Groups: []string{"a", "b"}, Extra: map[string][]string{
+ WarrantExtraKey: {`{"user":"warrant","groups":["c","d"],"extra":{"authentication.kcp.io/scopes":["cluster:other"]}}`},
+ }},
+ want: sets.New("a", "b", "system:unauthenticated"),
+ },
+ "nested warrants": {
+ u: &user.DefaultInfo{Name: user.Anonymous, Groups: []string{"a", "b"}, Extra: map[string][]string{
+ WarrantExtraKey: {`{"user":"warrant","groups":["c","d"],"extra":{"authorization.kcp.io/warrant":["{\"user\":\"warrant2\",\"groups\":[\"e\",\"f\"]}"]}}`},
+ }},
+ want: sets.New("a", "b", "c", "d", "e", "f"),
+ },
+ }
+ for name, tt := range tests {
+ t.Run(name, func(t *testing.T) {
+ ctx := request.WithCluster(context.Background(), request.Cluster{Name: "root:ws"})
+ got := EffectiveGroups(ctx, tt.u)
+ if diff := cmp.Diff(sets.List(tt.want), sets.List(got)); diff != "" {
+ t.Errorf("effectiveGroups() +got -want\n%s", diff)
+ }
+ })
+ }
+}
+
+func TestPrefixUser(t *testing.T) {
+ tests := map[string]struct {
+ u user.Info
+ prefix string
+ want user.Info
+ }{
+ "user with groups": {
+ u: &user.DefaultInfo{Name: "user", Groups: []string{"a", "b"}},
+ prefix: "prefix:",
+ want: &user.DefaultInfo{Name: "prefix:user", Groups: []string{"prefix:a", "prefix:b"}},
+ },
+ "user with warrant": {
+ u: &user.DefaultInfo{Name: "user", Extra: map[string][]string{
+ WarrantExtraKey: {`{"user":"warrant","groups":["c","d"]}`},
+ }},
+ prefix: "prefix:",
+ want: &user.DefaultInfo{Name: "prefix:user", Extra: map[string][]string{
+ WarrantExtraKey: {`{"user":"prefix:warrant","groups":["prefix:c","prefix:d"]}`},
+ }},
+ },
+ "service account without cluster": {
+ u: &user.DefaultInfo{Name: "system:serviceaccount:ns:sa", Groups: []string{"system:serviceaccounts"}},
+ prefix: "prefix:",
+ want: &user.DefaultInfo{Name: "prefix:system:anonymous", Groups: []string{"prefix:system:unauthenticated"}},
+ },
+ "service account without cluster but authenticated": {
+ u: &user.DefaultInfo{Name: "system:serviceaccount:ns:sa", Groups: []string{"system:serviceaccounts", user.AllAuthenticated}},
+ prefix: "prefix:",
+ want: &user.DefaultInfo{Name: "prefix:system:anonymous", Groups: []string{"prefix:system:authenticated"}},
+ },
+ "service account with cluster": {
+ u: &user.DefaultInfo{Name: "system:serviceaccount:ns:sa", Groups: []string{"system:serviceaccounts"}, Extra: map[string][]string{
+ authserviceaccount.ClusterNameKey: {"cluster"},
+ }},
+ prefix: "prefix:",
+ want: &user.DefaultInfo{Name: "prefix:system:kcp:serviceaccount:cluster:ns:sa", Groups: []string{"prefix:system:serviceaccounts"}, Extra: map[string][]string{
+ authserviceaccount.ClusterNameKey: {"cluster"},
+ }},
+ },
+ }
+ for name, tt := range tests {
+ t.Run(name, func(t *testing.T) {
+ got := PrefixUser(tt.u, tt.prefix)
+ if diff := cmp.Diff(tt.want, got); diff != "" {
+ t.Errorf("prefixUser() mismatch (-want +got):\n%s", diff)
+ }
+ })
+ }
+}
diff --git a/pkg/registry/rbac/validation/rule.go b/pkg/registry/rbac/validation/rule.go
index 5322c7419feb4..44f78639ad226 100644
--- a/pkg/registry/rbac/validation/rule.go
+++ b/pkg/registry/rbac/validation/rule.go
@@ -184,7 +184,7 @@ func (r *DefaultRuleResolver) VisitRulesFor(ctx context.Context, user user.Info,
} else {
sourceDescriber := &clusterRoleBindingDescriber{}
for _, clusterRoleBinding := range clusterRoleBindings {
- subjectIndex, applies := appliesTo(user, clusterRoleBinding.Subjects, "")
+ subjectIndex, applies := appliesTo(ctx, user, clusterRoleBinding.Subjects, "")
if !applies {
continue
}
@@ -213,7 +213,7 @@ func (r *DefaultRuleResolver) VisitRulesFor(ctx context.Context, user user.Info,
} else {
sourceDescriber := &roleBindingDescriber{}
for _, roleBinding := range roleBindings {
- subjectIndex, applies := appliesTo(user, roleBinding.Subjects, namespace)
+ subjectIndex, applies := appliesTo(ctx, user, roleBinding.Subjects, namespace)
if !applies {
continue
}
@@ -260,9 +260,9 @@ func (r *DefaultRuleResolver) GetRoleReferenceRules(ctx context.Context, roleRef
// appliesTo returns whether any of the bindingSubjects applies to the specified subject,
// and if true, the index of the first subject that applies
-func appliesTo(user user.Info, bindingSubjects []rbacv1.Subject, namespace string) (int, bool) {
+func appliesTo(ctx context.Context, user user.Info, bindingSubjects []rbacv1.Subject, namespace string) (int, bool) {
for i, bindingSubject := range bindingSubjects {
- if appliesToUser(user, bindingSubject, namespace) {
+ if appliesToUserWithScopedAndWarrants(ctx, user, bindingSubject, namespace) {
return i, true
}
}
diff --git a/pkg/registry/rbac/validation/rule_test.go b/pkg/registry/rbac/validation/rule_test.go
index 459d9c21ae5cd..fe9c963600a23 100644
--- a/pkg/registry/rbac/validation/rule_test.go
+++ b/pkg/registry/rbac/validation/rule_test.go
@@ -17,6 +17,7 @@ limitations under the License.
package validation
import (
+ "context"
"hash/fnv"
"io"
"reflect"
@@ -267,7 +268,7 @@ func TestAppliesTo(t *testing.T) {
}
for _, tc := range tests {
- gotIndex, got := appliesTo(tc.user, tc.subjects, tc.namespace)
+ gotIndex, got := appliesTo(context.Background(), tc.user, tc.subjects, tc.namespace)
if got != tc.appliesTo {
t.Errorf("case %q want appliesTo=%t, got appliesTo=%t", tc.testCase, tc.appliesTo, got)
}
diff --git a/pkg/serviceaccount/claims.go b/pkg/serviceaccount/claims.go
index 2893599cf6cdb..7be9f35dda9b9 100644
--- a/pkg/serviceaccount/claims.go
+++ b/pkg/serviceaccount/claims.go
@@ -25,6 +25,8 @@ import (
"github.com/google/uuid"
"gopkg.in/go-jose/go-jose.v2/jwt"
+ "github.com/kcp-dev/logicalcluster/v3"
+
"k8s.io/apiserver/pkg/audit"
apiserverserviceaccount "k8s.io/apiserver/pkg/authentication/serviceaccount"
authenticationtokenjwt "k8s.io/apiserver/pkg/authentication/token/jwt"
@@ -54,6 +56,8 @@ type privateClaims struct {
}
type kubernetes struct {
+ ClusterName logicalcluster.Name `json:"clusterName,omitempty"`
+
Namespace string `json:"namespace,omitempty"`
Svcacct ref `json:"serviceaccount,omitempty"`
Pod *ref `json:"pod,omitempty"`
@@ -81,7 +85,8 @@ func Claims(sa core.ServiceAccount, pod *core.Pod, secret *core.Secret, node *co
}
pc := &privateClaims{
Kubernetes: kubernetes{
- Namespace: sa.Namespace,
+ ClusterName: logicalcluster.From(&sa),
+ Namespace: sa.Namespace,
Svcacct: ref{
Name: sa.Name,
UID: string(sa.UID),
@@ -129,14 +134,14 @@ func Claims(sa core.ServiceAccount, pod *core.Pod, secret *core.Secret, node *co
return sc, pc, nil
}
-func NewValidator(getter ServiceAccountTokenGetter) Validator[privateClaims] {
+func NewValidator(getter ServiceAccountTokenClusterGetter) Validator[privateClaims] {
return &validator{
getter: getter,
}
}
type validator struct {
- getter ServiceAccountTokenGetter
+ getter ServiceAccountTokenClusterGetter
}
var _ = Validator[privateClaims](&validator{})
@@ -172,12 +177,13 @@ func (v *validator) Validate(ctx context.Context, _ string, public *jwt.Claims,
// consider things deleted prior to now()-leeway to be invalid
invalidIfDeletedBefore := nowTime.Add(-jwt.DefaultLeeway)
namespace := private.Kubernetes.Namespace
+ clusterName := private.Kubernetes.ClusterName
saref := private.Kubernetes.Svcacct
podref := private.Kubernetes.Pod
noderef := private.Kubernetes.Node
secref := private.Kubernetes.Secret
// Make sure service account still exists (name and UID)
- serviceAccount, err := v.getter.GetServiceAccount(namespace, saref.Name)
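+ // kcp: scope the getter to the logical cluster recorded in the token's private claims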
+ serviceAccount, err := v.getter.Cluster(clusterName).GetServiceAccount(namespace, saref.Name)
if err != nil {
klog.V(4).Infof("Could not retrieve service account %s/%s: %v", namespace, saref.Name, err)
return nil, err
@@ -194,7 +200,7 @@ func (v *validator) Validate(ctx context.Context, _ string, public *jwt.Claims,
if secref != nil {
// Make sure token hasn't been invalidated by deletion of the secret
- secret, err := v.getter.GetSecret(namespace, secref.Name)
+ secret, err := v.getter.Cluster(clusterName).GetSecret(namespace, secref.Name)
if err != nil {
klog.V(4).Infof("Could not retrieve bound secret %s/%s for service account %s/%s: %v", namespace, secref.Name, namespace, saref.Name, err)
return nil, errors.New("service account token has been invalidated")
@@ -212,7 +218,7 @@ func (v *validator) Validate(ctx context.Context, _ string, public *jwt.Claims,
var podName, podUID string
if podref != nil {
// Make sure token hasn't been invalidated by deletion of the pod
- pod, err := v.getter.GetPod(namespace, podref.Name)
+ pod, err := v.getter.Cluster(clusterName).GetPod(namespace, podref.Name)
if err != nil {
klog.V(4).Infof("Could not retrieve bound pod %s/%s for service account %s/%s: %v", namespace, podref.Name, namespace, saref.Name, err)
return nil, errors.New("service account token has been invalidated")
@@ -244,7 +250,7 @@ func (v *validator) Validate(ctx context.Context, _ string, public *jwt.Claims,
return nil, fmt.Errorf("token is bound to a Node object but the %s feature gate is disabled", features.ServiceAccountTokenNodeBindingValidation)
}
- node, err := v.getter.GetNode(noderef.Name)
+ node, err := v.getter.Cluster(clusterName).GetNode(noderef.Name)
if err != nil {
klog.V(4).Infof("Could not retrieve node object %q for service account %s/%s: %v", noderef.Name, namespace, saref.Name, err)
return nil, errors.New("service account token has been invalidated")
@@ -280,6 +286,7 @@ func (v *validator) Validate(ctx context.Context, _ string, public *jwt.Claims,
jti = public.ID
}
return &apiserverserviceaccount.ServiceAccountInfo{
+ ClusterName: private.Kubernetes.ClusterName,
Namespace: private.Kubernetes.Namespace,
Name: private.Kubernetes.Svcacct.Name,
UID: private.Kubernetes.Svcacct.UID,
diff --git a/pkg/serviceaccount/jwt.go b/pkg/serviceaccount/jwt.go
index 71ac5c90959e3..0a555cb5363a2 100644
--- a/pkg/serviceaccount/jwt.go
+++ b/pkg/serviceaccount/jwt.go
@@ -31,6 +31,8 @@ import (
jose "gopkg.in/go-jose/go-jose.v2"
"gopkg.in/go-jose/go-jose.v2/jwt"
+ "github.com/kcp-dev/logicalcluster/v3"
+
v1 "k8s.io/api/core/v1"
utilerrors "k8s.io/apimachinery/pkg/util/errors"
"k8s.io/apiserver/pkg/audit"
@@ -38,6 +40,11 @@ import (
apiserverserviceaccount "k8s.io/apiserver/pkg/authentication/serviceaccount"
)
+// ServiceAccountTokenClusterGetter can scope down to a ServiceAccountTokenGetter for one cluster
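+// Implementations are expected to return per-cluster state, e.g. getters
+// backed by informers bound to the given logical cluster.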
+type ServiceAccountTokenClusterGetter interface {
+ Cluster(logicalcluster.Name) ServiceAccountTokenGetter
+}
+
// ServiceAccountTokenGetter defines functions to retrieve a named service account and secret
type ServiceAccountTokenGetter interface {
GetServiceAccount(namespace, name string) (*v1.ServiceAccount, error)
diff --git a/pkg/serviceaccount/legacy.go b/pkg/serviceaccount/legacy.go
index 7eae7f69d1dea..7e5bda4264dbb 100644
--- a/pkg/serviceaccount/legacy.go
+++ b/pkg/serviceaccount/legacy.go
@@ -26,6 +26,9 @@ import (
"gopkg.in/go-jose/go-jose.v2/jwt"
+ typedv1core "github.com/kcp-dev/client-go/kubernetes/typed/core/v1"
+ "github.com/kcp-dev/logicalcluster/v3"
+
v1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/types"
@@ -33,7 +36,6 @@ import (
apiserverserviceaccount "k8s.io/apiserver/pkg/authentication/serviceaccount"
"k8s.io/apiserver/pkg/warning"
applyv1 "k8s.io/client-go/applyconfigurations/core/v1"
- typedv1core "k8s.io/client-go/kubernetes/typed/core/v1"
"k8s.io/klog/v2"
)
@@ -47,6 +49,7 @@ func LegacyClaims(serviceAccount v1.ServiceAccount, secret v1.Secret) (*jwt.Clai
ServiceAccountName: serviceAccount.Name,
ServiceAccountUID: string(serviceAccount.UID),
SecretName: secret.Name,
+ ClusterName: logicalcluster.From(&serviceAccount),
}
}
@@ -56,13 +59,14 @@ const (
)
type legacyPrivateClaims struct {
- ServiceAccountName string `json:"kubernetes.io/serviceaccount/service-account.name"`
- ServiceAccountUID string `json:"kubernetes.io/serviceaccount/service-account.uid"`
- SecretName string `json:"kubernetes.io/serviceaccount/secret.name"`
- Namespace string `json:"kubernetes.io/serviceaccount/namespace"`
+ ServiceAccountName string `json:"kubernetes.io/serviceaccount/service-account.name"`
+ ServiceAccountUID string `json:"kubernetes.io/serviceaccount/service-account.uid"`
+ SecretName string `json:"kubernetes.io/serviceaccount/secret.name"`
+ Namespace string `json:"kubernetes.io/serviceaccount/namespace"`
+ ClusterName logicalcluster.Name `json:"kubernetes.io/serviceaccount/clusterName"`
}
-func NewLegacyValidator(lookup bool, getter ServiceAccountTokenGetter, secretsWriter typedv1core.SecretsGetter) (Validator[legacyPrivateClaims], error) {
+func NewLegacyValidator(lookup bool, getter ServiceAccountTokenClusterGetter, secretsWriter typedv1core.SecretClusterInterface) (Validator[legacyPrivateClaims], error) {
if lookup && getter == nil {
return nil, errors.New("ServiceAccountTokenGetter must be provided")
}
@@ -78,8 +82,8 @@ func NewLegacyValidator(lookup bool, getter ServiceAccountTokenGetter, secretsWr
type legacyValidator struct {
lookup bool
- getter ServiceAccountTokenGetter
- secretsWriter typedv1core.SecretsGetter
+ getter ServiceAccountTokenClusterGetter
+ secretsWriter typedv1core.SecretClusterInterface
}
var _ = Validator[legacyPrivateClaims](&legacyValidator{})
@@ -113,7 +117,7 @@ func (v *legacyValidator) Validate(ctx context.Context, tokenData string, public
if v.lookup {
// Make sure token hasn't been invalidated by deletion of the secret
- secret, err := v.getter.GetSecret(namespace, secretName)
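+ // kcp: scope the lookup to the logical cluster stored in the legacy claims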
+ secret, err := v.getter.Cluster(private.ClusterName).GetSecret(namespace, secretName)
if err != nil {
klog.V(4).Infof("Could not retrieve token %s/%s for service account %s/%s: %v", namespace, secretName, namespace, serviceAccountName, err)
return nil, errors.New("Token has been invalidated")
@@ -128,7 +132,7 @@ func (v *legacyValidator) Validate(ctx context.Context, tokenData string, public
}
// Make sure service account still exists (name and UID)
- serviceAccount, err := v.getter.GetServiceAccount(namespace, serviceAccountName)
+ serviceAccount, err := v.getter.Cluster(private.ClusterName).GetServiceAccount(namespace, serviceAccountName)
if err != nil {
klog.V(4).Infof("Could not retrieve service account %s/%s: %v", namespace, serviceAccountName, err)
return nil, err
@@ -149,7 +153,7 @@ func (v *legacyValidator) Validate(ctx context.Context, tokenData string, public
if invalidSince := secret.Labels[InvalidSinceLabelKey]; invalidSince != "" {
audit.AddAuditAnnotation(ctx, "authentication.k8s.io/legacy-token-invalidated", secret.Name+"/"+secret.Namespace)
invalidatedAutoTokensTotal.WithContext(ctx).Inc()
- v.patchSecretWithLastUsedDate(ctx, secret)
+ v.patchSecretWithLastUsedDate(ctx, secret, private.ClusterName)
return nil, fmt.Errorf("the token in secret %s/%s for service account %s/%s has been marked invalid. Use tokens from the TokenRequest API or manually created secret-based tokens, or remove the '%s' label from the secret to temporarily allow use of this token", namespace, secretName, namespace, serviceAccountName, InvalidSinceLabelKey)
}
@@ -170,17 +174,18 @@
manuallyCreatedTokensTotal.WithContext(ctx).Inc()
}
- v.patchSecretWithLastUsedDate(ctx, secret)
+ v.patchSecretWithLastUsedDate(ctx, secret, private.ClusterName)
}
return &apiserverserviceaccount.ServiceAccountInfo{
- Namespace: private.Namespace,
- Name: private.ServiceAccountName,
- UID: private.ServiceAccountUID,
+ ClusterName: private.ClusterName,
+ Namespace: private.Namespace,
+ Name: private.ServiceAccountName,
+ UID: private.ServiceAccountUID,
}, nil
}
-func (v *legacyValidator) patchSecretWithLastUsedDate(ctx context.Context, secret *v1.Secret) {
+func (v *legacyValidator) patchSecretWithLastUsedDate(ctx context.Context, secret *v1.Secret, clusterName logicalcluster.Name) {
now := time.Now().UTC()
today := now.Format("2006-01-02")
tomorrow := now.AddDate(0, 0, 1).Format("2006-01-02")
@@ -190,7 +210,7 @@ func (v *legacyValidator) patchSecretWithLastUsedDate(ctx context.Context, secre
if err != nil {
klog.Errorf("Failed to marshal legacy service account token %s/%s tracking labels, err: %v", secret.Name, secret.Namespace, err)
} else {
- if _, err := v.secretsWriter.Secrets(secret.Namespace).Patch(ctx, secret.Name, types.MergePatchType, patchContent, metav1.PatchOptions{}); err != nil {
+ if _, err := v.secretsWriter.Cluster(clusterName.Path()).Namespace(secret.Namespace).Patch(ctx, secret.Name, types.MergePatchType, patchContent, metav1.PatchOptions{}); err != nil {
klog.Errorf("Failed to label legacy service account token %s/%s with last-used date, err: %v", secret.Name, secret.Namespace, err)
}
}
diff --git a/plugin/pkg/admission/limitranger/admission.go b/plugin/pkg/admission/limitranger/admission.go
index 55a65b056bb0d..d79dea20c8ac2 100644
--- a/plugin/pkg/admission/limitranger/admission.go
+++ b/plugin/pkg/admission/limitranger/admission.go
@@ -91,6 +91,11 @@ func (l *LimitRanger) SetExternalKubeInformerFactory(f informers.SharedInformerF
l.lister = limitRangeInformer.Lister()
}
+// SetExternalKubeLister registers a limit range lister into the LimitRanger
+func (l *LimitRanger) SetExternalKubeLister(lister corev1listers.LimitRangeLister) {
+ l.lister = lister
+}
+
// SetExternalKubeClientSet registers the client into LimitRanger
func (l *LimitRanger) SetExternalKubeClientSet(client kubernetes.Interface) {
l.client = client
diff --git a/plugin/pkg/auth/authorizer/rbac/subject_locator.go b/plugin/pkg/auth/authorizer/rbac/subject_locator.go
index c4947de6a08b3..45d38c8d73659 100644
--- a/plugin/pkg/auth/authorizer/rbac/subject_locator.go
+++ b/plugin/pkg/auth/authorizer/rbac/subject_locator.go
@@ -24,6 +24,7 @@ import (
utilerrors "k8s.io/apimachinery/pkg/util/errors"
"k8s.io/apiserver/pkg/authentication/user"
"k8s.io/apiserver/pkg/authorization/authorizer"
+
rbacregistryvalidation "k8s.io/kubernetes/pkg/registry/rbac/validation"
)
diff --git a/staging/src/k8s.io/apiextensions-apiserver/hack/update-codegen.sh b/staging/src/k8s.io/apiextensions-apiserver/hack/update-codegen.sh
index 93a25e2dc72e9..388f408421b61 100755
--- a/staging/src/k8s.io/apiextensions-apiserver/hack/update-codegen.sh
+++ b/staging/src/k8s.io/apiextensions-apiserver/hack/update-codegen.sh
@@ -45,6 +45,7 @@ kube::codegen::gen_openapi \
--boilerplate "${SCRIPT_ROOT}/hack/boilerplate.go.txt" \
"${SCRIPT_ROOT}/pkg"
+# kcp: TODO(gman0) re-add `--prefers-protobuf` once kcp-dev/{client-go,kcp} supports protobuf codec.
kube::codegen::gen_client \
--with-watch \
--with-applyconfig \
@@ -52,5 +53,4 @@ kube::codegen::gen_client \
--output-pkg "${THIS_PKG}/pkg/client" \
--versioned-name "clientset" \
--boilerplate "${SCRIPT_ROOT}/hack/boilerplate.go.txt" \
- --prefers-protobuf \
"${SCRIPT_ROOT}/pkg/apis"
diff --git a/staging/src/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/validation/validation.go b/staging/src/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/validation/validation.go
index 065041b54c9a7..720eec0ecf9a5 100644
--- a/staging/src/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/validation/validation.go
+++ b/staging/src/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/validation/validation.go
@@ -75,27 +75,39 @@ var supportedValidationReason = sets.NewString(
func ValidateCustomResourceDefinition(ctx context.Context, obj *apiextensions.CustomResourceDefinition) field.ErrorList {
nameValidationFn := func(name string, prefix bool) []string {
ret := genericvalidation.NameIsDNSSubdomain(name, prefix)
- requiredName := obj.Spec.Names.Plural + "." + obj.Spec.Group
+
+ // KCP: loosen naming restriction for CRDs created by the apibindings controller.
+ // TODO(ncdc): a user could potentially set this annotation in one of their own normal CRDs. Is there any
+ // mechanism that is restricted to the system so users can't bypass the standard plural.group requirement?
+ if _, bound := obj.Annotations["apis.kcp.io/bound-crd"]; bound {
+ return ret
+ }
+
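+ // KCP: built-in resources re-registered as CRDs use the empty group,
+ // which maps to "core" in the required "<plural>.<group>" name.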
+ group := obj.Spec.Group
+ if group == "" {
+ group = "core"
+ }
+ requiredName := obj.Spec.Names.Plural + "." + group
if name != requiredName {
ret = append(ret, fmt.Sprintf(`must be spec.names.plural+"."+spec.group`))
}
return ret
}
- opts := validationOptions{
- allowDefaults: true,
- requireRecognizedConversionReviewVersion: true,
- requireImmutableNames: false,
- requireOpenAPISchema: true,
- requireValidPropertyType: true,
- requireStructuralSchema: true,
- requirePrunedDefaults: true,
- requireAtomicSetType: true,
- requireMapListKeysMapSetValidation: true,
+ opts := ValidationOptions{
+ AllowDefaults: true,
+ RequireRecognizedConversionReviewVersion: true,
+ RequireImmutableNames: false,
+ RequireOpenAPISchema: true,
+ RequireValidPropertyType: true,
+ RequireStructuralSchema: true,
+ RequirePrunedDefaults: true,
+ RequireAtomicSetType: true,
+ RequireMapListKeysMapSetValidation: true,
// strictCost is always true to enforce cost limits.
- celEnvironmentSet: environment.MustBaseEnvSet(environment.DefaultCompatibilityVersion(), true),
+ CELEnvironmentSet: environment.MustBaseEnvSet(environment.DefaultCompatibilityVersion(), true),
// allowInvalidCABundle is set to true since the CRD is not established yet.
- allowInvalidCABundle: true,
+ AllowInvalidCABundle: true,
}
allErrs := genericvalidation.ValidateObjectMeta(&obj.ObjectMeta, false, nameValidationFn, field.NewPath("metadata"))
@@ -107,44 +119,44 @@ func ValidateCustomResourceDefinition(ctx context.Context, obj *apiextensions.Cu
return allErrs
}
-// validationOptions groups several validation options, to avoid passing multiple bool parameters to methods
-type validationOptions struct {
- // allowDefaults permits the validation schema to contain default attributes
- allowDefaults bool
- // disallowDefaultsReason gives a reason as to why allowDefaults is false (for better user feedback)
- disallowDefaultsReason string
- // requireRecognizedConversionReviewVersion requires accepted webhook conversion versions to contain a recognized version
- requireRecognizedConversionReviewVersion bool
- // requireImmutableNames disables changing spec.names
- requireImmutableNames bool
- // requireOpenAPISchema requires an openapi V3 schema be specified
- requireOpenAPISchema bool
- // requireValidPropertyType requires property types specified in the validation schema to be valid openapi v3 types
- requireValidPropertyType bool
- // requireStructuralSchema indicates that any schemas present must be structural
- requireStructuralSchema bool
- // requirePrunedDefaults indicates that defaults must be pruned
- requirePrunedDefaults bool
- // requireAtomicSetType indicates that the items type for a x-kubernetes-list-type=set list must be atomic.
- requireAtomicSetType bool
- // requireMapListKeysMapSetValidation indicates that:
+// ValidationOptions groups several validation options, to avoid passing multiple bool parameters to methods
+type ValidationOptions struct {
+ // AllowDefaults permits the validation schema to contain default attributes
+ AllowDefaults bool
+ // DisallowDefaultsReason gives a reason as to why AllowDefaults is false (for better user feedback)
+ DisallowDefaultsReason string
+ // RequireRecognizedConversionReviewVersion requires accepted webhook conversion versions to contain a recognized version
+ RequireRecognizedConversionReviewVersion bool
+ // RequireImmutableNames disables changing spec.names
+ RequireImmutableNames bool
+ // RequireOpenAPISchema requires an openapi V3 schema be specified
+ RequireOpenAPISchema bool
+ // RequireValidPropertyType requires property types specified in the validation schema to be valid openapi v3 types
+ RequireValidPropertyType bool
+ // RequireStructuralSchema indicates that any schemas present must be structural
+ RequireStructuralSchema bool
+ // RequirePrunedDefaults indicates that defaults must be pruned
+ RequirePrunedDefaults bool
+ // RequireAtomicSetType indicates that the items type for a x-kubernetes-list-type=set list must be atomic.
+ RequireAtomicSetType bool
+ // RequireMapListKeysMapSetValidation indicates that:
// 1. For x-kubernetes-list-type=map list, key fields are not nullable, and are required or have a default
// 2. For x-kubernetes-list-type=map or x-kubernetes-list-type=set list, the whole item must not be nullable.
- requireMapListKeysMapSetValidation bool
- // preexistingExpressions tracks which CEL expressions existed in an object before an update. May be nil for create.
- preexistingExpressions preexistingExpressions
- // versionsWithUnchangedSchemas tracks schemas of which versions are unchanged when updating a CRD.
+ RequireMapListKeysMapSetValidation bool
+ // PreexistingExpressions tracks which CEL expressions existed in an object before an update. May be nil for create.
+ PreexistingExpressions preexistingExpressions
+ // VersionsWithUnchangedSchemas tracks schemas of which versions are unchanged when updating a CRD.
// Does not apply to creation or deletion.
// Some checks use this to avoid rejecting previously accepted versions due to a control plane upgrade/downgrade.
- versionsWithUnchangedSchemas sets.Set[string]
- // suppressPerExpressionCost indicates whether CEL per-expression cost limit should be suppressed.
+ VersionsWithUnchangedSchemas sets.Set[string]
+ // SuppressPerExpressionCost indicates whether CEL per-expression cost limit should be suppressed.
// It will be automatically set during Versions validation if the version is in versionsWithUnchangedSchemas.
- suppressPerExpressionCost bool
+ SuppressPerExpressionCost bool
- celEnvironmentSet *environment.EnvSet
+ CELEnvironmentSet *environment.EnvSet
// allowInvalidCABundle allows an invalid conversion webhook CABundle on update only if the existing CABundle is invalid.
// An invalid CABundle is also permitted on create and before a CRD is in an Established=True condition.
- allowInvalidCABundle bool
+ AllowInvalidCABundle bool
}
type preexistingExpressions struct {
@@ -215,9 +227,9 @@ func findVersionsWithUnchangedSchemas(obj, oldObject *apiextensions.CustomResour
// suppressExpressionCostForUnchangedSchema returns a copy of opts with suppressPerExpressionCost set to true if
// the specified version's schema is unchanged.
-func suppressExpressionCostForUnchangedSchema(opts validationOptions, version string) validationOptions {
- if opts.versionsWithUnchangedSchemas.Has(version) {
- opts.suppressPerExpressionCost = true
+func suppressExpressionCostForUnchangedSchema(opts ValidationOptions, version string) ValidationOptions {
+ if opts.VersionsWithUnchangedSchemas.Has(version) {
+ opts.SuppressPerExpressionCost = true
}
return opts
}
@@ -225,28 +237,28 @@ func suppressExpressionCostForUnchangedSchema(opts validationOptions, version st
// ValidateCustomResourceDefinitionUpdate statically validates
// context is passed for supporting context cancellation during cel validation when validating defaults
func ValidateCustomResourceDefinitionUpdate(ctx context.Context, obj, oldObj *apiextensions.CustomResourceDefinition) field.ErrorList {
- opts := validationOptions{
- allowDefaults: true,
- requireRecognizedConversionReviewVersion: oldObj.Spec.Conversion == nil || hasValidConversionReviewVersionOrEmpty(oldObj.Spec.Conversion.ConversionReviewVersions),
- requireImmutableNames: apiextensions.IsCRDConditionTrue(oldObj, apiextensions.Established),
- requireOpenAPISchema: requireOpenAPISchema(&oldObj.Spec),
- requireValidPropertyType: requireValidPropertyType(&oldObj.Spec),
- requireStructuralSchema: requireStructuralSchema(&oldObj.Spec),
- requirePrunedDefaults: requirePrunedDefaults(&oldObj.Spec),
- requireAtomicSetType: requireAtomicSetType(&oldObj.Spec),
- requireMapListKeysMapSetValidation: requireMapListKeysMapSetValidation(&oldObj.Spec),
- preexistingExpressions: findPreexistingExpressions(&oldObj.Spec),
- versionsWithUnchangedSchemas: findVersionsWithUnchangedSchemas(obj, oldObj),
+ opts := ValidationOptions{
+ AllowDefaults: true,
+ RequireRecognizedConversionReviewVersion: oldObj.Spec.Conversion == nil || hasValidConversionReviewVersionOrEmpty(oldObj.Spec.Conversion.ConversionReviewVersions),
+ RequireImmutableNames: apiextensions.IsCRDConditionTrue(oldObj, apiextensions.Established),
+ RequireOpenAPISchema: requireOpenAPISchema(&oldObj.Spec),
+ RequireValidPropertyType: requireValidPropertyType(&oldObj.Spec),
+ RequireStructuralSchema: requireStructuralSchema(&oldObj.Spec),
+ RequirePrunedDefaults: requirePrunedDefaults(&oldObj.Spec),
+ RequireAtomicSetType: requireAtomicSetType(&oldObj.Spec),
+ RequireMapListKeysMapSetValidation: requireMapListKeysMapSetValidation(&oldObj.Spec),
+ PreexistingExpressions: findPreexistingExpressions(&oldObj.Spec),
+ VersionsWithUnchangedSchemas: findVersionsWithUnchangedSchemas(obj, oldObj),
// strictCost is always true to enforce cost limits.
- celEnvironmentSet: environment.MustBaseEnvSet(environment.DefaultCompatibilityVersion(), true),
- allowInvalidCABundle: allowInvalidCABundle(oldObj),
+ CELEnvironmentSet: environment.MustBaseEnvSet(environment.DefaultCompatibilityVersion(), true),
+ AllowInvalidCABundle: allowInvalidCABundle(oldObj),
}
return validateCustomResourceDefinitionUpdate(ctx, obj, oldObj, opts)
}
-func validateCustomResourceDefinitionUpdate(ctx context.Context, obj, oldObj *apiextensions.CustomResourceDefinition, opts validationOptions) field.ErrorList {
+func validateCustomResourceDefinitionUpdate(ctx context.Context, obj, oldObj *apiextensions.CustomResourceDefinition, opts ValidationOptions) field.ErrorList {
allErrs := genericvalidation.ValidateObjectMetaUpdate(&obj.ObjectMeta, &oldObj.ObjectMeta, field.NewPath("metadata"))
- allErrs = append(allErrs, validateCustomResourceDefinitionSpecUpdate(ctx, &obj.Spec, &oldObj.Spec, opts, field.NewPath("spec"))...)
+ allErrs = append(allErrs, ValidateCustomResourceDefinitionSpecUpdate(ctx, &obj.Spec, &oldObj.Spec, opts, field.NewPath("spec"))...)
allErrs = append(allErrs, ValidateCustomResourceDefinitionStatus(&obj.Status, field.NewPath("status"))...)
allErrs = append(allErrs, ValidateCustomResourceDefinitionStoredVersions(obj.Status.StoredVersions, obj.Spec.Versions, field.NewPath("status").Child("storedVersions"))...)
allErrs = append(allErrs, validateAPIApproval(obj, oldObj)...)
@@ -290,13 +302,13 @@ func ValidateUpdateCustomResourceDefinitionStatus(obj, oldObj *apiextensions.Cus
// validateCustomResourceDefinitionVersion statically validates.
// context is passed for supporting context cancellation during cel validation when validating defaults
-func validateCustomResourceDefinitionVersion(ctx context.Context, version *apiextensions.CustomResourceDefinitionVersion, fldPath *field.Path, statusEnabled bool, opts validationOptions) field.ErrorList {
+func validateCustomResourceDefinitionVersion(ctx context.Context, version *apiextensions.CustomResourceDefinitionVersion, fldPath *field.Path, statusEnabled bool, opts ValidationOptions) field.ErrorList {
allErrs := field.ErrorList{}
- for _, err := range validateDeprecationWarning(version.Deprecated, version.DeprecationWarning) {
+ for _, err := range ValidateDeprecationWarning(version.Deprecated, version.DeprecationWarning) {
allErrs = append(allErrs, field.Invalid(fldPath.Child("deprecationWarning"), version.DeprecationWarning, err))
}
opts = suppressExpressionCostForUnchangedSchema(opts, version.Name)
- allErrs = append(allErrs, validateCustomResourceDefinitionValidation(ctx, version.Schema, statusEnabled, opts, fldPath.Child("schema"))...)
+ allErrs = append(allErrs, ValidateCustomResourceDefinitionValidation(ctx, version.Schema, statusEnabled, opts, fldPath.Child("schema"))...)
allErrs = append(allErrs, ValidateCustomResourceDefinitionSubresources(version.Subresources, fldPath.Child("subresources"))...)
for i := range version.AdditionalPrinterColumns {
allErrs = append(allErrs, ValidateCustomResourceColumnDefinition(&version.AdditionalPrinterColumns[i], fldPath.Child("additionalPrinterColumns").Index(i))...)
@@ -316,7 +328,7 @@ func validateCustomResourceDefinitionVersion(ctx context.Context, version *apiex
return allErrs
}
-func validateDeprecationWarning(deprecated bool, deprecationWarning *string) []string {
+func ValidateDeprecationWarning(deprecated bool, deprecationWarning *string) []string {
if !deprecated && deprecationWarning != nil {
return []string{"can only be set for deprecated versions"}
}
@@ -347,11 +359,15 @@ func validateDeprecationWarning(deprecated bool, deprecationWarning *string) []s
}
// context is passed for supporting context cancellation during cel validation when validating defaults
-func validateCustomResourceDefinitionSpec(ctx context.Context, spec *apiextensions.CustomResourceDefinitionSpec, opts validationOptions, fldPath *field.Path) field.ErrorList {
+func validateCustomResourceDefinitionSpec(ctx context.Context, spec *apiextensions.CustomResourceDefinitionSpec, opts ValidationOptions, fldPath *field.Path) field.ErrorList {
allErrs := field.ErrorList{}
- if len(spec.Group) == 0 {
- allErrs = append(allErrs, field.Required(fldPath.Child("group"), ""))
+ // HACK: Relax naming constraints when registering legacy schema resources through CRDs
+ // for the KCP scenario
+ if isKubernetesAPIGroup(spec.Group) {
+ // No error: these are legacy-schema Kubernetes types that are not part
+ // of the generic control plane scheme and that we want to surface in
+ // KCP as CRDs
} else if errs := utilvalidation.IsDNS1123Subdomain(spec.Group); len(errs) > 0 {
allErrs = append(allErrs, field.Invalid(fldPath.Child("group"), spec.Group, strings.Join(errs, ",")))
} else if len(strings.Split(spec.Group, ".")) < 2 {
@@ -362,10 +378,10 @@ func validateCustomResourceDefinitionSpec(ctx context.Context, spec *apiextensio
// enabling pruning requires structural schemas
if spec.PreserveUnknownFields == nil || *spec.PreserveUnknownFields == false {
- opts.requireStructuralSchema = true
+ opts.RequireStructuralSchema = true
}
- if opts.requireOpenAPISchema {
+ if opts.RequireOpenAPISchema {
// check that either a global schema or versioned schemas are set in all versions
if spec.Validation == nil || spec.Validation.OpenAPIV3Schema == nil {
for i, v := range spec.Versions {
@@ -385,14 +401,14 @@ func validateCustomResourceDefinitionSpec(ctx context.Context, spec *apiextensio
}
}
}
- if opts.allowDefaults && specHasDefaults(spec) {
- opts.requireStructuralSchema = true
+ if opts.AllowDefaults && specHasDefaults(spec) {
+ opts.RequireStructuralSchema = true
if spec.PreserveUnknownFields == nil || *spec.PreserveUnknownFields {
allErrs = append(allErrs, field.Invalid(fldPath.Child("preserveUnknownFields"), true, "must be false in order to use defaults in the schema"))
}
}
if specHasKubernetesExtensions(spec) {
- opts.requireStructuralSchema = true
+ opts.RequireStructuralSchema = true
}
storageFlagCount := 0
@@ -466,7 +482,7 @@ func validateCustomResourceDefinitionSpec(ctx context.Context, spec *apiextensio
}
allErrs = append(allErrs, ValidateCustomResourceDefinitionNames(&spec.Names, fldPath.Child("names"))...)
- allErrs = append(allErrs, validateCustomResourceDefinitionValidation(ctx, spec.Validation, hasAnyStatusEnabled(spec), suppressExpressionCostForUnchangedSchema(opts, spec.Version), fldPath.Child("validation"))...)
+ allErrs = append(allErrs, ValidateCustomResourceDefinitionValidation(ctx, spec.Validation, hasAnyStatusEnabled(spec), suppressExpressionCostForUnchangedSchema(opts, spec.Version), fldPath.Child("validation"))...)
allErrs = append(allErrs, ValidateCustomResourceDefinitionSubresources(spec.Subresources, fldPath.Child("subresources"))...)
for i := range spec.AdditionalPrinterColumns {
@@ -491,7 +507,7 @@ func validateCustomResourceDefinitionSpec(ctx context.Context, spec *apiextensio
if (spec.Conversion != nil && spec.Conversion.Strategy != apiextensions.NoneConverter) && (spec.PreserveUnknownFields == nil || *spec.PreserveUnknownFields) {
allErrs = append(allErrs, field.Invalid(fldPath.Child("conversion").Child("strategy"), spec.Conversion.Strategy, "must be None if spec.preserveUnknownFields is true"))
}
- allErrs = append(allErrs, validateCustomResourceConversion(spec.Conversion, opts.requireRecognizedConversionReviewVersion, fldPath.Child("conversion"), opts)...)
+ allErrs = append(allErrs, validateCustomResourceConversion(spec.Conversion, opts.RequireRecognizedConversionReviewVersion, fldPath.Child("conversion"), opts)...)
return allErrs
}
@@ -578,7 +594,7 @@ func hasValidConversionReviewVersionOrEmpty(versions []string) bool {
return false
}
-func validateCustomResourceConversion(conversion *apiextensions.CustomResourceConversion, requireRecognizedVersion bool, fldPath *field.Path, opts validationOptions) field.ErrorList {
+func validateCustomResourceConversion(conversion *apiextensions.CustomResourceConversion, requireRecognizedVersion bool, fldPath *field.Path, opts ValidationOptions) field.ErrorList {
allErrs := field.ErrorList{}
if conversion == nil {
return allErrs
@@ -597,7 +613,7 @@ func validateCustomResourceConversion(conversion *apiextensions.CustomResourceCo
case cc.Service != nil:
allErrs = append(allErrs, webhook.ValidateWebhookService(fldPath.Child("webhookClientConfig").Child("service"), cc.Service.Name, cc.Service.Namespace, cc.Service.Path, cc.Service.Port)...)
}
- if len(cc.CABundle) > 0 && !opts.allowInvalidCABundle {
+ if len(cc.CABundle) > 0 && !opts.AllowInvalidCABundle {
allErrs = append(allErrs, webhook.ValidateCABundle(fldPath.Child("webhookClientConfig").Child("caBundle"), cc.CABundle)...)
}
}
@@ -615,10 +631,10 @@ func validateCustomResourceConversion(conversion *apiextensions.CustomResourceCo
// validateCustomResourceDefinitionSpecUpdate statically validates
// context is passed for supporting context cancellation during cel validation when validating defaults
-func validateCustomResourceDefinitionSpecUpdate(ctx context.Context, spec, oldSpec *apiextensions.CustomResourceDefinitionSpec, opts validationOptions, fldPath *field.Path) field.ErrorList {
+func ValidateCustomResourceDefinitionSpecUpdate(ctx context.Context, spec, oldSpec *apiextensions.CustomResourceDefinitionSpec, opts ValidationOptions, fldPath *field.Path) field.ErrorList {
allErrs := validateCustomResourceDefinitionSpec(ctx, spec, opts, fldPath)
- if opts.requireImmutableNames {
+ if opts.RequireImmutableNames {
// these effect the storage and cannot be changed therefore
allErrs = append(allErrs, genericvalidation.ValidateImmutableField(spec.Scope, oldSpec.Scope, fldPath.Child("scope"))...)
allErrs = append(allErrs, genericvalidation.ValidateImmutableField(spec.Names.Kind, oldSpec.Names.Kind, fldPath.Child("names", "kind"))...)
@@ -872,9 +888,9 @@ type specStandardValidator interface {
withForbidOldSelfValidations(path *field.Path) specStandardValidator
}
-// validateCustomResourceDefinitionValidation statically validates
+// ValidateCustomResourceDefinitionValidation statically validates
// context is passed for supporting context cancellation during cel validation when validating defaults
-func validateCustomResourceDefinitionValidation(ctx context.Context, customResourceValidation *apiextensions.CustomResourceValidation, statusSubresourceEnabled bool, opts validationOptions, fldPath *field.Path) field.ErrorList {
+func ValidateCustomResourceDefinitionValidation(ctx context.Context, customResourceValidation *apiextensions.CustomResourceValidation, statusSubresourceEnabled bool, opts ValidationOptions, fldPath *field.Path) field.ErrorList {
allErrs := field.ErrorList{}
if customResourceValidation == nil {
@@ -915,21 +931,21 @@ func validateCustomResourceDefinitionValidation(ctx context.Context, customResou
}
openAPIV3Schema := &specStandardValidatorV3{
- allowDefaults: opts.allowDefaults,
- disallowDefaultsReason: opts.disallowDefaultsReason,
- requireValidPropertyType: opts.requireValidPropertyType,
+ allowDefaults: opts.AllowDefaults,
+ disallowDefaultsReason: opts.DisallowDefaultsReason,
+ requireValidPropertyType: opts.RequireValidPropertyType,
}
var celContext *CELSchemaContext
var structuralSchemaInitErrs field.ErrorList
- if opts.requireStructuralSchema {
+ if opts.RequireStructuralSchema {
if ss, err := structuralschema.NewStructural(schema); err != nil {
// These validation errors overlap with OpenAPISchema validation errors so we keep track of them
// separately and only show them if OpenAPISchema validation does not report any errors.
structuralSchemaInitErrs = append(structuralSchemaInitErrs, field.Invalid(fldPath.Child("openAPIV3Schema"), "", err.Error()))
} else if validationErrors := structuralschema.ValidateStructural(fldPath.Child("openAPIV3Schema"), ss); len(validationErrors) > 0 {
allErrs = append(allErrs, validationErrors...)
- } else if validationErrors, err := structuraldefaulting.ValidateDefaults(ctx, fldPath.Child("openAPIV3Schema"), ss, true, opts.requirePrunedDefaults); err != nil {
+ } else if validationErrors, err := structuraldefaulting.ValidateDefaults(ctx, fldPath.Child("openAPIV3Schema"), ss, true, opts.RequirePrunedDefaults); err != nil {
// this should never happen
allErrs = append(allErrs, field.Invalid(fldPath.Child("openAPIV3Schema"), "", err.Error()))
} else if len(validationErrors) > 0 {
@@ -997,7 +1013,7 @@ func (o *OpenAPISchemaErrorList) AllErrors() field.ErrorList {
}
// ValidateCustomResourceDefinitionOpenAPISchema statically validates
-func ValidateCustomResourceDefinitionOpenAPISchema(schema *apiextensions.JSONSchemaProps, fldPath *field.Path, ssv specStandardValidator, isRoot bool, opts *validationOptions, celContext *CELSchemaContext) *OpenAPISchemaErrorList {
+func ValidateCustomResourceDefinitionOpenAPISchema(schema *apiextensions.JSONSchemaProps, fldPath *field.Path, ssv specStandardValidator, isRoot bool, opts *ValidationOptions, celContext *CELSchemaContext) *OpenAPISchemaErrorList {
allErrs := &OpenAPISchemaErrorList{SchemaErrors: field.ErrorList{}, CELErrors: field.ErrorList{}}
if schema == nil {
@@ -1131,7 +1147,7 @@ func ValidateCustomResourceDefinitionOpenAPISchema(schema *apiextensions.JSONSch
} else {
allErrs.SchemaErrors = append(allErrs.SchemaErrors, field.Invalid(fldPath.Child("type"), schema.Type, "must be array if x-kubernetes-list-type is specified"))
}
- } else if opts.requireAtomicSetType && schema.XListType != nil && *schema.XListType == "set" && schema.Items != nil && schema.Items.Schema != nil { // by structural schema items are present
+ } else if opts.RequireAtomicSetType && schema.XListType != nil && *schema.XListType == "set" && schema.Items != nil && schema.Items.Schema != nil { // by structural schema items are present
is := schema.Items.Schema
switch is.Type {
case "array":
@@ -1192,7 +1208,7 @@ func ValidateCustomResourceDefinitionOpenAPISchema(schema *apiextensions.JSONSch
}
}
- if opts.requireMapListKeysMapSetValidation {
+ if opts.RequireMapListKeysMapSetValidation {
allErrs.SchemaErrors = append(allErrs.SchemaErrors, validateMapListKeysMapSet(schema, fldPath)...)
}
if len(schema.XValidations) > 0 {
@@ -1241,13 +1257,13 @@ func ValidateCustomResourceDefinitionOpenAPISchema(schema *apiextensions.JSONSch
} else if typeInfo == nil {
allErrs.CELErrors = append(allErrs.CELErrors, field.InternalError(fldPath.Child("x-kubernetes-validations"), fmt.Errorf("internal error: failed to retrieve type information for x-kubernetes-validations")))
} else {
- compResults, err := cel.Compile(typeInfo.Schema, typeInfo.DeclType, celconfig.PerCallLimit, opts.celEnvironmentSet, opts.preexistingExpressions)
+ compResults, err := cel.Compile(typeInfo.Schema, typeInfo.DeclType, celconfig.PerCallLimit, opts.CELEnvironmentSet, opts.PreexistingExpressions)
if err != nil {
allErrs.CELErrors = append(allErrs.CELErrors, field.InternalError(fldPath.Child("x-kubernetes-validations"), err))
} else {
for i, cr := range compResults {
expressionCost := getExpressionCost(cr, celContext)
- if !opts.suppressPerExpressionCost && expressionCost > StaticEstimatedCostLimit {
+ if !opts.SuppressPerExpressionCost && expressionCost > StaticEstimatedCostLimit {
costErrorMsg := getCostErrorMessage("estimated rule cost", expressionCost, StaticEstimatedCostLimit)
allErrs.CELErrors = append(allErrs.CELErrors, field.Forbidden(fldPath.Child("x-kubernetes-validations").Index(i).Child("rule"), costErrorMsg))
}
@@ -1265,7 +1281,7 @@ func ValidateCustomResourceDefinitionOpenAPISchema(schema *apiextensions.JSONSch
allErrs.CELErrors = append(allErrs.CELErrors, field.Invalid(fldPath.Child("x-kubernetes-validations").Index(i).Child("messageExpression"), schema.XValidations[i], cr.MessageExpressionError.Detail))
} else {
if cr.MessageExpression != nil {
- if !opts.suppressPerExpressionCost && cr.MessageExpressionMaxCost > StaticEstimatedCostLimit {
+ if !opts.SuppressPerExpressionCost && cr.MessageExpressionMaxCost > StaticEstimatedCostLimit {
costErrorMsg := getCostErrorMessage("estimated messageExpression cost", cr.MessageExpressionMaxCost, StaticEstimatedCostLimit)
allErrs.CELErrors = append(allErrs.CELErrors, field.Forbidden(fldPath.Child("x-kubernetes-validations").Index(i).Child("messageExpression"), costErrorMsg))
}
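
The renames above export the CRD-validation entry points (`ValidationOptions`, `ValidateCustomResourceDefinitionValidation`, `ValidateCustomResourceDefinitionSpecUpdate`, ...), presumably so that kcp code living outside the `validation` package can call them directly. A minimal out-of-tree sketch, mirroring the call pattern of the updated tests further below (the schema is a placeholder):

```go
package main

import (
	"context"
	"fmt"

	"k8s.io/apiextensions-apiserver/pkg/apis/apiextensions"
	"k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/validation"
	"k8s.io/apimachinery/pkg/util/validation/field"
	"k8s.io/apiserver/pkg/cel/environment"
)

func main() {
	// Options that used to be package-private; the exported field names match the diff.
	opts := validation.ValidationOptions{
		RequireStructuralSchema: true,
		CELEnvironmentSet:       environment.MustBaseEnvSet(environment.DefaultCompatibilityVersion(), true),
	}
	// Placeholder schema: a structural object with no properties.
	cv := &apiextensions.CustomResourceValidation{
		OpenAPIV3Schema: &apiextensions.JSONSchemaProps{Type: "object"},
	}
	errs := validation.ValidateCustomResourceDefinitionValidation(
		context.TODO(), cv, false /* status subresource disabled */, opts,
		field.NewPath("spec", "validation"))
	fmt.Println(errs) // empty for a valid structural schema
}
```
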
diff --git a/staging/src/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/validation/validation_kcp.go b/staging/src/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/validation/validation_kcp.go
new file mode 100644
index 0000000000000..09c29b21b6ec5
--- /dev/null
+++ b/staging/src/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/validation/validation_kcp.go
@@ -0,0 +1,45 @@
+/*
+Copyright 2022 The KCP Authors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package validation
+
+import "k8s.io/apimachinery/pkg/util/sets"
+
+var kubernetesAPIGroups = sets.NewString(
+ "admissionregistration.k8s.io",
+ "apps",
+ "authentication.k8s.io",
+ "authorization.k8s.io",
+ "autoscaling",
+ "batch",
+ "certificates.k8s.io",
+ "coordination.k8s.io",
+ "discovery.k8s.io",
+ "events.k8s.io",
+ "extensions",
+ "flowcontrol.apiserver.k8s.io",
+ "imagepolicy.k8s.io",
+ "policy",
+ "rbac.authorization.k8s.io",
+ "scheduling.k8s.io",
+ "storage.k8s.io",
+ "storagemigration.k8s.io",
+ "",
+)
+
+func isKubernetesAPIGroup(group string) bool {
+ return kubernetesAPIGroups.Has(group)
+}
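
The new helper presumably lets validation special-case CRDs that re-register Kubernetes built-in API groups; the empty string entry denotes the core (legacy) group. An in-package sketch of its behavior (the example function, and the `fmt` import it needs, are illustrative additions):

```go
// Illustrative only; this would sit alongside validation_kcp.go in package validation.
func exampleIsKubernetesAPIGroup() {
	for _, group := range []string{"apps", "", "widgets.example.io"} {
		// "apps" and "" (the core/legacy group) are built-in; the custom group is not.
		fmt.Printf("%q built-in: %v\n", group, isKubernetesAPIGroup(group))
	}
}
```
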
diff --git a/staging/src/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/validation/validation_test.go b/staging/src/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/validation/validation_test.go
index 7662a613069d6..bccc71afba268 100644
--- a/staging/src/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/validation/validation_test.go
+++ b/staging/src/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/validation/validation_test.go
@@ -7891,9 +7891,9 @@ func TestValidateCustomResourceDefinitionValidationRuleCompatibility(t *testing.
t.Run(tc.name, func(t *testing.T) {
ctx := context.TODO()
- errs := validateCustomResourceDefinitionUpdate(ctx, resource, old, validationOptions{
- preexistingExpressions: findPreexistingExpressions(&old.Spec),
- celEnvironmentSet: envSet,
+ errs := validateCustomResourceDefinitionUpdate(ctx, resource, old, ValidationOptions{
+ PreexistingExpressions: findPreexistingExpressions(&old.Spec),
+ CELEnvironmentSet: envSet,
})
seenErrs := make([]bool, len(errs))
@@ -7926,7 +7926,7 @@ func TestValidateCustomResourceDefinitionValidation(t *testing.T) {
name string
input apiextensions.CustomResourceValidation
statusEnabled bool
- opts validationOptions
+ opts ValidationOptions
expectedErrors []validationMatch
}{
{
@@ -8054,7 +8054,7 @@ func TestValidateCustomResourceDefinitionValidation(t *testing.T) {
input: apiextensions.CustomResourceValidation{
OpenAPIV3Schema: &apiextensions.JSONSchemaProps{},
},
- opts: validationOptions{requireStructuralSchema: true},
+ opts: ValidationOptions{RequireStructuralSchema: true},
expectedErrors: []validationMatch{
required("spec.validation.openAPIV3Schema.type"),
},
@@ -8066,7 +8066,7 @@ func TestValidateCustomResourceDefinitionValidation(t *testing.T) {
Type: "object",
},
},
- opts: validationOptions{requireStructuralSchema: true},
+ opts: ValidationOptions{RequireStructuralSchema: true},
},
{
name: "require valid types, valid",
@@ -8075,7 +8075,7 @@ func TestValidateCustomResourceDefinitionValidation(t *testing.T) {
Type: "object",
},
},
- opts: validationOptions{requireValidPropertyType: true, requireStructuralSchema: true},
+ opts: ValidationOptions{RequireValidPropertyType: true, RequireStructuralSchema: true},
},
{
name: "require valid types, invalid",
@@ -8084,7 +8084,7 @@ func TestValidateCustomResourceDefinitionValidation(t *testing.T) {
Type: "null",
},
},
- opts: validationOptions{requireValidPropertyType: true, requireStructuralSchema: true},
+ opts: ValidationOptions{RequireValidPropertyType: true, RequireStructuralSchema: true},
expectedErrors: []validationMatch{
// Invalid value: "null": must be object at the root
unsupported("spec.validation.openAPIV3Schema.type"),
@@ -8101,7 +8101,7 @@ func TestValidateCustomResourceDefinitionValidation(t *testing.T) {
Type: "bogus",
},
},
- opts: validationOptions{requireValidPropertyType: true, requireStructuralSchema: true},
+ opts: ValidationOptions{RequireValidPropertyType: true, RequireStructuralSchema: true},
expectedErrors: []validationMatch{
unsupported("spec.validation.openAPIV3Schema.type"),
invalid("spec.validation.openAPIV3Schema.type"),
@@ -8398,7 +8398,7 @@ func TestValidateCustomResourceDefinitionValidation(t *testing.T) {
},
},
},
- opts: validationOptions{requireAtomicSetType: true},
+ opts: ValidationOptions{RequireAtomicSetType: true},
expectedErrors: []validationMatch{
invalid("spec.validation.openAPIV3Schema.items.x-kubernetes-map-type"),
},
@@ -8419,7 +8419,7 @@ func TestValidateCustomResourceDefinitionValidation(t *testing.T) {
},
},
},
- opts: validationOptions{requireAtomicSetType: true},
+ opts: ValidationOptions{RequireAtomicSetType: true},
expectedErrors: []validationMatch{
invalid("spec.validation.openAPIV3Schema.items.x-kubernetes-map-type"),
},
@@ -8482,7 +8482,7 @@ func TestValidateCustomResourceDefinitionValidation(t *testing.T) {
},
},
},
- opts: validationOptions{requireAtomicSetType: true},
+ opts: ValidationOptions{RequireAtomicSetType: true},
expectedErrors: []validationMatch{
invalid("spec.validation.openAPIV3Schema.items.x-kubernetes-list-type"),
},
@@ -8582,8 +8582,8 @@ func TestValidateCustomResourceDefinitionValidation(t *testing.T) {
},
},
},
- opts: validationOptions{
- requireMapListKeysMapSetValidation: true,
+ opts: ValidationOptions{
+ RequireMapListKeysMapSetValidation: true,
},
expectedErrors: []validationMatch{
required("spec.validation.openAPIV3Schema.items.properties[key].default"),
@@ -8609,8 +8609,8 @@ func TestValidateCustomResourceDefinitionValidation(t *testing.T) {
},
},
},
- opts: validationOptions{
- requireMapListKeysMapSetValidation: true,
+ opts: ValidationOptions{
+ RequireMapListKeysMapSetValidation: true,
},
},
{
@@ -8633,9 +8633,9 @@ func TestValidateCustomResourceDefinitionValidation(t *testing.T) {
},
},
},
- opts: validationOptions{
- allowDefaults: true,
- requireMapListKeysMapSetValidation: true,
+ opts: ValidationOptions{
+ AllowDefaults: true,
+ RequireMapListKeysMapSetValidation: true,
},
},
{
@@ -8658,8 +8658,8 @@ func TestValidateCustomResourceDefinitionValidation(t *testing.T) {
},
},
},
- opts: validationOptions{
- requireMapListKeysMapSetValidation: true,
+ opts: ValidationOptions{
+ RequireMapListKeysMapSetValidation: true,
},
expectedErrors: []validationMatch{
required("spec.validation.openAPIV3Schema.items.properties[key].default"),
@@ -8686,8 +8686,8 @@ func TestValidateCustomResourceDefinitionValidation(t *testing.T) {
},
},
},
- opts: validationOptions{
- requireMapListKeysMapSetValidation: true,
+ opts: ValidationOptions{
+ RequireMapListKeysMapSetValidation: true,
},
expectedErrors: []validationMatch{
forbidden("spec.validation.openAPIV3Schema.items.nullable"),
@@ -8724,8 +8724,8 @@ func TestValidateCustomResourceDefinitionValidation(t *testing.T) {
},
},
},
- opts: validationOptions{
- requireMapListKeysMapSetValidation: true,
+ opts: ValidationOptions{
+ RequireMapListKeysMapSetValidation: true,
},
expectedErrors: []validationMatch{
forbidden("spec.validation.openAPIV3Schema.items.properties[b].default"),
@@ -8744,8 +8744,8 @@ func TestValidateCustomResourceDefinitionValidation(t *testing.T) {
},
},
},
- opts: validationOptions{
- requireMapListKeysMapSetValidation: true,
+ opts: ValidationOptions{
+ RequireMapListKeysMapSetValidation: true,
},
expectedErrors: []validationMatch{
forbidden("spec.validation.openAPIV3Schema.items.nullable"),
@@ -8764,8 +8764,8 @@ func TestValidateCustomResourceDefinitionValidation(t *testing.T) {
},
},
},
- opts: validationOptions{
- requireMapListKeysMapSetValidation: true,
+ opts: ValidationOptions{
+ RequireMapListKeysMapSetValidation: true,
},
},
{
@@ -8790,8 +8790,8 @@ func TestValidateCustomResourceDefinitionValidation(t *testing.T) {
},
},
},
- opts: validationOptions{
- requireStructuralSchema: true,
+ opts: ValidationOptions{
+ RequireStructuralSchema: true,
},
},
{
@@ -8815,8 +8815,8 @@ func TestValidateCustomResourceDefinitionValidation(t *testing.T) {
},
},
},
- opts: validationOptions{
- requireStructuralSchema: true,
+ opts: ValidationOptions{
+ RequireStructuralSchema: true,
},
},
{
@@ -8832,8 +8832,8 @@ func TestValidateCustomResourceDefinitionValidation(t *testing.T) {
expectedErrors: []validationMatch{
required("spec.validation.openAPIV3Schema.x-kubernetes-validations[0].rule"),
},
- opts: validationOptions{
- requireStructuralSchema: true,
+ opts: ValidationOptions{
+ RequireStructuralSchema: true,
},
},
{
@@ -8844,8 +8844,8 @@ func TestValidateCustomResourceDefinitionValidation(t *testing.T) {
XValidations: apiextensions.ValidationRules{},
},
},
- opts: validationOptions{
- requireStructuralSchema: true,
+ opts: ValidationOptions{
+ RequireStructuralSchema: true,
},
},
{
@@ -8873,8 +8873,8 @@ func TestValidateCustomResourceDefinitionValidation(t *testing.T) {
expectedErrors: []validationMatch{
invalid("spec.validation.openAPIV3Schema.properties[subRoot].x-kubernetes-validations[0].rule"),
},
- opts: validationOptions{
- requireStructuralSchema: true,
+ opts: ValidationOptions{
+ RequireStructuralSchema: true,
},
},
{
@@ -8912,8 +8912,8 @@ func TestValidateCustomResourceDefinitionValidation(t *testing.T) {
},
},
},
- opts: validationOptions{
- requireStructuralSchema: true,
+ opts: ValidationOptions{
+ RequireStructuralSchema: true,
},
},
{
@@ -8944,8 +8944,8 @@ func TestValidateCustomResourceDefinitionValidation(t *testing.T) {
},
},
},
- opts: validationOptions{
- requireStructuralSchema: true,
+ opts: ValidationOptions{
+ RequireStructuralSchema: true,
},
},
{
@@ -8971,8 +8971,8 @@ func TestValidateCustomResourceDefinitionValidation(t *testing.T) {
},
},
},
- opts: validationOptions{
- requireStructuralSchema: true,
+ opts: ValidationOptions{
+ RequireStructuralSchema: true,
},
},
{
@@ -9008,8 +9008,8 @@ func TestValidateCustomResourceDefinitionValidation(t *testing.T) {
},
},
},
- opts: validationOptions{
- requireStructuralSchema: true,
+ opts: ValidationOptions{
+ RequireStructuralSchema: true,
},
},
{
@@ -9167,9 +9167,9 @@ func TestValidateCustomResourceDefinitionValidation(t *testing.T) {
},
},
},
- opts: validationOptions{
- requireStructuralSchema: true,
- celEnvironmentSet: environment.MustBaseEnvSet(version.MajorMinor(1, 30), true),
+ opts: ValidationOptions{
+ RequireStructuralSchema: true,
+ CELEnvironmentSet: environment.MustBaseEnvSet(version.MajorMinor(1, 30), true),
},
expectedErrors: []validationMatch{
invalid("spec.validation.openAPIV3Schema.x-kubernetes-validations[2].rule"),
@@ -9351,9 +9351,9 @@ func TestValidateCustomResourceDefinitionValidation(t *testing.T) {
},
},
},
- opts: validationOptions{
- requireStructuralSchema: true,
- celEnvironmentSet: environment.MustBaseEnvSet(version.MajorMinor(1, 31), true),
+ opts: ValidationOptions{
+ RequireStructuralSchema: true,
+ CELEnvironmentSet: environment.MustBaseEnvSet(version.MajorMinor(1, 31), true),
},
expectedErrors: []validationMatch{
invalid("spec.validation.openAPIV3Schema.x-kubernetes-validations[21].rule"),
@@ -9377,8 +9377,8 @@ func TestValidateCustomResourceDefinitionValidation(t *testing.T) {
expectedErrors: []validationMatch{
invalid("spec.validation.openAPIV3Schema.x-kubernetes-validations[0].rule"),
},
- opts: validationOptions{
- requireStructuralSchema: true,
+ opts: ValidationOptions{
+ RequireStructuralSchema: true,
},
},
{
@@ -9436,9 +9436,9 @@ func TestValidateCustomResourceDefinitionValidation(t *testing.T) {
},
},
},
- opts: validationOptions{
- requireStructuralSchema: true,
- allowDefaults: true,
+ opts: ValidationOptions{
+ RequireStructuralSchema: true,
+ AllowDefaults: true,
},
},
{
@@ -9501,9 +9501,9 @@ func TestValidateCustomResourceDefinitionValidation(t *testing.T) {
invalid("spec.validation.openAPIV3Schema.properties[value].default"),
invalid("spec.validation.openAPIV3Schema.properties[object].default"),
},
- opts: validationOptions{
- requireStructuralSchema: true,
- allowDefaults: true,
+ opts: ValidationOptions{
+ RequireStructuralSchema: true,
+ AllowDefaults: true,
},
},
{
@@ -9587,7 +9587,7 @@ func TestValidateCustomResourceDefinitionValidation(t *testing.T) {
},
{
name: "forbid transition rule on element of list of type atomic",
- opts: validationOptions{requireStructuralSchema: true},
+ opts: ValidationOptions{RequireStructuralSchema: true},
input: apiextensions.CustomResourceValidation{
OpenAPIV3Schema: &apiextensions.JSONSchemaProps{
Type: "object",
@@ -9614,7 +9614,7 @@ func TestValidateCustomResourceDefinitionValidation(t *testing.T) {
},
{
name: "forbid transition rule on element of list defaulting to type atomic",
- opts: validationOptions{requireStructuralSchema: true},
+ opts: ValidationOptions{RequireStructuralSchema: true},
input: apiextensions.CustomResourceValidation{
OpenAPIV3Schema: &apiextensions.JSONSchemaProps{
Type: "object",
@@ -9640,7 +9640,7 @@ func TestValidateCustomResourceDefinitionValidation(t *testing.T) {
},
{
name: "allow transition rule on list of type atomic",
- opts: validationOptions{requireStructuralSchema: true},
+ opts: ValidationOptions{RequireStructuralSchema: true},
input: apiextensions.CustomResourceValidation{
OpenAPIV3Schema: &apiextensions.JSONSchemaProps{
Type: "object",
@@ -9687,7 +9687,7 @@ func TestValidateCustomResourceDefinitionValidation(t *testing.T) {
},
{
name: "forbid transition rule on element of list of type set",
- opts: validationOptions{requireStructuralSchema: true},
+ opts: ValidationOptions{RequireStructuralSchema: true},
input: apiextensions.CustomResourceValidation{
OpenAPIV3Schema: &apiextensions.JSONSchemaProps{
Type: "object",
@@ -9715,7 +9715,7 @@ func TestValidateCustomResourceDefinitionValidation(t *testing.T) {
},
{
name: "allow transition rule on list of type set",
- opts: validationOptions{requireStructuralSchema: true},
+ opts: ValidationOptions{RequireStructuralSchema: true},
input: apiextensions.CustomResourceValidation{
OpenAPIV3Schema: &apiextensions.JSONSchemaProps{
Type: "object",
@@ -9740,7 +9740,7 @@ func TestValidateCustomResourceDefinitionValidation(t *testing.T) {
},
{
name: "allow transition rule on element of list of type map",
- opts: validationOptions{requireStructuralSchema: true},
+ opts: ValidationOptions{RequireStructuralSchema: true},
input: apiextensions.CustomResourceValidation{
OpenAPIV3Schema: &apiextensions.JSONSchemaProps{
Type: "object",
@@ -9768,7 +9768,7 @@ func TestValidateCustomResourceDefinitionValidation(t *testing.T) {
},
{
name: "allow transition rule on list of type map",
- opts: validationOptions{requireStructuralSchema: true},
+ opts: ValidationOptions{RequireStructuralSchema: true},
input: apiextensions.CustomResourceValidation{
OpenAPIV3Schema: &apiextensions.JSONSchemaProps{
Type: "object",
@@ -9797,7 +9797,7 @@ func TestValidateCustomResourceDefinitionValidation(t *testing.T) {
},
{
name: "allow transition rule on element of map of type granular",
- opts: validationOptions{requireStructuralSchema: true},
+ opts: ValidationOptions{RequireStructuralSchema: true},
input: apiextensions.CustomResourceValidation{
OpenAPIV3Schema: &apiextensions.JSONSchemaProps{
Type: "object",
@@ -9821,7 +9821,7 @@ func TestValidateCustomResourceDefinitionValidation(t *testing.T) {
},
{
name: "forbid transition rule on element of map of unrecognized type",
- opts: validationOptions{requireStructuralSchema: true},
+ opts: ValidationOptions{RequireStructuralSchema: true},
input: apiextensions.CustomResourceValidation{
OpenAPIV3Schema: &apiextensions.JSONSchemaProps{
Type: "object",
@@ -9849,7 +9849,7 @@ func TestValidateCustomResourceDefinitionValidation(t *testing.T) {
},
{
name: "allow transition rule on element of map defaulting to type granular",
- opts: validationOptions{requireStructuralSchema: true},
+ opts: ValidationOptions{RequireStructuralSchema: true},
input: apiextensions.CustomResourceValidation{
OpenAPIV3Schema: &apiextensions.JSONSchemaProps{
Type: "object",
@@ -9872,7 +9872,7 @@ func TestValidateCustomResourceDefinitionValidation(t *testing.T) {
},
{
name: "allow transition rule on map of type granular",
- opts: validationOptions{requireStructuralSchema: true},
+ opts: ValidationOptions{RequireStructuralSchema: true},
input: apiextensions.CustomResourceValidation{
OpenAPIV3Schema: &apiextensions.JSONSchemaProps{
Type: "object",
@@ -9890,7 +9890,7 @@ func TestValidateCustomResourceDefinitionValidation(t *testing.T) {
},
{
name: "allow transition rule on map defaulting to type granular",
- opts: validationOptions{requireStructuralSchema: true},
+ opts: ValidationOptions{RequireStructuralSchema: true},
input: apiextensions.CustomResourceValidation{
OpenAPIV3Schema: &apiextensions.JSONSchemaProps{
Type: "object",
@@ -9907,7 +9907,7 @@ func TestValidateCustomResourceDefinitionValidation(t *testing.T) {
},
{
name: "allow transition rule on element of map of type atomic",
- opts: validationOptions{requireStructuralSchema: true},
+ opts: ValidationOptions{RequireStructuralSchema: true},
input: apiextensions.CustomResourceValidation{
OpenAPIV3Schema: &apiextensions.JSONSchemaProps{
Type: "object",
@@ -9930,7 +9930,7 @@ func TestValidateCustomResourceDefinitionValidation(t *testing.T) {
},
{
name: "allow transition rule on map of type atomic",
- opts: validationOptions{requireStructuralSchema: true},
+ opts: ValidationOptions{RequireStructuralSchema: true},
input: apiextensions.CustomResourceValidation{
OpenAPIV3Schema: &apiextensions.JSONSchemaProps{
Type: "object",
@@ -9948,7 +9948,7 @@ func TestValidateCustomResourceDefinitionValidation(t *testing.T) {
},
{
name: "forbid double-nested rule with no limit set",
- opts: validationOptions{requireStructuralSchema: true},
+ opts: ValidationOptions{RequireStructuralSchema: true},
input: apiextensions.CustomResourceValidation{
OpenAPIV3Schema: &apiextensions.JSONSchemaProps{
Type: "object",
@@ -9986,7 +9986,7 @@ func TestValidateCustomResourceDefinitionValidation(t *testing.T) {
},
{
name: "forbid double-nested rule with one limit set",
- opts: validationOptions{requireStructuralSchema: true},
+ opts: ValidationOptions{RequireStructuralSchema: true},
input: apiextensions.CustomResourceValidation{
OpenAPIV3Schema: &apiextensions.JSONSchemaProps{
Type: "object",
@@ -10024,7 +10024,7 @@ func TestValidateCustomResourceDefinitionValidation(t *testing.T) {
},
{
name: "allow double-nested rule with three limits set",
- opts: validationOptions{requireStructuralSchema: true},
+ opts: ValidationOptions{RequireStructuralSchema: true},
input: apiextensions.CustomResourceValidation{
OpenAPIV3Schema: &apiextensions.JSONSchemaProps{
Type: "object",
@@ -10058,7 +10058,7 @@ func TestValidateCustomResourceDefinitionValidation(t *testing.T) {
},
{
name: "allow double-nested rule with one limit set on outermost array",
- opts: validationOptions{requireStructuralSchema: true},
+ opts: ValidationOptions{RequireStructuralSchema: true},
input: apiextensions.CustomResourceValidation{
OpenAPIV3Schema: &apiextensions.JSONSchemaProps{
Type: "object",
@@ -10091,7 +10091,7 @@ func TestValidateCustomResourceDefinitionValidation(t *testing.T) {
},
{
name: "check for cardinality of 1 under root object",
- opts: validationOptions{requireStructuralSchema: true},
+ opts: ValidationOptions{RequireStructuralSchema: true},
input: apiextensions.CustomResourceValidation{
OpenAPIV3Schema: &apiextensions.JSONSchemaProps{
Type: "object",
@@ -10109,7 +10109,7 @@ func TestValidateCustomResourceDefinitionValidation(t *testing.T) {
},
{
name: "forbid validation rules where cost total exceeds total limit",
- opts: validationOptions{requireStructuralSchema: true},
+ opts: ValidationOptions{RequireStructuralSchema: true},
input: apiextensions.CustomResourceValidation{
OpenAPIV3Schema: &apiextensions.JSONSchemaProps{
Type: "object",
@@ -10162,7 +10162,7 @@ func TestValidateCustomResourceDefinitionValidation(t *testing.T) {
},
{
name: "skip CEL expression validation when OpenAPIv3 schema is an invalid structural schema",
- opts: validationOptions{requireStructuralSchema: true},
+ opts: ValidationOptions{RequireStructuralSchema: true},
input: apiextensions.CustomResourceValidation{
OpenAPIV3Schema: &apiextensions.JSONSchemaProps{
Type: "object",
@@ -10190,7 +10190,7 @@ func TestValidateCustomResourceDefinitionValidation(t *testing.T) {
},
{
name: "skip CEL expression validation when OpenAPIv3 schema is an invalid structural schema at level below",
- opts: validationOptions{requireStructuralSchema: true},
+ opts: ValidationOptions{RequireStructuralSchema: true},
input: apiextensions.CustomResourceValidation{
OpenAPIV3Schema: &apiextensions.JSONSchemaProps{
Type: "object",
@@ -10222,7 +10222,7 @@ func TestValidateCustomResourceDefinitionValidation(t *testing.T) {
{
// So long at the schema information accessible to the CEL expression is valid, the expression should be validated.
name: "do not skip when OpenAPIv3 schema is an invalid structural schema in a separate part of the schema tree",
- opts: validationOptions{requireStructuralSchema: true},
+ opts: ValidationOptions{RequireStructuralSchema: true},
input: apiextensions.CustomResourceValidation{
OpenAPIV3Schema: &apiextensions.JSONSchemaProps{
Type: "object",
@@ -10263,7 +10263,7 @@ func TestValidateCustomResourceDefinitionValidation(t *testing.T) {
{
// So long at the schema information accessible to the CEL expression is valid, the expression should be validated.
name: "do not skip CEL expression validation when OpenAPIv3 schema is an invalid structural schema at level above",
- opts: validationOptions{requireStructuralSchema: true},
+ opts: ValidationOptions{RequireStructuralSchema: true},
input: apiextensions.CustomResourceValidation{
OpenAPIV3Schema: &apiextensions.JSONSchemaProps{
Type: "object",
@@ -10295,7 +10295,7 @@ func TestValidateCustomResourceDefinitionValidation(t *testing.T) {
},
{
name: "x-kubernetes-validations rule validated for escaped property name",
- opts: validationOptions{requireStructuralSchema: true},
+ opts: ValidationOptions{RequireStructuralSchema: true},
input: apiextensions.CustomResourceValidation{
OpenAPIV3Schema: &apiextensions.JSONSchemaProps{
Type: "object",
@@ -10315,7 +10315,7 @@ func TestValidateCustomResourceDefinitionValidation(t *testing.T) {
},
{
name: "x-kubernetes-validations rule validated under array items",
- opts: validationOptions{requireStructuralSchema: true},
+ opts: ValidationOptions{RequireStructuralSchema: true},
input: apiextensions.CustomResourceValidation{
OpenAPIV3Schema: &apiextensions.JSONSchemaProps{
Type: "object",
@@ -10340,7 +10340,7 @@ func TestValidateCustomResourceDefinitionValidation(t *testing.T) {
},
{
name: "x-kubernetes-validations rule validated under array items, parent has rule",
- opts: validationOptions{requireStructuralSchema: true},
+ opts: ValidationOptions{RequireStructuralSchema: true},
input: apiextensions.CustomResourceValidation{
OpenAPIV3Schema: &apiextensions.JSONSchemaProps{
Type: "object",
@@ -10367,7 +10367,7 @@ func TestValidateCustomResourceDefinitionValidation(t *testing.T) {
},
{
name: "x-kubernetes-validations rule validated under additionalProperties",
- opts: validationOptions{requireStructuralSchema: true},
+ opts: ValidationOptions{RequireStructuralSchema: true},
input: apiextensions.CustomResourceValidation{
OpenAPIV3Schema: &apiextensions.JSONSchemaProps{
Type: "object",
@@ -10392,7 +10392,7 @@ func TestValidateCustomResourceDefinitionValidation(t *testing.T) {
},
{
name: "x-kubernetes-validations rule validated under additionalProperties, parent has rule",
- opts: validationOptions{requireStructuralSchema: true},
+ opts: ValidationOptions{RequireStructuralSchema: true},
input: apiextensions.CustomResourceValidation{
OpenAPIV3Schema: &apiextensions.JSONSchemaProps{
Type: "object",
@@ -10420,7 +10420,7 @@ func TestValidateCustomResourceDefinitionValidation(t *testing.T) {
},
{
name: "x-kubernetes-validations rule validated under unescaped property name",
- opts: validationOptions{requireStructuralSchema: true},
+ opts: ValidationOptions{RequireStructuralSchema: true},
input: apiextensions.CustomResourceValidation{
OpenAPIV3Schema: &apiextensions.JSONSchemaProps{
Type: "object",
@@ -10440,7 +10440,7 @@ func TestValidateCustomResourceDefinitionValidation(t *testing.T) {
},
{
name: "x-kubernetes-validations rule validated under unescaped property name, parent has rule",
- opts: validationOptions{requireStructuralSchema: true},
+ opts: ValidationOptions{RequireStructuralSchema: true},
input: apiextensions.CustomResourceValidation{
OpenAPIV3Schema: &apiextensions.JSONSchemaProps{
Type: "object",
@@ -10463,7 +10463,7 @@ func TestValidateCustomResourceDefinitionValidation(t *testing.T) {
},
{
name: "x-kubernetes-validations rule validated under escaped property name",
- opts: validationOptions{requireStructuralSchema: true},
+ opts: ValidationOptions{RequireStructuralSchema: true},
input: apiextensions.CustomResourceValidation{
OpenAPIV3Schema: &apiextensions.JSONSchemaProps{
Type: "object",
@@ -10483,7 +10483,7 @@ func TestValidateCustomResourceDefinitionValidation(t *testing.T) {
},
{
name: "x-kubernetes-validations rule validated under escaped property name, parent has rule",
- opts: validationOptions{requireStructuralSchema: true},
+ opts: ValidationOptions{RequireStructuralSchema: true},
input: apiextensions.CustomResourceValidation{
OpenAPIV3Schema: &apiextensions.JSONSchemaProps{
Type: "object",
@@ -10506,7 +10506,7 @@ func TestValidateCustomResourceDefinitionValidation(t *testing.T) {
},
{
name: "x-kubernetes-validations rule validated under unescapable property name",
- opts: validationOptions{requireStructuralSchema: true},
+ opts: ValidationOptions{RequireStructuralSchema: true},
input: apiextensions.CustomResourceValidation{
OpenAPIV3Schema: &apiextensions.JSONSchemaProps{
Type: "object",
@@ -10526,7 +10526,7 @@ func TestValidateCustomResourceDefinitionValidation(t *testing.T) {
},
{
name: "x-kubernetes-validations rule validated under unescapable property name, parent has rule",
- opts: validationOptions{requireStructuralSchema: true},
+ opts: ValidationOptions{RequireStructuralSchema: true},
input: apiextensions.CustomResourceValidation{
OpenAPIV3Schema: &apiextensions.JSONSchemaProps{
Type: "object",
@@ -10549,7 +10549,7 @@ func TestValidateCustomResourceDefinitionValidation(t *testing.T) {
},
{
name: "x-kubernetes-validations rule with messageExpression",
- opts: validationOptions{requireStructuralSchema: true},
+ opts: ValidationOptions{RequireStructuralSchema: true},
input: apiextensions.CustomResourceValidation{
OpenAPIV3Schema: &apiextensions.JSONSchemaProps{
Type: "object",
@@ -10570,7 +10570,7 @@ func TestValidateCustomResourceDefinitionValidation(t *testing.T) {
},
{
name: "x-kubernetes-validations rule allows both message and messageExpression",
- opts: validationOptions{requireStructuralSchema: true},
+ opts: ValidationOptions{RequireStructuralSchema: true},
input: apiextensions.CustomResourceValidation{
OpenAPIV3Schema: &apiextensions.JSONSchemaProps{
Type: "object",
@@ -10592,7 +10592,7 @@ func TestValidateCustomResourceDefinitionValidation(t *testing.T) {
},
{
name: "x-kubernetes-validations rule invalidated by messageExpression syntax error",
- opts: validationOptions{requireStructuralSchema: true},
+ opts: ValidationOptions{RequireStructuralSchema: true},
input: apiextensions.CustomResourceValidation{
OpenAPIV3Schema: &apiextensions.JSONSchemaProps{
Type: "object",
@@ -10615,7 +10615,7 @@ func TestValidateCustomResourceDefinitionValidation(t *testing.T) {
},
{
name: "x-kubernetes-validations rule invalidated by messageExpression not returning a string",
- opts: validationOptions{requireStructuralSchema: true},
+ opts: ValidationOptions{RequireStructuralSchema: true},
input: apiextensions.CustomResourceValidation{
OpenAPIV3Schema: &apiextensions.JSONSchemaProps{
Type: "object",
@@ -10638,7 +10638,7 @@ func TestValidateCustomResourceDefinitionValidation(t *testing.T) {
},
{
name: "x-kubernetes-validations rule invalidated by messageExpression exceeding per-expression estimated cost limit",
- opts: validationOptions{requireStructuralSchema: true},
+ opts: ValidationOptions{RequireStructuralSchema: true},
input: apiextensions.CustomResourceValidation{
OpenAPIV3Schema: &apiextensions.JSONSchemaProps{
Type: "object",
@@ -10667,7 +10667,7 @@ func TestValidateCustomResourceDefinitionValidation(t *testing.T) {
},
{
name: "x-kubernetes-validations rule with lowerAscii check should be within estimated cost limit",
- opts: validationOptions{requireStructuralSchema: true},
+ opts: ValidationOptions{RequireStructuralSchema: true},
input: apiextensions.CustomResourceValidation{
OpenAPIV3Schema: &apiextensions.JSONSchemaProps{
Type: "object",
@@ -10693,7 +10693,7 @@ func TestValidateCustomResourceDefinitionValidation(t *testing.T) {
},
{
name: "x-kubernetes-validations rule invalidated by messageExpression exceeding per-CRD estimated cost limit",
- opts: validationOptions{requireStructuralSchema: true},
+ opts: ValidationOptions{RequireStructuralSchema: true},
input: apiextensions.CustomResourceValidation{
OpenAPIV3Schema: &apiextensions.JSONSchemaProps{
Type: "object",
@@ -10726,7 +10726,7 @@ func TestValidateCustomResourceDefinitionValidation(t *testing.T) {
},
{
name: "x-kubernetes-validations rule invalidated by messageExpression being only empty spaces",
- opts: validationOptions{requireStructuralSchema: true},
+ opts: ValidationOptions{RequireStructuralSchema: true},
input: apiextensions.CustomResourceValidation{
OpenAPIV3Schema: &apiextensions.JSONSchemaProps{
Type: "object",
@@ -10749,7 +10749,7 @@ func TestValidateCustomResourceDefinitionValidation(t *testing.T) {
},
{
name: "forbid transition rule on element of list of type atomic when optionalOldSelf is set",
- opts: validationOptions{requireStructuralSchema: true},
+ opts: ValidationOptions{RequireStructuralSchema: true},
input: apiextensions.CustomResourceValidation{
OpenAPIV3Schema: &apiextensions.JSONSchemaProps{
Type: "object",
@@ -10776,7 +10776,7 @@ func TestValidateCustomResourceDefinitionValidation(t *testing.T) {
},
{
name: "forbid transition rule on element of list defaulting to type atomic when optionalOldSelf is set",
- opts: validationOptions{requireStructuralSchema: true},
+ opts: ValidationOptions{RequireStructuralSchema: true},
input: apiextensions.CustomResourceValidation{
OpenAPIV3Schema: &apiextensions.JSONSchemaProps{
Type: "object",
@@ -10802,7 +10802,7 @@ func TestValidateCustomResourceDefinitionValidation(t *testing.T) {
},
{
name: "forbid transition rule on element of list of type set when optionalOldSelf is set",
- opts: validationOptions{requireStructuralSchema: true},
+ opts: ValidationOptions{RequireStructuralSchema: true},
input: apiextensions.CustomResourceValidation{
OpenAPIV3Schema: &apiextensions.JSONSchemaProps{
Type: "object",
@@ -10830,7 +10830,7 @@ func TestValidateCustomResourceDefinitionValidation(t *testing.T) {
},
{
name: "forbid transition rule on element of map of unrecognized type when optionalOldSelf is set",
- opts: validationOptions{requireStructuralSchema: true},
+ opts: ValidationOptions{RequireStructuralSchema: true},
input: apiextensions.CustomResourceValidation{
OpenAPIV3Schema: &apiextensions.JSONSchemaProps{
Type: "object",
@@ -10858,7 +10858,7 @@ func TestValidateCustomResourceDefinitionValidation(t *testing.T) {
},
{
name: "forbid setting optionalOldSelf to true if oldSelf is not used",
- opts: validationOptions{requireStructuralSchema: true},
+ opts: ValidationOptions{RequireStructuralSchema: true},
input: apiextensions.CustomResourceValidation{
OpenAPIV3Schema: &apiextensions.JSONSchemaProps{
Type: "object",
@@ -10879,7 +10879,7 @@ func TestValidateCustomResourceDefinitionValidation(t *testing.T) {
},
{
name: "forbid setting optionalOldSelf to false if oldSelf is not used",
- opts: validationOptions{requireStructuralSchema: true},
+ opts: ValidationOptions{RequireStructuralSchema: true},
input: apiextensions.CustomResourceValidation{
OpenAPIV3Schema: &apiextensions.JSONSchemaProps{
Type: "object",
@@ -10902,10 +10902,10 @@ func TestValidateCustomResourceDefinitionValidation(t *testing.T) {
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
ctx := context.TODO()
- if tt.opts.celEnvironmentSet == nil {
- tt.opts.celEnvironmentSet = environment.MustBaseEnvSet(environment.DefaultCompatibilityVersion(), true)
+ if tt.opts.CELEnvironmentSet == nil {
+ tt.opts.CELEnvironmentSet = environment.MustBaseEnvSet(environment.DefaultCompatibilityVersion(), true)
}
- got := validateCustomResourceDefinitionValidation(ctx, &tt.input, tt.statusEnabled, tt.opts, field.NewPath("spec", "validation"))
+ got := ValidateCustomResourceDefinitionValidation(ctx, &tt.input, tt.statusEnabled, tt.opts, field.NewPath("spec", "validation"))
seenErrs := make([]bool, len(got))
@@ -11253,7 +11253,7 @@ func Test_validateDeprecationWarning(t *testing.T) {
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
- if got := validateDeprecationWarning(tt.deprecated, tt.warning); !reflect.DeepEqual(got, tt.want) {
+ if got := ValidateDeprecationWarning(tt.deprecated, tt.warning); !reflect.DeepEqual(got, tt.want) {
t.Errorf("validateDeprecationWarning() = %v, want %v", got, tt.want)
}
})
@@ -11433,13 +11433,13 @@ func TestCelContext(t *testing.T) {
}
celContext := RootCELContext(tt.schema)
celContext.converter = converter
- opts := validationOptions{
- celEnvironmentSet: environment.MustBaseEnvSet(environment.DefaultCompatibilityVersion(), true),
+ opts := ValidationOptions{
+ CELEnvironmentSet: environment.MustBaseEnvSet(environment.DefaultCompatibilityVersion(), true),
}
openAPIV3Schema := &specStandardValidatorV3{
- allowDefaults: opts.allowDefaults,
- disallowDefaultsReason: opts.disallowDefaultsReason,
- requireValidPropertyType: opts.requireValidPropertyType,
+ allowDefaults: opts.AllowDefaults,
+ disallowDefaultsReason: opts.DisallowDefaultsReason,
+ requireValidPropertyType: opts.RequireValidPropertyType,
}
errors := ValidateCustomResourceDefinitionOpenAPISchema(tt.schema, field.NewPath("openAPIV3Schema"), openAPIV3Schema, true, &opts, celContext).AllErrors()
if len(errors) != 0 {
diff --git a/staging/src/k8s.io/apiextensions-apiserver/pkg/apiserver/apiserver.go b/staging/src/k8s.io/apiextensions-apiserver/pkg/apiserver/apiserver.go
index 1a730d3310cdd..005136b678583 100644
--- a/staging/src/k8s.io/apiextensions-apiserver/pkg/apiserver/apiserver.go
+++ b/staging/src/k8s.io/apiextensions-apiserver/pkg/apiserver/apiserver.go
@@ -22,12 +22,14 @@ import (
"net/http"
"time"
+ kcpapiextensionsv1client "github.com/kcp-dev/client-go/apiextensions/client"
+ kcpapiextensionsv1informers "github.com/kcp-dev/client-go/apiextensions/informers"
+ "k8s.io/apiextensions-apiserver/pkg/apiserver/conversion"
+
"k8s.io/apiextensions-apiserver/pkg/apis/apiextensions"
"k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/install"
v1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
"k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1"
- "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
- externalinformers "k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions"
"k8s.io/apiextensions-apiserver/pkg/controller/apiapproval"
"k8s.io/apiextensions-apiserver/pkg/controller/establish"
"k8s.io/apiextensions-apiserver/pkg/controller/finalizer"
@@ -35,6 +37,7 @@ import (
openapicontroller "k8s.io/apiextensions-apiserver/pkg/controller/openapi"
openapiv3controller "k8s.io/apiextensions-apiserver/pkg/controller/openapiv3"
"k8s.io/apiextensions-apiserver/pkg/controller/status"
+ "k8s.io/apiextensions-apiserver/pkg/kcp"
"k8s.io/apiextensions-apiserver/pkg/registry/customresourcedefinition"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
@@ -47,7 +50,6 @@ import (
"k8s.io/apiserver/pkg/registry/rest"
genericapiserver "k8s.io/apiserver/pkg/server"
serverstorage "k8s.io/apiserver/pkg/server/storage"
- "k8s.io/apiserver/pkg/util/webhook"
"k8s.io/klog/v2"
)
@@ -83,10 +85,18 @@ type ExtraConfig struct {
// the CRD Establishing will be held back by 5 seconds.
MasterCount int
- // ServiceResolver is used in CR webhook converters to resolve webhook's service names
- ServiceResolver webhook.ServiceResolver
- // AuthResolverWrapper is used in CR webhook converters
- AuthResolverWrapper webhook.AuthenticationInfoResolverWrapper
+ // ConversionFactory is used to provide converters for CRs.
+ ConversionFactory conversion.Factory
+
+ Client kcpapiextensionsv1client.ClusterInterface
+ Informers kcpapiextensionsv1informers.SharedInformerFactory
+
+ // KCP
+ ClusterAwareCRDLister kcp.ClusterAwareCRDClusterLister
+ TableConverterProvider TableConverterProvider
+ // DisableServerSideApply deactivates Server-Side Apply for this specific API server, instead of globally through the feature gate.
+ // It is used with the embedded cache server in kcp.
+ DisableServerSideApply bool
}
type Config struct {
@@ -108,7 +118,17 @@ type CustomResourceDefinitions struct {
GenericAPIServer *genericapiserver.GenericAPIServer
// provided for easier embedding
- Informers externalinformers.SharedInformerFactory
+ Informers kcpapiextensionsv1informers.SharedInformerFactory
+
+ DiscoveryGroupLister discovery.GroupLister
+
+ crdHandler *crdHandler
+ versionDiscoveryHandler *versionDiscoveryHandler
+ groupDiscoveryHandler *groupDiscoveryHandler
+ rootDiscoveryHandler *rootDiscoveryHandler
+
+ // KCP
+ ClusterAwareCRDLister kcp.ClusterAwareCRDClusterLister
}
// Complete fills in any fields not set that are required to have valid data. It's mutating the receiver.
@@ -161,48 +181,73 @@ func (c completedConfig) New(delegationTarget genericapiserver.DelegationTarget)
return nil, err
}
- crdClient, err := clientset.NewForConfig(s.GenericAPIServer.LoopbackClientConfig)
- if err != nil {
- // it's really bad that this is leaking here, but until we can fix the test (which I'm pretty sure isn't even testing what it wants to test),
- // we need to be able to move forward
- return nil, fmt.Errorf("failed to create clientset: %v", err)
+ crdClient := c.ExtraConfig.Client
+ if crdClient == nil {
+ crdClient, err = kcpapiextensionsv1client.NewForConfig(s.GenericAPIServer.LoopbackClientConfig)
+ if err != nil {
+ // it's really bad that this is leaking here, but until we can fix the test (which I'm pretty sure isn't even testing what it wants to test),
+ // we need to be able to move forward
+ return nil, fmt.Errorf("failed to create clientset: %v", err)
+ }
}
- s.Informers = externalinformers.NewSharedInformerFactory(crdClient, 5*time.Minute)
+
+ s.Informers = c.ExtraConfig.Informers
+ if s.Informers == nil {
+ s.Informers = kcpapiextensionsv1informers.NewSharedInformerFactory(crdClient, 5*time.Minute)
+ }
+
+ s.ClusterAwareCRDLister = c.ExtraConfig.ClusterAwareCRDLister
delegateHandler := delegationTarget.UnprotectedHandler()
if delegateHandler == nil {
delegateHandler = http.NotFoundHandler()
}
- versionDiscoveryHandler := &versionDiscoveryHandler{
- discovery: map[schema.GroupVersion]*discovery.APIVersionHandler{},
+ s.versionDiscoveryHandler = &versionDiscoveryHandler{
+ crdLister: c.ExtraConfig.ClusterAwareCRDLister,
delegate: delegateHandler,
}
- groupDiscoveryHandler := &groupDiscoveryHandler{
- discovery: map[string]*discovery.APIGroupHandler{},
+
+ s.groupDiscoveryHandler = &groupDiscoveryHandler{
+ crdLister: c.ExtraConfig.ClusterAwareCRDLister,
+ delegate: delegateHandler,
+ }
+
+ s.rootDiscoveryHandler = &rootDiscoveryHandler{
+ crdLister: c.ExtraConfig.ClusterAwareCRDLister,
delegate: delegateHandler,
}
+ s.DiscoveryGroupLister = s.rootDiscoveryHandler
+
establishingController := establish.NewEstablishingController(s.Informers.Apiextensions().V1().CustomResourceDefinitions(), crdClient.ApiextensionsV1())
+
crdHandler, err := NewCustomResourceDefinitionHandler(
- versionDiscoveryHandler,
- groupDiscoveryHandler,
+ s.versionDiscoveryHandler,
+ s.groupDiscoveryHandler,
s.Informers.Apiextensions().V1().CustomResourceDefinitions(),
delegateHandler,
c.ExtraConfig.CRDRESTOptionsGetter,
c.GenericConfig.AdmissionControl,
establishingController,
- c.ExtraConfig.ServiceResolver,
- c.ExtraConfig.AuthResolverWrapper,
+ c.ExtraConfig.ConversionFactory,
c.ExtraConfig.MasterCount,
s.GenericAPIServer.Authorizer,
c.GenericConfig.RequestTimeout,
time.Duration(c.GenericConfig.MinRequestTimeout)*time.Second,
apiGroupInfo.StaticOpenAPISpec,
c.GenericConfig.MaxRequestBodyBytes,
+ c.ExtraConfig.DisableServerSideApply,
)
if err != nil {
return nil, err
}
+ s.crdHandler = crdHandler
+
+ // Begin kcp additions
+ crdHandler.clusterAwareCRDLister = c.ExtraConfig.ClusterAwareCRDLister
+ crdHandler.tableConverterProvider = c.ExtraConfig.TableConverterProvider
+ // End kcp additions
+
s.GenericAPIServer.Handler.NonGoRestfulMux.Handle("/apis", crdHandler)
s.GenericAPIServer.Handler.NonGoRestfulMux.HandlePrefix("/apis/", crdHandler)
s.GenericAPIServer.RegisterDestroyFunc(crdHandler.destroy)
@@ -211,8 +256,10 @@ func (c completedConfig) New(delegationTarget genericapiserver.DelegationTarget)
if aggregatedDiscoveryManager != nil {
aggregatedDiscoveryManager = aggregatedDiscoveryManager.WithSource(aggregated.CRDSource)
}
- discoveryController := NewDiscoveryController(s.Informers.Apiextensions().V1().CustomResourceDefinitions(), versionDiscoveryHandler, groupDiscoveryHandler, aggregatedDiscoveryManager)
- namingController := status.NewNamingConditionController(klog.TODO() /* for contextual logging */, s.Informers.Apiextensions().V1().CustomResourceDefinitions(), crdClient.ApiextensionsV1())
+ // HACK: Added to allow serving core resources registered through CRDs (for the KCP scenario)
+ s.GenericAPIServer.Handler.NonGoRestfulMux.UnlistedHandlePrefix("/api/v1/", crdHandler)
+
+ namingController := status.NewNamingConditionController(klog.TODO() /* for contextual logging */, s.Informers.Apiextensions().V1().CustomResourceDefinitions(), crdClient.ApiextensionsV1(), s.ClusterAwareCRDLister)
nonStructuralSchemaController := nonstructuralschema.NewConditionController(s.Informers.Apiextensions().V1().CustomResourceDefinitions(), crdClient.ApiextensionsV1())
apiApprovalController := apiapproval.NewKubernetesAPIApprovalPolicyConformantConditionController(s.Informers.Apiextensions().V1().CustomResourceDefinitions(), crdClient.ApiextensionsV1())
finalizingController := finalizer.NewCRDFinalizer(
@@ -248,13 +295,6 @@ func (c completedConfig) New(delegationTarget genericapiserver.DelegationTarget)
go apiApprovalController.Run(5, hookContext.Done())
go finalizingController.Run(5, hookContext.Done())
- discoverySyncedCh := make(chan struct{})
- go discoveryController.Run(hookContext.Done(), discoverySyncedCh)
- select {
- case <-hookContext.Done():
- case <-discoverySyncedCh:
- }
-
return nil
})
// we don't want to report healthy until we can handle all CRDs that have already been registered. Waiting for the informer
diff --git a/staging/src/k8s.io/apiextensions-apiserver/pkg/apiserver/conversion/converter.go b/staging/src/k8s.io/apiextensions-apiserver/pkg/apiserver/conversion/converter.go
index 7fa43af8eec16..51efc1cfceedd 100644
--- a/staging/src/k8s.io/apiextensions-apiserver/pkg/apiserver/conversion/converter.go
+++ b/staging/src/k8s.io/apiextensions-apiserver/pkg/apiserver/conversion/converter.go
@@ -23,15 +23,30 @@ import (
autoscalingv1 "k8s.io/api/autoscaling/v1"
apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
apiextensionsfeatures "k8s.io/apiextensions-apiserver/pkg/features"
+ apivalidation "k8s.io/apimachinery/pkg/api/validation"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
+ metav1validation "k8s.io/apimachinery/pkg/apis/meta/v1/validation"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apimachinery/pkg/util/sets"
+ "k8s.io/apimachinery/pkg/util/validation/field"
utilfeature "k8s.io/apiserver/pkg/util/feature"
"k8s.io/apiserver/pkg/util/webhook"
typedscheme "k8s.io/client-go/kubernetes/scheme"
)
+// Factory is able to create a new CRConverter for crd.
+type Factory interface {
+ // NewConverter returns a CRConverter capable of converting crd's versions.
+ //
+ // For proper conversion, the returned CRConverter must be used via NewDelegatingConverter.
+ //
+ // When implementing a CRConverter, you do not need to: test for valid API versions or no-op
+ // conversions, handle field selector logic, or handle scale conversions; these are all handled
+ // via NewDelegatingConverter.
+ NewConverter(crd *apiextensionsv1.CustomResourceDefinition) (CRConverter, error)
+}
+
// CRConverterFactory is the factory for all CR converters.
type CRConverterFactory struct {
// webhookConverterFactory is the factory for webhook converters.
@@ -43,7 +58,7 @@ type CRConverterFactory struct {
// apiextensions-apiserver runs.
var converterMetricFactorySingleton = newConverterMetricFactory()
-// NewCRConverterFactory creates a new CRConverterFactory
+// NewCRConverterFactory creates a new CRConverterFactory that supports none and webhook conversion strategies.
func NewCRConverterFactory(serviceResolver webhook.ServiceResolver, authResolverWrapper webhook.AuthenticationInfoResolverWrapper) (*CRConverterFactory, error) {
converterFactory := &CRConverterFactory{}
webhookConverterFactory, err := newWebhookConverterFactory(serviceResolver, authResolverWrapper)
@@ -54,28 +69,30 @@ func NewCRConverterFactory(serviceResolver webhook.ServiceResolver, authResolver
return converterFactory, nil
}
-// NewConverter returns a new CR converter based on the conversion settings in crd object.
-func (m *CRConverterFactory) NewConverter(crd *apiextensionsv1.CustomResourceDefinition) (safe, unsafe runtime.ObjectConvertor, err error) {
- validVersions := map[schema.GroupVersion]bool{}
- for _, version := range crd.Spec.Versions {
- validVersions[schema.GroupVersion{Group: crd.Spec.Group, Version: version.Name}] = true
- }
-
- var converter crConverterInterface
+// NewConverter creates a new CRConverter based on the crd's conversion strategy. Supported strategies are none and
+// webhook.
+func (f *CRConverterFactory) NewConverter(crd *apiextensionsv1.CustomResourceDefinition) (CRConverter, error) {
switch crd.Spec.Conversion.Strategy {
case apiextensionsv1.NoneConverter:
- converter = &nopConverter{}
+ return NewNOPConverter(), nil
case apiextensionsv1.WebhookConverter:
- converter, err = m.webhookConverterFactory.NewWebhookConverter(crd)
+ converter, err := f.webhookConverterFactory.NewWebhookConverter(crd)
if err != nil {
- return nil, nil, err
+ return nil, err
}
- converter, err = converterMetricFactorySingleton.addMetrics(crd.Name, converter)
- if err != nil {
- return nil, nil, err
- }
- default:
- return nil, nil, fmt.Errorf("unknown conversion strategy %q for CRD %s", crd.Spec.Conversion.Strategy, crd.Name)
+ return converterMetricFactorySingleton.addMetrics(crd.Name, converter)
+ }
+
+ return nil, fmt.Errorf("unknown conversion strategy %q for CRD %s", crd.Spec.Conversion.Strategy, crd.Name)
+}
+
+// NewDelegatingConverter returns new safe and unsafe converters based on the conversion settings in
+// crd. These converters contain logic common to all converters, and they delegate the actual
+// specific version-to-version conversion logic to the delegate.
+func NewDelegatingConverter(crd *apiextensionsv1.CustomResourceDefinition, delegate CRConverter) (safe, unsafe runtime.ObjectConvertor, err error) {
+ validVersions := map[schema.GroupVersion]bool{}
+ for _, version := range crd.Spec.Versions {
+ validVersions[schema.GroupVersion{Group: crd.Spec.Group, Version: version.Name}] = true
}
// Determine whether we should expect to be asked to "convert" autoscaling/v1 Scale types
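
With the converter now operating on whole `UnstructuredList`s and the generic plumbing factored into `NewDelegatingConverter`, a custom converter shrinks to a single function. A hedged sketch: the identity delegate below mirrors what `NewNOPConverter` presumably does, and is illustrative rather than part of the diff.

```go
package conversionexample

import (
	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	"k8s.io/apiextensions-apiserver/pkg/apiserver/conversion"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/runtime/schema"
)

// newIdentityConverters builds the safe/unsafe converter pair for a CRD whose
// versions share one schema. The delegate only rewrites apiVersion;
// NewDelegatingConverter contributes the valid-version checks, scale and
// field-selector handling, and single-object/list plumbing.
func newIdentityConverters(crd *apiextensionsv1.CustomResourceDefinition) (safe, unsafe runtime.ObjectConvertor, err error) {
	identity := conversion.CRConverterFunc(func(in *unstructured.UnstructuredList, targetGV schema.GroupVersion) (*unstructured.UnstructuredList, error) {
		for i := range in.Items {
			in.Items[i].SetAPIVersion(targetGV.String())
		}
		return in, nil
	})
	return conversion.NewDelegatingConverter(crd, identity)
}
```
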
@@ -95,35 +112,49 @@ func (m *CRConverterFactory) NewConverter(crd *apiextensionsv1.CustomResourceDef
}
}
- unsafe = &crConverter{
+ unsafe = &delegatingCRConverter{
convertScale: convertScale,
validVersions: validVersions,
clusterScoped: crd.Spec.Scope == apiextensionsv1.ClusterScoped,
- converter: converter,
+ converter: delegate,
selectableFields: selectableFields,
+ // If this is a wildcard partial metadata CRD variant, we don't require that the CRD serves the appropriate
+ // version, because the schema does not matter.
+ requireValidVersion: !strings.HasSuffix(string(crd.UID), ".wildcard.partial-metadata"),
}
return &safeConverterWrapper{unsafe}, unsafe, nil
}
-// crConverterInterface is the interface all cr converters must implement
-type crConverterInterface interface {
+// CRConverter is the interface all CR converters must implement
+type CRConverter interface {
// Convert converts in object to the given gvk and returns the converted object.
// Note that the function may mutate in object and return it. A safe wrapper will make sure
// a safe converter will be returned.
- Convert(in runtime.Object, targetGVK schema.GroupVersion) (runtime.Object, error)
+ Convert(in *unstructured.UnstructuredList, targetGV schema.GroupVersion) (*unstructured.UnstructuredList, error)
}
-// crConverter extends the delegate converter with generic CR conversion behaviour. The delegate will implement the
+// CRConverterFunc wraps a CR conversion func into a CRConverter.
+type CRConverterFunc func(in *unstructured.UnstructuredList, targetGVK schema.GroupVersion) (*unstructured.UnstructuredList, error)
+
+func (fn CRConverterFunc) Convert(in *unstructured.UnstructuredList, targetGV schema.GroupVersion) (*unstructured.UnstructuredList, error) {
+ return fn(in, targetGV)
+}
+
+// delegatingCRConverter extends the delegate converter with generic CR conversion behaviour. The delegate will implement the
// user defined conversion strategy given in the CustomResourceDefinition.
-type crConverter struct {
+type delegatingCRConverter struct {
convertScale bool
- converter crConverterInterface
+ converter CRConverter
validVersions map[schema.GroupVersion]bool
clusterScoped bool
selectableFields map[schema.GroupVersion]sets.Set[string]
+
+ // If true, require that the CRD serves the appropriate version
+ requireValidVersion bool
}
-func (c *crConverter) ConvertFieldLabel(gvk schema.GroupVersionKind, label, value string) (string, string, error) {
+func (c *delegatingCRConverter) ConvertFieldLabel(gvk schema.GroupVersionKind, label, value string) (string, string, error) {
+ // We currently only support metadata.namespace and metadata.name.
switch {
case label == "metadata.name":
return label, value, nil
@@ -140,7 +171,7 @@ func (c *crConverter) ConvertFieldLabel(gvk schema.GroupVersionKind, label, valu
}
}
-func (c *crConverter) Convert(in, out, context interface{}) error {
+func (c *delegatingCRConverter) Convert(in, out, context interface{}) error {
// Special-case typed scale conversion if this custom resource supports a scale endpoint
if c.convertScale {
_, isInScale := in.(*autoscalingv1.Scale)
@@ -178,39 +209,154 @@ func (c *crConverter) Convert(in, out, context interface{}) error {
// The in object can be a single object or a UnstructuredList. CRD storage implementation creates an
// UnstructuredList with the request's GV, populates it from storage, then calls conversion to convert
// the individual items. This function assumes it never gets a v1.List.
-func (c *crConverter) ConvertToVersion(in runtime.Object, target runtime.GroupVersioner) (runtime.Object, error) {
+func (c *delegatingCRConverter) ConvertToVersion(in runtime.Object, target runtime.GroupVersioner) (runtime.Object, error) {
+ // Special-case typed scale conversion if this custom resource supports a scale endpoint
+ if c.convertScale {
+ if _, isInScale := in.(*autoscalingv1.Scale); isInScale {
+ return typedscheme.Scheme.ConvertToVersion(in, target)
+ }
+ }
+
fromGVK := in.GetObjectKind().GroupVersionKind()
toGVK, ok := target.KindForGroupVersionKinds([]schema.GroupVersionKind{fromGVK})
if !ok {
// TODO: should this be a typed error?
return nil, fmt.Errorf("%v is unstructured and is not suitable for converting to %q", fromGVK.String(), target)
}
- // Special-case typed scale conversion if this custom resource supports a scale endpoint
- if c.convertScale {
- if _, isInScale := in.(*autoscalingv1.Scale); isInScale {
- return typedscheme.Scheme.ConvertToVersion(in, target)
- }
+
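+ // Normalize the input: the delegate CRConverter always operates on an
+ // UnstructuredList, so a single object is wrapped into a one-item list here
+ // and unwrapped again before returning.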
+ isList := false
+ var list *unstructured.UnstructuredList
+ switch t := in.(type) {
+ case *unstructured.Unstructured:
+ list = &unstructured.UnstructuredList{Items: []unstructured.Unstructured{*t}}
+ case *unstructured.UnstructuredList:
+ list = t
+ isList = true
+ default:
+ return nil, fmt.Errorf("unexpected type %T", in)
}
- if !c.validVersions[toGVK.GroupVersion()] {
- return nil, fmt.Errorf("request to convert CR to an invalid group/version: %s", toGVK.GroupVersion().String())
+ desiredAPIVersion := toGVK.GroupVersion().String()
+ if c.requireValidVersion && !c.validVersions[toGVK.GroupVersion()] {
+ return nil, fmt.Errorf("request to convert CR to an invalid group/version: %s", desiredAPIVersion)
}
// Note that even if the request is for a list, the GV of the request UnstructuredList is what
// is expected to convert to. As mentioned in the function's document, it is not expected to
// get a v1.List.
- if !c.validVersions[fromGVK.GroupVersion()] {
+ if c.requireValidVersion && !c.validVersions[fromGVK.GroupVersion()] {
return nil, fmt.Errorf("request to convert CR from an invalid group/version: %s", fromGVK.GroupVersion().String())
}
- // Check list item's apiVersion
- if list, ok := in.(*unstructured.UnstructuredList); ok {
- for i := range list.Items {
- expectedGV := list.Items[i].GroupVersionKind().GroupVersion()
- if !c.validVersions[expectedGV] {
- return nil, fmt.Errorf("request to convert CR list failed, list index %d has invalid group/version: %s", i, expectedGV.String())
+
+ objectsToConvert, err := kcpGetObjectsToConvert(list, desiredAPIVersion, c.validVersions, c.requireValidVersion)
+ if err != nil {
+ return nil, err
+ }
+
+ objCount := len(objectsToConvert)
+ if objCount == 0 {
+ // no objects needed conversion
+ if !isList {
+ // for a single item, return as-is
+ return in, nil
+ }
+ // for a list, set the version of the top-level list object (all individual objects are already in the correct version)
+ list.SetAPIVersion(desiredAPIVersion)
+ return list, nil
+ }
+
+ // A smoke test in API machinery calls the converter on empty objects during startup. The test is initiated here:
+ // https://github.com/kubernetes/kubernetes/blob/dbb448bbdcb9e440eee57024ffa5f1698956a054/staging/src/k8s.io/apiserver/pkg/storage/cacher/cacher.go#L201
+ if isEmptyUnstructuredObject(in) {
+ converted, err := NewNOPConverter().Convert(list, toGVK.GroupVersion())
+ if err != nil {
+ return nil, err
+ }
+ if !isList {
+ return &converted.Items[0], nil
+ }
+ return converted, nil
+ }
+
+ // Deep-copy ObjectMeta because the converter might mutate the objects, and we
+ // still need the original ObjectMeta for validation and restoration afterwards.
+ for i := range objectsToConvert {
+ original := objectsToConvert[i].Object
+ objectsToConvert[i].Object = make(map[string]interface{}, len(original))
+ for k, v := range original {
+ if k == "metadata" {
+ v = runtime.DeepCopyJSONValue(v)
}
+ objectsToConvert[i].Object[k] = v
}
}
- return c.converter.Convert(in, toGVK.GroupVersion())
+
+ // Do the (potentially mutating) conversion.
+ convertedObjects, err := c.converter.Convert(&unstructured.UnstructuredList{
+ Object: list.Object,
+ Items: objectsToConvert,
+ }, toGVK.GroupVersion())
+ if err != nil {
+ return nil, fmt.Errorf("conversion for %v failed: %w", in.GetObjectKind().GroupVersionKind(), err)
+ }
+ if len(convertedObjects.Items) != len(objectsToConvert) {
+ return nil, fmt.Errorf("conversion for %v returned %d objects, expected %d", in.GetObjectKind().GroupVersionKind(), len(convertedObjects.Items), len(objectsToConvert))
+ }
+
+ // Fill back in the converted objects from the response at the right spots.
+ // The response list might be sparse because objects had the right version already.
+ convertedList := list
+ convertedList.SetAPIVersion(desiredAPIVersion)
+ convertedIndex := 0
+ for i := range convertedList.Items {
+ original := &convertedList.Items[i]
+ if original.GetAPIVersion() == desiredAPIVersion {
+ // This item has not been sent for conversion, and therefore does not show up in the response.
+ // convertedList has the right item already.
+ continue
+ }
+ converted := &convertedObjects.Items[convertedIndex]
+ if expected, got := toGVK.GroupVersion(), converted.GetObjectKind().GroupVersionKind().GroupVersion(); expected != got {
+ return nil, fmt.Errorf("conversion for %v returned invalid converted object at index %v: invalid groupVersion (expected %v, received %v)", in.GetObjectKind().GroupVersionKind(), convertedIndex, expected, got)
+ }
+ if expected, got := original.GetObjectKind().GroupVersionKind().Kind, converted.GetObjectKind().GroupVersionKind().Kind; expected != got {
+ return nil, fmt.Errorf("conversion for %v returned invalid converted object at index %v: invalid kind (expected %v, received %v)", in.GetObjectKind().GroupVersionKind(), convertedIndex, expected, got)
+ }
+ if err := validateConvertedObject(original, converted); err != nil {
+ return nil, fmt.Errorf("conversion for %v returned invalid converted object at index %v: %v", in.GetObjectKind().GroupVersionKind(), convertedIndex, err)
+ }
+ if err := restoreObjectMeta(original, converted); err != nil {
+ return nil, fmt.Errorf("conversion for %v returned invalid metadata in object at index %v: %v", in.GetObjectKind().GroupVersionKind(), convertedIndex, err)
+ }
+ convertedIndex++
+ convertedList.Items[i] = *converted
+ }
+
+ if isList {
+ return convertedList, nil
+ }
+
+ return &convertedList.Items[0], nil
+}
+
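+// kcpGetObjectsToConvert returns a (potentially empty) list of the items that
+// are not already in the desired API version. If requireValidVersion is true,
+// it returns an error when an item's group/version is not in validVersions.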
+func kcpGetObjectsToConvert(
+ list *unstructured.UnstructuredList,
+ desiredAPIVersion string,
+ validVersions map[schema.GroupVersion]bool,
+ requireValidVersion bool,
+) ([]unstructured.Unstructured, error) {
+ var objectsToConvert []unstructured.Unstructured
+ for i := range list.Items {
+ expectedGV := list.Items[i].GroupVersionKind().GroupVersion()
+ if requireValidVersion && !validVersions[expectedGV] {
+ return nil, fmt.Errorf("request to convert CR list failed, list index %d has invalid group/version: %s", i, expectedGV.String())
+ }
+
+ // Only send the item for conversion if the apiVersion differs from the desired one
+ if list.Items[i].GetAPIVersion() != desiredAPIVersion {
+ objectsToConvert = append(objectsToConvert, list.Items[i])
+ }
+ }
+ return objectsToConvert, nil
}
- // safeConverterWrapper is a wrapper over an unsafe object converter that makes copy of the input and then delegate to the unsafe converter.
+ // safeConverterWrapper is a wrapper over an unsafe object converter that makes a copy of the input and then delegates to the unsafe converter.
@@ -238,3 +384,118 @@ func (c *safeConverterWrapper) Convert(in, out, context interface{}) error {
func (c *safeConverterWrapper) ConvertToVersion(in runtime.Object, target runtime.GroupVersioner) (runtime.Object, error) {
return c.unsafe.ConvertToVersion(in.DeepCopyObject(), target)
}
+
+// isEmptyUnstructuredObject returns true if in is an empty unstructured object, i.e. an unstructured object that does
+// not have any field except apiVersion and kind.
+func isEmptyUnstructuredObject(in runtime.Object) bool {
+ u, ok := in.(*unstructured.Unstructured)
+ if !ok {
+ return false
+ }
+ if len(u.Object) != 2 {
+ return false
+ }
+ if _, ok := u.Object["kind"]; !ok {
+ return false
+ }
+ if _, ok := u.Object["apiVersion"]; !ok {
+ return false
+ }
+ return true
+}
+
+// validateConvertedObject checks that ObjectMeta fields match, with the exception of
+// labels and annotations.
+func validateConvertedObject(in, out *unstructured.Unstructured) error {
+ if e, a := in.GetKind(), out.GetKind(); e != a {
+ return fmt.Errorf("must have the same kind: %v != %v", e, a)
+ }
+ if e, a := in.GetName(), out.GetName(); e != a {
+ return fmt.Errorf("must have the same name: %v != %v", e, a)
+ }
+ if e, a := in.GetNamespace(), out.GetNamespace(); e != a {
+ return fmt.Errorf("must have the same namespace: %v != %v", e, a)
+ }
+ if e, a := in.GetUID(), out.GetUID(); e != a {
+ return fmt.Errorf("must have the same UID: %v != %v", e, a)
+ }
+ return nil
+}
+
+// restoreObjectMeta copies metadata from original into converted, while preserving labels and annotations from converted.
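+// Illustration of the resulting semantics: given
+//	original.metadata  = {name: "foo", labels: {"a": "b"}, finalizers: ["x"]}
+//	converted.metadata = {name: "foo", labels: {"a": "b", "c": "d"}}
+// the restored object keeps name and finalizers from original, but takes the
+// labels from converted: {name: "foo", labels: {"a": "b", "c": "d"}, finalizers: ["x"]}.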
+func restoreObjectMeta(original, converted *unstructured.Unstructured) error {
+ cm, found := converted.Object["metadata"]
+ om, previouslyFound := original.Object["metadata"]
+ switch {
+ case !found && !previouslyFound:
+ return nil
+ case previouslyFound && !found:
+ return fmt.Errorf("missing metadata in converted object")
+ case !previouslyFound && found:
+ om = map[string]interface{}{}
+ }
+
+ convertedMeta, ok := cm.(map[string]interface{})
+ if !ok {
+ return fmt.Errorf("invalid metadata of type %T in converted object", cm)
+ }
+ originalMeta, ok := om.(map[string]interface{})
+ if !ok {
+ return fmt.Errorf("invalid metadata of type %T in input object", om)
+ }
+
+ result := converted
+ if previouslyFound {
+ result.Object["metadata"] = originalMeta
+ } else {
+ result.Object["metadata"] = map[string]interface{}{}
+ }
+ resultMeta := result.Object["metadata"].(map[string]interface{})
+
+ for _, fld := range []string{"labels", "annotations"} {
+ obj, found := convertedMeta[fld]
+ if !found || obj == nil {
+ delete(resultMeta, fld)
+ continue
+ }
+
+ convertedField, ok := obj.(map[string]interface{})
+ if !ok {
+ return fmt.Errorf("invalid metadata.%s of type %T in converted object", fld, obj)
+ }
+ originalField, ok := originalMeta[fld].(map[string]interface{})
+ if !ok && originalMeta[fld] != nil {
+ return fmt.Errorf("invalid metadata.%s of type %T in original object", fld, originalMeta[fld])
+ }
+
+ somethingChanged := len(originalField) != len(convertedField)
+ for k, v := range convertedField {
+ if _, ok := v.(string); !ok {
+ return fmt.Errorf("metadata.%s[%s] must be a string, but is %T in converted object", fld, k, v)
+ }
+ if originalField[k] != interface{}(v) {
+ somethingChanged = true
+ }
+ }
+
+ if somethingChanged {
+ stringMap := make(map[string]string, len(convertedField))
+ for k, v := range convertedField {
+ stringMap[k] = v.(string)
+ }
+ var errs field.ErrorList
+ if fld == "labels" {
+ errs = metav1validation.ValidateLabels(stringMap, field.NewPath("metadata", "labels"))
+ } else {
+ errs = apivalidation.ValidateAnnotations(stringMap, field.NewPath("metadata", "annotations"))
+ }
+ if len(errs) > 0 {
+ return errs.ToAggregate()
+ }
+ }
+
+ resultMeta[fld] = convertedField
+ }
+
+ return nil
+}
diff --git a/staging/src/k8s.io/apiextensions-apiserver/pkg/apiserver/conversion/converter_test.go b/staging/src/k8s.io/apiextensions-apiserver/pkg/apiserver/conversion/converter_test.go
index 3866dbd36bf7d..5a0cefb673877 100644
--- a/staging/src/k8s.io/apiextensions-apiserver/pkg/apiserver/conversion/converter_test.go
+++ b/staging/src/k8s.io/apiextensions-apiserver/pkg/apiserver/conversion/converter_test.go
@@ -17,15 +17,17 @@ limitations under the License.
package conversion
import (
+ "fmt"
"reflect"
"strings"
"testing"
+ "github.com/google/go-cmp/cmp"
+
apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
- "k8s.io/apiserver/pkg/util/webhook"
)
func TestConversion(t *testing.T) {
@@ -46,6 +48,7 @@ func TestConversion(t *testing.T) {
SourceObject: &unstructured.Unstructured{
Object: map[string]interface{}{
"apiVersion": "example.com/v1",
+ "metadata": map[string]interface{}{"name": "foo1"},
"other": "data",
"kind": "foo",
},
@@ -53,6 +56,7 @@ func TestConversion(t *testing.T) {
ExpectedObject: &unstructured.Unstructured{
Object: map[string]interface{}{
"apiVersion": "example.com/v2",
+ "metadata": map[string]interface{}{"name": "foo1"},
"other": "data",
"kind": "foo",
},
@@ -86,6 +90,7 @@ func TestConversion(t *testing.T) {
{
Object: map[string]interface{}{
"apiVersion": "example.com/v1",
+ "metadata": map[string]interface{}{"name": "foo1"},
"kind": "foo",
"other": "data",
},
@@ -93,6 +98,7 @@ func TestConversion(t *testing.T) {
{
Object: map[string]interface{}{
"apiVersion": "example.com/v1",
+ "metadata": map[string]interface{}{"name": "foo2"},
"kind": "foo",
"other": "data2",
},
@@ -108,6 +114,7 @@ func TestConversion(t *testing.T) {
{
Object: map[string]interface{}{
"apiVersion": "example.com/v2",
+ "metadata": map[string]interface{}{"name": "foo1"},
"kind": "foo",
"other": "data",
},
@@ -115,6 +122,7 @@ func TestConversion(t *testing.T) {
{
Object: map[string]interface{}{
"apiVersion": "example.com/v2",
+ "metadata": map[string]interface{}{"name": "foo2"},
"kind": "foo",
"other": "data2",
},
@@ -152,44 +160,679 @@ func TestConversion(t *testing.T) {
},
ExpectedFailure: "invalid group/version: example.com/v3",
},
+ {
+ Name: "list_with_invalid_gv",
+ ValidVersions: []string{"example.com/v1", "example.com/v2"},
+ ClusterScoped: false,
+ ToVersion: "example.com/v2",
+ SourceObject: &unstructured.UnstructuredList{
+ Object: map[string]interface{}{
+ "apiVersion": "example.com/v1",
+ "kind": "fooList",
+ },
+ Items: []unstructured.Unstructured{
+ {
+ Object: map[string]interface{}{
+ "apiVersion": "example.com/v1",
+ "kind": "foo",
+ "other": "data",
+ },
+ },
+ {
+ Object: map[string]interface{}{
+ "apiVersion": "example.com/v3",
+ "kind": "foo",
+ "other": "data2",
+ },
+ },
+ },
+ },
+ ExpectedFailure: "invalid group/version: example.com/v3",
+ },
}
- CRConverterFactory, err := NewCRConverterFactory(nil, func(resolver webhook.AuthenticationInfoResolver) webhook.AuthenticationInfoResolver { return nil })
- if err != nil {
- t.Fatalf("Cannot create conversion factory: %v", err)
- }
for _, test := range tests {
- testCRD := apiextensionsv1.CustomResourceDefinition{
- Spec: apiextensionsv1.CustomResourceDefinitionSpec{
- Conversion: &apiextensionsv1.CustomResourceConversion{
- Strategy: apiextensionsv1.NoneConverter,
- },
- },
- }
- for _, v := range test.ValidVersions {
- gv, _ := schema.ParseGroupVersion(v)
- testCRD.Spec.Versions = append(testCRD.Spec.Versions, apiextensionsv1.CustomResourceDefinitionVersion{Name: gv.Version, Served: true})
- testCRD.Spec.Group = gv.Group
- }
- safeConverter, _, err := CRConverterFactory.NewConverter(&testCRD)
- if err != nil {
- t.Fatalf("Cannot create converter: %v", err)
- }
- o := test.SourceObject.DeepCopyObject()
- toVersion, _ := schema.ParseGroupVersion(test.ToVersion)
- toVersions := schema.GroupVersions{toVersion}
- actual, err := safeConverter.ConvertToVersion(o, toVersions)
- if test.ExpectedFailure != "" {
- if err == nil || !strings.Contains(err.Error(), test.ExpectedFailure) {
- t.Fatalf("%s: Expected the call to fail with error message `%s` but err=%v", test.Name, test.ExpectedFailure, err)
+ t.Run(test.Name, func(t *testing.T) {
+ testCRD := apiextensionsv1.CustomResourceDefinition{
+ Spec: apiextensionsv1.CustomResourceDefinitionSpec{
+ Conversion: &apiextensionsv1.CustomResourceConversion{
+ Strategy: apiextensionsv1.NoneConverter,
+ },
+ },
}
- } else {
+ for _, v := range test.ValidVersions {
+ gv, _ := schema.ParseGroupVersion(v)
+ testCRD.Spec.Versions = append(testCRD.Spec.Versions, apiextensionsv1.CustomResourceDefinitionVersion{Name: gv.Version, Served: true})
+ testCRD.Spec.Group = gv.Group
+ }
+ safeConverter, _, err := NewDelegatingConverter(&testCRD, NewNOPConverter())
if err != nil {
- t.Fatalf("%s: conversion failed with : %v", test.Name, err)
+ t.Fatalf("Cannot create converter: %v", err)
+ }
+ o := test.SourceObject.DeepCopyObject()
+ toVersion, _ := schema.ParseGroupVersion(test.ToVersion)
+ toVersions := schema.GroupVersions{toVersion}
+ actual, err := safeConverter.ConvertToVersion(o, toVersions)
+ if test.ExpectedFailure != "" {
+ if err == nil || !strings.Contains(err.Error(), test.ExpectedFailure) {
+ t.Fatalf("%s: Expected the call to fail with error message `%s` but err=%v", test.Name, test.ExpectedFailure, err)
+ }
+ } else {
+ if err != nil {
+ t.Fatalf("%s: conversion failed with : %v", test.Name, err)
+ }
+ if !reflect.DeepEqual(test.ExpectedObject, actual) {
+ t.Fatalf("%s: Expected = %v, Actual = %v", test.Name, test.ExpectedObject, actual)
+ }
+ }
+ })
+ }
+}
+
+func TestGetObjectsToConvert(t *testing.T) {
+ v1Object := &unstructured.Unstructured{Object: map[string]interface{}{"apiVersion": "foo/v1", "kind": "Widget", "metadata": map[string]interface{}{"name": "myv1"}}}
+ v2Object := &unstructured.Unstructured{Object: map[string]interface{}{"apiVersion": "foo/v2", "kind": "Widget", "metadata": map[string]interface{}{"name": "myv2"}}}
+ v3Object := &unstructured.Unstructured{Object: map[string]interface{}{"apiVersion": "foo/v3", "kind": "Widget", "metadata": map[string]interface{}{"name": "myv3"}}}
+
+ testcases := []struct {
+ Name string
+ List *unstructured.UnstructuredList
+ APIVersion string
+ ValidVersions map[schema.GroupVersion]bool
+
+ ExpectObjects []unstructured.Unstructured
+ ExpectError bool
+ }{
+ {
+ Name: "empty list",
+ List: &unstructured.UnstructuredList{},
+ APIVersion: "foo/v1",
+ ValidVersions: map[schema.GroupVersion]bool{
+ {Group: "foo", Version: "v1"}: true,
+ },
+ ExpectObjects: nil,
+ },
+ {
+ Name: "one-item list, in desired version",
+ List: &unstructured.UnstructuredList{
+ Items: []unstructured.Unstructured{*v1Object},
+ },
+ ValidVersions: map[schema.GroupVersion]bool{
+ {Group: "foo", Version: "v1"}: true,
+ },
+ APIVersion: "foo/v1",
+ ExpectObjects: nil,
+ },
+ {
+ Name: "one-item list, not in desired version",
+ List: &unstructured.UnstructuredList{
+ Items: []unstructured.Unstructured{*v2Object},
+ },
+ ValidVersions: map[schema.GroupVersion]bool{
+ {Group: "foo", Version: "v1"}: true,
+ {Group: "foo", Version: "v2"}: true,
+ },
+ APIVersion: "foo/v1",
+ ExpectObjects: []unstructured.Unstructured{*v2Object},
+ },
+ {
+ Name: "multi-item list, in desired version",
+ List: &unstructured.UnstructuredList{
+ Items: []unstructured.Unstructured{*v1Object, *v1Object, *v1Object},
+ },
+ ValidVersions: map[schema.GroupVersion]bool{
+ {Group: "foo", Version: "v1"}: true,
+ {Group: "foo", Version: "v2"}: true,
+ },
+ APIVersion: "foo/v1",
+ ExpectObjects: nil,
+ },
+ {
+ Name: "multi-item list, mixed versions",
+ List: &unstructured.UnstructuredList{
+ Items: []unstructured.Unstructured{*v1Object, *v2Object, *v3Object},
+ },
+ ValidVersions: map[schema.GroupVersion]bool{
+ {Group: "foo", Version: "v1"}: true,
+ {Group: "foo", Version: "v2"}: true,
+ {Group: "foo", Version: "v3"}: true,
+ },
+ APIVersion: "foo/v1",
+ ExpectObjects: []unstructured.Unstructured{*v2Object, *v3Object},
+ },
+ {
+ Name: "multi-item list, invalid versions",
+ List: &unstructured.UnstructuredList{
+ Items: []unstructured.Unstructured{*v1Object, *v2Object, *v3Object},
+ },
+ ValidVersions: map[schema.GroupVersion]bool{
+ {Group: "foo", Version: "v2"}: true,
+ {Group: "foo", Version: "v3"}: true,
+ },
+ APIVersion: "foo/v1",
+ ExpectObjects: nil,
+ ExpectError: true,
+ },
+ }
+ for _, tc := range testcases {
+ t.Run(tc.Name, func(t *testing.T) {
+ objects, err := kcpGetObjectsToConvert(tc.List, tc.APIVersion, tc.ValidVersions, true)
+ gotError := err != nil
+ if e, a := tc.ExpectError, gotError; e != a {
+ t.Fatalf("error: expected %t, got %t", e, a)
+ }
+ if !reflect.DeepEqual(objects, tc.ExpectObjects) {
+ t.Errorf("unexpected diff: %s", cmp.Diff(tc.ExpectObjects, objects))
+ }
+ })
+ }
+}
+
+func TestDelegatingCRConverterConvertToVersion(t *testing.T) {
+ type args struct {
+ in runtime.Object
+ target runtime.GroupVersioner
+ }
+ tests := []struct {
+ name string
+ converter CRConverter
+ args args
+ want runtime.Object
+ wantErr bool
+ }{
+ {
+ name: "empty",
+ converter: NewNOPConverter(),
+ args: args{
+ in: &unstructured.UnstructuredList{Object: map[string]interface{}{
+ "apiVersion": "example.com/v1",
+ "kind": "FooList",
+ }, Items: []unstructured.Unstructured{}},
+ target: schema.GroupVersion{Group: "example.com", Version: "v2"},
+ },
+ want: &unstructured.UnstructuredList{Object: map[string]interface{}{
+ "apiVersion": "example.com/v2",
+ "kind": "FooList",
+ }, Items: []unstructured.Unstructured{}},
+ },
+ {
+ name: "happy path in-place",
+ converter: NewNOPConverter(),
+ args: args{
+ in: &unstructured.UnstructuredList{Object: map[string]interface{}{
+ "apiVersion": "example.com/v1",
+ "kind": "FooList",
+ }, Items: []unstructured.Unstructured{
+ {Object: map[string]interface{}{
+ "apiVersion": "example.com/v1",
+ "kind": "Foo",
+ "metadata": map[string]interface{}{"name": "foo1"},
+ "spec": map[string]interface{}{},
+ }},
+ {Object: map[string]interface{}{
+ "apiVersion": "example.com/v2",
+ "kind": "Foo",
+ "metadata": map[string]interface{}{"name": "foo2"},
+ "spec": map[string]interface{}{},
+ }},
+ {Object: map[string]interface{}{
+ "apiVersion": "example.com/v1",
+ "kind": "Foo",
+ "metadata": map[string]interface{}{"name": "foo3"},
+ "spec": map[string]interface{}{},
+ }},
+ }},
+ target: schema.GroupVersion{Group: "example.com", Version: "v2"},
+ },
+ want: &unstructured.UnstructuredList{Object: map[string]interface{}{
+ "apiVersion": "example.com/v2",
+ "kind": "FooList",
+ }, Items: []unstructured.Unstructured{
+ {Object: map[string]interface{}{
+ "apiVersion": "example.com/v2",
+ "kind": "Foo",
+ "metadata": map[string]interface{}{"name": "foo1"},
+ "spec": map[string]interface{}{},
+ }},
+ {Object: map[string]interface{}{
+ "apiVersion": "example.com/v2",
+ "kind": "Foo",
+ "metadata": map[string]interface{}{"name": "foo2"},
+ "spec": map[string]interface{}{},
+ }},
+ {Object: map[string]interface{}{
+ "apiVersion": "example.com/v2",
+ "kind": "Foo",
+ "metadata": map[string]interface{}{"name": "foo3"},
+ "spec": map[string]interface{}{},
+ }},
+ }},
+ },
+ {
+ name: "happy path copying",
+ converter: CRConverterFunc(func(in *unstructured.UnstructuredList, targetGVK schema.GroupVersion) (*unstructured.UnstructuredList, error) {
+ return NewNOPConverter().Convert(in.DeepCopy(), targetGVK)
+ }),
+ args: args{
+ in: &unstructured.UnstructuredList{Object: map[string]interface{}{
+ "apiVersion": "example.com/v1",
+ "kind": "FooList",
+ }, Items: []unstructured.Unstructured{
+ {Object: map[string]interface{}{
+ "apiVersion": "example.com/v1",
+ "kind": "Foo",
+ "metadata": map[string]interface{}{"name": "foo1"},
+ "spec": map[string]interface{}{},
+ }},
+ {Object: map[string]interface{}{
+ "apiVersion": "example.com/v2",
+ "kind": "Foo",
+ "metadata": map[string]interface{}{"name": "foo2"},
+ "spec": map[string]interface{}{},
+ }},
+ {Object: map[string]interface{}{
+ "apiVersion": "example.com/v1",
+ "kind": "Foo",
+ "metadata": map[string]interface{}{"name": "foo3"},
+ "spec": map[string]interface{}{},
+ }},
+ }},
+ target: schema.GroupVersion{Group: "example.com", Version: "v2"},
+ },
+ want: &unstructured.UnstructuredList{Object: map[string]interface{}{
+ "apiVersion": "example.com/v2",
+ "kind": "FooList",
+ }, Items: []unstructured.Unstructured{
+ {Object: map[string]interface{}{
+ "apiVersion": "example.com/v2",
+ "kind": "Foo",
+ "metadata": map[string]interface{}{"name": "foo1"},
+ "spec": map[string]interface{}{},
+ }},
+ {Object: map[string]interface{}{
+ "apiVersion": "example.com/v2",
+ "kind": "Foo",
+ "metadata": map[string]interface{}{"name": "foo2"},
+ "spec": map[string]interface{}{},
+ }},
+ {Object: map[string]interface{}{
+ "apiVersion": "example.com/v2",
+ "kind": "Foo",
+ "metadata": map[string]interface{}{"name": "foo3"},
+ "spec": map[string]interface{}{},
+ }},
+ }},
+ },
+ {
+ name: "mutating name",
+ converter: CRConverterFunc(func(in *unstructured.UnstructuredList, targetGVK schema.GroupVersion) (*unstructured.UnstructuredList, error) {
+ ret, _ := NewNOPConverter().Convert(in.DeepCopy(), targetGVK)
+ ret.Items[0].SetName("mutated")
+ return ret, nil
+ }),
+ args: args{
+ in: &unstructured.Unstructured{
+ Object: map[string]interface{}{
+ "apiVersion": "example.com/v1",
+ "kind": "Foo",
+ "metadata": map[string]interface{}{"name": "foo1"},
+ "spec": map[string]interface{}{},
+ },
+ },
+ target: schema.GroupVersion{Group: "example.com", Version: "v2"},
+ },
+ wantErr: true,
+ },
+ {
+ name: "mutating uid",
+ converter: CRConverterFunc(func(in *unstructured.UnstructuredList, targetGVK schema.GroupVersion) (*unstructured.UnstructuredList, error) {
+ ret, _ := NewNOPConverter().Convert(in.DeepCopy(), targetGVK)
+ ret.Items[0].SetUID("mutated")
+ return ret, nil
+ }),
+ args: args{
+ in: &unstructured.Unstructured{
+ Object: map[string]interface{}{
+ "apiVersion": "example.com/v1",
+ "kind": "Foo",
+ "metadata": map[string]interface{}{"name": "foo1"},
+ "spec": map[string]interface{}{},
+ },
+ },
+ target: schema.GroupVersion{Group: "example.com", Version: "v2"},
+ },
+ wantErr: true,
+ },
+ {
+ name: "mutating namespace",
+ converter: CRConverterFunc(func(in *unstructured.UnstructuredList, targetGVK schema.GroupVersion) (*unstructured.UnstructuredList, error) {
+ ret, _ := NewNOPConverter().Convert(in.DeepCopy(), targetGVK)
+ ret.Items[0].SetNamespace("mutated")
+ return ret, nil
+ }),
+ args: args{
+ in: &unstructured.Unstructured{
+ Object: map[string]interface{}{
+ "apiVersion": "example.com/v1",
+ "kind": "Foo",
+ "metadata": map[string]interface{}{"name": "foo1"},
+ "spec": map[string]interface{}{},
+ },
+ },
+ target: schema.GroupVersion{Group: "example.com", Version: "v2"},
+ },
+ wantErr: true,
+ },
+ {
+ name: "mutating kind",
+ converter: CRConverterFunc(func(in *unstructured.UnstructuredList, targetGVK schema.GroupVersion) (*unstructured.UnstructuredList, error) {
+ ret, _ := NewNOPConverter().Convert(in.DeepCopy(), targetGVK)
+ ret.Items[0].SetKind("Moo")
+ return ret, nil
+ }),
+ args: args{
+ in: &unstructured.Unstructured{
+ Object: map[string]interface{}{
+ "apiVersion": "example.com/v1",
+ "kind": "Foo",
+ "metadata": map[string]interface{}{"name": "foo1"},
+ "spec": map[string]interface{}{},
+ },
+ },
+ target: schema.GroupVersion{Group: "example.com", Version: "v2"},
+ },
+ wantErr: true,
+ },
+ {
+ name: "mutating labels and annotations",
+ converter: CRConverterFunc(func(in *unstructured.UnstructuredList, targetGVK schema.GroupVersion) (*unstructured.UnstructuredList, error) {
+ ret, _ := NewNOPConverter().Convert(in.DeepCopy(), targetGVK)
+
+ labels := ret.Items[0].GetLabels()
+ labels["foo"] = "bar"
+ ret.Items[0].SetLabels(labels)
+
+ annotations := ret.Items[0].GetAnnotations()
+ annotations["foo"] = "bar"
+ ret.Items[0].SetAnnotations(annotations)
+
+ return ret, nil
+ }),
+ args: args{
+ in: &unstructured.Unstructured{
+ Object: map[string]interface{}{
+ "apiVersion": "example.com/v1",
+ "kind": "Foo",
+ "metadata": map[string]interface{}{
+ "name": "foo1",
+ "labels": map[string]interface{}{"a": "b"},
+ "annotations": map[string]interface{}{"c": "d"},
+ },
+ "spec": map[string]interface{}{},
+ },
+ },
+ target: schema.GroupVersion{Group: "example.com", Version: "v2"},
+ },
+ want: &unstructured.Unstructured{
+ Object: map[string]interface{}{
+ "apiVersion": "example.com/v2",
+ "kind": "Foo",
+ "metadata": map[string]interface{}{
+ "name": "foo1",
+ "labels": map[string]interface{}{"a": "b", "foo": "bar"},
+ "annotations": map[string]interface{}{"c": "d", "foo": "bar"},
+ },
+ "spec": map[string]interface{}{},
+ },
+ },
+ },
+ {
+ name: "mutating any other metadata",
+ converter: CRConverterFunc(func(in *unstructured.UnstructuredList, targetGVK schema.GroupVersion) (*unstructured.UnstructuredList, error) {
+ ret, _ := NewNOPConverter().Convert(in.DeepCopy(), targetGVK)
+ ret.Items[0].SetFinalizers([]string{"foo"})
+ return ret, nil
+ }),
+ args: args{
+ in: &unstructured.Unstructured{
+ Object: map[string]interface{}{
+ "apiVersion": "example.com/v1",
+ "kind": "Foo",
+ "metadata": map[string]interface{}{"name": "foo1"},
+ "spec": map[string]interface{}{},
+ },
+ },
+ target: schema.GroupVersion{Group: "example.com", Version: "v2"},
+ },
+ want: &unstructured.Unstructured{
+ Object: map[string]interface{}{
+ "apiVersion": "example.com/v2",
+ "kind": "Foo",
+ "metadata": map[string]interface{}{"name": "foo1"},
+ "spec": map[string]interface{}{},
+ },
+ },
+ },
+ {
+ name: "empty metadata",
+ converter: NewNOPConverter(),
+ args: args{
+ in: &unstructured.Unstructured{
+ Object: map[string]interface{}{
+ "apiVersion": "example.com/v1",
+ "kind": "Foo",
+ "metadata": map[string]interface{}{},
+ "spec": map[string]interface{}{},
+ },
+ },
+ target: schema.GroupVersion{Group: "example.com", Version: "v2"},
+ },
+ want: &unstructured.Unstructured{
+ Object: map[string]interface{}{
+ "apiVersion": "example.com/v2",
+ "kind": "Foo",
+ "metadata": map[string]interface{}{},
+ "spec": map[string]interface{}{},
+ },
+ },
+ },
+ {
+ name: "missing metadata",
+ converter: NewNOPConverter(),
+ args: args{
+ in: &unstructured.Unstructured{
+ Object: map[string]interface{}{
+ "apiVersion": "example.com/v1",
+ "kind": "Foo",
+ "spec": map[string]interface{}{},
+ },
+ },
+ target: schema.GroupVersion{Group: "example.com", Version: "v2"},
+ },
+ want: &unstructured.Unstructured{
+ Object: map[string]interface{}{
+ "apiVersion": "example.com/v2",
+ "kind": "Foo",
+ "spec": map[string]interface{}{},
+ },
+ },
+ },
+ {
+ name: "convertor error",
+ converter: CRConverterFunc(func(in *unstructured.UnstructuredList, targetGV schema.GroupVersion) (*unstructured.UnstructuredList, error) {
+ return nil, fmt.Errorf("boom")
+ }),
+ args: args{
+ in: &unstructured.UnstructuredList{Object: map[string]interface{}{
+ "apiVersion": "example.com/v1",
+ "kind": "FooList",
+ }, Items: []unstructured.Unstructured{
+ {Object: map[string]interface{}{
+ "apiVersion": "example.com/v1",
+ "kind": "Foo",
+ "metadata": map[string]interface{}{"name": "foo1"},
+ "spec": map[string]interface{}{},
+ }},
+ }},
+ target: schema.GroupVersion{Group: "example.com", Version: "v2"},
+ },
+ wantErr: true,
+ },
+ {
+ name: "invalid number returned",
+ converter: CRConverterFunc(func(in *unstructured.UnstructuredList, targetGV schema.GroupVersion) (*unstructured.UnstructuredList, error) {
+ in.Items[0].SetGroupVersionKind(targetGV.WithKind(in.Items[0].GroupVersionKind().Kind))
+ in.Items = in.Items[:1]
+ return in, nil
+ }),
+ args: args{
+ in: &unstructured.UnstructuredList{Object: map[string]interface{}{
+ "apiVersion": "example.com/v1",
+ "kind": "FooList",
+ }, Items: []unstructured.Unstructured{
+ {Object: map[string]interface{}{
+ "apiVersion": "example.com/v1",
+ "kind": "Foo",
+ "metadata": map[string]interface{}{"name": "foo1"},
+ "spec": map[string]interface{}{},
+ }},
+ {Object: map[string]interface{}{
+ "apiVersion": "example.com/v2",
+ "kind": "Foo",
+ "metadata": map[string]interface{}{"name": "foo2"},
+ "spec": map[string]interface{}{},
+ }},
+ {Object: map[string]interface{}{
+ "apiVersion": "example.com/v1",
+ "kind": "Foo",
+ "metadata": map[string]interface{}{"name": "foo3"},
+ "spec": map[string]interface{}{},
+ }},
+ }},
+ target: schema.GroupVersion{Group: "example.com", Version: "v2"},
+ },
+ wantErr: true,
+ },
+ {
+ name: "partial conversion",
+ converter: CRConverterFunc(func(in *unstructured.UnstructuredList, targetGV schema.GroupVersion) (*unstructured.UnstructuredList, error) {
+ in.Items[0].SetGroupVersionKind(targetGV.WithKind(in.Items[0].GroupVersionKind().Kind))
+ return in, nil
+ }),
+ args: args{
+ in: &unstructured.UnstructuredList{Object: map[string]interface{}{
+ "apiVersion": "example.com/v1",
+ "kind": "FooList",
+ }, Items: []unstructured.Unstructured{
+ {Object: map[string]interface{}{
+ "apiVersion": "example.com/v1",
+ "kind": "Foo",
+ "metadata": map[string]interface{}{"name": "foo1"},
+ "spec": map[string]interface{}{},
+ }},
+ {Object: map[string]interface{}{
+ "apiVersion": "example.com/v2",
+ "kind": "Foo",
+ "metadata": map[string]interface{}{"name": "foo2"},
+ "spec": map[string]interface{}{},
+ }},
+ {Object: map[string]interface{}{
+ "apiVersion": "example.com/v1",
+ "kind": "Foo",
+ "metadata": map[string]interface{}{"name": "foo3"},
+ "spec": map[string]interface{}{},
+ }},
+ }},
+ target: schema.GroupVersion{Group: "example.com", Version: "v2"},
+ },
+ wantErr: true,
+ },
+ {
+ name: "invalid single version",
+ converter: CRConverterFunc(func(in *unstructured.UnstructuredList, targetGV schema.GroupVersion) (*unstructured.UnstructuredList, error) {
+ in.Items[0].SetGroupVersionKind(targetGV.WithKind(in.Items[0].GroupVersionKind().Kind))
+ return in, nil
+ }),
+ args: args{
+ in: &unstructured.UnstructuredList{Object: map[string]interface{}{
+ "apiVersion": "example.com/v1",
+ "kind": "FooList",
+ }, Items: []unstructured.Unstructured{
+ {Object: map[string]interface{}{
+ "apiVersion": "example.com/v3",
+ "kind": "Foo",
+ "metadata": map[string]interface{}{"name": "foo1"},
+ "spec": map[string]interface{}{},
+ }},
+ {Object: map[string]interface{}{
+ "apiVersion": "example.com/v2",
+ "kind": "Foo",
+ "metadata": map[string]interface{}{"name": "foo2"},
+ "spec": map[string]interface{}{},
+ }},
+ {Object: map[string]interface{}{
+ "apiVersion": "example.com/v1",
+ "kind": "Foo",
+ "metadata": map[string]interface{}{"name": "foo3"},
+ "spec": map[string]interface{}{},
+ }},
+ }},
+ target: schema.GroupVersion{Group: "example.com", Version: "v2"},
+ },
+ wantErr: true,
+ },
+ {
+ name: "invalid list version",
+ converter: CRConverterFunc(func(in *unstructured.UnstructuredList, targetGV schema.GroupVersion) (*unstructured.UnstructuredList, error) {
+ in.Items[0].SetGroupVersionKind(targetGV.WithKind(in.Items[0].GroupVersionKind().Kind))
+ return in, nil
+ }),
+ args: args{
+ in: &unstructured.UnstructuredList{Object: map[string]interface{}{
+ "apiVersion": "example.com/v3",
+ "kind": "FooList",
+ }, Items: []unstructured.Unstructured{
+ {Object: map[string]interface{}{
+ "apiVersion": "example.com/v1",
+ "kind": "Foo",
+ "metadata": map[string]interface{}{"name": "foo1"},
+ "spec": map[string]interface{}{},
+ }},
+ {Object: map[string]interface{}{
+ "apiVersion": "example.com/v2",
+ "kind": "Foo",
+ "metadata": map[string]interface{}{"name": "foo2"},
+ "spec": map[string]interface{}{},
+ }},
+ {Object: map[string]interface{}{
+ "apiVersion": "example.com/v1",
+ "kind": "Foo",
+ "metadata": map[string]interface{}{"name": "foo3"},
+ "spec": map[string]interface{}{},
+ }},
+ }},
+ target: schema.GroupVersion{Group: "example.com", Version: "v2"},
+ },
+ wantErr: true,
+ },
+ }
+ for _, tt := range tests {
+ t.Run(tt.name, func(t *testing.T) {
+ c := &delegatingCRConverter{
+ converter: tt.converter,
+ validVersions: map[schema.GroupVersion]bool{
+ {Group: "example.com", Version: "v1"}: true,
+ {Group: "example.com", Version: "v2"}: true,
+ },
+ requireValidVersion: true,
+ }
+ got, err := c.ConvertToVersion(tt.args.in, tt.args.target)
+ if (err != nil) != tt.wantErr {
+ t.Errorf("ConvertToVersion() error = %v, wantErr %v", err, tt.wantErr)
+ return
}
- if !reflect.DeepEqual(test.ExpectedObject, actual) {
- t.Fatalf("%s: Expected = %v, Actual = %v", test.Name, test.ExpectedObject, actual)
+ if !reflect.DeepEqual(got, tt.want) {
+ t.Errorf("ConvertToVersion() got = %v, want %v", got, tt.want)
}
- }
+ })
}
}
diff --git a/staging/src/k8s.io/apiextensions-apiserver/pkg/apiserver/conversion/metrics.go b/staging/src/k8s.io/apiextensions-apiserver/pkg/apiserver/conversion/metrics.go
index df24becb3fe9e..4a2d7b1ab2326 100644
--- a/staging/src/k8s.io/apiextensions-apiserver/pkg/apiserver/conversion/metrics.go
+++ b/staging/src/k8s.io/apiextensions-apiserver/pkg/apiserver/conversion/metrics.go
@@ -22,7 +22,7 @@ import (
"sync"
"time"
- "k8s.io/apimachinery/pkg/runtime"
+ "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/component-base/metrics"
"k8s.io/component-base/metrics/legacyregistry"
@@ -44,15 +44,15 @@ func newConverterMetricFactory() *converterMetricFactory {
return &converterMetricFactory{durations: map[string]*metrics.HistogramVec{}, factoryLock: sync.Mutex{}}
}
-var _ crConverterInterface = &converterMetric{}
+var _ CRConverter = &converterMetric{}
type converterMetric struct {
- delegate crConverterInterface
+ delegate CRConverter
latencies *metrics.HistogramVec
crdName string
}
-func (c *converterMetricFactory) addMetrics(crdName string, converter crConverterInterface) (crConverterInterface, error) {
+func (c *converterMetricFactory) addMetrics(crdName string, converter CRConverter) (CRConverter, error) {
c.factoryLock.Lock()
defer c.factoryLock.Unlock()
metric, exists := c.durations["webhook"]
@@ -74,7 +74,7 @@ func (c *converterMetricFactory) addMetrics(crdName string, converter crConverte
return &converterMetric{latencies: metric, delegate: converter, crdName: crdName}, nil
}
-func (m *converterMetric) Convert(in runtime.Object, targetGV schema.GroupVersion) (runtime.Object, error) {
+func (m *converterMetric) Convert(in *unstructured.UnstructuredList, targetGV schema.GroupVersion) (*unstructured.UnstructuredList, error) {
start := time.Now()
obj, err := m.delegate.Convert(in, targetGV)
fromVersion := in.GetObjectKind().GroupVersionKind().Version
diff --git a/staging/src/k8s.io/apiextensions-apiserver/pkg/apiserver/conversion/nop_converter.go b/staging/src/k8s.io/apiextensions-apiserver/pkg/apiserver/conversion/nop_converter.go
index 8254fdfc0b9b5..23f03afc4d946 100644
--- a/staging/src/k8s.io/apiextensions-apiserver/pkg/apiserver/conversion/nop_converter.go
+++ b/staging/src/k8s.io/apiextensions-apiserver/pkg/apiserver/conversion/nop_converter.go
@@ -18,7 +18,6 @@ package conversion
import (
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
- "k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
)
@@ -26,16 +25,18 @@ import (
type nopConverter struct {
}
-var _ crConverterInterface = &nopConverter{}
+// NewNOPConverter creates a new no-op converter. The only "conversion" it performs is to set the group and version to
+// targetGV.
+func NewNOPConverter() *nopConverter {
+ return &nopConverter{}
+}
+
+var _ CRConverter = &nopConverter{}
-// ConvertToVersion converts in object to the given gv in place and returns the same `in` object.
-func (c *nopConverter) Convert(in runtime.Object, targetGV schema.GroupVersion) (runtime.Object, error) {
- // Run the converter on the list items instead of list itself
- if list, ok := in.(*unstructured.UnstructuredList); ok {
- for i := range list.Items {
- list.Items[i].SetGroupVersionKind(targetGV.WithKind(list.Items[i].GroupVersionKind().Kind))
- }
+// Convert converts each item of the list to the given gv in place and returns the same list.
+func (c *nopConverter) Convert(list *unstructured.UnstructuredList, targetGV schema.GroupVersion) (*unstructured.UnstructuredList, error) {
+ for i := range list.Items {
+ list.Items[i].SetGroupVersionKind(targetGV.WithKind(list.Items[i].GroupVersionKind().Kind))
}
- in.GetObjectKind().SetGroupVersionKind(targetGV.WithKind(in.GetObjectKind().GroupVersionKind().Kind))
- return in, nil
+ return list, nil
}
diff --git a/staging/src/k8s.io/apiextensions-apiserver/pkg/apiserver/conversion/webhook_converter.go b/staging/src/k8s.io/apiextensions-apiserver/pkg/apiserver/conversion/webhook_converter.go
index 95788af8a87ca..a62a861ebcc1e 100644
--- a/staging/src/k8s.io/apiextensions-apiserver/pkg/apiserver/conversion/webhook_converter.go
+++ b/staging/src/k8s.io/apiextensions-apiserver/pkg/apiserver/conversion/webhook_converter.go
@@ -26,15 +26,12 @@ import (
v1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
"k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1"
- apivalidation "k8s.io/apimachinery/pkg/api/validation"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
- metav1validation "k8s.io/apimachinery/pkg/apis/meta/v1/validation"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apimachinery/pkg/types"
"k8s.io/apimachinery/pkg/util/uuid"
- "k8s.io/apimachinery/pkg/util/validation/field"
"k8s.io/apiserver/pkg/util/webhook"
"k8s.io/client-go/rest"
"k8s.io/component-base/tracing"
@@ -96,7 +93,7 @@ func webhookClientConfigForCRD(crd *v1.CustomResourceDefinition) *webhook.Client
return &ret
}
-var _ crConverterInterface = &webhookConverter{}
+var _ CRConverter = &webhookConverter{}
func (f *webhookConverterFactory) NewWebhookConverter(crd *v1.CustomResourceDefinition) (*webhookConverter, error) {
restClient, err := f.clientManager.HookClient(*webhookClientConfigForCRD(crd))
@@ -113,36 +110,21 @@ func (f *webhookConverterFactory) NewWebhookConverter(crd *v1.CustomResourceDefi
}, nil
}
-// getObjectsToConvert returns a list of objects requiring conversion.
-// if obj is a list, getObjectsToConvert returns a (potentially empty) list of the items that are not already in the desired version.
-// if obj is not a list, and is already in the desired version, getObjectsToConvert returns an empty list.
-// if obj is not a list, and is not already in the desired version, getObjectsToConvert returns a list containing only obj.
-func getObjectsToConvert(obj runtime.Object, apiVersion string) []runtime.RawExtension {
- listObj, isList := obj.(*unstructured.UnstructuredList)
- var objects []runtime.RawExtension
- if isList {
- for i := range listObj.Items {
- // Only sent item for conversion, if the apiVersion is different
- if listObj.Items[i].GetAPIVersion() != apiVersion {
- objects = append(objects, runtime.RawExtension{Object: &listObj.Items[i]})
- }
- }
- } else {
- if obj.GetObjectKind().GroupVersionKind().GroupVersion().String() != apiVersion {
- objects = []runtime.RawExtension{{Object: obj}}
+// createConversionReviewObjects returns ConversionReview request and response objects for the first supported version found in conversionReviewVersions.
+func createConversionReviewObjects(conversionReviewVersions []string, objects *unstructured.UnstructuredList, apiVersion string, requestUID types.UID) (request, response runtime.Object, err error) {
+ rawObjects := make([]runtime.RawExtension, len(objects.Items))
+ for i := range objects.Items {
+ rawObjects[i] = runtime.RawExtension{
+ Object: &objects.Items[i],
}
}
- return objects
-}
-// createConversionReviewObjects returns ConversionReview request and response objects for the first supported version found in conversionReviewVersions.
-func createConversionReviewObjects(conversionReviewVersions []string, objects []runtime.RawExtension, apiVersion string, requestUID types.UID) (request, response runtime.Object, err error) {
for _, version := range conversionReviewVersions {
switch version {
case v1beta1.SchemeGroupVersion.Version:
return &v1beta1.ConversionReview{
Request: &v1beta1.ConversionRequest{
- Objects: objects,
+ Objects: rawObjects,
DesiredAPIVersion: apiVersion,
UID: requestUID,
},
@@ -151,7 +133,7 @@ func createConversionReviewObjects(conversionReviewVersions []string, objects []
case v1.SchemeGroupVersion.Version:
return &v1.ConversionReview{
Request: &v1.ConversionRequest{
- Objects: objects,
+ Objects: rawObjects,
DesiredAPIVersion: apiVersion,
UID: requestUID,
},
@@ -162,9 +144,13 @@ func createConversionReviewObjects(conversionReviewVersions []string, objects []
return nil, nil, fmt.Errorf("no supported conversion review versions")
}
-func getRawExtensionObject(rx runtime.RawExtension) (runtime.Object, error) {
+func getRawExtensionObject(rx runtime.RawExtension) (*unstructured.Unstructured, error) {
if rx.Object != nil {
- return rx.Object, nil
+ u, ok := rx.Object.(*unstructured.Unstructured)
+ if !ok {
+ return nil, fmt.Errorf("unexpected type %T", rx.Object)
+ }
+ return u, nil
}
u := unstructured.Unstructured{}
err := u.UnmarshalJSON(rx.Raw)
@@ -227,40 +213,16 @@ func getConvertedObjectsFromResponse(expectedUID types.UID, response runtime.Obj
}
}
-func (c *webhookConverter) Convert(in runtime.Object, toGV schema.GroupVersion) (runtime.Object, error) {
+func (c *webhookConverter) Convert(in *unstructured.UnstructuredList, toGV schema.GroupVersion) (*unstructured.UnstructuredList, error) {
ctx := context.TODO()
- // In general, the webhook should not do any defaulting or validation. A special case of that is an empty object
- // conversion that must result an empty object and practically is the same as nopConverter.
- // A smoke test in API machinery calls the converter on empty objects. As this case happens consistently
- // it special cased here not to call webhook converter. The test initiated here:
- // https://github.com/kubernetes/kubernetes/blob/dbb448bbdcb9e440eee57024ffa5f1698956a054/staging/src/k8s.io/apiserver/pkg/storage/cacher/cacher.go#L201
- if isEmptyUnstructuredObject(in) {
- return c.nopConverter.Convert(in, toGV)
- }
- t := time.Now()
- listObj, isList := in.(*unstructured.UnstructuredList)
-
requestUID := uuid.NewUUID()
desiredAPIVersion := toGV.String()
- objectsToConvert := getObjectsToConvert(in, desiredAPIVersion)
- request, response, err := createConversionReviewObjects(c.conversionReviewVersions, objectsToConvert, desiredAPIVersion, requestUID)
+ request, response, err := createConversionReviewObjects(c.conversionReviewVersions, in, desiredAPIVersion, requestUID)
if err != nil {
return nil, err
}
-
- objCount := len(objectsToConvert)
- if objCount == 0 {
- Metrics.ObserveConversionWebhookSuccess(ctx, time.Since(t))
- // no objects needed conversion
- if !isList {
- // for a single item, return as-is
- return in, nil
- }
- // for a list, set the version of the top-level list object (all individual objects are already in the correct version)
- out := listObj.DeepCopy()
- out.SetAPIVersion(toGV.String())
- return out, nil
- }
+ t := time.Now()
+ objCount := len(in.Items)
ctx, span := tracing.Start(ctx, "Call conversion webhook",
attribute.String("custom-resource-definition", c.name),
@@ -275,218 +237,28 @@ func (c *webhookConverter) Convert(in runtime.Object, toGV schema.GroupVersion)
// TODO: Figure out if adding one second timeout make sense here.
r := c.restClient.Post().Body(request).Do(ctx)
if err := r.Into(response); err != nil {
- // TODO: Return a webhook specific error to be able to convert it to meta.Status
Metrics.ObserveConversionWebhookFailure(ctx, time.Since(t), ConversionWebhookCallFailure)
+ // TODO: Return a webhook specific error to be able to convert it to meta.Status
return nil, fmt.Errorf("conversion webhook for %v failed: %v", in.GetObjectKind().GroupVersionKind(), err)
}
span.AddEvent("Request completed")
convertedObjects, err := getConvertedObjectsFromResponse(requestUID, response)
if err != nil {
Metrics.ObserveConversionWebhookFailure(ctx, time.Since(t), ConversionWebhookMalformedResponseFailure)
return nil, fmt.Errorf("conversion webhook for %v failed: %v", in.GetObjectKind().GroupVersionKind(), err)
}
- if len(convertedObjects) != len(objectsToConvert) {
- Metrics.ObserveConversionWebhookFailure(ctx, time.Since(t), ConversionWebhookPartialResponseFailure)
- return nil, fmt.Errorf("conversion webhook for %v returned %d objects, expected %d", in.GetObjectKind().GroupVersionKind(), len(convertedObjects), len(objectsToConvert))
- }
-
- if isList {
- // start a deepcopy of the input and fill in the converted objects from the response at the right spots.
- // The response list might be sparse because objects had the right version already.
- convertedList := listObj.DeepCopy()
- convertedIndex := 0
- for i := range convertedList.Items {
- original := &convertedList.Items[i]
- if original.GetAPIVersion() == toGV.String() {
- // This item has not been sent for conversion, and therefore does not show up in the response.
- // convertedList has the right item already.
- continue
- }
- converted, err := getRawExtensionObject(convertedObjects[convertedIndex])
- if err != nil {
- Metrics.ObserveConversionWebhookFailure(ctx, time.Since(t), ConversionWebhookInvalidConvertedObjectFailure)
- return nil, fmt.Errorf("conversion webhook for %v returned invalid converted object at index %v: %v", in.GetObjectKind().GroupVersionKind(), convertedIndex, err)
- }
- if expected, got := toGV, converted.GetObjectKind().GroupVersionKind().GroupVersion(); expected != got {
- Metrics.ObserveConversionWebhookFailure(ctx, time.Since(t), ConversionWebhookInvalidConvertedObjectFailure)
- return nil, fmt.Errorf("conversion webhook for %v returned invalid converted object at index %v: invalid groupVersion (expected %v, received %v)", in.GetObjectKind().GroupVersionKind(), convertedIndex, expected, got)
- }
- if expected, got := original.GetObjectKind().GroupVersionKind().Kind, converted.GetObjectKind().GroupVersionKind().Kind; expected != got {
- Metrics.ObserveConversionWebhookFailure(ctx, time.Since(t), ConversionWebhookInvalidConvertedObjectFailure)
- return nil, fmt.Errorf("conversion webhook for %v returned invalid converted object at index %v: invalid kind (expected %v, received %v)", in.GetObjectKind().GroupVersionKind(), convertedIndex, expected, got)
- }
- unstructConverted, ok := converted.(*unstructured.Unstructured)
- if !ok {
- // this should not happened
- Metrics.ObserveConversionWebhookFailure(ctx, time.Since(t), ConversionWebhookInvalidConvertedObjectFailure)
- return nil, fmt.Errorf("conversion webhook for %v returned invalid converted object at index %v: invalid type, expected=Unstructured, got=%T", in.GetObjectKind().GroupVersionKind(), convertedIndex, converted)
- }
- if err := validateConvertedObject(original, unstructConverted); err != nil {
- Metrics.ObserveConversionWebhookFailure(ctx, time.Since(t), ConversionWebhookInvalidConvertedObjectFailure)
- return nil, fmt.Errorf("conversion webhook for %v returned invalid converted object at index %v: %v", in.GetObjectKind().GroupVersionKind(), convertedIndex, err)
- }
- if err := restoreObjectMeta(original, unstructConverted); err != nil {
- Metrics.ObserveConversionWebhookFailure(ctx, time.Since(t), ConversionWebhookInvalidConvertedObjectFailure)
- return nil, fmt.Errorf("conversion webhook for %v returned invalid metadata in object at index %v: %v", in.GetObjectKind().GroupVersionKind(), convertedIndex, err)
- }
- convertedIndex++
- convertedList.Items[i] = *unstructConverted
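+ // Only unwrap the raw response items here; checking the returned object count,
+ // validating the converted objects, and restoring their ObjectMeta now happen
+ // in the generic delegating converter (delegatingCRConverter.ConvertToVersion).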
+ out := &unstructured.UnstructuredList{}
+ out.Items = make([]unstructured.Unstructured, len(convertedObjects))
+ for i := range convertedObjects {
+ u, err := getRawExtensionObject(convertedObjects[i])
+ if err != nil {
+ Metrics.ObserveConversionWebhookFailure(ctx, time.Since(t), ConversionWebhookInvalidConvertedObjectFailure)
+ return nil, err
}
- convertedList.SetAPIVersion(toGV.String())
- Metrics.ObserveConversionWebhookSuccess(ctx, time.Since(t))
- return convertedList, nil
+ out.Items[i] = *u
}
- if len(convertedObjects) != 1 {
- // This should not happened
- Metrics.ObserveConversionWebhookFailure(ctx, time.Since(t), ConversionWebhookNoObjectsReturnedFailure)
- return nil, fmt.Errorf("conversion webhook for %v failed, no objects returned", in.GetObjectKind())
- }
- converted, err := getRawExtensionObject(convertedObjects[0])
- if err != nil {
- Metrics.ObserveConversionWebhookFailure(ctx, time.Since(t), ConversionWebhookInvalidConvertedObjectFailure)
- return nil, err
- }
- if e, a := toGV, converted.GetObjectKind().GroupVersionKind().GroupVersion(); e != a {
- Metrics.ObserveConversionWebhookFailure(ctx, time.Since(t), ConversionWebhookInvalidConvertedObjectFailure)
- return nil, fmt.Errorf("conversion webhook for %v returned invalid object at index 0: invalid groupVersion (expected %v, received %v)", in.GetObjectKind().GroupVersionKind(), e, a)
- }
- if e, a := in.GetObjectKind().GroupVersionKind().Kind, converted.GetObjectKind().GroupVersionKind().Kind; e != a {
- Metrics.ObserveConversionWebhookFailure(ctx, time.Since(t), ConversionWebhookInvalidConvertedObjectFailure)
- return nil, fmt.Errorf("conversion webhook for %v returned invalid object at index 0: invalid kind (expected %v, received %v)", in.GetObjectKind().GroupVersionKind(), e, a)
- }
- unstructConverted, ok := converted.(*unstructured.Unstructured)
- if !ok {
- // this should not happened
- Metrics.ObserveConversionWebhookFailure(ctx, time.Since(t), ConversionWebhookInvalidConvertedObjectFailure)
- return nil, fmt.Errorf("conversion webhook for %v failed, unexpected type %T at index 0", in.GetObjectKind().GroupVersionKind(), converted)
- }
- unstructIn, ok := in.(*unstructured.Unstructured)
- if !ok {
- // this should not happened
- Metrics.ObserveConversionWebhookFailure(ctx, time.Since(t), ConversionWebhookInvalidConvertedObjectFailure)
- return nil, fmt.Errorf("conversion webhook for %v failed unexpected input type %T", in.GetObjectKind().GroupVersionKind(), in)
- }
- if err := validateConvertedObject(unstructIn, unstructConverted); err != nil {
- Metrics.ObserveConversionWebhookFailure(ctx, time.Since(t), ConversionWebhookInvalidConvertedObjectFailure)
- return nil, fmt.Errorf("conversion webhook for %v returned invalid object: %v", in.GetObjectKind().GroupVersionKind(), err)
- }
- if err := restoreObjectMeta(unstructIn, unstructConverted); err != nil {
- Metrics.ObserveConversionWebhookFailure(ctx, time.Since(t), ConversionWebhookInvalidConvertedObjectFailure)
- return nil, fmt.Errorf("conversion webhook for %v returned invalid metadata: %v", in.GetObjectKind().GroupVersionKind(), err)
- }
Metrics.ObserveConversionWebhookSuccess(ctx, time.Since(t))
- return converted, nil
-}
-
-// validateConvertedObject checks that ObjectMeta fields match, with the exception of
-// labels and annotations.
-func validateConvertedObject(in, out *unstructured.Unstructured) error {
- if e, a := in.GetKind(), out.GetKind(); e != a {
- return fmt.Errorf("must have the same kind: %v != %v", e, a)
- }
- if e, a := in.GetName(), out.GetName(); e != a {
- return fmt.Errorf("must have the same name: %v != %v", e, a)
- }
- if e, a := in.GetNamespace(), out.GetNamespace(); e != a {
- return fmt.Errorf("must have the same namespace: %v != %v", e, a)
- }
- if e, a := in.GetUID(), out.GetUID(); e != a {
- return fmt.Errorf("must have the same UID: %v != %v", e, a)
- }
- return nil
-}
-
-// restoreObjectMeta deep-copies metadata from original into converted, while preserving labels and annotations from converted.
-func restoreObjectMeta(original, converted *unstructured.Unstructured) error {
- obj, found := converted.Object["metadata"]
- if !found {
- return fmt.Errorf("missing metadata in converted object")
- }
- responseMetaData, ok := obj.(map[string]interface{})
- if !ok {
- return fmt.Errorf("invalid metadata of type %T in converted object", obj)
- }
-
- if _, ok := original.Object["metadata"]; !ok {
- // the original will always have metadata. But just to be safe, let's clear in converted
- // with an empty object instead of nil, to be able to add labels and annotations below.
- converted.Object["metadata"] = map[string]interface{}{}
- } else {
- converted.Object["metadata"] = runtime.DeepCopyJSONValue(original.Object["metadata"])
- }
-
- obj = converted.Object["metadata"]
- convertedMetaData, ok := obj.(map[string]interface{})
- if !ok {
- return fmt.Errorf("invalid metadata of type %T in input object", obj)
- }
-
- for _, fld := range []string{"labels", "annotations"} {
- obj, found := responseMetaData[fld]
- if !found || obj == nil {
- delete(convertedMetaData, fld)
- continue
- }
- responseField, ok := obj.(map[string]interface{})
- if !ok {
- return fmt.Errorf("invalid metadata.%s of type %T in converted object", fld, obj)
- }
-
- originalField, ok := convertedMetaData[fld].(map[string]interface{})
- if !ok && convertedMetaData[fld] != nil {
- return fmt.Errorf("invalid metadata.%s of type %T in original object", fld, convertedMetaData[fld])
- }
-
- somethingChanged := len(originalField) != len(responseField)
- for k, v := range responseField {
- if _, ok := v.(string); !ok {
- return fmt.Errorf("metadata.%s[%s] must be a string, but is %T in converted object", fld, k, v)
- }
- if originalField[k] != interface{}(v) {
- somethingChanged = true
- }
- }
-
- if somethingChanged {
- stringMap := make(map[string]string, len(responseField))
- for k, v := range responseField {
- stringMap[k] = v.(string)
- }
- var errs field.ErrorList
- if fld == "labels" {
- errs = metav1validation.ValidateLabels(stringMap, field.NewPath("metadata", "labels"))
- } else {
- errs = apivalidation.ValidateAnnotations(stringMap, field.NewPath("metadata", "annotation"))
- }
- if len(errs) > 0 {
- return errs.ToAggregate()
- }
- }
-
- convertedMetaData[fld] = responseField
- }
-
- return nil
-}
-
-// isEmptyUnstructuredObject returns true if in is an empty unstructured object, i.e. an unstructured object that does
-// not have any field except apiVersion and kind.
-func isEmptyUnstructuredObject(in runtime.Object) bool {
- u, ok := in.(*unstructured.Unstructured)
- if !ok {
- return false
- }
- if len(u.Object) != 2 {
- return false
- }
- if _, ok := u.Object["kind"]; !ok {
- return false
- }
- if _, ok := u.Object["apiVersion"]; !ok {
- return false
- }
- return true
+ return out, nil
}
diff --git a/staging/src/k8s.io/apiextensions-apiserver/pkg/apiserver/conversion/webhook_converter_test.go b/staging/src/k8s.io/apiextensions-apiserver/pkg/apiserver/conversion/webhook_converter_test.go
index 5c6766c09c1a0..e295375e883e7 100644
--- a/staging/src/k8s.io/apiextensions-apiserver/pkg/apiserver/conversion/webhook_converter_test.go
+++ b/staging/src/k8s.io/apiextensions-apiserver/pkg/apiserver/conversion/webhook_converter_test.go
@@ -61,7 +61,7 @@ func TestRestoreObjectMeta(t *testing.T) {
{"invalid original metadata",
map[string]interface{}{"metadata": []interface{}{"foo"}},
map[string]interface{}{"metadata": map[string]interface{}{}, "spec": map[string]interface{}{}},
- map[string]interface{}{"metadata": []interface{}{"foo"}, "spec": map[string]interface{}{}},
+ map[string]interface{}{"metadata": map[string]interface{}{}, "spec": map[string]interface{}{}},
true,
},
{"changed label, annotations and non-label",
@@ -206,81 +206,19 @@ func TestRestoreObjectMeta(t *testing.T) {
}
}
-func TestGetObjectsToConvert(t *testing.T) {
- v1Object := &unstructured.Unstructured{Object: map[string]interface{}{"apiVersion": "foo/v1", "kind": "Widget", "metadata": map[string]interface{}{"name": "myv1"}}}
- v2Object := &unstructured.Unstructured{Object: map[string]interface{}{"apiVersion": "foo/v2", "kind": "Widget", "metadata": map[string]interface{}{"name": "myv2"}}}
- v3Object := &unstructured.Unstructured{Object: map[string]interface{}{"apiVersion": "foo/v3", "kind": "Widget", "metadata": map[string]interface{}{"name": "myv3"}}}
-
- testcases := []struct {
- Name string
- Object runtime.Object
- APIVersion string
-
- ExpectObjects []runtime.RawExtension
- }{
- {
- Name: "empty list",
- Object: &unstructured.UnstructuredList{},
- APIVersion: "foo/v1",
- ExpectObjects: nil,
- },
- {
- Name: "one-item list, in desired version",
- Object: &unstructured.UnstructuredList{
- Items: []unstructured.Unstructured{*v1Object},
- },
- APIVersion: "foo/v1",
- ExpectObjects: nil,
- },
- {
- Name: "one-item list, not in desired version",
- Object: &unstructured.UnstructuredList{
- Items: []unstructured.Unstructured{*v2Object},
- },
- APIVersion: "foo/v1",
- ExpectObjects: []runtime.RawExtension{{Object: v2Object}},
- },
- {
- Name: "multi-item list, in desired version",
- Object: &unstructured.UnstructuredList{
- Items: []unstructured.Unstructured{*v1Object, *v1Object, *v1Object},
- },
- APIVersion: "foo/v1",
- ExpectObjects: nil,
- },
- {
- Name: "multi-item list, mixed versions",
- Object: &unstructured.UnstructuredList{
- Items: []unstructured.Unstructured{*v1Object, *v2Object, *v3Object},
+func TestCreateConversionReviewObjects(t *testing.T) {
+ objects := &unstructured.UnstructuredList{
+ Items: []unstructured.Unstructured{
+ {
+ Object: map[string]interface{}{"apiVersion": "foo/v2", "Kind": "Widget"},
},
- APIVersion: "foo/v1",
- ExpectObjects: []runtime.RawExtension{{Object: v2Object}, {Object: v3Object}},
- },
- {
- Name: "single item, in desired version",
- Object: v1Object,
- APIVersion: "foo/v1",
- ExpectObjects: nil,
- },
- {
- Name: "single item, not in desired version",
- Object: v2Object,
- APIVersion: "foo/v1",
- ExpectObjects: []runtime.RawExtension{{Object: v2Object}},
},
}
- for _, tc := range testcases {
- t.Run(tc.Name, func(t *testing.T) {
- if objects := getObjectsToConvert(tc.Object, tc.APIVersion); !reflect.DeepEqual(objects, tc.ExpectObjects) {
- t.Errorf("unexpected diff: %s", cmp.Diff(tc.ExpectObjects, objects))
- }
- })
- }
-}
-func TestCreateConversionReviewObjects(t *testing.T) {
- objects := []runtime.RawExtension{
- {Object: &unstructured.Unstructured{Object: map[string]interface{}{"apiVersion": "foo/v2", "Kind": "Widget"}}},
+ rawObjects := []runtime.RawExtension{
+ {
+ Object: &objects.Items[0],
+ },
}
testcases := []struct {
@@ -300,7 +238,7 @@ func TestCreateConversionReviewObjects(t *testing.T) {
Name: "v1",
Versions: []string{"v1", "v1beta1", "v2"},
ExpectRequest: &v1.ConversionReview{
- Request: &v1.ConversionRequest{UID: "uid", DesiredAPIVersion: "foo/v1", Objects: objects},
+ Request: &v1.ConversionRequest{UID: "uid", DesiredAPIVersion: "foo/v1", Objects: rawObjects},
Response: &v1.ConversionResponse{},
},
ExpectResponse: &v1.ConversionReview{},
@@ -309,7 +247,7 @@ func TestCreateConversionReviewObjects(t *testing.T) {
Name: "v1beta1",
Versions: []string{"v1beta1", "v1", "v2"},
ExpectRequest: &v1beta1.ConversionReview{
- Request: &v1beta1.ConversionRequest{UID: "uid", DesiredAPIVersion: "foo/v1", Objects: objects},
+ Request: &v1beta1.ConversionRequest{UID: "uid", DesiredAPIVersion: "foo/v1", Objects: rawObjects},
Response: &v1beta1.ConversionResponse{},
},
ExpectResponse: &v1beta1.ConversionReview{},
diff --git a/staging/src/k8s.io/apiextensions-apiserver/pkg/apiserver/conversion/wildcard_partial_metadata_converter_kcp.go b/staging/src/k8s.io/apiextensions-apiserver/pkg/apiserver/conversion/wildcard_partial_metadata_converter_kcp.go
new file mode 100644
index 0000000000000..cd3821e035420
--- /dev/null
+++ b/staging/src/k8s.io/apiextensions-apiserver/pkg/apiserver/conversion/wildcard_partial_metadata_converter_kcp.go
@@ -0,0 +1,58 @@
+/*
+Copyright 2023 The KCP Authors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package conversion
+
+import (
+ "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
+ "k8s.io/apimachinery/pkg/runtime/schema"
+ "k8s.io/apiserver/pkg/endpoints/handlers"
+)
+
+type kcpWildcardPartialMetadataConverter struct {
+}
+
+func NewKCPWildcardPartialMetadataConverter() *kcpWildcardPartialMetadataConverter {
+ return &kcpWildcardPartialMetadataConverter{}
+}
+
+var _ CRConverter = &kcpWildcardPartialMetadataConverter{}
+
+// Convert is a NOP converter that additionally stores the original APIVersion of each item in the annotation
+// kcp.io/original-api-version. This is necessary for kcp with wildcard partial metadata list/watch requests.
+// For example, if the request is for /clusters/*/apis/kcp.io/v1/widgets, and it's a partial metadata request, the
+// server returns ALL widgets, regardless of their API version. But because this is a partial metadata request, the
+// API version of the returned object is always meta.k8s.io/$version (could be v1 or v1beta1). Any client needing to
+// modify or delete the returned object must know its exact API version. Therefore, we set this annotation with the
+// actual original API version of the object. Clients can use it when constructing dynamic clients to guarantee they
+// are using the correct API version.
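+//
+// Illustrative sketch of client-side use (the resource name "widgets" and the dynamic client variable are hypothetical):
+//
+//	originalAPIVersion := obj.GetAnnotations()[handlers.KCPOriginalAPIVersionAnnotation]
+//	gv, err := schema.ParseGroupVersion(originalAPIVersion) // e.g. "kcp.io/v1"
+//	if err == nil {
+//		// Target the object's true API version for update/delete.
+//		_ = dynamicClient.Resource(gv.WithResource("widgets"))
+//	}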
+func (c *kcpWildcardPartialMetadataConverter) Convert(list *unstructured.UnstructuredList, targetGV schema.GroupVersion) (*unstructured.UnstructuredList, error) {
+ for i := range list.Items {
+ item := &list.Items[i]
+
+ // First preserve the actual API version
+ annotations := item.GetAnnotations()
+ if annotations == nil {
+ annotations = make(map[string]string)
+ }
+ annotations[handlers.KCPOriginalAPIVersionAnnotation] = item.GetAPIVersion()
+ item.SetAnnotations(annotations)
+
+ // Now that we've preserved it, we can change it to the targetGV.
+ item.SetGroupVersionKind(targetGV.WithKind(item.GroupVersionKind().Kind))
+ }
+ return list, nil
+}
diff --git a/staging/src/k8s.io/apiextensions-apiserver/pkg/apiserver/customresource_discovery.go b/staging/src/k8s.io/apiextensions-apiserver/pkg/apiserver/customresource_discovery.go
index f40c33791b6a7..d9c5b6df6f0c5 100644
--- a/staging/src/k8s.io/apiextensions-apiserver/pkg/apiserver/customresource_discovery.go
+++ b/staging/src/k8s.io/apiextensions-apiserver/pkg/apiserver/customresource_discovery.go
@@ -17,20 +17,28 @@ limitations under the License.
package apiserver
import (
+ "context"
"net/http"
+ "sort"
"strings"
- "sync"
+ "github.com/kcp-dev/logicalcluster/v3"
+
+ autoscaling "k8s.io/api/autoscaling/v1"
+ apiextensionshelpers "k8s.io/apiextensions-apiserver/pkg/apihelpers"
+ apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
+ "k8s.io/apiextensions-apiserver/pkg/kcp"
+ metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+ "k8s.io/apimachinery/pkg/labels"
"k8s.io/apimachinery/pkg/runtime/schema"
+ "k8s.io/apimachinery/pkg/version"
"k8s.io/apiserver/pkg/endpoints/discovery"
+ genericapirequest "k8s.io/apiserver/pkg/endpoints/request"
)
type versionDiscoveryHandler struct {
- // TODO, writing is infrequent, optimize this
- discoveryLock sync.RWMutex
- discovery map[schema.GroupVersion]*discovery.APIVersionHandler
-
- delegate http.Handler
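+ // crdLister is the kcp cluster-aware CRD lister; discovery is computed per request from it rather than served from a precomputed, locked map.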
+ crdLister kcp.ClusterAwareCRDClusterLister
+ delegate http.Handler
}
func (r *versionDiscoveryHandler) ServeHTTP(w http.ResponseWriter, req *http.Request) {
@@ -40,43 +48,124 @@ func (r *versionDiscoveryHandler) ServeHTTP(w http.ResponseWriter, req *http.Req
r.delegate.ServeHTTP(w, req)
return
}
- discovery, ok := r.getDiscovery(schema.GroupVersion{Group: pathParts[1], Version: pathParts[2]})
- if !ok {
- r.delegate.ServeHTTP(w, req)
+
+ clusterName, wildcard, err := genericapirequest.ClusterNameOrWildcardFrom(req.Context())
+ if err != nil {
+ http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
+ if wildcard {
+ // this is the only case where wildcard works for a list because this is our special CRD lister that handles it.
+ clusterName = "*"
+ }
- discovery.ServeHTTP(w, req)
-}
+ requestedGroup := pathParts[1]
+ requestedVersion := pathParts[2]
-func (r *versionDiscoveryHandler) getDiscovery(gv schema.GroupVersion) (*discovery.APIVersionHandler, bool) {
- r.discoveryLock.RLock()
- defer r.discoveryLock.RUnlock()
+ crds, err := r.crdLister.Cluster(clusterName).List(req.Context(), labels.Everything())
+ if err != nil {
+ http.Error(w, err.Error(), http.StatusInternalServerError)
+ return
+ }
- ret, ok := r.discovery[gv]
- return ret, ok
-}
+ apiResources := APIResourcesForGroupVersion(requestedGroup, requestedVersion, crds)
-func (r *versionDiscoveryHandler) setDiscovery(gv schema.GroupVersion, discovery *discovery.APIVersionHandler) {
- r.discoveryLock.Lock()
- defer r.discoveryLock.Unlock()
+ resourceListerFunc := discovery.APIResourceListerFunc(func() []metav1.APIResource {
+ return apiResources
+ })
- r.discovery[gv] = discovery
+ discovery.NewAPIVersionHandler(Codecs, schema.GroupVersion{Group: requestedGroup, Version: requestedVersion}, resourceListerFunc).ServeHTTP(w, req)
}
-func (r *versionDiscoveryHandler) unsetDiscovery(gv schema.GroupVersion) {
- r.discoveryLock.Lock()
- defer r.discoveryLock.Unlock()
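+// APIResourcesForGroupVersion computes the discovery APIResource list for the requested group and version from the given CRDs.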
+func APIResourcesForGroupVersion(requestedGroup, requestedVersion string, crds []*apiextensionsv1.CustomResourceDefinition) []metav1.APIResource {
+ apiResourcesForDiscovery := []metav1.APIResource{}
+
+ for _, crd := range crds {
+ if requestedGroup != crd.Spec.Group {
+ continue
+ }
+
+ if !apiextensionshelpers.IsCRDConditionTrue(crd, apiextensionsv1.Established) {
+ continue
+ }
+
+ var (
+ storageVersionHash string
+ subresources *apiextensionsv1.CustomResourceSubresources
+ foundVersion = false
+ )
+
+ for _, v := range crd.Spec.Versions {
+ if !v.Served {
+ continue
+ }
+
+ // HACK: support the case when we add core resources through CRDs (KCP scenario)
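+ // For a core-group CRD (Spec.Group == ""), the group/version string is just the version, e.g. "v1" instead of "/v1".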
+ groupVersion := crd.Spec.Group + "/" + v.Name
+ if crd.Spec.Group == "" {
+ groupVersion = v.Name
+ }
+
+ gv := metav1.GroupVersion{Group: groupVersion, Version: v.Name}
- delete(r.discovery, gv)
+ if v.Name == requestedVersion {
+ foundVersion = true
+ subresources = v.Subresources
+ }
+ if v.Storage {
+ storageVersionHash = discovery.StorageVersionHash(logicalcluster.From(crd), gv.Group, gv.Version, crd.Spec.Names.Kind)
+ }
+ }
+
+ if !foundVersion {
+ // This CRD doesn't have the requested version
+ continue
+ }
+
+ verbs := metav1.Verbs([]string{"delete", "deletecollection", "get", "list", "patch", "create", "update", "watch"})
+ // if we're terminating we don't allow some verbs
+ if apiextensionshelpers.IsCRDConditionTrue(crd, apiextensionsv1.Terminating) {
+ verbs = metav1.Verbs([]string{"delete", "deletecollection", "get", "list", "watch"})
+ }
+
+ apiResourcesForDiscovery = append(apiResourcesForDiscovery, metav1.APIResource{
+ Name: crd.Status.AcceptedNames.Plural,
+ SingularName: crd.Status.AcceptedNames.Singular,
+ Namespaced: crd.Spec.Scope == apiextensionsv1.NamespaceScoped,
+ Kind: crd.Status.AcceptedNames.Kind,
+ Verbs: verbs,
+ ShortNames: crd.Status.AcceptedNames.ShortNames,
+ Categories: crd.Status.AcceptedNames.Categories,
+ StorageVersionHash: storageVersionHash,
+ })
+
+ if subresources != nil && subresources.Status != nil {
+ apiResourcesForDiscovery = append(apiResourcesForDiscovery, metav1.APIResource{
+ Name: crd.Status.AcceptedNames.Plural + "/status",
+ Namespaced: crd.Spec.Scope == apiextensionsv1.NamespaceScoped,
+ Kind: crd.Status.AcceptedNames.Kind,
+ Verbs: metav1.Verbs([]string{"get", "patch", "update"}),
+ })
+ }
+
+ if subresources != nil && subresources.Scale != nil {
+ apiResourcesForDiscovery = append(apiResourcesForDiscovery, metav1.APIResource{
+ Group: autoscaling.GroupName,
+ Version: "v1",
+ Kind: "Scale",
+ Name: crd.Status.AcceptedNames.Plural + "/scale",
+ Namespaced: crd.Spec.Scope == apiextensionsv1.NamespaceScoped,
+ Verbs: metav1.Verbs([]string{"get", "patch", "update"}),
+ })
+ }
+ }
+
+ return apiResourcesForDiscovery
}
type groupDiscoveryHandler struct {
- // TODO, writing is infrequent, optimize this
- discoveryLock sync.RWMutex
- discovery map[string]*discovery.APIGroupHandler
-
- delegate http.Handler
+ crdLister kcp.ClusterAwareCRDClusterLister
+ delegate http.Handler
}
func (r *groupDiscoveryHandler) ServeHTTP(w http.ResponseWriter, req *http.Request) {
@@ -86,35 +175,155 @@ func (r *groupDiscoveryHandler) ServeHTTP(w http.ResponseWriter, req *http.Reque
r.delegate.ServeHTTP(w, req)
return
}
- discovery, ok := r.getDiscovery(pathParts[1])
- if !ok {
+
+ clusterName, wildcard, err := genericapirequest.ClusterNameOrWildcardFrom(req.Context())
+ if err != nil {
+ http.Error(w, err.Error(), http.StatusInternalServerError)
+ return
+ }
+ if wildcard {
+ // this is the only case where wildcard works for a list because this is our special CRD lister that handles it.
+ clusterName = "*"
+ }
+
+ apiVersionsForDiscovery := []metav1.GroupVersionForDiscovery{}
+ versionsForDiscoveryMap := map[metav1.GroupVersion]bool{}
+
+ requestedGroup := pathParts[1]
+
+ crds, err := r.crdLister.Cluster(clusterName).List(req.Context(), labels.Everything())
+ if err != nil {
+ http.Error(w, err.Error(), http.StatusInternalServerError)
+ return
+ }
+ foundGroup := false
+ for _, crd := range crds {
+ if requestedGroup != crd.Spec.Group {
+ continue
+ }
+
+ if !apiextensionshelpers.IsCRDConditionTrue(crd, apiextensionsv1.Established) {
+ continue
+ }
+
+ for _, v := range crd.Spec.Versions {
+ if !v.Served {
+ continue
+ }
+ // If there is any Served version, that means the group should show up in discovery
+ foundGroup = true
+
+ // HACK: support the case when we add core resources through CRDs (KCP scenario)
+ groupVersion := crd.Spec.Group + "/" + v.Name
+ if crd.Spec.Group == "" {
+ groupVersion = v.Name
+ }
+
+ gv := metav1.GroupVersion{Group: crd.Spec.Group, Version: v.Name}
+
+ if !versionsForDiscoveryMap[gv] {
+ versionsForDiscoveryMap[gv] = true
+ apiVersionsForDiscovery = append(apiVersionsForDiscovery, metav1.GroupVersionForDiscovery{
+ GroupVersion: groupVersion,
+ Version: v.Name,
+ })
+ }
+ }
+ }
+
+ sortGroupDiscoveryByKubeAwareVersion(apiVersionsForDiscovery)
+
+ if !foundGroup {
r.delegate.ServeHTTP(w, req)
return
}
- discovery.ServeHTTP(w, req)
-}
+ apiGroup := metav1.APIGroup{
+ Name: requestedGroup,
+ Versions: apiVersionsForDiscovery,
+ // the preferred version for a group is the first item in
+ // apiVersionsForDiscovery after it has been sorted in the right order
+ PreferredVersion: apiVersionsForDiscovery[0],
+ }
-func (r *groupDiscoveryHandler) getDiscovery(group string) (*discovery.APIGroupHandler, bool) {
- r.discoveryLock.RLock()
- defer r.discoveryLock.RUnlock()
+ discovery.NewAPIGroupHandler(Codecs, apiGroup).ServeHTTP(w, req)
+}
- ret, ok := r.discovery[group]
- return ret, ok
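+// rootDiscoveryHandler serves root /apis discovery by listing CRDs through the kcp cluster-aware lister on each request.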
+type rootDiscoveryHandler struct {
+ crdLister kcp.ClusterAwareCRDClusterLister
+ delegate http.Handler
}
-func (r *groupDiscoveryHandler) setDiscovery(group string, discovery *discovery.APIGroupHandler) {
- r.discoveryLock.Lock()
- defer r.discoveryLock.Unlock()
+func (r *rootDiscoveryHandler) Groups(ctx context.Context, _ *http.Request) ([]metav1.APIGroup, error) {
+ apiVersionsForDiscovery := map[string][]metav1.GroupVersionForDiscovery{}
+ versionsForDiscoveryMap := map[string]map[metav1.GroupVersion]bool{}
- r.discovery[group] = discovery
-}
+ clusterName, wildcard, err := genericapirequest.ClusterNameOrWildcardFrom(ctx)
+ if err != nil {
+ return nil, err
+ }
+ if wildcard {
+ // this is the only case where wildcard works for a list because this is our special CRD lister that handles it.
+ clusterName = "*"
+ }
+
+ crds, err := r.crdLister.Cluster(clusterName).List(ctx, labels.Everything())
+ if err != nil {
+ return []metav1.APIGroup{}, err
+ }
+ for _, crd := range crds {
+ if !apiextensionshelpers.IsCRDConditionTrue(crd, apiextensionsv1.Established) {
+ continue
+ }
+
+ for _, v := range crd.Spec.Versions {
+ if !v.Served {
+ continue
+ }
-func (r *groupDiscoveryHandler) unsetDiscovery(group string) {
- r.discoveryLock.Lock()
- defer r.discoveryLock.Unlock()
+ if crd.Spec.Group == "" {
+ // Don't include CRDs in the core ("") group in /apis discovery. They
+ // are instead served under /api/v1, which is handled elsewhere.
+ continue
+ }
+ groupVersion := crd.Spec.Group + "/" + v.Name
- delete(r.discovery, group)
+ gv := metav1.GroupVersion{Group: crd.Spec.Group, Version: v.Name}
+
+ m, ok := versionsForDiscoveryMap[crd.Spec.Group]
+ if !ok {
+ m = make(map[metav1.GroupVersion]bool)
+ }
+
+ if !m[gv] {
+ m[gv] = true
+ groupVersions := apiVersionsForDiscovery[crd.Spec.Group]
+ groupVersions = append(groupVersions, metav1.GroupVersionForDiscovery{
+ GroupVersion: groupVersion,
+ Version: v.Name,
+ })
+ apiVersionsForDiscovery[crd.Spec.Group] = groupVersions
+ }
+
+ versionsForDiscoveryMap[crd.Spec.Group] = m
+ }
+ }
+
+ for _, versions := range apiVersionsForDiscovery {
+ sortGroupDiscoveryByKubeAwareVersion(versions)
+ }
+
+ groupList := make([]metav1.APIGroup, 0, len(apiVersionsForDiscovery))
+ for group, versions := range apiVersionsForDiscovery {
+ g := metav1.APIGroup{
+ Name: group,
+ Versions: versions,
+ PreferredVersion: versions[0],
+ }
+ groupList = append(groupList, g)
+ }
+ return groupList, nil
}
// splitPath returns the segments for a URL path.
@@ -125,3 +334,9 @@ func splitPath(path string) []string {
}
return strings.Split(path, "/")
}
+
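+// sortGroupDiscoveryByKubeAwareVersion sorts in descending kube-aware priority: GA versions first (e.g. v2 before v1), then beta, then alpha, then non-conforming names.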
+func sortGroupDiscoveryByKubeAwareVersion(gd []metav1.GroupVersionForDiscovery) {
+ sort.Slice(gd, func(i, j int) bool {
+ return version.CompareKubeAwareVersionStrings(gd[i].Version, gd[j].Version) > 0
+ })
+}
diff --git a/staging/src/k8s.io/apiextensions-apiserver/pkg/apiserver/customresource_discovery_controller.go b/staging/src/k8s.io/apiextensions-apiserver/pkg/apiserver/customresource_discovery_controller.go
deleted file mode 100644
index 1e8ffbc69cab5..0000000000000
--- a/staging/src/k8s.io/apiextensions-apiserver/pkg/apiserver/customresource_discovery_controller.go
+++ /dev/null
@@ -1,397 +0,0 @@
-/*
-Copyright 2017 The Kubernetes Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-package apiserver
-
-import (
- "context"
- "errors"
- "fmt"
- "sort"
- "time"
-
- "k8s.io/klog/v2"
-
- apidiscoveryv2 "k8s.io/api/apidiscovery/v2"
- autoscaling "k8s.io/api/autoscaling/v1"
- metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
- "k8s.io/apimachinery/pkg/labels"
- "k8s.io/apimachinery/pkg/runtime/schema"
- utilruntime "k8s.io/apimachinery/pkg/util/runtime"
- "k8s.io/apimachinery/pkg/util/wait"
- "k8s.io/apimachinery/pkg/version"
- "k8s.io/apiserver/pkg/endpoints/discovery"
- discoveryendpoint "k8s.io/apiserver/pkg/endpoints/discovery/aggregated"
- "k8s.io/client-go/tools/cache"
- "k8s.io/client-go/util/workqueue"
-
- apiextensionshelpers "k8s.io/apiextensions-apiserver/pkg/apihelpers"
- apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
- informers "k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/apiextensions/v1"
- listers "k8s.io/apiextensions-apiserver/pkg/client/listers/apiextensions/v1"
-)
-
-type DiscoveryController struct {
- versionHandler *versionDiscoveryHandler
- groupHandler *groupDiscoveryHandler
- resourceManager discoveryendpoint.ResourceManager
-
- crdLister listers.CustomResourceDefinitionLister
- crdsSynced cache.InformerSynced
-
- // To allow injection for testing.
- syncFn func(version schema.GroupVersion) error
-
- queue workqueue.TypedRateLimitingInterface[schema.GroupVersion]
-}
-
-func NewDiscoveryController(
- crdInformer informers.CustomResourceDefinitionInformer,
- versionHandler *versionDiscoveryHandler,
- groupHandler *groupDiscoveryHandler,
- resourceManager discoveryendpoint.ResourceManager,
-) *DiscoveryController {
- c := &DiscoveryController{
- versionHandler: versionHandler,
- groupHandler: groupHandler,
- resourceManager: resourceManager,
- crdLister: crdInformer.Lister(),
- crdsSynced: crdInformer.Informer().HasSynced,
-
- queue: workqueue.NewTypedRateLimitingQueueWithConfig(
- workqueue.DefaultTypedControllerRateLimiter[schema.GroupVersion](),
- workqueue.TypedRateLimitingQueueConfig[schema.GroupVersion]{Name: "DiscoveryController"},
- ),
- }
-
- crdInformer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
- AddFunc: c.addCustomResourceDefinition,
- UpdateFunc: c.updateCustomResourceDefinition,
- DeleteFunc: c.deleteCustomResourceDefinition,
- })
-
- c.syncFn = c.sync
-
- return c
-}
-
-func (c *DiscoveryController) sync(version schema.GroupVersion) error {
-
- apiVersionsForDiscovery := []metav1.GroupVersionForDiscovery{}
- apiResourcesForDiscovery := []metav1.APIResource{}
- aggregatedAPIResourcesForDiscovery := []apidiscoveryv2.APIResourceDiscovery{}
- versionsForDiscoveryMap := map[metav1.GroupVersion]bool{}
-
- crds, err := c.crdLister.List(labels.Everything())
- if err != nil {
- return err
- }
- foundVersion := false
- foundGroup := false
- for _, crd := range crds {
- if !apiextensionshelpers.IsCRDConditionTrue(crd, apiextensionsv1.Established) {
- continue
- }
-
- if crd.Spec.Group != version.Group {
- continue
- }
-
- foundThisVersion := false
- var storageVersionHash string
- for _, v := range crd.Spec.Versions {
- if !v.Served {
- continue
- }
- // If there is any Served version, that means the group should show up in discovery
- foundGroup = true
-
- gv := metav1.GroupVersion{Group: crd.Spec.Group, Version: v.Name}
- if !versionsForDiscoveryMap[gv] {
- versionsForDiscoveryMap[gv] = true
- apiVersionsForDiscovery = append(apiVersionsForDiscovery, metav1.GroupVersionForDiscovery{
- GroupVersion: crd.Spec.Group + "/" + v.Name,
- Version: v.Name,
- })
- }
- if v.Name == version.Version {
- foundThisVersion = true
- }
- if v.Storage {
- storageVersionHash = discovery.StorageVersionHash(gv.Group, gv.Version, crd.Spec.Names.Kind)
- }
- }
-
- if !foundThisVersion {
- continue
- }
- foundVersion = true
-
- verbs := metav1.Verbs([]string{"delete", "deletecollection", "get", "list", "patch", "create", "update", "watch"})
- // if we're terminating we don't allow some verbs
- if apiextensionshelpers.IsCRDConditionTrue(crd, apiextensionsv1.Terminating) {
- verbs = metav1.Verbs([]string{"delete", "deletecollection", "get", "list", "watch"})
- }
-
- apiResourcesForDiscovery = append(apiResourcesForDiscovery, metav1.APIResource{
- Name: crd.Status.AcceptedNames.Plural,
- SingularName: crd.Status.AcceptedNames.Singular,
- Namespaced: crd.Spec.Scope == apiextensionsv1.NamespaceScoped,
- Kind: crd.Status.AcceptedNames.Kind,
- Verbs: verbs,
- ShortNames: crd.Status.AcceptedNames.ShortNames,
- Categories: crd.Status.AcceptedNames.Categories,
- StorageVersionHash: storageVersionHash,
- })
-
- subresources, err := apiextensionshelpers.GetSubresourcesForVersion(crd, version.Version)
- if err != nil {
- return err
- }
-
- if c.resourceManager != nil {
- var scope apidiscoveryv2.ResourceScope
- if crd.Spec.Scope == apiextensionsv1.NamespaceScoped {
- scope = apidiscoveryv2.ScopeNamespace
- } else {
- scope = apidiscoveryv2.ScopeCluster
- }
- apiResourceDiscovery := apidiscoveryv2.APIResourceDiscovery{
- Resource: crd.Status.AcceptedNames.Plural,
- SingularResource: crd.Status.AcceptedNames.Singular,
- Scope: scope,
- ResponseKind: &metav1.GroupVersionKind{
- Group: version.Group,
- Version: version.Version,
- Kind: crd.Status.AcceptedNames.Kind,
- },
- Verbs: verbs,
- ShortNames: crd.Status.AcceptedNames.ShortNames,
- Categories: crd.Status.AcceptedNames.Categories,
- }
- if subresources != nil && subresources.Status != nil {
- apiResourceDiscovery.Subresources = append(apiResourceDiscovery.Subresources, apidiscoveryv2.APISubresourceDiscovery{
- Subresource: "status",
- ResponseKind: &metav1.GroupVersionKind{
- Group: version.Group,
- Version: version.Version,
- Kind: crd.Status.AcceptedNames.Kind,
- },
- Verbs: metav1.Verbs([]string{"get", "patch", "update"}),
- })
- }
- if subresources != nil && subresources.Scale != nil {
- apiResourceDiscovery.Subresources = append(apiResourceDiscovery.Subresources, apidiscoveryv2.APISubresourceDiscovery{
- Subresource: "scale",
- ResponseKind: &metav1.GroupVersionKind{
- Group: autoscaling.GroupName,
- Version: "v1",
- Kind: "Scale",
- },
- Verbs: metav1.Verbs([]string{"get", "patch", "update"}),
- })
-
- }
- aggregatedAPIResourcesForDiscovery = append(aggregatedAPIResourcesForDiscovery, apiResourceDiscovery)
- }
-
- if subresources != nil && subresources.Status != nil {
- apiResourcesForDiscovery = append(apiResourcesForDiscovery, metav1.APIResource{
- Name: crd.Status.AcceptedNames.Plural + "/status",
- Namespaced: crd.Spec.Scope == apiextensionsv1.NamespaceScoped,
- Kind: crd.Status.AcceptedNames.Kind,
- Verbs: metav1.Verbs([]string{"get", "patch", "update"}),
- })
- }
-
- if subresources != nil && subresources.Scale != nil {
- apiResourcesForDiscovery = append(apiResourcesForDiscovery, metav1.APIResource{
- Group: autoscaling.GroupName,
- Version: "v1",
- Kind: "Scale",
- Name: crd.Status.AcceptedNames.Plural + "/scale",
- Namespaced: crd.Spec.Scope == apiextensionsv1.NamespaceScoped,
- Verbs: metav1.Verbs([]string{"get", "patch", "update"}),
- })
- }
- }
-
- if !foundGroup {
- c.groupHandler.unsetDiscovery(version.Group)
- c.versionHandler.unsetDiscovery(version)
-
- if c.resourceManager != nil {
- c.resourceManager.RemoveGroup(version.Group)
- }
- return nil
- }
-
- sortGroupDiscoveryByKubeAwareVersion(apiVersionsForDiscovery)
-
- apiGroup := metav1.APIGroup{
- Name: version.Group,
- Versions: apiVersionsForDiscovery,
- // the preferred versions for a group is the first item in
- // apiVersionsForDiscovery after it put in the right ordered
- PreferredVersion: apiVersionsForDiscovery[0],
- }
- c.groupHandler.setDiscovery(version.Group, discovery.NewAPIGroupHandler(Codecs, apiGroup))
-
- if !foundVersion {
- c.versionHandler.unsetDiscovery(version)
-
- if c.resourceManager != nil {
- c.resourceManager.RemoveGroupVersion(metav1.GroupVersion{
- Group: version.Group,
- Version: version.Version,
- })
- }
- return nil
- }
- c.versionHandler.setDiscovery(version, discovery.NewAPIVersionHandler(Codecs, version, discovery.APIResourceListerFunc(func() []metav1.APIResource {
- return apiResourcesForDiscovery
- })))
-
- sort.Slice(aggregatedAPIResourcesForDiscovery, func(i, j int) bool {
- return aggregatedAPIResourcesForDiscovery[i].Resource < aggregatedAPIResourcesForDiscovery[j].Resource
- })
- if c.resourceManager != nil {
- c.resourceManager.AddGroupVersion(version.Group, apidiscoveryv2.APIVersionDiscovery{
- Freshness: apidiscoveryv2.DiscoveryFreshnessCurrent,
- Version: version.Version,
- Resources: aggregatedAPIResourcesForDiscovery,
- })
- // Default priority for CRDs
- c.resourceManager.SetGroupVersionPriority(metav1.GroupVersion(version), 1000, 100)
- }
- return nil
-}
-
-func sortGroupDiscoveryByKubeAwareVersion(gd []metav1.GroupVersionForDiscovery) {
- sort.Slice(gd, func(i, j int) bool {
- return version.CompareKubeAwareVersionStrings(gd[i].Version, gd[j].Version) > 0
- })
-}
-
-func (c *DiscoveryController) Run(stopCh <-chan struct{}, synchedCh chan<- struct{}) {
- defer utilruntime.HandleCrash()
- defer c.queue.ShutDown()
- defer klog.Info("Shutting down DiscoveryController")
-
- klog.Info("Starting DiscoveryController")
-
- if !cache.WaitForCacheSync(stopCh, c.crdsSynced) {
- utilruntime.HandleError(fmt.Errorf("timed out waiting for caches to sync"))
- return
- }
-
- // initially sync all group versions to make sure we serve complete discovery
- if err := wait.PollUntilContextCancel(context.Background(), time.Second, true, func(ctx context.Context) (bool, error) {
- crds, err := c.crdLister.List(labels.Everything())
- if err != nil {
- utilruntime.HandleError(fmt.Errorf("failed to initially list CRDs: %v", err))
- return false, nil
- }
- for _, crd := range crds {
- for _, v := range crd.Spec.Versions {
- gv := schema.GroupVersion{Group: crd.Spec.Group, Version: v.Name}
- if err := c.sync(gv); err != nil {
- utilruntime.HandleError(fmt.Errorf("failed to initially sync CRD version %v: %v", gv, err))
- return false, nil
- }
- }
- }
- return true, nil
- }); err != nil {
- if errors.Is(err, context.DeadlineExceeded) {
- utilruntime.HandleError(fmt.Errorf("timed out waiting for initial discovery sync"))
- return
- }
- utilruntime.HandleError(fmt.Errorf("unexpected error: %w", err))
- return
- }
- close(synchedCh)
-
- // only start one worker thread since its a slow moving API
- go wait.Until(c.runWorker, time.Second, stopCh)
-
- <-stopCh
-}
-
-func (c *DiscoveryController) runWorker() {
- for c.processNextWorkItem() {
- }
-}
-
-// processNextWorkItem deals with one key off the queue. It returns false when it's time to quit.
-func (c *DiscoveryController) processNextWorkItem() bool {
- key, quit := c.queue.Get()
- if quit {
- return false
- }
- defer c.queue.Done(key)
-
- err := c.syncFn(key)
- if err == nil {
- c.queue.Forget(key)
- return true
- }
-
- utilruntime.HandleError(fmt.Errorf("%v failed with: %v", key, err))
- c.queue.AddRateLimited(key)
-
- return true
-}
-
-func (c *DiscoveryController) enqueue(obj *apiextensionsv1.CustomResourceDefinition) {
- for _, v := range obj.Spec.Versions {
- c.queue.Add(schema.GroupVersion{Group: obj.Spec.Group, Version: v.Name})
- }
-}
-
-func (c *DiscoveryController) addCustomResourceDefinition(obj interface{}) {
- castObj := obj.(*apiextensionsv1.CustomResourceDefinition)
- klog.V(4).Infof("Adding customresourcedefinition %s", castObj.Name)
- c.enqueue(castObj)
-}
-
-func (c *DiscoveryController) updateCustomResourceDefinition(oldObj, newObj interface{}) {
- castNewObj := newObj.(*apiextensionsv1.CustomResourceDefinition)
- castOldObj := oldObj.(*apiextensionsv1.CustomResourceDefinition)
- klog.V(4).Infof("Updating customresourcedefinition %s", castOldObj.Name)
- // Enqueue both old and new object to make sure we remove and add appropriate Versions.
- // The working queue will resolve any duplicates and only changes will stay in the queue.
- c.enqueue(castNewObj)
- c.enqueue(castOldObj)
-}
-
-func (c *DiscoveryController) deleteCustomResourceDefinition(obj interface{}) {
- castObj, ok := obj.(*apiextensionsv1.CustomResourceDefinition)
- if !ok {
- tombstone, ok := obj.(cache.DeletedFinalStateUnknown)
- if !ok {
- klog.Errorf("Couldn't get object from tombstone %#v", obj)
- return
- }
- castObj, ok = tombstone.Obj.(*apiextensionsv1.CustomResourceDefinition)
- if !ok {
- klog.Errorf("Tombstone contained object that is not expected %#v", obj)
- return
- }
- }
- klog.V(4).Infof("Deleting customresourcedefinition %q", castObj.Name)
- c.enqueue(castObj)
-}
diff --git a/staging/src/k8s.io/apiextensions-apiserver/pkg/apiserver/customresource_discovery_controller_test.go b/staging/src/k8s.io/apiextensions-apiserver/pkg/apiserver/customresource_discovery_controller_test.go
deleted file mode 100644
index 4ca778e7c2ba6..0000000000000
--- a/staging/src/k8s.io/apiextensions-apiserver/pkg/apiserver/customresource_discovery_controller_test.go
+++ /dev/null
@@ -1,423 +0,0 @@
-/*
-Copyright 2022 The Kubernetes Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-package apiserver
-
-import (
- "context"
- "testing"
- "time"
-
- "github.com/stretchr/testify/require"
- apidiscoveryv2 "k8s.io/api/apidiscovery/v2"
- v1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
- "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
- "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset/fake"
- "k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions"
- metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
- "k8s.io/apimachinery/pkg/runtime/schema"
- "k8s.io/apiserver/pkg/endpoints/discovery"
- "k8s.io/apiserver/pkg/endpoints/discovery/aggregated"
-)
-
-var coolFooCRD = &v1.CustomResourceDefinition{
- TypeMeta: metav1.TypeMeta{
- APIVersion: "apiextensions.k8s.io/v1",
- Kind: "CustomResourceDefinition",
- },
- ObjectMeta: metav1.ObjectMeta{
- Name: "coolfoo.stable.example.com",
- },
- Spec: v1.CustomResourceDefinitionSpec{
- Group: "stable.example.com",
- Names: v1.CustomResourceDefinitionNames{
- Plural: "coolfoos",
- Singular: "coolfoo",
- ShortNames: []string{"foo"},
- Kind: "CoolFoo",
- ListKind: "CoolFooList",
- Categories: []string{"cool"},
- },
- Scope: v1.ClusterScoped,
- Versions: []v1.CustomResourceDefinitionVersion{
- {
- Name: "v1",
- Served: true,
- Storage: true,
- Deprecated: false,
- Subresources: &v1.CustomResourceSubresources{
- // This CRD has a /status subresource
- Status: &v1.CustomResourceSubresourceStatus{},
- },
- Schema: &v1.CustomResourceValidation{
- // Unused by discovery
- OpenAPIV3Schema: &v1.JSONSchemaProps{},
- },
- },
- },
- Conversion: &v1.CustomResourceConversion{},
- PreserveUnknownFields: false,
- },
- Status: v1.CustomResourceDefinitionStatus{
- Conditions: []v1.CustomResourceDefinitionCondition{
- {
- Type: v1.Established,
- Status: v1.ConditionTrue,
- },
- },
- },
-}
-
-var coolBarCRD = &v1.CustomResourceDefinition{
- TypeMeta: metav1.TypeMeta{
- APIVersion: "apiextensions.k8s.io/v1",
- Kind: "CustomResourceDefinition",
- },
- ObjectMeta: metav1.ObjectMeta{
- Name: "coolbar.stable.example.com",
- },
- Spec: v1.CustomResourceDefinitionSpec{
- Group: "stable.example.com",
- Names: v1.CustomResourceDefinitionNames{
- Plural: "coolbars",
- Singular: "coolbar",
- ShortNames: []string{"bar"},
- Kind: "CoolBar",
- ListKind: "CoolBarList",
- Categories: []string{"cool"},
- },
- Scope: v1.ClusterScoped,
- Versions: []v1.CustomResourceDefinitionVersion{
- {
- Name: "v1",
- Served: true,
- Storage: true,
- Deprecated: false,
- Schema: &v1.CustomResourceValidation{
- // Unused by discovery
- OpenAPIV3Schema: &v1.JSONSchemaProps{},
- },
- },
- },
- Conversion: &v1.CustomResourceConversion{},
- PreserveUnknownFields: false,
- },
- Status: v1.CustomResourceDefinitionStatus{
- Conditions: []v1.CustomResourceDefinitionCondition{
- {
- Type: v1.Established,
- Status: v1.ConditionTrue,
- },
- },
- },
-}
-
-var coolFooDiscovery apidiscoveryv2.APIVersionDiscovery = apidiscoveryv2.APIVersionDiscovery{
- Version: "v1",
- Freshness: apidiscoveryv2.DiscoveryFreshnessCurrent,
- Resources: []apidiscoveryv2.APIResourceDiscovery{
- {
- Resource: "coolfoos",
- Scope: apidiscoveryv2.ScopeCluster,
- SingularResource: "coolfoo",
- Verbs: []string{"delete", "deletecollection", "get", "list", "patch", "create", "update", "watch"},
- ShortNames: []string{"foo"},
- Categories: []string{"cool"},
- ResponseKind: &metav1.GroupVersionKind{
- Group: "stable.example.com",
- Version: "v1",
- Kind: "CoolFoo",
- },
- Subresources: []apidiscoveryv2.APISubresourceDiscovery{
- {
- Subresource: "status",
- Verbs: []string{"get", "patch", "update"},
- AcceptedTypes: nil, // is this correct?
- ResponseKind: &metav1.GroupVersionKind{
- Group: "stable.example.com",
- Version: "v1",
- Kind: "CoolFoo",
- },
- },
- },
- },
- },
-}
-
-var mergedDiscovery apidiscoveryv2.APIVersionDiscovery = apidiscoveryv2.APIVersionDiscovery{
- Version: "v1",
- Freshness: apidiscoveryv2.DiscoveryFreshnessCurrent,
- Resources: []apidiscoveryv2.APIResourceDiscovery{
- {
- Resource: "coolbars",
- Scope: apidiscoveryv2.ScopeCluster,
- SingularResource: "coolbar",
- Verbs: []string{"delete", "deletecollection", "get", "list", "patch", "create", "update", "watch"},
- ShortNames: []string{"bar"},
- Categories: []string{"cool"},
- ResponseKind: &metav1.GroupVersionKind{
- Group: "stable.example.com",
- Version: "v1",
- Kind: "CoolBar",
- },
- }, {
- Resource: "coolfoos",
- Scope: apidiscoveryv2.ScopeCluster,
- SingularResource: "coolfoo",
- Verbs: []string{"delete", "deletecollection", "get", "list", "patch", "create", "update", "watch"},
- ShortNames: []string{"foo"},
- Categories: []string{"cool"},
- ResponseKind: &metav1.GroupVersionKind{
- Group: "stable.example.com",
- Version: "v1",
- Kind: "CoolFoo",
- },
- Subresources: []apidiscoveryv2.APISubresourceDiscovery{
- {
- Subresource: "status",
- Verbs: []string{"get", "patch", "update"},
- AcceptedTypes: nil, // is this correct?
- ResponseKind: &metav1.GroupVersionKind{
- Group: "stable.example.com",
- Version: "v1",
- Kind: "CoolFoo",
- },
- },
- },
- },
- },
-}
-
-func init() {
- // Not testing against an apiserver, so just assume names are accepted
- coolFooCRD.Status.AcceptedNames = coolFooCRD.Spec.Names
- coolBarCRD.Status.AcceptedNames = coolBarCRD.Spec.Names
-}
-
-// Provides an apiextensions-apiserver client
-type testEnvironment struct {
- clientset.Interface
-
- // Discovery test details
- versionDiscoveryHandler
- groupDiscoveryHandler
-
- aggregated.FakeResourceManager
-}
-
-func (env *testEnvironment) Start(ctx context.Context) {
- discoverySyncedCh := make(chan struct{})
-
- factory := externalversions.NewSharedInformerFactoryWithOptions(
- env.Interface, 30*time.Second)
-
- discoveryController := NewDiscoveryController(
- factory.Apiextensions().V1().CustomResourceDefinitions(),
- &env.versionDiscoveryHandler,
- &env.groupDiscoveryHandler,
- env.FakeResourceManager,
- )
-
- factory.Start(ctx.Done())
- go discoveryController.Run(ctx.Done(), discoverySyncedCh)
-
- select {
- case <-discoverySyncedCh:
- case <-ctx.Done():
- }
-}
-
-func setup() *testEnvironment {
- env := &testEnvironment{
- Interface: fake.NewSimpleClientset(),
- FakeResourceManager: aggregated.NewFakeResourceManager(),
- versionDiscoveryHandler: versionDiscoveryHandler{
- discovery: make(map[schema.GroupVersion]*discovery.APIVersionHandler),
- },
- groupDiscoveryHandler: groupDiscoveryHandler{
- discovery: make(map[string]*discovery.APIGroupHandler),
- },
- }
-
- return env
-}
-
-func TestResourceManagerExistingCRD(t *testing.T) {
- ctx, cancel := context.WithCancel(context.Background())
- defer cancel()
-
- env := setup()
- _, err := env.Interface.
- ApiextensionsV1().
- CustomResourceDefinitions().
- Create(
- ctx,
- coolFooCRD,
- metav1.CreateOptions{
- FieldManager: "resource-manager-test",
- },
- )
-
- require.NoError(t, err)
-
- env.FakeResourceManager.Expect().
- AddGroupVersion(coolFooCRD.Spec.Group, coolFooDiscovery)
- for _, v := range coolFooCRD.Spec.Versions {
- env.FakeResourceManager.Expect().
- SetGroupVersionPriority(metav1.GroupVersion{Group: coolFooCRD.Spec.Group, Version: v.Name}, 1000, 100)
- }
-
- env.FakeResourceManager.Expect().
- AddGroupVersion(coolFooCRD.Spec.Group, coolFooDiscovery)
- for _, v := range coolFooCRD.Spec.Versions {
- env.FakeResourceManager.Expect().
- SetGroupVersionPriority(metav1.GroupVersion{Group: coolFooCRD.Spec.Group, Version: v.Name}, 1000, 100)
- }
-
- env.Start(ctx)
- err = env.FakeResourceManager.WaitForActions(ctx, 1*time.Second)
- require.NoError(t, err)
-}
-
-// Tests that if a CRD is added a runtime, the discovery controller will
-// put its information in the discovery document
-func TestResourceManagerAddedCRD(t *testing.T) {
- ctx, cancel := context.WithCancel(context.Background())
- defer cancel()
-
- env := setup()
- env.FakeResourceManager.Expect().
- AddGroupVersion(coolFooCRD.Spec.Group, coolFooDiscovery)
- for _, v := range coolFooCRD.Spec.Versions {
- env.FakeResourceManager.Expect().
- SetGroupVersionPriority(metav1.GroupVersion{Group: coolFooCRD.Spec.Group, Version: v.Name}, 1000, 100)
- }
-
- env.Start(ctx)
-
- // Create CRD after the controller has already started
- _, err := env.Interface.
- ApiextensionsV1().
- CustomResourceDefinitions().
- Create(
- ctx,
- coolFooCRD,
- metav1.CreateOptions{
- FieldManager: "resource-manager-test",
- },
- )
-
- require.NoError(t, err)
-
- err = env.FakeResourceManager.WaitForActions(ctx, 1*time.Second)
- require.NoError(t, err)
-}
-
-// Test that having multiple CRDs in the same version will add both
-// versions to discovery.
-func TestMultipleCRDSameVersion(t *testing.T) {
- ctx, cancel := context.WithCancel(context.Background())
- defer cancel()
-
- env := setup()
- env.Start(ctx)
-
- _, err := env.Interface.
- ApiextensionsV1().
- CustomResourceDefinitions().
- Create(
- ctx,
- coolFooCRD,
- metav1.CreateOptions{
- FieldManager: "resource-manager-test",
- },
- )
-
- require.NoError(t, err)
- env.FakeResourceManager.Expect().
- AddGroupVersion(coolFooCRD.Spec.Group, coolFooDiscovery)
- for _, versionEntry := range coolFooCRD.Spec.Versions {
- env.FakeResourceManager.Expect().SetGroupVersionPriority(metav1.GroupVersion{Group: coolFooCRD.Spec.Group, Version: versionEntry.Name}, 1000, 100)
- }
- err = env.FakeResourceManager.WaitForActions(ctx, 1*time.Second)
- require.NoError(t, err)
-
- _, err = env.Interface.
- ApiextensionsV1().
- CustomResourceDefinitions().
- Create(
- ctx,
- coolBarCRD,
- metav1.CreateOptions{
- FieldManager: "resource-manager-test",
- },
- )
- require.NoError(t, err)
-
- env.FakeResourceManager.Expect().
- AddGroupVersion(coolFooCRD.Spec.Group, mergedDiscovery)
- for _, versionEntry := range coolFooCRD.Spec.Versions {
- env.FakeResourceManager.Expect().SetGroupVersionPriority(metav1.GroupVersion{Group: coolFooCRD.Spec.Group, Version: versionEntry.Name}, 1000, 100)
- }
- err = env.FakeResourceManager.WaitForActions(ctx, 1*time.Second)
- require.NoError(t, err)
-}
-
-// Tests that if a CRD is deleted at runtime, the discovery controller will
-// remove its information from its ResourceManager
-func TestDiscoveryControllerResourceManagerRemovedCRD(t *testing.T) {
- ctx, cancel := context.WithCancel(context.Background())
- defer cancel()
-
- env := setup()
- env.Start(ctx)
-
- // Create CRD after the controller has already started
- _, err := env.Interface.
- ApiextensionsV1().
- CustomResourceDefinitions().
- Create(
- ctx,
- coolFooCRD,
- metav1.CreateOptions{},
- )
-
- require.NoError(t, err)
-
- // Wait for the Controller to pick up the Create event and add it to the
- // Resource Manager
- env.FakeResourceManager.Expect().
- AddGroupVersion(coolFooCRD.Spec.Group, coolFooDiscovery)
- for _, versionEntry := range coolFooCRD.Spec.Versions {
- env.FakeResourceManager.Expect().SetGroupVersionPriority(metav1.GroupVersion{Group: coolFooCRD.Spec.Group, Version: versionEntry.Name}, 1000, 100)
- }
- err = env.FakeResourceManager.WaitForActions(ctx, 1*time.Second)
- require.NoError(t, err)
-
- err = env.Interface.
- ApiextensionsV1().
- CustomResourceDefinitions().
- Delete(ctx, coolFooCRD.Name, metav1.DeleteOptions{})
-
- require.NoError(t, err)
-
- // Wait for the Controller to detect there are no more CRDs of this group
- // and remove the entire group
- env.FakeResourceManager.Expect().RemoveGroup(coolFooCRD.Spec.Group)
-
- err = env.FakeResourceManager.WaitForActions(ctx, 1*time.Second)
- require.NoError(t, err)
-}
diff --git a/staging/src/k8s.io/apiextensions-apiserver/pkg/apiserver/customresource_handler.go b/staging/src/k8s.io/apiextensions-apiserver/pkg/apiserver/customresource_handler.go
index 9afeb2f80ce0d..d7d2dada210f5 100644
--- a/staging/src/k8s.io/apiextensions-apiserver/pkg/apiserver/customresource_handler.go
+++ b/staging/src/k8s.io/apiextensions-apiserver/pkg/apiserver/customresource_handler.go
@@ -17,7 +17,9 @@ limitations under the License.
package apiserver
import (
+ "bytes"
"fmt"
+ "io/ioutil"
"net/http"
"sort"
"strings"
@@ -25,7 +27,9 @@ import (
"sync/atomic"
"time"
- "sigs.k8s.io/structured-merge-diff/v4/fieldpath"
+ kcpapiextensionsv1informers "github.com/kcp-dev/client-go/apiextensions/informers/apiextensions/v1"
+ kcpapiextensionsv1listers "github.com/kcp-dev/client-go/apiextensions/listers/apiextensions/v1"
+ "github.com/kcp-dev/logicalcluster/v3"
apiextensionshelpers "k8s.io/apiextensions-apiserver/pkg/apihelpers"
apiextensionsinternal "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions"
@@ -36,18 +40,18 @@ import (
schemaobjectmeta "k8s.io/apiextensions-apiserver/pkg/apiserver/schema/objectmeta"
structuralpruning "k8s.io/apiextensions-apiserver/pkg/apiserver/schema/pruning"
apiservervalidation "k8s.io/apiextensions-apiserver/pkg/apiserver/validation"
- informers "k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/apiextensions/v1"
- listers "k8s.io/apiextensions-apiserver/pkg/client/listers/apiextensions/v1"
"k8s.io/apiextensions-apiserver/pkg/controller/establish"
"k8s.io/apiextensions-apiserver/pkg/controller/finalizer"
"k8s.io/apiextensions-apiserver/pkg/controller/openapi/builder"
"k8s.io/apiextensions-apiserver/pkg/crdserverscheme"
+ "k8s.io/apiextensions-apiserver/pkg/kcp"
"k8s.io/apiextensions-apiserver/pkg/registry/customresource"
"k8s.io/apiextensions-apiserver/pkg/registry/customresource/tableconvertor"
-
apiequality "k8s.io/apimachinery/pkg/api/equality"
apierrors "k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/api/meta"
+ apivalidation "k8s.io/apimachinery/pkg/api/validation"
+ "k8s.io/apimachinery/pkg/api/validation/path"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/labels"
@@ -60,6 +64,7 @@ import (
"k8s.io/apimachinery/pkg/runtime/serializer/versioning"
"k8s.io/apimachinery/pkg/types"
"k8s.io/apimachinery/pkg/util/managedfields"
+ utilnet "k8s.io/apimachinery/pkg/util/net"
utilruntime "k8s.io/apimachinery/pkg/util/runtime"
"k8s.io/apimachinery/pkg/util/sets"
utilwaitgroup "k8s.io/apimachinery/pkg/util/waitgroup"
@@ -69,21 +74,31 @@ import (
"k8s.io/apiserver/pkg/endpoints/handlers"
"k8s.io/apiserver/pkg/endpoints/handlers/responsewriters"
"k8s.io/apiserver/pkg/endpoints/metrics"
+ "k8s.io/apiserver/pkg/endpoints/openapi"
apirequest "k8s.io/apiserver/pkg/endpoints/request"
"k8s.io/apiserver/pkg/features"
+ kcpapi "k8s.io/apiserver/pkg/kcp"
"k8s.io/apiserver/pkg/registry/generic"
+ genericregistry "k8s.io/apiserver/pkg/registry/generic/registry"
+ "k8s.io/apiserver/pkg/registry/rest"
genericfilters "k8s.io/apiserver/pkg/server/filters"
utilfeature "k8s.io/apiserver/pkg/util/feature"
- "k8s.io/apiserver/pkg/util/webhook"
+ utilopenapi "k8s.io/apiserver/pkg/util/openapi"
"k8s.io/apiserver/pkg/warning"
+ clientgoscheme "k8s.io/client-go/kubernetes/scheme"
"k8s.io/client-go/scale"
"k8s.io/client-go/scale/scheme/autoscalingv1"
"k8s.io/client-go/tools/cache"
"k8s.io/klog/v2"
"k8s.io/kube-openapi/pkg/spec3"
"k8s.io/kube-openapi/pkg/validation/spec"
+ "sigs.k8s.io/structured-merge-diff/v4/fieldpath"
)
+// KcpValidateNameAnnotationKey is the annotation key used to indicate that a CRD's resource names
+// should not be validated as the default DNS subdomain.
+const KcpValidateNameAnnotationKey = "internal.kcp.io/validate-name"
+
// crdHandler serves the `/apis` endpoint.
// This is registered as a filter so that it never collides with any explicitly registered endpoints
type crdHandler struct {
@@ -97,7 +112,9 @@ type crdHandler struct {
// which is suited for most read and rarely write cases
customStorage atomic.Value
- crdLister listers.CustomResourceDefinitionLister
+ crdLister kcpapiextensionsv1listers.CustomResourceDefinitionClusterLister
+ clusterAwareCRDLister kcp.ClusterAwareCRDClusterLister
+ crdIndexer cache.Indexer
delegate http.Handler
restOptionsGetter generic.RESTOptionsGetter
@@ -128,6 +145,12 @@ type crdHandler struct {
// The limit on the request size that would be accepted and decoded in a write request
// 0 means no limit.
maxRequestBodyBytes int64
+
+ tableConverterProvider TableConverterProvider
+
+ // disableServerSideApply allows deactivating Server Side Apply for a specific API server, instead of globally through the feature gate;
+ // used for the embedded cache server with kcp
+ disableServerSideApply bool
}
// crdInfo stores enough information to serve the storage for the custom resource
@@ -164,27 +187,35 @@ type crdInfo struct {
// crdStorageMap goes from customresourcedefinition to its storage
type crdStorageMap map[types.UID]*crdInfo
+const byGroupResource = "byGroupResource"
+
func NewCustomResourceDefinitionHandler(
versionDiscoveryHandler *versionDiscoveryHandler,
groupDiscoveryHandler *groupDiscoveryHandler,
- crdInformer informers.CustomResourceDefinitionInformer,
+ crdInformer kcpapiextensionsv1informers.CustomResourceDefinitionClusterInformer,
delegate http.Handler,
restOptionsGetter generic.RESTOptionsGetter,
admission admission.Interface,
establishingController *establish.EstablishingController,
- serviceResolver webhook.ServiceResolver,
- authResolverWrapper webhook.AuthenticationInfoResolverWrapper,
+ converterFactory conversion.Factory,
masterCount int,
authorizer authorizer.Authorizer,
requestTimeout time.Duration,
minRequestTimeout time.Duration,
staticOpenAPISpec map[string]*spec.Schema,
- maxRequestBodyBytes int64) (*crdHandler, error) {
+ maxRequestBodyBytes int64,
+ disableServerSideApply bool,
+) (*crdHandler, error) {
+ if converterFactory == nil {
+ return nil, fmt.Errorf("converterFactory is required")
+ }
+
ret := &crdHandler{
versionDiscoveryHandler: versionDiscoveryHandler,
groupDiscoveryHandler: groupDiscoveryHandler,
customStorage: atomic.Value{},
crdLister: crdInformer.Lister(),
+ crdIndexer: crdInformer.Informer().GetIndexer(),
delegate: delegate,
restOptionsGetter: restOptionsGetter,
admission: admission,
@@ -195,6 +226,7 @@ func NewCustomResourceDefinitionHandler(
minRequestTimeout: minRequestTimeout,
staticOpenAPISpec: staticOpenAPISpec,
maxRequestBodyBytes: maxRequestBodyBytes,
+ disableServerSideApply: disableServerSideApply,
}
crdInformer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
AddFunc: ret.createCustomResourceDefinition,
@@ -203,11 +235,27 @@ func NewCustomResourceDefinitionHandler(
ret.removeDeadStorage()
},
})
- crConverterFactory, err := conversion.NewCRConverterFactory(serviceResolver, authResolverWrapper)
- if err != nil {
- return nil, err
+
+ // kcp: needed to be able to accurately preserve/remove storage for wildcard partial metadata requests
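+ // For example, a CRD in the core group (Spec.Group == "") with plural "configmaps" is indexed under the key "configmaps.core".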
+ if _, exists := crdInformer.Informer().GetIndexer().GetIndexers()[byGroupResource]; !exists {
+ if err := crdInformer.Informer().GetIndexer().AddIndexers(cache.Indexers{
+ byGroupResource: func(obj interface{}) ([]string, error) {
+ crd, ok := obj.(*apiextensionsv1.CustomResourceDefinition)
+ if !ok {
+ return nil, fmt.Errorf("unable to process obj in byName index: unexpected type %T", obj)
+ }
+
+ group := crd.Spec.Group
+ if group == "" {
+ group = "core"
+ }
+
+ return []string{crd.Spec.Names.Plural + "." + group}, nil
+ },
+ }); err != nil {
+ return nil, fmt.Errorf("error adding byName index to CRD lister: %v", err)
+ }
}
- ret.converterFactory = crConverterFactory
ret.customStorage.Store(crdStorageMap{})
@@ -251,8 +299,23 @@ func (r *crdHandler) ServeHTTP(w http.ResponseWriter, req *http.Request) {
return
}
- crdName := requestInfo.Resource + "." + requestInfo.APIGroup
- crd, err := r.crdLister.Get(crdName)
+ clusterName, wildcard, err := apirequest.ClusterNameOrWildcardFrom(req.Context())
+ if err != nil {
+ http.Error(w, err.Error(), http.StatusInternalServerError)
+ return
+ }
+ if wildcard {
+ // this is the only case where wildcard works for a list because this is our special CRD lister that handles it.
+ clusterName = "*"
+ }
+
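+ // Built-in core-group resources added as CRDs use the pseudo-group "core" in CRD names (e.g. "pods.core"), matching the byGroupResource index convention above.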
+ group := requestInfo.APIGroup
+ if group == "" {
+ group = "core"
+ }
+
+ crdName := requestInfo.Resource + "." + group
+ crd, err := r.clusterAwareCRDLister.Cluster(clusterName).Get(req.Context(), crdName)
if apierrors.IsNotFound(err) {
r.delegate.ServeHTTP(w, req)
return
@@ -260,7 +323,7 @@ func (r *crdHandler) ServeHTTP(w http.ResponseWriter, req *http.Request) {
if err != nil {
utilruntime.HandleError(err)
responsewriters.ErrorNegotiated(
- apierrors.NewInternalError(fmt.Errorf("error resolving resource")),
+ apierrors.NewInternalError(fmt.Errorf("error resolving resource: %v", err)),
Codecs, schema.GroupVersion{Group: requestInfo.APIGroup, Version: requestInfo.APIVersion}, w, req,
)
return
@@ -278,7 +341,9 @@ func (r *crdHandler) ServeHTTP(w http.ResponseWriter, req *http.Request) {
return
}
- if !apiextensionshelpers.HasServedCRDVersion(crd, requestInfo.APIVersion) {
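+ // The kcp cluster-aware CRD lister is assumed to mark the synthetic CRDs it returns for wildcard partial-metadata requests with this UID suffix.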
+ wildcardPartialMetadata := strings.HasSuffix(string(crd.UID), ".wildcard.partial-metadata")
+ // For wildcard partial metadata requests, we don't care if the CRD serves the version being requested or not.
+ if !wildcardPartialMetadata && !apiextensionshelpers.HasServedCRDVersion(crd, requestInfo.APIVersion) {
r.delegate.ServeHTTP(w, req)
return
}
@@ -293,9 +358,15 @@ func (r *crdHandler) ServeHTTP(w http.ResponseWriter, req *http.Request) {
return
}
+ // kcp: wrap the context with a custom resource indicator. This is required for the storage code to handle
+ // partial metadata wildcard requests correctly (the number of path segments varies depending on whether the
+ // resource is a built-in type (e.g. configmaps) or a custom resource).
+ req = utilnet.CloneRequest(req)
+ req = req.WithContext(kcpapi.WithCustomResourceIndicator(req.Context()))
+
terminating := apiextensionshelpers.IsCRDConditionTrue(crd, apiextensionsv1.Terminating)
- crdInfo, err := r.getOrCreateServingInfoFor(crd.UID, crd.Name)
+ crdInfo, err := r.getOrCreateServingInfoFor(crd)
if apierrors.IsNotFound(err) {
r.delegate.ServeHTTP(w, req)
return
@@ -308,7 +379,9 @@ func (r *crdHandler) ServeHTTP(w http.ResponseWriter, req *http.Request) {
)
return
}
- if !hasServedCRDVersion(crdInfo.spec, requestInfo.APIVersion) {
+
+ // For wildcard partial metadata requests, we don't care if the CRD serves the version being requested or not.
+ if !wildcardPartialMetadata && !hasServedCRDVersion(crdInfo.spec, requestInfo.APIVersion) {
r.delegate.ServeHTTP(w, req)
return
}
@@ -331,15 +404,45 @@ func (r *crdHandler) ServeHTTP(w http.ResponseWriter, req *http.Request) {
supportedTypes = append(supportedTypes, string(types.ApplyCBORPatchType))
}
+ // HACK: Support resources of the client-go scheme the way existing clients expect it:
+ // - Support Strategic Merge Patch (used by default on these resources by kubectl)
+ // - Support the protobuf content type on create / update requests
+ //   (by simply converting the request to the JSON content type),
+ //   since the protobuf content type is expected to be supported in a number of client
+ //   contexts (controller-runtime, for example)
+ if clientgoscheme.Scheme.IsGroupRegistered(requestInfo.APIGroup) {
+ supportedTypes = append(supportedTypes, string(types.StrategicMergePatchType))
+ req, err = ConvertProtobufRequestsToJson(verb, req, schema.GroupVersionKind{
+ Group: requestInfo.APIGroup,
+ Version: requestInfo.APIVersion,
+ Kind: crd.Spec.Names.Kind,
+ })
+ if err != nil {
+ responsewriters.ErrorNegotiated(
+ apierrors.NewInternalError(err),
+ Codecs, schema.GroupVersion{Group: requestInfo.APIGroup, Version: requestInfo.APIVersion}, w, req,
+ )
+ return
+ }
+ }
+
+ if !r.disableServerSideApply {
+ supportedTypes = append(supportedTypes, string(types.ApplyPatchType))
+ }
+
var handlerFunc http.HandlerFunc
- subresources, err := apiextensionshelpers.GetSubresourcesForVersion(crd, requestInfo.APIVersion)
- if err != nil {
- utilruntime.HandleError(err)
- responsewriters.ErrorNegotiated(
- apierrors.NewInternalError(fmt.Errorf("could not properly serve the subresource")),
- Codecs, schema.GroupVersion{Group: requestInfo.APIGroup, Version: requestInfo.APIVersion}, w, req,
- )
- return
+ var subresources *apiextensionsv1.CustomResourceSubresources
+ // Subresources (scale, status) are not applicable for wildcard partial metadata requests
+ if !wildcardPartialMetadata {
+ subresources, err = apiextensionshelpers.GetSubresourcesForVersion(crd, requestInfo.APIVersion)
+ if err != nil {
+ utilruntime.HandleError(err)
+ responsewriters.ErrorNegotiated(
+ apierrors.NewInternalError(fmt.Errorf("could not properly serve the subresource")),
+ Codecs, schema.GroupVersion{Group: requestInfo.APIGroup, Version: requestInfo.APIVersion}, w, req,
+ )
+ return
+ }
}
switch {
case subresource == "status" && subresources != nil && subresources.Status != nil:
@@ -363,9 +466,77 @@ func (r *crdHandler) ServeHTTP(w http.ResponseWriter, req *http.Request) {
}
}
+// HACK: In some contexts, like the controller-runtime library used by the Operator SDK, all the resources of the
+// client-go scheme are created / updated using the protobuf content type.
+// However, when these resources are in fact added as CRDs, as in the KCP minimal API server scenario, they cannot
+// be created / updated, since protobuf (de)serialization is not supported for CRDs.
+// So in this case we simply convert the protobuf request to a JSON one (using the `client-go` scheme
+// decoder/encoder) before letting the CRD handler serve it.
+//
+// A real, long-term, non-hacky fix for this problem would be as follows:
+// when a request with an unsupported serialization is received, the server should reject it with a 406
+// and provide a list of supported content types.
+// client-go should then examine whether it can satisfy such a request by encoding the object with a different scheme.
+// This would require a KEP but is in keeping with content negotiation on GET / WATCH in Kube.
+func ConvertProtobufRequestsToJson(verb string, req *http.Request, gvk schema.GroupVersionKind) (*http.Request, error) {
+ if (verb == "CREATE" || verb == "UPDATE") &&
+ req.Header.Get("Content-Type") == runtime.ContentTypeProtobuf {
+ resource, err := clientgoscheme.Scheme.New(gvk)
+ if err != nil {
+ utilruntime.HandleError(err)
+ return nil, fmt.Errorf("error creating a %s object to convert a protobuf request to JSON: %w", gvk, err)
+ }
+ reader := req.Body
+ defer reader.Close()
+ buf := new(bytes.Buffer)
+ if _, err := buf.ReadFrom(reader); err != nil {
+ utilruntime.HandleError(err)
+ return nil, fmt.Errorf("error reading the protobuf request body for a client-go resource added as a CRD: %w", err)
+ }
+
+ // Decode the protobuf payload into the typed client-go object, then re-encode it as JSON.
+ if _, _, err := protobuf.NewSerializer(clientgoscheme.Scheme, clientgoscheme.Scheme).Decode(buf.Bytes(), &gvk, resource); err != nil {
+ utilruntime.HandleError(err)
+ return nil, fmt.Errorf("error decoding a protobuf request for a client-go resource added as a CRD: %w", err)
+ }
+ buf = new(bytes.Buffer)
+ if err := json.NewSerializerWithOptions(json.DefaultMetaFactory, clientgoscheme.Scheme, clientgoscheme.Scheme, json.SerializerOptions{Yaml: false, Pretty: false, Strict: true}).
+ Encode(resource, buf); err != nil {
+ utilruntime.HandleError(err)
+ return nil, fmt.Errorf("error re-encoding a protobuf request as JSON for a client-go resource added as a CRD: %w", err)
+ }
+ req.Body = ioutil.NopCloser(buf)
+ req.ContentLength = int64(buf.Len())
+ req.Header.Set("Content-Type", runtime.ContentTypeJSON)
+ }
+ return req, nil
+}
+
func (r *crdHandler) serveResource(w http.ResponseWriter, req *http.Request, requestInfo *apirequest.RequestInfo, crdInfo *crdInfo, crd *apiextensionsv1.CustomResourceDefinition, terminating bool, supportedTypes []string) http.HandlerFunc {
+ wildcardPartialMetadata := strings.HasSuffix(string(crd.UID), ".wildcard.partial-metadata")
+
requestScope := crdInfo.requestScopes[requestInfo.APIVersion]
+ if requestScope == nil && wildcardPartialMetadata {
+ // If requestScope is nil and this is a wildcard partial metadata request, it means the request was for e.g.
+ // v1 but the initial CRD used to create the wildcard partial metadata variant doesn't have v1. This is ok!
+ // Because this is a wildcard partial metadata request, we need *any* requestScope for *any* valid version
+ // from this CRD. Iterate through the valid requestScopes and pick the first one.
+ for _, s := range crdInfo.requestScopes {
+ requestScope = s
+ break
+ }
+ }
+
storage := crdInfo.storages[requestInfo.APIVersion].CustomResource
+ if storage == nil && wildcardPartialMetadata {
+ // If storage is nil and this is a wildcard partial metadata request, it means the request was for e.g.
+ // v1 but the initial CRD used to create the wildcard partial metadata variant doesn't have v1. This is ok!
+ // Because this is a wildcard partial metadata request, we need *any* storage for *any* valid version
+ // from this CRD. Iterate through the valid storages and pick the first one.
+ for _, s := range crdInfo.storages {
+ storage = s.CustomResource
+ break
+ }
+ }
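+
+ // Note (kcp): Go map iteration order is unspecified, so the fallbacks above pick an arbitrary
+ // served version. That is acceptable because a partial metadata response only carries common
+ // object metadata, which should be the same whichever version is used.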
switch requestInfo.Verb {
case "get":
@@ -390,6 +561,7 @@ func (r *crdHandler) serveResource(w http.ResponseWriter, req *http.Request, req
responsewriters.ErrorNegotiated(err, Codecs, schema.GroupVersion{Group: requestInfo.APIGroup, Version: requestInfo.APIVersion}, w, req)
return nil
}
+
return handlers.CreateResource(storage, requestScope, r.admission)
case "update":
return handlers.UpdateResource(storage, requestScope, r.admission)
@@ -484,9 +656,9 @@ func (r *crdHandler) updateCustomResourceDefinition(oldObj, newObj interface{})
if !apiextensionshelpers.IsCRDConditionTrue(newCRD, apiextensionsv1.Established) &&
apiextensionshelpers.IsCRDConditionTrue(newCRD, apiextensionsv1.NamesAccepted) {
if r.masterCount > 1 {
- r.establishingController.QueueCRD(newCRD.Name, 5*time.Second)
+ r.establishingController.QueueCRD(newCRD.Name, logicalcluster.From(newCRD), 5*time.Second)
} else {
- r.establishingController.QueueCRD(newCRD.Name, 0)
+ r.establishingController.QueueCRD(newCRD.Name, logicalcluster.From(newCRD), 0)
}
}
@@ -546,6 +718,26 @@ func (r *crdHandler) removeDeadStorage() {
storageMap2[crd.UID] = storageMap[crd.UID]
}
}
+
+ // kcp: preserve the wildcard partial metadata storage (one entry per GroupResource) for every GroupResource that still has at least one CRD
+ for uid, crdInfo := range storageMap {
+ if strings.HasSuffix(string(uid), ".wildcard.partial-metadata") {
+ groupResource := strings.TrimSuffix(string(uid), ".wildcard.partial-metadata")
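+ // e.g. (illustrative) the fake UID "widgets.mygroup.io.wildcard.partial-metadata" maps back
+ // to the group resource key "widgets.mygroup.io", matching the byGroupResource index keys.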
+
+ crdsForGroupResource, err := r.crdIndexer.ByIndex(byGroupResource, groupResource)
+ if err != nil {
+ utilruntime.HandleError(fmt.Errorf("error retrieving CRDs for %q from index: %v", groupResource, err))
+ }
+
+ if len(crdsForGroupResource) > 0 {
+ storageMap2[uid] = crdInfo
+ klog.V(6).InfoS("Preserving wildcard partial metadata storage because at least 1 CRD for this group resource still exists", "crd", groupResource)
+ } else {
+ klog.V(4).InfoS("Removing wildcard partial metadata storage because no CRDs for this group resource still exist", "crd", groupResource)
+ }
+ }
+ }
+
r.customStorage.Store(storageMap2)
for uid, crdInfo := range storageMap {
@@ -575,7 +767,7 @@ func (r *crdHandler) tearDown(oldInfo *crdInfo) {
for _, storage := range oldInfo.storages {
// destroy only the main storage. Those for the subresources share cacher and etcd clients.
- storage.CustomResource.DestroyFunc()
+ storage.CustomResource.Store.(*genericregistry.Store).DestroyFunc()
}
}
@@ -591,7 +783,7 @@ func (r *crdHandler) destroy() {
// DestroyFunc have to be implemented in idempotent way,
// so the potential race with r.tearDown() (being called
// from a goroutine) is safe.
- storage.CustomResource.DestroyFunc()
+ storage.CustomResource.Destroy()
}
}
}
@@ -599,18 +791,18 @@ func (r *crdHandler) destroy() {
// GetCustomResourceListerCollectionDeleter returns the ListerCollectionDeleter of
// the given crd.
func (r *crdHandler) GetCustomResourceListerCollectionDeleter(crd *apiextensionsv1.CustomResourceDefinition) (finalizer.ListerCollectionDeleter, error) {
- info, err := r.getOrCreateServingInfoFor(crd.UID, crd.Name)
+ info, err := r.getOrCreateServingInfoFor(crd)
if err != nil {
return nil, err
}
return info.storages[info.storageVersion].CustomResource, nil
}
-// getOrCreateServingInfoFor gets the CRD serving info for the given CRD UID if the key exists in the storage map.
-// Otherwise the function fetches the up-to-date CRD using the given CRD name and creates CRD serving info.
-func (r *crdHandler) getOrCreateServingInfoFor(uid types.UID, name string) (*crdInfo, error) {
+// getOrCreateServingInfoFor returns the CRD serving info for the given CRD (keyed by its UID) if it
+// already exists in the storage map. Otherwise it creates and stores new serving info for the CRD.
+func (r *crdHandler) getOrCreateServingInfoFor(crd *apiextensionsv1.CustomResourceDefinition) (*crdInfo, error) {
storageMap := r.customStorage.Load().(crdStorageMap)
- if ret, ok := storageMap[uid]; ok {
+ if ret, ok := storageMap[crd.UID]; ok {
return ret, nil
}
@@ -621,10 +813,11 @@ func (r *crdHandler) getOrCreateServingInfoFor(uid types.UID, name string) (*crd
// If updateCustomResourceDefinition sees an update and happens later, the storage will be deleted and
// we will re-create the updated storage on demand. If updateCustomResourceDefinition happens before,
// we make sure that we observe the same up-to-date CRD.
- crd, err := r.crdLister.Get(name)
+ crd, err := r.clusterAwareCRDLister.Cluster(logicalcluster.From(crd)).Refresh(crd)
if err != nil {
return nil, err
}
+
storageMap = r.customStorage.Load().(crdStorageMap)
if ret, ok := storageMap[crd.UID]; ok {
return ret, nil
@@ -679,10 +872,39 @@ func (r *crdHandler) getOrCreateServingInfoFor(uid types.UID, name string) (*crd
structuralSchemas[v.Name] = s
}
- openAPIModels, err := buildOpenAPIModelsForApply(r.staticOpenAPISpec, crd)
+ openAPIModels, err := buildOpenAPIModelsForApply(r.staticOpenAPISpec, crd, r.disableServerSideApply)
+ var modelsByGKV openapi.ModelsByGKV
if err != nil {
utilruntime.HandleError(fmt.Errorf("error building openapi models for %s: %v", crd.Name, err))
openAPIModels = nil
+ } else if openAPIModels != nil {
+ specV3 := &spec3.OpenAPI{
+ Version: "3.0.0",
+ Info: &spec.Info{
+ InfoProps: spec.InfoProps{
+ Title: "Kubernetes CRD Swagger",
+ Version: "v0.1.0",
+ },
+ },
+ Components: &spec3.Components{
+ Schemas: map[string]*spec.Schema{},
+ },
+ Paths: &spec3.Paths{},
+ }
+ for name, model := range openAPIModels {
+ specV3.Components.Schemas[name] = model
+ }
+ protoModels, err := utilopenapi.ToProtoModelsV3(specV3)
+ if err != nil {
+ utilruntime.HandleError(fmt.Errorf("error gathering openapi models by GKV for %s: %v", crd.Name, err))
+ modelsByGKV = nil
+ } else {
+ modelsByGKV, err = openapi.GetModelsByGKV(protoModels)
+ if err != nil {
+ utilruntime.HandleError(fmt.Errorf("error gathering openapi models by GKV for %s: %v", crd.Name, err))
+ modelsByGKV = nil
+ }
+ }
}
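+
+ // kcp: when built successfully, modelsByGKV is plumbed into each RequestScope below through its
+ // OpenapiModels field, making per-GVK OpenAPI models available to the request handlers
+ // (presumably for the strategic merge patch support on built-in resources served as CRDs).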
var typeConverter managedfields.TypeConverter = managedfields.NewDeducedTypeConverter()
@@ -693,7 +915,16 @@ func (r *crdHandler) getOrCreateServingInfoFor(uid types.UID, name string) (*crd
}
}
- safeConverter, unsafeConverter, err := r.converterFactory.NewConverter(crd)
+ converter, err := r.converterFactory.NewConverter(crd)
+ if err != nil {
+ return nil, fmt.Errorf("error creating converter for %s: %w", crd.Name, err)
+ }
+
+ if strings.HasSuffix(string(crd.UID), ".wildcard.partial-metadata") {
+ converter = conversion.NewKCPWildcardPartialMetadataConverter()
+ }
+
+ safeConverter, unsafeConverter, err := conversion.NewDelegatingConverter(crd, converter)
if err != nil {
return nil, err
}
@@ -719,6 +950,11 @@ func (r *crdHandler) getOrCreateServingInfoFor(uid types.UID, name string) (*crd
replicasPathInCustomResource[schema.GroupVersion{Group: crd.Spec.Group, Version: v.Name}.String()] = path
}
+ kcpValidateName := apivalidation.NameIsDNSSubdomain
+ if crd.Annotations[KcpValidateNameAnnotationKey] == "path-segment" {
+ kcpValidateName = path.ValidatePathSegmentName
+ }
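+ // kcp: path.ValidatePathSegmentName is much looser than NameIsDNSSubdomain; it only rejects
+ // ".", "..", and names containing "/" or "%", so resources opting in via the annotation can
+ // use names (for example ones containing colons or uppercase letters) that DNS subdomain
+ // validation would refuse.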
+
for _, v := range crd.Spec.Versions {
// In addition to Unstructured objects (Custom Resources), we also may sometimes need to
// decode unversioned Options objects, so we delegate to parameterScheme for such types.
@@ -802,14 +1038,21 @@ func (r *crdHandler) getOrCreateServingInfoFor(uid types.UID, name string) (*crd
}
}
- columns, err := getColumnsForVersion(crd, v.Name)
- if err != nil {
- utilruntime.HandleError(err)
- return nil, fmt.Errorf("the server could not properly serve the CR columns")
+ var table rest.TableConvertor
+ if r.tableConverterProvider != nil {
+ table = r.tableConverterProvider.GetTableConverter(crd.Spec.Group, crd.Status.AcceptedNames.Kind, crd.Status.AcceptedNames.ListKind)
}
- table, err := tableconvertor.New(columns)
- if err != nil {
- klog.V(2).Infof("The CRD for %v has an invalid printer specification, falling back to default printing: %v", kind, err)
+
+ if table == nil {
+ columns, err := getColumnsForVersion(crd, v.Name)
+ if err != nil {
+ utilruntime.HandleError(err)
+ return nil, fmt.Errorf("the server could not properly serve the CR columns")
+ }
+ table, err = tableconvertor.New(columns)
+ if err != nil {
+ klog.V(2).Infof("The CRD for %v has an invalid printer specification, falling back to default printing: %v", kind, err)
+ }
}
listKind := schema.GroupVersionKind{Group: crd.Spec.Group, Version: v.Name, Kind: crd.Status.AcceptedNames.ListKind}
@@ -827,6 +1070,7 @@ func (r *crdHandler) getOrCreateServingInfoFor(uid types.UID, name string) (*crd
typer,
crd.Spec.Scope == apiextensionsv1.NamespaceScoped,
kind,
+ kcpValidateName,
validator,
statusValidator,
structuralSchemas[v.Name],
@@ -834,14 +1078,17 @@ func (r *crdHandler) getOrCreateServingInfoFor(uid types.UID, name string) (*crd
scaleSpec,
v.SelectableFields,
),
- crdConversionRESTOptionsGetter{
- RESTOptionsGetter: r.restOptionsGetter,
- converter: safeConverter,
- decoderVersion: schema.GroupVersion{Group: crd.Spec.Group, Version: v.Name},
- encoderVersion: schema.GroupVersion{Group: crd.Spec.Group, Version: storageVersion},
- structuralSchemas: structuralSchemas,
- structuralSchemaGK: kind.GroupKind(),
- preserveUnknownFields: crd.Spec.PreserveUnknownFields,
+ apiBindingAwareCRDRESTOptionsGetter{
+ delegate: crdConversionRESTOptionsGetter{
+ RESTOptionsGetter: r.restOptionsGetter,
+ converter: safeConverter,
+ decoderVersion: schema.GroupVersion{Group: crd.Spec.Group, Version: v.Name},
+ encoderVersion: schema.GroupVersion{Group: crd.Spec.Group, Version: storageVersion},
+ structuralSchemas: structuralSchemas,
+ structuralSchemaGK: kind.GroupKind(),
+ preserveUnknownFields: crd.Spec.PreserveUnknownFields,
+ },
+ crd: crd,
},
crd.Status.AcceptedNames.Categories,
table,
@@ -948,17 +1195,21 @@ func (r *crdHandler) getOrCreateServingInfoFor(uid types.UID, name string) (*crd
Authorizer: r.authorizer,
MaxRequestBodyBytes: r.maxRequestBodyBytes,
+
+ OpenapiModels: modelsByGKV,
}
- resetFields := storages[v.Name].CustomResource.GetResetFields()
- reqScope, err = scopeWithFieldManager(
- typeConverter,
- reqScope,
- resetFields,
- "",
- )
- if err != nil {
- return nil, err
+ if !r.disableServerSideApply {
+ resetFields := storages[v.Name].CustomResource.GetResetFields()
+ reqScope, err = scopeWithFieldManager(
+ typeConverter,
+ reqScope,
+ resetFields,
+ "",
+ )
+ if err != nil {
+ return nil, err
+ }
}
requestScopes[v.Name] = &reqScope
@@ -991,7 +1242,7 @@ func (r *crdHandler) getOrCreateServingInfoFor(uid types.UID, name string) (*crd
}
scaleScope.TableConvertor = scaleTable
- if subresources != nil && subresources.Scale != nil {
+ if subresources != nil && subresources.Scale != nil && !r.disableServerSideApply {
scaleScope, err = scopeWithFieldManager(
typeConverter,
scaleScope,
@@ -1014,7 +1265,7 @@ func (r *crdHandler) getOrCreateServingInfoFor(uid types.UID, name string) (*crd
ClusterScoped: clusterScoped,
}
- if subresources != nil && subresources.Status != nil {
+ if subresources != nil && subresources.Status != nil && !r.disableServerSideApply {
resetFields := storages[v.Name].Status.GetResetFields()
statusScope, err = scopeWithFieldManager(
typeConverter,
@@ -1416,7 +1667,10 @@ func hasServedCRDVersion(spec *apiextensionsv1.CustomResourceDefinitionSpec, ver
// buildOpenAPIModelsForApply constructs openapi models from any validation schemas specified in the custom resource,
// and merges it with the models defined in the static OpenAPI spec.
// Returns nil models if the static spec is nil, or an error is encountered.
-func buildOpenAPIModelsForApply(staticOpenAPISpec map[string]*spec.Schema, crd *apiextensionsv1.CustomResourceDefinition) (map[string]*spec.Schema, error) {
+func buildOpenAPIModelsForApply(staticOpenAPISpec map[string]*spec.Schema, crd *apiextensionsv1.CustomResourceDefinition, disableServerSideApply bool) (map[string]*spec.Schema, error) {
+ if disableServerSideApply {
+ return nil, nil
+ }
if staticOpenAPISpec == nil {
return nil, nil
}
diff --git a/staging/src/k8s.io/apiextensions-apiserver/pkg/apiserver/customresource_handler_kcp.go b/staging/src/k8s.io/apiextensions-apiserver/pkg/apiserver/customresource_handler_kcp.go
new file mode 100644
index 0000000000000..4a1a85cbf39f1
--- /dev/null
+++ b/staging/src/k8s.io/apiextensions-apiserver/pkg/apiserver/customresource_handler_kcp.go
@@ -0,0 +1,91 @@
+/*
+Copyright 2022 The Kubernetes Authors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package apiserver
+
+import (
+ structuralschema "k8s.io/apiextensions-apiserver/pkg/apiserver/schema"
+ "k8s.io/apimachinery/pkg/runtime"
+ "k8s.io/apimachinery/pkg/runtime/schema"
+ "k8s.io/apimachinery/pkg/runtime/serializer/json"
+ "k8s.io/apimachinery/pkg/util/managedfields"
+ "k8s.io/apiserver/pkg/endpoints/handlers"
+ "sigs.k8s.io/structured-merge-diff/v4/fieldpath"
+)
+
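+// ScopeWithFieldManager exposes the package-private scopeWithFieldManager helper so that kcp code
+// outside this package can attach a field manager to a handlers.RequestScope.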
+func ScopeWithFieldManager(typeConverter managedfields.TypeConverter, reqScope handlers.RequestScope, resetFields map[fieldpath.APIVersion]*fieldpath.Set, subresource string) (handlers.RequestScope, error) {
+ return scopeWithFieldManager(typeConverter, reqScope, resetFields, subresource)
+}
+
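+// NewUnstructuredNegotiatedSerializer exposes the package-private unstructuredNegotiatedSerializer,
+// pre-wired with JSON and YAML serializers (including strict and streaming variants) for the given
+// typer, creator, and converter.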
+func NewUnstructuredNegotiatedSerializer(
+ typer runtime.ObjectTyper,
+ creator runtime.ObjectCreater,
+ converter runtime.ObjectConvertor,
+ structuralSchemas map[string]*structuralschema.Structural, // by version
+ structuralSchemaGK schema.GroupKind,
+ preserveUnknownFields bool,
+) unstructuredNegotiatedSerializer {
+ return unstructuredNegotiatedSerializer{
+ typer: typer,
+ creator: creator,
+ converter: converter,
+ structuralSchemas: structuralSchemas,
+ structuralSchemaGK: structuralSchemaGK,
+ preserveUnknownFields: preserveUnknownFields,
+
+ supportedMediaTypes: []runtime.SerializerInfo{
+ {
+ MediaType: "application/json",
+ MediaTypeType: "application",
+ MediaTypeSubType: "json",
+ EncodesAsText: true,
+ Serializer: json.NewSerializer(json.DefaultMetaFactory, creator, typer, false),
+ PrettySerializer: json.NewSerializer(json.DefaultMetaFactory, creator, typer, true),
+ StrictSerializer: json.NewSerializerWithOptions(json.DefaultMetaFactory, creator, typer, json.SerializerOptions{
+ Strict: true,
+ }),
+ StreamSerializer: &runtime.StreamSerializerInfo{
+ EncodesAsText: true,
+ Serializer: json.NewSerializer(json.DefaultMetaFactory, creator, typer, false),
+ Framer: json.Framer,
+ },
+ },
+ {
+ MediaType: "application/yaml",
+ MediaTypeType: "application",
+ MediaTypeSubType: "yaml",
+ EncodesAsText: true,
+ Serializer: json.NewYAMLSerializer(json.DefaultMetaFactory, creator, typer),
+ StrictSerializer: json.NewSerializerWithOptions(json.DefaultMetaFactory, creator, typer, json.SerializerOptions{
+ Yaml: true,
+ Strict: true,
+ }),
+ },
+ },
+ }
+}
+
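+// NewUnstructuredDefaulter exposes the package-private unstructuredDefaulter, which applies
+// structural-schema defaulting for objects of the given GroupKind and otherwise delegates to the
+// supplied defaulter.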
+func NewUnstructuredDefaulter(
+ delegate runtime.ObjectDefaulter,
+ structuralSchemas map[string]*structuralschema.Structural, // by version
+ structuralSchemaGK schema.GroupKind,
+) unstructuredDefaulter {
+ return unstructuredDefaulter{
+ delegate: delegate,
+ structuralSchemas: structuralSchemas,
+ structuralSchemaGK: structuralSchemaGK,
+ }
+}
diff --git a/staging/src/k8s.io/apiextensions-apiserver/pkg/apiserver/customresource_handler_tableconverter_kcp.go b/staging/src/k8s.io/apiextensions-apiserver/pkg/apiserver/customresource_handler_tableconverter_kcp.go
new file mode 100644
index 0000000000000..40c9762251082
--- /dev/null
+++ b/staging/src/k8s.io/apiextensions-apiserver/pkg/apiserver/customresource_handler_tableconverter_kcp.go
@@ -0,0 +1,28 @@
+/*
+Copyright 2022 The KCP Authors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package apiserver
+
+import (
+ "k8s.io/apiserver/pkg/registry/rest"
+)
+
+// TableConverterProvider provides a rest.TableConvertor for a given group, kind, and listKind.
+type TableConverterProvider interface {
+ // GetTableConverter returns a rest.TableConvertor for the given group, kind, and listKind, or nil if
+ // the provider is unable to provide one.
+ GetTableConverter(group, kind, listKind string) rest.TableConvertor
+}
diff --git a/staging/src/k8s.io/apiextensions-apiserver/pkg/apiserver/customresource_handler_test.go b/staging/src/k8s.io/apiextensions-apiserver/pkg/apiserver/customresource_handler_test.go
index 3e77fe7197aeb..4a7870200107f 100644
--- a/staging/src/k8s.io/apiextensions-apiserver/pkg/apiserver/customresource_handler_test.go
+++ b/staging/src/k8s.io/apiextensions-apiserver/pkg/apiserver/customresource_handler_test.go
@@ -177,20 +177,24 @@ func TestRouting(t *testing.T) {
crdLister: crdLister,
delegate: delegate,
versionDiscoveryHandler: &versionDiscoveryHandler{
- discovery: map[schema.GroupVersion]*discovery.APIVersionHandler{
- customV1: discovery.NewAPIVersionHandler(Codecs, customV1, discovery.APIResourceListerFunc(func() []metav1.APIResource {
- return nil
- })),
+ discovery: map[string]map[schema.GroupVersion]*discovery.APIVersionHandler{
+ "": {
+ customV1: discovery.NewAPIVersionHandler(Codecs, customV1, discovery.APIResourceListerFunc(func() []metav1.APIResource {
+ return nil
+ })),
+ },
},
delegate: delegate,
},
groupDiscoveryHandler: &groupDiscoveryHandler{
- discovery: map[string]*discovery.APIGroupHandler{
- "custom": discovery.NewAPIGroupHandler(Codecs, metav1.APIGroup{
- Name: customV1.Group,
- Versions: []metav1.GroupVersionForDiscovery{{GroupVersion: customV1.String(), Version: customV1.Version}},
- PreferredVersion: metav1.GroupVersionForDiscovery{GroupVersion: customV1.String(), Version: customV1.Version},
- }),
+ discovery: map[string]map[string]*discovery.APIGroupHandler{
+ "": {
+ "custom": discovery.NewAPIGroupHandler(Codecs, metav1.APIGroup{
+ Name: customV1.Group,
+ Versions: []metav1.GroupVersionForDiscovery{{GroupVersion: customV1.String(), Version: customV1.Version}},
+ PreferredVersion: metav1.GroupVersionForDiscovery{GroupVersion: customV1.String(), Version: customV1.Version},
+ }),
+ },
},
delegate: delegate,
},
@@ -514,12 +518,12 @@ func testHandlerConversion(t *testing.T, enableWatchCache bool) {
func(r webhook.AuthenticationInfoResolver) webhook.AuthenticationInfoResolver { return r },
1,
dummyAuthorizerImpl{},
- time.Minute, time.Minute, nil, 3*1024*1024)
+ time.Minute, time.Minute, nil, 3*1024*1024, true)
if err != nil {
t.Fatal(err)
}
- crdInfo, err := handler.getOrCreateServingInfoFor(crd.UID, crd.Name)
+ crdInfo, err := handler.getOrCreateServingInfoFor(crd)
if err != nil {
t.Fatal(err)
}
@@ -1045,7 +1049,7 @@ func TestBuildOpenAPIModelsForApply(t *testing.T) {
for i, test := range tests {
crd.Spec.Versions[0].Schema = &test
- models, err := buildOpenAPIModelsForApply(convertedDefs, &crd)
+ models, err := buildOpenAPIModelsForApply(convertedDefs, &crd, true)
if err != nil {
t.Fatalf("failed to convert to apply model: %v", err)
}
diff --git a/staging/src/k8s.io/apiextensions-apiserver/pkg/apiserver/rest_options_getter_kcp.go b/staging/src/k8s.io/apiextensions-apiserver/pkg/apiserver/rest_options_getter_kcp.go
new file mode 100644
index 0000000000000..d99300de58268
--- /dev/null
+++ b/staging/src/k8s.io/apiextensions-apiserver/pkg/apiserver/rest_options_getter_kcp.go
@@ -0,0 +1,72 @@
+package apiserver
+
+import (
+ "fmt"
+ "strings"
+
+ "github.com/kcp-dev/logicalcluster/v3"
+ genericapirequest "k8s.io/apiserver/pkg/endpoints/request"
+
+ apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
+ "k8s.io/apimachinery/pkg/runtime"
+ "k8s.io/apimachinery/pkg/runtime/schema"
+ "k8s.io/apiserver/pkg/registry/generic"
+ "k8s.io/apiserver/pkg/storage/storagebackend"
+)
+
+type apiBindingAwareCRDRESTOptionsGetter struct {
+ delegate generic.RESTOptionsGetter
+ crd *apiextensionsv1.CustomResourceDefinition
+}
+
+func (t apiBindingAwareCRDRESTOptionsGetter) GetRESTOptions(resource schema.GroupResource, obj runtime.Object) (generic.RESTOptions, error) {
+ ret, err := t.delegate.GetRESTOptions(resource, obj)
+ if err != nil {
+ return ret, err
+ }
+
+ // assign some KCP metadata that is used by the watch cache's reflector
+ ret.StorageConfig.KcpExtraStorageMetadata = &storagebackend.KcpStorageMetadata{IsCRD: true}
+
+ // Priority 1: wildcard partial metadata requests. These have been assigned a fake UID that ends with
+ // .wildcard.partial-metadata. If this is present, we don't want to modify the ResourcePrefix, which means that
+ // a wildcard partial metadata list/watch request will return every CR from every CRD for that group-resource, which
+ // could include instances from normal CRDs as well as those coming from CRDs with different identities. This would
+ // return e.g. everything under
+ //
+ // - /registry/mygroup.io/widgets/customresources/...
+ // - /registry/mygroup.io/widgets/identity1234/...
+ // - /registry/mygroup.io/widgets/identity4567/...
+ if strings.HasSuffix(string(t.crd.UID), ".wildcard.partial-metadata") {
+ ret.StorageConfig.KcpExtraStorageMetadata.Cluster = genericapirequest.Cluster{Wildcard: true, PartialMetadataRequest: true}
+ return ret, nil
+ }
+
+ ret.StorageConfig.KcpExtraStorageMetadata.Cluster.Wildcard = true
+
+ // Normal CRDs (not coming from an APIBinding) are stored in e.g. /registry/mygroup.io/widgets/customresources/...
+ if _, bound := t.crd.Annotations["apis.kcp.io/bound-crd"]; !bound {
+ ret.ResourcePrefix += "/customresources"
+
+ clusterName := logicalcluster.From(t.crd)
+ if clusterName != "system:system-crds" {
+ // For all normal CRDs outside of the system:system-crds logical cluster, tell the watch cache the name
+ // of the logical cluster to use, and turn off wildcarding. This ensures the watch cache is just for
+ // this logical cluster.
+ ret.StorageConfig.KcpExtraStorageMetadata.Cluster.Name = clusterName
+ ret.StorageConfig.KcpExtraStorageMetadata.Cluster.Wildcard = false
+ }
+ return ret, nil
+ }
+
+ // Bound CRDs must have the associated identity annotation
+ apiIdentity := t.crd.Annotations["apis.kcp.io/identity"]
+ if apiIdentity == "" {
+ return generic.RESTOptions{}, fmt.Errorf("missing 'apis.kcp.io/identity' annotation on CRD %s|%s for %s.%s", logicalcluster.From(t.crd), t.crd.Name, t.crd.Spec.Names.Plural, t.crd.Spec.Group)
+ }
+
+ // Modify the ResourcePrefix so it results in e.g. /registry/mygroup.io/widgets/identity4567/...
+ ret.ResourcePrefix += "/" + apiIdentity
+
+ return ret, err
+}
diff --git a/staging/src/k8s.io/apiextensions-apiserver/pkg/apiserver/schema/kubeopenapi.go b/staging/src/k8s.io/apiextensions-apiserver/pkg/apiserver/schema/kubeopenapi.go
index df78ba77e614a..1ce06852922b8 100644
--- a/staging/src/k8s.io/apiextensions-apiserver/pkg/apiserver/schema/kubeopenapi.go
+++ b/staging/src/k8s.io/apiextensions-apiserver/pkg/apiserver/schema/kubeopenapi.go
@@ -80,12 +80,22 @@ func (x *Extensions) toKubeOpenAPI(ret *spec.Schema) {
}
if len(x.XListMapKeys) > 0 {
ret.VendorExtensible.AddExtension("x-kubernetes-list-map-keys", x.XListMapKeys)
+ ret.VendorExtensible.AddExtension("x-kubernetes-patch-merge-key", x.XListMapKeys[0])
}
if x.XListType != nil {
ret.VendorExtensible.AddExtension("x-kubernetes-list-type", *x.XListType)
+ if *x.XListType == "map" || *x.XListType == "set" {
+ ret.VendorExtensible.AddExtension("x-kubernetes-patch-strategy", "merge")
+ }
+ if *x.XListType == "atomic" {
+ ret.VendorExtensible.AddExtension("x-kubernetes-patch-strategy", "replace")
+ }
}
if x.XMapType != nil {
ret.VendorExtensible.AddExtension("x-kubernetes-map-type", *x.XMapType)
+ if *x.XMapType == "atomic" {
+ ret.VendorExtensible.AddExtension("x-kubernetes-patch-strategy", "replace")
+ }
}
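+
+ // Taken together (illustrative example): a list declared with x-kubernetes-list-type: map and
+ // x-kubernetes-list-map-keys: ["name"] is now also published with
+ // x-kubernetes-patch-strategy: merge and x-kubernetes-patch-merge-key: name, i.e. the
+ // patchStrategy/patchMergeKey annotations that strategic merge patch expects for mergeable lists.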
}
diff --git a/staging/src/k8s.io/apiextensions-apiserver/pkg/cmd/server/options/options.go b/staging/src/k8s.io/apiextensions-apiserver/pkg/cmd/server/options/options.go
index 37f0e31cea94d..3aef6d96caec3 100644
--- a/staging/src/k8s.io/apiextensions-apiserver/pkg/cmd/server/options/options.go
+++ b/staging/src/k8s.io/apiextensions-apiserver/pkg/cmd/server/options/options.go
@@ -28,6 +28,7 @@ import (
v1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
"k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1"
"k8s.io/apiextensions-apiserver/pkg/apiserver"
+ "k8s.io/apiextensions-apiserver/pkg/apiserver/conversion"
generatedopenapi "k8s.io/apiextensions-apiserver/pkg/generated/openapi"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured/unstructuredscheme"
@@ -119,13 +120,19 @@ func (o CustomResourceDefinitionsServerOptions) Config() (*apiserver.Config, err
return nil, err
}
+ serviceResolver := &serviceResolver{serverConfig.SharedInformerFactory.Core().V1().Services().Lister()}
+ authResolverWrapper := webhook.NewDefaultAuthenticationInfoResolverWrapper(nil, nil, serverConfig.LoopbackClientConfig, noopoteltrace.NewTracerProvider())
+ conversionFactory, err := conversion.NewCRConverterFactory(serviceResolver, authResolverWrapper)
+ if err != nil {
+ return nil, err
+ }
+
serverConfig.OpenAPIV3Config = genericapiserver.DefaultOpenAPIV3Config(openapi.GetOpenAPIDefinitionsWithoutDisabledFeatures(generatedopenapi.GetOpenAPIDefinitions), openapinamer.NewDefinitionNamer(apiserver.Scheme, scheme.Scheme))
config := &apiserver.Config{
GenericConfig: serverConfig,
ExtraConfig: apiserver.ExtraConfig{
CRDRESTOptionsGetter: NewCRDRESTOptionsGetter(*o.RecommendedOptions.Etcd, serverConfig.ResourceTransformers, serverConfig.StorageObjectCountTracker),
- ServiceResolver: &serviceResolver{serverConfig.SharedInformerFactory.Core().V1().Services().Lister()},
- AuthResolverWrapper: webhook.NewDefaultAuthenticationInfoResolverWrapper(nil, nil, serverConfig.LoopbackClientConfig, noopoteltrace.NewTracerProvider()),
+ ConversionFactory: conversionFactory,
},
}
return config, nil
diff --git a/staging/src/k8s.io/apiextensions-apiserver/pkg/controller/apiapproval/apiapproval_controller.go b/staging/src/k8s.io/apiextensions-apiserver/pkg/controller/apiapproval/apiapproval_controller.go
index ac1d46e4db85a..cf99bad4ce72c 100644
--- a/staging/src/k8s.io/apiextensions-apiserver/pkg/controller/apiapproval/apiapproval_controller.go
+++ b/staging/src/k8s.io/apiextensions-apiserver/pkg/controller/apiapproval/apiapproval_controller.go
@@ -22,11 +22,13 @@ import (
"sync"
"time"
+ kcpcache "github.com/kcp-dev/apimachinery/v2/pkg/cache"
+ kcpapiextensionsv1client "github.com/kcp-dev/client-go/apiextensions/client/typed/apiextensions/v1"
+ kcpapiextensionsv1informers "github.com/kcp-dev/client-go/apiextensions/informers/apiextensions/v1"
+ kcpapiextensionsv1listers "github.com/kcp-dev/client-go/apiextensions/listers/apiextensions/v1"
+
"k8s.io/apiextensions-apiserver/pkg/apihelpers"
apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
- client "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset/typed/apiextensions/v1"
- informers "k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/apiextensions/v1"
- listers "k8s.io/apiextensions-apiserver/pkg/client/listers/apiextensions/v1"
apierrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
utilruntime "k8s.io/apimachinery/pkg/util/runtime"
@@ -38,9 +40,9 @@ import (
// KubernetesAPIApprovalPolicyConformantConditionController is maintaining the KubernetesAPIApprovalPolicyConformant condition.
type KubernetesAPIApprovalPolicyConformantConditionController struct {
- crdClient client.CustomResourceDefinitionsGetter
+ crdClient kcpapiextensionsv1client.CustomResourceDefinitionsClusterGetter
- crdLister listers.CustomResourceDefinitionLister
+ crdLister kcpapiextensionsv1listers.CustomResourceDefinitionClusterLister
crdSynced cache.InformerSynced
// To allow injection for testing.
@@ -55,10 +57,7 @@ type KubernetesAPIApprovalPolicyConformantConditionController struct {
}
// NewKubernetesAPIApprovalPolicyConformantConditionController constructs a KubernetesAPIApprovalPolicyConformant schema condition controller.
-func NewKubernetesAPIApprovalPolicyConformantConditionController(
- crdInformer informers.CustomResourceDefinitionInformer,
- crdClient client.CustomResourceDefinitionsGetter,
-) *KubernetesAPIApprovalPolicyConformantConditionController {
+func NewKubernetesAPIApprovalPolicyConformantConditionController(crdInformer kcpapiextensionsv1informers.CustomResourceDefinitionClusterInformer, crdClient kcpapiextensionsv1client.ApiextensionsV1ClusterInterface) *KubernetesAPIApprovalPolicyConformantConditionController {
c := &KubernetesAPIApprovalPolicyConformantConditionController{
crdClient: crdClient,
crdLister: crdInformer.Lister(),
@@ -128,7 +127,12 @@ func calculateCondition(crd *apiextensionsv1.CustomResourceDefinition) *apiexten
}
func (c *KubernetesAPIApprovalPolicyConformantConditionController) sync(key string) error {
- inCustomResourceDefinition, err := c.crdLister.Get(key)
+ clusterName, _, name, err := kcpcache.SplitMetaClusterNamespaceKey(key)
+ if err != nil {
+ utilruntime.HandleError(err)
+ return nil
+ }
+ inCustomResourceDefinition, err := c.crdLister.Cluster(clusterName).Get(name)
if apierrors.IsNotFound(err) {
return nil
}
@@ -163,7 +167,7 @@ func (c *KubernetesAPIApprovalPolicyConformantConditionController) sync(key stri
crd := inCustomResourceDefinition.DeepCopy()
apihelpers.SetCRDCondition(crd, *cond)
- _, err = c.crdClient.CustomResourceDefinitions().UpdateStatus(context.TODO(), crd, metav1.UpdateOptions{})
+ _, err = c.crdClient.CustomResourceDefinitions().Cluster(clusterName.Path()).UpdateStatus(context.TODO(), crd, metav1.UpdateOptions{})
if apierrors.IsNotFound(err) || apierrors.IsConflict(err) {
// deleted or changed in the meantime, we'll get called again
return nil
@@ -226,9 +230,9 @@ func (c *KubernetesAPIApprovalPolicyConformantConditionController) processNextWo
}
func (c *KubernetesAPIApprovalPolicyConformantConditionController) enqueue(obj *apiextensionsv1.CustomResourceDefinition) {
- key, err := cache.DeletionHandlingMetaNamespaceKeyFunc(obj)
+ key, err := kcpcache.DeletionHandlingMetaClusterNamespaceKeyFunc(obj)
if err != nil {
- utilruntime.HandleError(fmt.Errorf("Couldn't get key for object %#v: %v", obj, err))
+ utilruntime.HandleError(fmt.Errorf("couldn't get key for object %#v: %v", obj, err))
return
}
diff --git a/staging/src/k8s.io/apiextensions-apiserver/pkg/controller/establish/establishing_controller.go b/staging/src/k8s.io/apiextensions-apiserver/pkg/controller/establish/establishing_controller.go
index 65eb247e5601d..22a07432a262c 100644
--- a/staging/src/k8s.io/apiextensions-apiserver/pkg/controller/establish/establishing_controller.go
+++ b/staging/src/k8s.io/apiextensions-apiserver/pkg/controller/establish/establishing_controller.go
@@ -21,6 +21,14 @@ import (
"fmt"
"time"
+ kcpcache "github.com/kcp-dev/apimachinery/v2/pkg/cache"
+ kcpapiextensionsv1client "github.com/kcp-dev/client-go/apiextensions/client/typed/apiextensions/v1"
+ kcpapiextensionsv1informers "github.com/kcp-dev/client-go/apiextensions/informers/apiextensions/v1"
+ kcpapiextensionsv1listers "github.com/kcp-dev/client-go/apiextensions/listers/apiextensions/v1"
+ "github.com/kcp-dev/logicalcluster/v3"
+
+ apiextensionshelpers "k8s.io/apiextensions-apiserver/pkg/apihelpers"
+ apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
apierrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
utilruntime "k8s.io/apimachinery/pkg/util/runtime"
@@ -30,18 +38,12 @@ import (
"k8s.io/client-go/tools/cache"
"k8s.io/client-go/util/workqueue"
"k8s.io/klog/v2"
-
- apiextensionshelpers "k8s.io/apiextensions-apiserver/pkg/apihelpers"
- apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
- client "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset/typed/apiextensions/v1"
- informers "k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/apiextensions/v1"
- listers "k8s.io/apiextensions-apiserver/pkg/client/listers/apiextensions/v1"
)
// EstablishingController controls how and when CRD is established.
type EstablishingController struct {
- crdClient client.CustomResourceDefinitionsGetter
- crdLister listers.CustomResourceDefinitionLister
+ crdClient kcpapiextensionsv1client.CustomResourceDefinitionsClusterGetter
+ crdLister kcpapiextensionsv1listers.CustomResourceDefinitionClusterLister
crdSynced cache.InformerSynced
// To allow injection for testing.
@@ -51,8 +53,8 @@ type EstablishingController struct {
}
// NewEstablishingController creates new EstablishingController.
-func NewEstablishingController(crdInformer informers.CustomResourceDefinitionInformer,
- crdClient client.CustomResourceDefinitionsGetter) *EstablishingController {
+func NewEstablishingController(crdInformer kcpapiextensionsv1informers.CustomResourceDefinitionClusterInformer,
+ crdClient kcpapiextensionsv1client.CustomResourceDefinitionsClusterGetter) *EstablishingController {
ec := &EstablishingController{
crdClient: crdClient,
crdLister: crdInformer.Lister(),
@@ -69,8 +71,8 @@ func NewEstablishingController(crdInformer informers.CustomResourceDefinitionInf
}
// QueueCRD adds CRD into the establishing queue.
-func (ec *EstablishingController) QueueCRD(key string, timeout time.Duration) {
- ec.queue.AddAfter(key, timeout)
+func (ec *EstablishingController) QueueCRD(name string, clusterName logicalcluster.Name, timeout time.Duration) {
+ ec.queue.AddAfter(kcpcache.ToClusterAwareKey(clusterName.String(), "", name), timeout)
}
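+
+// The enqueued key is a cluster-aware key, roughly of the shape "<cluster>|<name>" for
+// cluster-scoped objects (the exact encoding is internal to kcp-dev/apimachinery); the sync
+// method splits it back apart with kcpcache.SplitMetaClusterNamespaceKey.
+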
// Run starts the EstablishingController.
@@ -119,7 +121,12 @@ func (ec *EstablishingController) processNextWorkItem() bool {
// sync is used to turn CRDs into the Established state.
func (ec *EstablishingController) sync(key string) error {
- cachedCRD, err := ec.crdLister.Get(key)
+ clusterName, _, name, err := kcpcache.SplitMetaClusterNamespaceKey(key)
+ if err != nil {
+ utilruntime.HandleError(err)
+ return nil
+ }
+ cachedCRD, err := ec.crdLister.Cluster(clusterName).Get(name)
if apierrors.IsNotFound(err) {
return nil
}
@@ -158,7 +165,7 @@ func (ec *EstablishingController) sync(key string) error {
}
// Update server with new CRD condition.
- _, err = ec.crdClient.CustomResourceDefinitions().UpdateStatus(context.TODO(), crd, metav1.UpdateOptions{})
+ _, err = ec.crdClient.CustomResourceDefinitions().Cluster(clusterName.Path()).UpdateStatus(context.TODO(), crd, metav1.UpdateOptions{})
if apierrors.IsNotFound(err) || apierrors.IsConflict(err) {
// deleted or changed in the meantime, we'll get called again
return nil
diff --git a/staging/src/k8s.io/apiextensions-apiserver/pkg/controller/finalizer/crd_finalizer.go b/staging/src/k8s.io/apiextensions-apiserver/pkg/controller/finalizer/crd_finalizer.go
index c569642d95338..b445c9002cb5d 100644
--- a/staging/src/k8s.io/apiextensions-apiserver/pkg/controller/finalizer/crd_finalizer.go
+++ b/staging/src/k8s.io/apiextensions-apiserver/pkg/controller/finalizer/crd_finalizer.go
@@ -22,8 +22,14 @@ import (
"reflect"
"time"
- "k8s.io/klog/v2"
+ kcpcache "github.com/kcp-dev/apimachinery/v2/pkg/cache"
+ kcpapiextensionsv1client "github.com/kcp-dev/client-go/apiextensions/client/typed/apiextensions/v1"
+ kcpapiextensionsv1informers "github.com/kcp-dev/client-go/apiextensions/informers/apiextensions/v1"
+ kcpapiextensionsv1listers "github.com/kcp-dev/client-go/apiextensions/listers/apiextensions/v1"
+ "github.com/kcp-dev/logicalcluster/v3"
+ apiextensionshelpers "k8s.io/apiextensions-apiserver/pkg/apihelpers"
+ apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
apierrors "k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/api/meta"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
@@ -37,12 +43,7 @@ import (
"k8s.io/apiserver/pkg/registry/rest"
"k8s.io/client-go/tools/cache"
"k8s.io/client-go/util/workqueue"
-
- apiextensionshelpers "k8s.io/apiextensions-apiserver/pkg/apihelpers"
- apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
- client "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset/typed/apiextensions/v1"
- informers "k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/apiextensions/v1"
- listers "k8s.io/apiextensions-apiserver/pkg/client/listers/apiextensions/v1"
+ "k8s.io/klog/v2"
)
// OverlappingBuiltInResources returns the set of built-in group/resources that are persisted
@@ -57,10 +58,10 @@ func OverlappingBuiltInResources() map[schema.GroupResource]bool {
// CRDFinalizer is a controller that finalizes the CRD by deleting all the CRs associated with it.
type CRDFinalizer struct {
- crdClient client.CustomResourceDefinitionsGetter
+ crdClient kcpapiextensionsv1client.CustomResourceDefinitionsClusterGetter
crClientGetter CRClientGetter
- crdLister listers.CustomResourceDefinitionLister
+ crdLister kcpapiextensionsv1listers.CustomResourceDefinitionClusterLister
crdSynced cache.InformerSynced
// To allow injection for testing.
@@ -83,11 +84,7 @@ type CRClientGetter interface {
}
// NewCRDFinalizer creates a new CRDFinalizer.
-func NewCRDFinalizer(
- crdInformer informers.CustomResourceDefinitionInformer,
- crdClient client.CustomResourceDefinitionsGetter,
- crClientGetter CRClientGetter,
-) *CRDFinalizer {
+func NewCRDFinalizer(crdInformer kcpapiextensionsv1informers.CustomResourceDefinitionClusterInformer, crdClient kcpapiextensionsv1client.ApiextensionsV1ClusterInterface, crClientGetter CRClientGetter) *CRDFinalizer {
c := &CRDFinalizer{
crdClient: crdClient,
crdLister: crdInformer.Lister(),
@@ -110,7 +107,12 @@ func NewCRDFinalizer(
}
func (c *CRDFinalizer) sync(key string) error {
- cachedCRD, err := c.crdLister.Get(key)
+ clusterName, _, name, err := kcpcache.SplitMetaClusterNamespaceKey(key)
+ if err != nil {
+ utilruntime.HandleError(err)
+ return nil
+ }
+ cachedCRD, err := c.crdLister.Cluster(clusterName).Get(name)
if apierrors.IsNotFound(err) {
return nil
}
@@ -132,7 +134,7 @@ func (c *CRDFinalizer) sync(key string) error {
Reason: "InstanceDeletionInProgress",
Message: "CustomResource deletion is in progress",
})
- crd, err = c.crdClient.CustomResourceDefinitions().UpdateStatus(context.TODO(), crd, metav1.UpdateOptions{})
+ crd, err = c.crdClient.CustomResourceDefinitions().Cluster(clusterName.Path()).UpdateStatus(context.TODO(), crd, metav1.UpdateOptions{})
if apierrors.IsNotFound(err) || apierrors.IsConflict(err) {
// deleted or changed in the meantime, we'll get called again
return nil
@@ -155,7 +157,7 @@ func (c *CRDFinalizer) sync(key string) error {
cond, deleteErr := c.deleteInstances(crd)
apiextensionshelpers.SetCRDCondition(crd, cond)
if deleteErr != nil {
- if _, err = c.crdClient.CustomResourceDefinitions().UpdateStatus(context.TODO(), crd, metav1.UpdateOptions{}); err != nil {
+ if _, err = c.crdClient.CustomResourceDefinitions().Cluster(clusterName.Path()).UpdateStatus(context.TODO(), crd, metav1.UpdateOptions{}); err != nil {
utilruntime.HandleError(err)
}
return deleteErr
@@ -170,7 +172,7 @@ func (c *CRDFinalizer) sync(key string) error {
}
apiextensionshelpers.CRDRemoveFinalizer(crd, apiextensionsv1.CustomResourceCleanupFinalizer)
- _, err = c.crdClient.CustomResourceDefinitions().UpdateStatus(context.TODO(), crd, metav1.UpdateOptions{})
+ _, err = c.crdClient.CustomResourceDefinitions().Cluster(clusterName.Path()).UpdateStatus(context.TODO(), crd, metav1.UpdateOptions{})
if apierrors.IsNotFound(err) || apierrors.IsConflict(err) {
// deleted or changed in the meantime, we'll get called again
return nil
@@ -193,7 +195,10 @@ func (c *CRDFinalizer) deleteInstances(crd *apiextensionsv1.CustomResourceDefini
}, err
}
- ctx := genericapirequest.NewContext()
+ ctx := genericapirequest.WithCluster(genericapirequest.NewContext(), genericapirequest.Cluster{
+ Name: logicalcluster.From(crd),
+ })
+
allResources, err := crClient.List(ctx, nil)
if err != nil {
return apiextensionsv1.CustomResourceDefinitionCondition{
@@ -306,7 +311,7 @@ func (c *CRDFinalizer) processNextWorkItem() bool {
}
func (c *CRDFinalizer) enqueue(obj *apiextensionsv1.CustomResourceDefinition) {
- key, err := cache.DeletionHandlingMetaNamespaceKeyFunc(obj)
+ key, err := kcpcache.DeletionHandlingMetaClusterNamespaceKeyFunc(obj)
if err != nil {
utilruntime.HandleError(fmt.Errorf("couldn't get key for object %#v: %v", obj, err))
return
diff --git a/staging/src/k8s.io/apiextensions-apiserver/pkg/controller/nonstructuralschema/nonstructuralschema_controller.go b/staging/src/k8s.io/apiextensions-apiserver/pkg/controller/nonstructuralschema/nonstructuralschema_controller.go
index 55e467d9c0549..5bbcff435fd85 100644
--- a/staging/src/k8s.io/apiextensions-apiserver/pkg/controller/nonstructuralschema/nonstructuralschema_controller.go
+++ b/staging/src/k8s.io/apiextensions-apiserver/pkg/controller/nonstructuralschema/nonstructuralschema_controller.go
@@ -22,6 +22,15 @@ import (
"sync"
"time"
+ kcpcache "github.com/kcp-dev/apimachinery/v2/pkg/cache"
+ kcpapiextensionsv1client "github.com/kcp-dev/client-go/apiextensions/client/typed/apiextensions/v1"
+ kcpapiextensionsv1informers "github.com/kcp-dev/client-go/apiextensions/informers/apiextensions/v1"
+ kcpapiextensionsv1listers "github.com/kcp-dev/client-go/apiextensions/listers/apiextensions/v1"
+
+ apiextensionshelpers "k8s.io/apiextensions-apiserver/pkg/apihelpers"
+ apiextensionsinternal "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions"
+ apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
+ "k8s.io/apiextensions-apiserver/pkg/apiserver/schema"
apierrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
utilruntime "k8s.io/apimachinery/pkg/util/runtime"
@@ -30,21 +39,13 @@ import (
"k8s.io/client-go/tools/cache"
"k8s.io/client-go/util/workqueue"
"k8s.io/klog/v2"
-
- apiextensionshelpers "k8s.io/apiextensions-apiserver/pkg/apihelpers"
- apiextensionsinternal "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions"
- apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
- "k8s.io/apiextensions-apiserver/pkg/apiserver/schema"
- client "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset/typed/apiextensions/v1"
- informers "k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/apiextensions/v1"
- listers "k8s.io/apiextensions-apiserver/pkg/client/listers/apiextensions/v1"
)
// ConditionController is maintaining the NonStructuralSchema condition.
type ConditionController struct {
- crdClient client.CustomResourceDefinitionsGetter
+ crdClient kcpapiextensionsv1client.CustomResourceDefinitionsClusterGetter
- crdLister listers.CustomResourceDefinitionLister
+ crdLister kcpapiextensionsv1listers.CustomResourceDefinitionClusterLister
crdSynced cache.InformerSynced
// To allow injection for testing.
@@ -59,10 +60,7 @@ type ConditionController struct {
}
// NewConditionController constructs a non-structural schema condition controller.
-func NewConditionController(
- crdInformer informers.CustomResourceDefinitionInformer,
- crdClient client.CustomResourceDefinitionsGetter,
-) *ConditionController {
+func NewConditionController(crdInformer kcpapiextensionsv1informers.CustomResourceDefinitionClusterInformer, crdClient kcpapiextensionsv1client.ApiextensionsV1ClusterInterface) *ConditionController {
c := &ConditionController{
crdClient: crdClient,
crdLister: crdInformer.Lister(),
@@ -133,7 +131,12 @@ func calculateCondition(in *apiextensionsv1.CustomResourceDefinition) *apiextens
}
func (c *ConditionController) sync(key string) error {
- inCustomResourceDefinition, err := c.crdLister.Get(key)
+ clusterName, _, name, err := kcpcache.SplitMetaClusterNamespaceKey(key)
+ if err != nil {
+ utilruntime.HandleError(err)
+ return nil
+ }
+ inCustomResourceDefinition, err := c.crdLister.Cluster(clusterName).Get(name)
if apierrors.IsNotFound(err) {
return nil
}
@@ -169,7 +172,7 @@ func (c *ConditionController) sync(key string) error {
apiextensionshelpers.SetCRDCondition(crd, *cond)
}
- _, err = c.crdClient.CustomResourceDefinitions().UpdateStatus(context.TODO(), crd, metav1.UpdateOptions{})
+ _, err = c.crdClient.CustomResourceDefinitions().Cluster(clusterName.Path()).UpdateStatus(context.TODO(), crd, metav1.UpdateOptions{})
if apierrors.IsNotFound(err) || apierrors.IsConflict(err) {
// deleted or changed in the meantime, we'll get called again
return nil
@@ -232,7 +235,7 @@ func (c *ConditionController) processNextWorkItem() bool {
}
func (c *ConditionController) enqueue(obj *apiextensionsv1.CustomResourceDefinition) {
- key, err := cache.DeletionHandlingMetaNamespaceKeyFunc(obj)
+ key, err := kcpcache.DeletionHandlingMetaClusterNamespaceKeyFunc(obj)
if err != nil {
utilruntime.HandleError(fmt.Errorf("couldn't get key for object %#v: %v", obj, err))
return
diff --git a/staging/src/k8s.io/apiextensions-apiserver/pkg/controller/openapi/builder/builder.go b/staging/src/k8s.io/apiextensions-apiserver/pkg/controller/openapi/builder/builder.go
index 51cde0cac154d..53e524d781fe8 100644
--- a/staging/src/k8s.io/apiextensions-apiserver/pkg/controller/openapi/builder/builder.go
+++ b/staging/src/k8s.io/apiextensions-apiserver/pkg/controller/openapi/builder/builder.go
@@ -155,11 +155,17 @@ func generateBuilder(crd *apiextensionsv1.CustomResourceDefinition, version stri
scale := &v1.Scale{}
routes := make([]*restful.RouteBuilder, 0)
- root := fmt.Sprintf("/apis/%s/%s/%s", b.group, b.version, b.plural)
+ // HACK: support the case when we add core resources through CRDs (KCP scenario)
+ rootPrefix := fmt.Sprintf("/apis/%s/%s", b.group, b.version)
+ if b.group == "" {
+ rootPrefix = fmt.Sprintf("/api/%s", b.version)
+ }
+
+ root := fmt.Sprintf("%s/%s", rootPrefix, b.plural)
if b.namespaced {
routes = append(routes, b.buildRoute(root, "", "GET", "list", "list", sampleList).Operation("list"+b.kind+"ForAllNamespaces"))
- root = fmt.Sprintf("/apis/%s/%s/namespaces/{namespace}/%s", b.group, b.version, b.plural)
+ root = fmt.Sprintf("%s/namespaces/{namespace}/%s", rootPrefix, b.plural)
}
routes = append(routes, b.buildRoute(root, "", "GET", "list", "list", sampleList))
routes = append(routes, b.buildRoute(root, "", "POST", "post", "create", sample).Reads(sample))
@@ -223,7 +229,7 @@ type CRDCanonicalTypeNamer struct {
// OpenAPICanonicalTypeName returns canonical type name for given CRD
func (c *CRDCanonicalTypeNamer) OpenAPICanonicalTypeName() string {
- return fmt.Sprintf("%s/%s.%s", c.group, c.version, c.kind)
+ return fmt.Sprintf("%s/%s.%s", packagePrefix(c.group), c.version, c.kind)
}
// builder contains validation schema and basic naming information for a CRD in
@@ -495,7 +501,7 @@ func addTypeMetaProperties(s *spec.Schema, v2 bool) {
// buildListSchema builds the list kind schema for the CRD
func (b *builder) buildListSchema(crd *apiextensionsv1.CustomResourceDefinition, opts Options) *spec.Schema {
- name := definitionPrefix + util.ToRESTFriendlyName(fmt.Sprintf("%s/%s/%s", b.group, b.version, b.kind))
+ name := definitionPrefix + util.ToRESTFriendlyName(fmt.Sprintf("%s/%s/%s", packagePrefix(b.group), b.version, b.kind))
doc := fmt.Sprintf("List of %s. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md", b.plural)
s := new(spec.Schema).
Typed("object", "").
@@ -544,11 +550,11 @@ func (b *builder) getOpenAPIConfig() *common.Config {
},
GetDefinitions: func(ref common.ReferenceCallback) map[string]common.OpenAPIDefinition {
def := utilopenapi.GetOpenAPIDefinitionsWithoutDisabledFeatures(generatedopenapi.GetOpenAPIDefinitions)(ref)
- def[fmt.Sprintf("%s/%s.%s", b.group, b.version, b.kind)] = common.OpenAPIDefinition{
+ def[fmt.Sprintf("%s/%s.%s", packagePrefix(b.group), b.version, b.kind)] = common.OpenAPIDefinition{
Schema: *b.schema,
Dependencies: []string{objectMetaType},
}
- def[fmt.Sprintf("%s/%s.%s", b.group, b.version, b.listKind)] = common.OpenAPIDefinition{
+ def[fmt.Sprintf("%s/%s.%s", packagePrefix(b.group), b.version, b.listKind)] = common.OpenAPIDefinition{
Schema: *b.listSchema,
}
return def
@@ -578,11 +584,11 @@ func (b *builder) getOpenAPIV3Config() *common.OpenAPIV3Config {
},
GetDefinitions: func(ref common.ReferenceCallback) map[string]common.OpenAPIDefinition {
def := utilopenapi.GetOpenAPIDefinitionsWithoutDisabledFeatures(generatedopenapi.GetOpenAPIDefinitions)(ref)
- def[fmt.Sprintf("%s/%s.%s", b.group, b.version, b.kind)] = common.OpenAPIDefinition{
+ def[fmt.Sprintf("%s/%s.%s", packagePrefix(b.group), b.version, b.kind)] = common.OpenAPIDefinition{
Schema: *b.schema,
Dependencies: []string{objectMetaType},
}
- def[fmt.Sprintf("%s/%s.%s", b.group, b.version, b.listKind)] = common.OpenAPIDefinition{
+ def[fmt.Sprintf("%s/%s.%s", packagePrefix(b.group), b.version, b.listKind)] = common.OpenAPIDefinition{
Schema: *b.listSchema,
}
return def
diff --git a/staging/src/k8s.io/apiextensions-apiserver/pkg/controller/openapi/builder/builder_kcp.go b/staging/src/k8s.io/apiextensions-apiserver/pkg/controller/openapi/builder/builder_kcp.go
new file mode 100644
index 0000000000000..ec8c86b05414c
--- /dev/null
+++ b/staging/src/k8s.io/apiextensions-apiserver/pkg/controller/openapi/builder/builder_kcp.go
@@ -0,0 +1,49 @@
+/*
+Copyright 2022 The KCP Authors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package builder
+
+import (
+ "strings"
+
+ "k8s.io/apimachinery/pkg/util/sets"
+)
+
+var dotlessKubeGroups = sets.NewString(
+ "",
+ "apps",
+ "batch",
+ "extensions",
+ "policy",
+)
+
+// HACK: support the case when we add "dotless" built-in API groups
+func packagePrefix(group string) string {
+ if strings.Contains(group, ".") {
+ return group
+ }
+
+ if !dotlessKubeGroups.Has(group) {
+ // Shouldn't really be possible...
+ return group
+ }
+
+ if group == "" {
+ group = "core"
+ }
+
+ return "k8s.io/api/" + group
+}
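
For illustration, here is a minimal standalone sketch (plain Go, outside the patch) of how this prefixing behaves; it inlines the same mapping as the packagePrefix hack above so it can run on its own:

package main

import (
	"fmt"
	"strings"
)

// packagePrefix mirrors the hack above: only the known dotless built-in
// groups are rewritten to their k8s.io/api package paths.
func packagePrefix(group string) string {
	dotless := map[string]bool{"": true, "apps": true, "batch": true, "extensions": true, "policy": true}
	if strings.Contains(group, ".") {
		return group // regular CRD group, e.g. widgets.example.com
	}
	if !dotless[group] {
		return group // unknown dotless group, left untouched
	}
	if group == "" {
		group = "core" // the legacy core group has an empty name
	}
	return "k8s.io/api/" + group
}

func main() {
	for _, g := range []string{"", "apps", "widgets.example.com"} {
		fmt.Printf("%q -> %q\n", g, packagePrefix(g))
	}
	// "" -> "k8s.io/api/core"
	// "apps" -> "k8s.io/api/apps"
	// "widgets.example.com" -> "widgets.example.com"
}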
diff --git a/staging/src/k8s.io/apiextensions-apiserver/pkg/controller/openapi/controller.go b/staging/src/k8s.io/apiextensions-apiserver/pkg/controller/openapi/controller.go
index a83d298f8f94c..afb7bbb83564d 100644
--- a/staging/src/k8s.io/apiextensions-apiserver/pkg/controller/openapi/controller.go
+++ b/staging/src/k8s.io/apiextensions-apiserver/pkg/controller/openapi/controller.go
@@ -23,11 +23,20 @@ import (
"github.com/google/uuid"
+ kcpcache "github.com/kcp-dev/apimachinery/v2/pkg/cache"
+ kcpapiextensionsv1informers "github.com/kcp-dev/client-go/apiextensions/informers/apiextensions/v1"
+ kcpapiextensionsv1listers "github.com/kcp-dev/client-go/apiextensions/listers/apiextensions/v1"
+ "github.com/kcp-dev/logicalcluster/v3"
+
+ apiextensionshelpers "k8s.io/apiextensions-apiserver/pkg/apihelpers"
+ apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
+ "k8s.io/apiextensions-apiserver/pkg/controller/openapi/builder"
apiextensionsfeatures "k8s.io/apiextensions-apiserver/pkg/features"
"k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/labels"
utilruntime "k8s.io/apimachinery/pkg/util/runtime"
"k8s.io/apimachinery/pkg/util/wait"
+ "k8s.io/apiserver/pkg/server/routes"
utilfeature "k8s.io/apiserver/pkg/util/feature"
"k8s.io/client-go/tools/cache"
"k8s.io/client-go/util/workqueue"
@@ -35,17 +44,11 @@ import (
"k8s.io/kube-openapi/pkg/cached"
"k8s.io/kube-openapi/pkg/handler"
"k8s.io/kube-openapi/pkg/validation/spec"
-
- apiextensionshelpers "k8s.io/apiextensions-apiserver/pkg/apihelpers"
- apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
- informers "k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/apiextensions/v1"
- listers "k8s.io/apiextensions-apiserver/pkg/client/listers/apiextensions/v1"
- "k8s.io/apiextensions-apiserver/pkg/controller/openapi/builder"
)
// Controller watches CustomResourceDefinitions and publishes validation schema
type Controller struct {
- crdLister listers.CustomResourceDefinitionLister
+ crdLister kcpapiextensionsv1listers.CustomResourceDefinitionClusterLister
crdsSynced cache.InformerSynced
// To allow injection for testing.
@@ -55,12 +58,12 @@ type Controller struct {
staticSpec *spec.Swagger
- openAPIService *handler.OpenAPIService
+ openAPIServiceProvider routes.OpenAPIServiceProvider
- // specs by name. The specs are lazily constructed on request.
- // The lock is for the map only.
+ // specs per cluster and per CRD name. The specs are lazily constructed on
+ // request. The lock is for the map only.
lock sync.Mutex
- specsByName map[string]*specCache
+ specsByName map[logicalcluster.Name]map[string]*specCache
}
// specCache holds the merged version spec for a CRD as well as the CRD object.
@@ -112,7 +115,7 @@ func createSpecCache(crd *apiextensionsv1.CustomResourceDefinition) *specCache {
}
// NewController creates a new Controller with input CustomResourceDefinition informer
-func NewController(crdInformer informers.CustomResourceDefinitionInformer) *Controller {
+func NewController(crdInformer kcpapiextensionsv1informers.CustomResourceDefinitionClusterInformer) *Controller {
c := &Controller{
crdLister: crdInformer.Lister(),
crdsSynced: crdInformer.Informer().HasSynced,
@@ -120,7 +123,7 @@ func NewController(crdInformer informers.CustomResourceDefinitionInformer) *Cont
workqueue.DefaultTypedControllerRateLimiter[string](),
workqueue.TypedRateLimitingQueueConfig[string]{Name: "crd_openapi_controller"},
),
- specsByName: map[string]*specCache{},
+ specsByName: map[logicalcluster.Name]map[string]*specCache{},
}
crdInformer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
@@ -133,8 +136,51 @@ func NewController(crdInformer informers.CustomResourceDefinitionInformer) *Cont
return c
}
+// HACK:
+// Everything regarding OpenAPI and resource discovery is currently managed through controllers
+// (a number of controllers highly coupled with the corresponding http handlers).
+// The following code is an attempt at providing CRD tenancy while accommodating the current design without being too invasive,
+// because doing it differently would have meant too much refactoring.
+// But in the long run, doing this dynamically rather than as part of a controller is probably going to be important:
+// OpenAPI/CRD spec generation is expensive, so doing it in a controller means that CPU and memory scale O(crds),
+// when we really want them to scale O(active_clusters).
+
+func (c *Controller) setClusterCrdSpecs(clusterName logicalcluster.Name, crdName string, spec *specCache) {
+ _, found := c.specsByName[clusterName]
+ if !found {
+ c.specsByName[clusterName] = map[string]*specCache{}
+ }
+ c.specsByName[clusterName][crdName] = spec
+ c.openAPIServiceProvider.AddCuster(clusterName)
+}
+
+func (c *Controller) removeClusterCrdSpecs(clusterName logicalcluster.Name, crdName string) bool {
+ _, crdsForClusterFound := c.specsByName[clusterName]
+ if !crdsForClusterFound {
+ return false
+ }
+ if _, found := c.specsByName[clusterName][crdName]; !found {
+ return false
+ }
+ delete(c.specsByName[clusterName], crdName)
+ if len(c.specsByName[clusterName]) == 0 {
+ delete(c.specsByName, clusterName)
+ c.openAPIServiceProvider.RemoveCuster(clusterName)
+ }
+ return true
+}
+
+func (c *Controller) getClusterCrdSpecs(clusterName logicalcluster.Name, crdName string) (*specCache, bool) {
+ _, specsFoundForCluster := c.specsByName[clusterName]
+ if !specsFoundForCluster {
+ return nil, false
+ }
+ crdSpecs, found := c.specsByName[clusterName][crdName]
+ return crdSpecs, found
+}
+
// Run sets openAPIAggregationManager and starts workers
-func (c *Controller) Run(staticSpec *spec.Swagger, openAPIService *handler.OpenAPIService, stopCh <-chan struct{}) {
+func (c *Controller) Run(staticSpec *spec.Swagger, openAPIService routes.OpenAPIServiceProvider, stopCh <-chan struct{}) {
defer utilruntime.HandleCrash()
defer c.queue.ShutDown()
defer klog.Infof("Shutting down OpenAPI controller")
@@ -142,7 +188,7 @@ func (c *Controller) Run(staticSpec *spec.Swagger, openAPIService *handler.OpenA
klog.Infof("Starting OpenAPI controller")
c.staticSpec = staticSpec
- c.openAPIService = openAPIService
+ c.openAPIServiceProvider = openAPIService
if !cache.WaitForCacheSync(stopCh, c.crdsSynced) {
utilruntime.HandleError(fmt.Errorf("timed out waiting for caches to sync"))
@@ -159,7 +205,7 @@ func (c *Controller) Run(staticSpec *spec.Swagger, openAPIService *handler.OpenA
if !apiextensionshelpers.IsCRDConditionTrue(crd, apiextensionsv1.Established) {
continue
}
- c.specsByName[crd.Name] = createSpecCache(crd)
+ c.setClusterCrdSpecs(logicalcluster.From(crd), crd.Name, createSpecCache(crd))
}
c.updateSpecLocked()
@@ -201,23 +247,29 @@ func (c *Controller) processNextWorkItem() bool {
return true
}
-func (c *Controller) sync(name string) error {
+func (c *Controller) sync(key string) error {
c.lock.Lock()
defer c.lock.Unlock()
- crd, err := c.crdLister.Get(name)
+ clusterName, _, crdName, err := kcpcache.SplitMetaClusterNamespaceKey(key)
+ if err != nil {
+ utilruntime.HandleError(err)
+ return nil
+ }
+
+ crd, err := c.crdLister.Cluster(clusterName).Get(crdName)
if err != nil && !errors.IsNotFound(err) {
return err
}
// do we have to remove all specs of this CRD?
if errors.IsNotFound(err) || !apiextensionshelpers.IsCRDConditionTrue(crd, apiextensionsv1.Established) {
- if _, found := c.specsByName[name]; !found {
+ if !c.removeClusterCrdSpecs(clusterName, crdName) {
return nil
}
- delete(c.specsByName, name)
- klog.V(2).Infof("Updating CRD OpenAPI spec because %s was removed", name)
- regenerationCounter.With(map[string]string{"crd": name, "reason": "remove"})
+ klog.V(2).Infof("Updating CRD OpenAPI spec because %s was removed", crdName)
+ regenerationCounter.With(map[string]string{"crd": crdName, "reason": "remove"})
+
c.updateSpecLocked()
return nil
}
@@ -225,46 +277,48 @@ func (c *Controller) sync(name string) error {
// If CRD spec already exists, update the CRD.
// specCache.update() includes the ETag so an update on a spec
// resulting in the same ETag will be a noop.
- s, exists := c.specsByName[crd.Name]
+ s, exists := c.getClusterCrdSpecs(logicalcluster.From(crd), crd.Name)
if exists {
s.update(crd)
- klog.V(2).Infof("Updating CRD OpenAPI spec because %s changed", name)
- regenerationCounter.With(map[string]string{"crd": name, "reason": "update"})
+ klog.V(2).Infof("Updating CRD OpenAPI spec because %s changed", crd.Name)
+ regenerationCounter.With(map[string]string{"crd": crd.Name, "reason": "update"})
return nil
}
- c.specsByName[crd.Name] = createSpecCache(crd)
- klog.V(2).Infof("Updating CRD OpenAPI spec because %s changed", name)
- regenerationCounter.With(map[string]string{"crd": name, "reason": "add"})
+ c.setClusterCrdSpecs(logicalcluster.From(crd), crd.Name, createSpecCache(crd))
+ klog.V(2).Infof("Updating CRD OpenAPI spec because %s changed", crd.Name)
+ regenerationCounter.With(map[string]string{"crd": crd.Name, "reason": "add"})
c.updateSpecLocked()
return nil
}
// updateSpecLocked updates the cached spec graph.
func (c *Controller) updateSpecLocked() {
- specList := make([]cached.Value[*spec.Swagger], 0, len(c.specsByName))
- for crd := range c.specsByName {
- specList = append(specList, c.specsByName[crd].mergedVersionSpec)
- }
+ for clusterName, clusterCrdSpecs := range c.specsByName {
+ specList := make([]cached.Value[*spec.Swagger], 0, len(clusterCrdSpecs))
+ for crd := range clusterCrdSpecs {
+ specList = append(specList, clusterCrdSpecs[crd].mergedVersionSpec)
+ }
- cache := cached.MergeList(func(results []cached.Result[*spec.Swagger]) (*spec.Swagger, string, error) {
- localCRDSpec := make([]*spec.Swagger, 0, len(results))
- for k := range results {
- if results[k].Err == nil {
- localCRDSpec = append(localCRDSpec, results[k].Value)
+ cache := cached.MergeList(func(results []cached.Result[*spec.Swagger]) (*spec.Swagger, string, error) {
+ localCRDSpec := make([]*spec.Swagger, 0, len(results))
+ for k := range results {
+ if results[k].Err == nil {
+ localCRDSpec = append(localCRDSpec, results[k].Value)
+ }
}
- }
- mergedSpec, err := builder.MergeSpecs(c.staticSpec, localCRDSpec...)
- if err != nil {
- return nil, "", fmt.Errorf("failed to merge specs: %v", err)
- }
- // A UUID is returned for the etag because we will only
- // create a new merger when a CRD has changed. A hash based
- // etag is more expensive because the CRDs are not
- // premarshalled.
- return mergedSpec, uuid.New().String(), nil
- }, specList)
- c.openAPIService.UpdateSpecLazy(cache)
+ mergedSpec, err := builder.MergeSpecs(c.staticSpec, localCRDSpec...)
+ if err != nil {
+ return nil, "", fmt.Errorf("failed to merge specs: %v", err)
+ }
+ // A UUID is returned for the etag because we will only
+ // create a new merger when a CRD has changed. A hash based
+ // etag is more expensive because the CRDs are not
+ // premarshalled.
+ return mergedSpec, uuid.New().String(), nil
+ }, specList)
+ c.openAPIServiceProvider.ForCluster(clusterName).UpdateSpecLazy(cache)
+ }
}
func (c *Controller) addCustomResourceDefinition(obj interface{}) {
@@ -298,7 +352,8 @@ func (c *Controller) deleteCustomResourceDefinition(obj interface{}) {
}
func (c *Controller) enqueue(obj *apiextensionsv1.CustomResourceDefinition) {
- c.queue.Add(obj.Name)
+ key, _ := kcpcache.MetaClusterNamespaceKeyFunc(obj)
+ c.queue.Add(key)
}
func generateCRDHash(crd *apiextensionsv1.CustomResourceDefinition) string {
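
The per-cluster bookkeeping introduced above in setClusterCrdSpecs/removeClusterCrdSpecs boils down to a two-level map with cleanup once a cluster's last CRD spec goes away. A reduced standalone sketch of that pattern (string keys and values stand in for the real logicalcluster.Name and *specCache types):

package main

import "fmt"

// registry is a stand-in for the controller's specsByName field:
// cluster name -> CRD name -> spec.
type registry struct {
	specsByCluster map[string]map[string]string
}

func (r *registry) set(cluster, crd, spec string) {
	if r.specsByCluster[cluster] == nil {
		r.specsByCluster[cluster] = map[string]string{}
	}
	r.specsByCluster[cluster][crd] = spec
	// the real controller also registers the cluster with the OpenAPI service provider here
}

func (r *registry) remove(cluster, crd string) bool {
	crds, ok := r.specsByCluster[cluster]
	if !ok {
		return false
	}
	if _, ok := crds[crd]; !ok {
		return false
	}
	delete(crds, crd)
	if len(crds) == 0 {
		// last CRD for this cluster: drop the cluster entry (and, in the real
		// controller, deregister the cluster from the OpenAPI service provider)
		delete(r.specsByCluster, cluster)
	}
	return true
}

func main() {
	r := &registry{specsByCluster: map[string]map[string]string{}}
	r.set("root:org", "widgets.example.com", "spec-v1")
	fmt.Println(r.remove("root:org", "widgets.example.com")) // true
	fmt.Println(r.remove("root:org", "widgets.example.com")) // false, already gone
}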
diff --git a/staging/src/k8s.io/apiextensions-apiserver/pkg/controller/openapiv3/controller.go b/staging/src/k8s.io/apiextensions-apiserver/pkg/controller/openapiv3/controller.go
index 7e072d3855864..4dfb2a3c14fd0 100644
--- a/staging/src/k8s.io/apiextensions-apiserver/pkg/controller/openapiv3/controller.go
+++ b/staging/src/k8s.io/apiextensions-apiserver/pkg/controller/openapiv3/controller.go
@@ -22,6 +22,13 @@ import (
"sync"
"time"
+ kcpcache "github.com/kcp-dev/apimachinery/v2/pkg/cache"
+ kcpapiextensionsv1informers "github.com/kcp-dev/client-go/apiextensions/informers/apiextensions/v1"
+ kcpapiextensionsv1listers "github.com/kcp-dev/client-go/apiextensions/listers/apiextensions/v1"
+
+ apiextensionshelpers "k8s.io/apiextensions-apiserver/pkg/apihelpers"
+ apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
+ "k8s.io/apiextensions-apiserver/pkg/controller/openapi/builder"
apiextensionsfeatures "k8s.io/apiextensions-apiserver/pkg/features"
"k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/labels"
@@ -34,17 +41,11 @@ import (
"k8s.io/klog/v2"
"k8s.io/kube-openapi/pkg/handler3"
"k8s.io/kube-openapi/pkg/spec3"
-
- apiextensionshelpers "k8s.io/apiextensions-apiserver/pkg/apihelpers"
- apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
- informers "k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/apiextensions/v1"
- listers "k8s.io/apiextensions-apiserver/pkg/client/listers/apiextensions/v1"
- "k8s.io/apiextensions-apiserver/pkg/controller/openapi/builder"
)
// Controller watches CustomResourceDefinitions and publishes OpenAPI v3
type Controller struct {
- crdLister listers.CustomResourceDefinitionLister
+ crdLister kcpapiextensionsv1listers.CustomResourceDefinitionClusterLister
crdsSynced cache.InformerSynced
// To allow injection for testing.
@@ -60,7 +61,7 @@ type Controller struct {
}
// NewController creates a new Controller with input CustomResourceDefinition informer
-func NewController(crdInformer informers.CustomResourceDefinitionInformer) *Controller {
+func NewController(crdInformer kcpapiextensionsv1informers.CustomResourceDefinitionClusterInformer) *Controller {
c := &Controller{
crdLister: crdInformer.Lister(),
crdsSynced: crdInformer.Informer().HasSynced,
@@ -151,11 +152,16 @@ func (c *Controller) processNextWorkItem() bool {
return true
}
-func (c *Controller) sync(name string) error {
+func (c *Controller) sync(key string) error {
c.lock.Lock()
defer c.lock.Unlock()
- crd, err := c.crdLister.Get(name)
+ clusterName, _, name, err := kcpcache.SplitMetaClusterNamespaceKey(key)
+ if err != nil {
+ utilruntime.HandleError(err)
+ return nil
+ }
+ crd, err := c.crdLister.Cluster(clusterName).Get(name)
if err != nil && !errors.IsNotFound(err) {
return err
}
@@ -279,5 +285,10 @@ func (c *Controller) deleteCustomResourceDefinition(obj interface{}) {
}
func (c *Controller) enqueue(obj *apiextensionsv1.CustomResourceDefinition) {
- c.queue.Add(obj.Name)
+ key, err := kcpcache.DeletionHandlingMetaClusterNamespaceKeyFunc(obj)
+ if err != nil {
+ utilruntime.HandleError(fmt.Errorf("couldn't get key for object %#v: %v", obj, err))
+ return
+ }
+ c.queue.Add(key)
}
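
Both OpenAPI controllers now enqueue cluster-aware keys instead of bare CRD names. A rough standalone sketch of the round trip, assuming the kcp key shape `<cluster>|<name>` for cluster-scoped objects (the real key functions live in github.com/kcp-dev/apimachinery and also handle namespaced objects):

package main

import (
	"fmt"
	"strings"
)

// toKey mimics a cluster-aware key function for a cluster-scoped object such as a CRD.
func toKey(cluster, name string) string {
	return cluster + "|" + name
}

// splitKey mimics SplitMetaClusterNamespaceKey for cluster-scoped objects.
func splitKey(key string) (cluster, name string, err error) {
	parts := strings.SplitN(key, "|", 2)
	if len(parts) != 2 {
		return "", "", fmt.Errorf("unexpected key format: %q", key)
	}
	return parts[0], parts[1], nil
}

func main() {
	key := toKey("root:org", "widgets.example.com")
	cluster, name, _ := splitKey(key)
	fmt.Println(cluster, name) // root:org widgets.example.com
}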
diff --git a/staging/src/k8s.io/apiextensions-apiserver/pkg/controller/status/naming_controller.go b/staging/src/k8s.io/apiextensions-apiserver/pkg/controller/status/naming_controller.go
index ba448a150cf73..0b4b2b2fa6a3f 100644
--- a/staging/src/k8s.io/apiextensions-apiserver/pkg/controller/status/naming_controller.go
+++ b/staging/src/k8s.io/apiextensions-apiserver/pkg/controller/status/naming_controller.go
@@ -23,8 +23,16 @@ import (
"strings"
"time"
- "k8s.io/klog/v2"
+ kcpcache "github.com/kcp-dev/apimachinery/v2/pkg/cache"
+ kcpapiextensionsv1client "github.com/kcp-dev/client-go/apiextensions/client/typed/apiextensions/v1"
+ kcpapiextensionsv1informers "github.com/kcp-dev/client-go/apiextensions/informers/apiextensions/v1"
+ kcpapiextensionsv1listers "github.com/kcp-dev/client-go/apiextensions/listers/apiextensions/v1"
+ kcpthirdpartycache "github.com/kcp-dev/client-go/third_party/k8s.io/client-go/tools/cache"
+ "github.com/kcp-dev/logicalcluster/v3"
+ apiextensionshelpers "k8s.io/apiextensions-apiserver/pkg/apihelpers"
+ apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
+ "k8s.io/apiextensions-apiserver/pkg/kcp"
"k8s.io/apimachinery/pkg/api/equality"
apierrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
@@ -35,25 +43,21 @@ import (
"k8s.io/apimachinery/pkg/util/wait"
"k8s.io/client-go/tools/cache"
"k8s.io/client-go/util/workqueue"
-
- apiextensionshelpers "k8s.io/apiextensions-apiserver/pkg/apihelpers"
- apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
- client "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset/typed/apiextensions/v1"
- informers "k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/apiextensions/v1"
- listers "k8s.io/apiextensions-apiserver/pkg/client/listers/apiextensions/v1"
+ "k8s.io/klog/v2"
)
// This controller is reserving names. To avoid conflicts, be sure to run only one instance of the worker at a time.
// This could eventually be lifted, but starting simple.
type NamingConditionController struct {
- crdClient client.CustomResourceDefinitionsGetter
+ crdClient kcpapiextensionsv1client.CustomResourceDefinitionsClusterGetter
- crdLister listers.CustomResourceDefinitionLister
- crdSynced cache.InformerSynced
+ crdLister kcpapiextensionsv1listers.CustomResourceDefinitionClusterLister
+ clusterAwareCRDLister kcp.ClusterAwareCRDClusterLister
+ crdSynced cache.InformerSynced
// crdMutationCache backs our lister and keeps track of committed updates to avoid racy
// write/lookup cycles. It's got 100 slots by default, so it is unlikely to overrun
// TODO: revisit this if naming conflicts are found to occur in the wild
- crdMutationCache cache.MutationCache
+ crdMutationCache kcpthirdpartycache.MutationCache
// To allow injection for testing.
syncFn func(key string) error
@@ -63,13 +67,15 @@ type NamingConditionController struct {
func NewNamingConditionController(
logger klog.Logger,
- crdInformer informers.CustomResourceDefinitionInformer,
- crdClient client.CustomResourceDefinitionsGetter,
+ crdInformer kcpapiextensionsv1informers.CustomResourceDefinitionClusterInformer,
+ crdClient kcpapiextensionsv1client.ApiextensionsV1ClusterInterface,
+ clusterAwareCRDLister kcp.ClusterAwareCRDClusterLister,
) *NamingConditionController {
c := &NamingConditionController{
- crdClient: crdClient,
- crdLister: crdInformer.Lister(),
- crdSynced: crdInformer.Informer().HasSynced,
+ crdClient: crdClient,
+ crdLister: crdInformer.Lister(),
+ clusterAwareCRDLister: clusterAwareCRDLister,
+ crdSynced: crdInformer.Informer().HasSynced,
queue: workqueue.NewTypedRateLimitingQueueWithConfig(
workqueue.DefaultTypedControllerRateLimiter[string](),
workqueue.TypedRateLimitingQueueConfig[string]{Name: "crd_naming_condition_controller"},
@@ -94,11 +100,11 @@ func NewNamingConditionController(
return c
}
-func (c *NamingConditionController) getAcceptedNamesForGroup(group string) (allResources sets.String, allKinds sets.String) {
+func (c *NamingConditionController) getAcceptedNamesForGroup(clusterName logicalcluster.Name, group string) (allResources sets.String, allKinds sets.String) {
allResources = sets.String{}
allKinds = sets.String{}
- list, err := c.crdLister.List(labels.Everything())
+ list, err := c.clusterAwareCRDLister.Cluster(clusterName).List(context.TODO(), labels.Everything())
if err != nil {
panic(err)
}
@@ -112,7 +118,12 @@ func (c *NamingConditionController) getAcceptedNamesForGroup(group string) (allR
// this makes sure that if we tight loop on update and run, our mutation cache will show
// us the version of the objects we just updated to.
item := curr
- obj, exists, err := c.crdMutationCache.GetByKey(curr.Name)
+ key, err := kcpcache.DeletionHandlingMetaClusterNamespaceKeyFunc(item)
+ if err != nil {
+ utilruntime.HandleError(fmt.Errorf("couldn't get key for object %#v: %v", item, err))
+ continue
+ }
+ obj, exists, err := c.crdMutationCache.GetByKey(key)
if exists && err == nil {
item = obj.(*apiextensionsv1.CustomResourceDefinition)
}
@@ -130,7 +141,15 @@ func (c *NamingConditionController) getAcceptedNamesForGroup(group string) (allR
func (c *NamingConditionController) calculateNamesAndConditions(in *apiextensionsv1.CustomResourceDefinition) (apiextensionsv1.CustomResourceDefinitionNames, apiextensionsv1.CustomResourceDefinitionCondition, apiextensionsv1.CustomResourceDefinitionCondition) {
// Get the names that have already been claimed
- allResources, allKinds := c.getAcceptedNamesForGroup(in.Spec.Group)
+ allResources, allKinds := c.getAcceptedNamesForGroup(logicalcluster.From(in), in.Spec.Group)
+
+ // HACK(kcp): if it's a bound CRD, reset already claimed resources and kinds to empty, because we need to support
+ // multiple bound CRDs with overlapping names. KCP admission will ensure that a workspace does not have any
+ // naming conflicts.
+ if _, kcpBoundCRD := in.Annotations["apis.kcp.io/bound-crd"]; kcpBoundCRD {
+ allResources = sets.NewString()
+ allKinds = sets.NewString()
+ }
namesAcceptedCondition := apiextensionsv1.CustomResourceDefinitionCondition{
Type: apiextensionsv1.NamesAccepted,
@@ -240,11 +259,16 @@ func equalToAcceptedOrFresh(requestedName, acceptedName string, usedNames sets.S
}
func (c *NamingConditionController) sync(key string) error {
- inCustomResourceDefinition, err := c.crdLister.Get(key)
+ clusterName, _, name, err := kcpcache.SplitMetaClusterNamespaceKey(key)
+ if err != nil {
+ utilruntime.HandleError(err)
+ return nil
+ }
+ inCustomResourceDefinition, err := c.crdLister.Cluster(clusterName).Get(name)
if apierrors.IsNotFound(err) {
// CRD was deleted and has freed its names.
// Reconsider all other CRDs in the same group.
- if err := c.requeueAllOtherGroupCRDs(key); err != nil {
+ if err := c.requeueAllOtherGroupCRDs(clusterName, name); err != nil {
return err
}
return nil
@@ -271,7 +295,7 @@ func (c *NamingConditionController) sync(key string) error {
apiextensionshelpers.SetCRDCondition(crd, namingCondition)
apiextensionshelpers.SetCRDCondition(crd, establishedCondition)
- updatedObj, err := c.crdClient.CustomResourceDefinitions().UpdateStatus(context.TODO(), crd, metav1.UpdateOptions{})
+ updatedObj, err := c.crdClient.CustomResourceDefinitions().Cluster(clusterName.Path()).UpdateStatus(context.TODO(), crd, metav1.UpdateOptions{})
if apierrors.IsNotFound(err) || apierrors.IsConflict(err) {
// deleted or changed in the meantime, we'll get called again
return nil
@@ -285,7 +309,7 @@ func (c *NamingConditionController) sync(key string) error {
// we updated our status, so we may be releasing a name. When this happens, we need to rekick everything in our group
// if we fail to rekick, just return as normal. We'll get everything on a resync
- if err := c.requeueAllOtherGroupCRDs(key); err != nil {
+ if err := c.requeueAllOtherGroupCRDs(clusterName, name); err != nil {
return err
}
@@ -335,7 +359,7 @@ func (c *NamingConditionController) processNextWorkItem() bool {
}
func (c *NamingConditionController) enqueue(obj *apiextensionsv1.CustomResourceDefinition) {
- key, err := cache.DeletionHandlingMetaNamespaceKeyFunc(obj)
+ key, err := kcpcache.DeletionHandlingMetaClusterNamespaceKeyFunc(obj)
if err != nil {
utilruntime.HandleError(fmt.Errorf("couldn't get key for object %#v: %v", obj, err))
return
@@ -374,15 +398,28 @@ func (c *NamingConditionController) deleteCustomResourceDefinition(obj interface
c.enqueue(castObj)
}
-func (c *NamingConditionController) requeueAllOtherGroupCRDs(name string) error {
+func (c *NamingConditionController) requeueAllOtherGroupCRDs(clusterName logicalcluster.Name, name string) error {
pluralGroup := strings.SplitN(name, ".", 2)
- list, err := c.crdLister.List(labels.Everything())
+ var groupForName string
+
+ // The group is empty when core resources are added as CRDs in KCP (their CRD names have no dot)
+ if len(pluralGroup) == 1 {
+ groupForName = ""
+ } else {
+ // Given name = widgets.example.com:
+ // pluralGroup[0] is the plural resource name, such as widgets
+ // pluralGroup[1] is the API group, such as example.com
+ groupForName = pluralGroup[1]
+ }
+
+ list, err := c.clusterAwareCRDLister.Cluster(clusterName).List(context.TODO(), labels.Everything())
if err != nil {
return err
}
+
for _, curr := range list {
- if curr.Spec.Group == pluralGroup[1] && curr.Name != name {
- c.queue.Add(curr.Name)
+ if curr.Spec.Group == groupForName && curr.Name != name {
+ c.enqueue(curr)
}
}
return nil
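
The group extraction in requeueAllOtherGroupCRDs hinges on strings.SplitN with a limit of 2; a small sketch of the two cases the new code distinguishes (core resources served as CRDs have no dot in their CRD name):

package main

import (
	"fmt"
	"strings"
)

// groupForCRDName extracts the API group from a CRD name of the form
// <plural>.<group>, treating dotless names as the core ("") group.
func groupForCRDName(name string) string {
	pluralGroup := strings.SplitN(name, ".", 2)
	if len(pluralGroup) == 1 {
		return "" // e.g. "pods": a core resource served as a CRD in KCP
	}
	return pluralGroup[1] // e.g. "widgets.example.com" -> "example.com"
}

func main() {
	fmt.Printf("%q\n", groupForCRDName("pods"))                // ""
	fmt.Printf("%q\n", groupForCRDName("widgets.example.com")) // "example.com"
}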
diff --git a/staging/src/k8s.io/apiextensions-apiserver/pkg/controller/status/naming_controller_kcp_test.go b/staging/src/k8s.io/apiextensions-apiserver/pkg/controller/status/naming_controller_kcp_test.go
new file mode 100644
index 0000000000000..bf51f3cddeb16
--- /dev/null
+++ b/staging/src/k8s.io/apiextensions-apiserver/pkg/controller/status/naming_controller_kcp_test.go
@@ -0,0 +1,114 @@
+/*
+Copyright 2022 The kcp Authors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package status
+
+import (
+ "testing"
+ "time"
+
+ "github.com/google/uuid"
+ "github.com/stretchr/testify/require"
+
+ apiextensionshelpers "k8s.io/apiextensions-apiserver/pkg/apihelpers"
+ apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
+ listers "k8s.io/apiextensions-apiserver/pkg/client/listers/apiextensions/v1"
+ metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+ "k8s.io/client-go/tools/cache"
+)
+
+func newBoundCRD(resource, group string) *crdBuilder {
+ return &crdBuilder{
+ curr: apiextensionsv1.CustomResourceDefinition{
+ ObjectMeta: metav1.ObjectMeta{
+ Name: uuid.New().String(),
+ Annotations: map[string]string{
+ "apis.kcp.io/bound-crd": "",
+ },
+ },
+ Spec: apiextensionsv1.CustomResourceDefinitionSpec{
+ Group: group,
+ Names: apiextensionsv1.CustomResourceDefinitionNames{
+ Plural: resource,
+ },
+ },
+ },
+ }
+}
+
+func TestSync_KCP_BoundCRDsDoNotConflict(t *testing.T) {
+ tests := []struct {
+ name string
+ existing []*apiextensionsv1.CustomResourceDefinition
+ }{
+ {
+ name: "conflict on plural to singular",
+ existing: []*apiextensionsv1.CustomResourceDefinition{
+ newBoundCRD("india", "bravo.com").StatusNames("india", "alfa", "", "").NewOrDie(),
+ },
+ },
+ {
+ name: "conflict on singular to shortName",
+ existing: []*apiextensionsv1.CustomResourceDefinition{
+ newBoundCRD("india", "bravo.com").StatusNames("india", "indias", "", "", "delta-singular").NewOrDie(),
+ },
+ },
+ {
+ name: "conflict on shortName to shortName",
+ existing: []*apiextensionsv1.CustomResourceDefinition{
+ newBoundCRD("india", "bravo.com").StatusNames("india", "indias", "", "", "hotel-shortname-2").NewOrDie(),
+ },
+ },
+ {
+ name: "conflict on kind to listkind",
+ existing: []*apiextensionsv1.CustomResourceDefinition{
+ newBoundCRD("india", "bravo.com").StatusNames("india", "indias", "", "echo-kind").NewOrDie(),
+ },
+ },
+ {
+ name: "conflict on listkind to kind",
+ existing: []*apiextensionsv1.CustomResourceDefinition{
+ newBoundCRD("india", "bravo.com").StatusNames("india", "indias", "foxtrot-listkind", "").NewOrDie(),
+ },
+ },
+ {
+ name: "no conflict on resource and kind",
+ existing: []*apiextensionsv1.CustomResourceDefinition{
+ newBoundCRD("india", "bravo.com").StatusNames("india", "echo-kind", "", "").NewOrDie(),
+ },
+ },
+ }
+
+ for _, tc := range tests {
+ crdIndexer := cache.NewIndexer(cache.MetaNamespaceKeyFunc, cache.Indexers{cache.NamespaceIndex: cache.MetaNamespaceIndexFunc})
+ for _, obj := range tc.existing {
+ crdIndexer.Add(obj)
+ }
+
+ c := NamingConditionController{
+ crdLister: listers.NewCustomResourceDefinitionLister(crdIndexer),
+ crdMutationCache: cache.NewIntegerResourceVersionMutationCache(crdIndexer, crdIndexer, 60*time.Second, false),
+ }
+
+ newCRD := newBoundCRD("alfa", "bravo.com").SpecNames("alfa", "delta-singular", "echo-kind", "foxtrot-listkind", "golf-shortname-1", "hotel-shortname-2").NewOrDie()
+
+ expectedNames := names("alfa", "delta-singular", "echo-kind", "foxtrot-listkind", "golf-shortname-1", "hotel-shortname-2")
+
+ actualNames, actualNameConflictCondition, establishedCondition := c.calculateNamesAndConditions(newCRD)
+
+ require.Equal(t, expectedNames, actualNames, "calculated names mismatch")
+ require.True(t, apiextensionshelpers.IsCRDConditionEquivalent(&acceptedCondition, &actualNameConflictCondition), "unexpected name conflict condition")
+ require.True(t, apiextensionshelpers.IsCRDConditionEquivalent(&installingCondition, &establishedCondition), "unexpected established condition")
+ }
+}
diff --git a/staging/src/k8s.io/apiextensions-apiserver/pkg/kcp/cluster_aware_crd_lister.go b/staging/src/k8s.io/apiextensions-apiserver/pkg/kcp/cluster_aware_crd_lister.go
new file mode 100644
index 0000000000000..170b6b588393f
--- /dev/null
+++ b/staging/src/k8s.io/apiextensions-apiserver/pkg/kcp/cluster_aware_crd_lister.go
@@ -0,0 +1,42 @@
+/*
+Copyright 2022 The KCP Authors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package kcp
+
+import (
+ "context"
+
+ "github.com/kcp-dev/logicalcluster/v3"
+
+ "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
+ "k8s.io/apimachinery/pkg/labels"
+)
+
+// ClusterAwareCRDClusterLister knows how to scope down to a ClusterAwareCRDLister for one cluster.
+type ClusterAwareCRDClusterLister interface {
+ Cluster(logicalcluster.Name) ClusterAwareCRDLister
+}
+
+// ClusterAwareCRDLister is a CRD lister that is kcp-specific.
+type ClusterAwareCRDLister interface {
+ // List lists all CRDs matching selector.
+ List(ctx context.Context, selector labels.Selector) ([]*v1.CustomResourceDefinition, error)
+ // Get gets a CRD by name.
+ Get(ctx context.Context, name string) (*v1.CustomResourceDefinition, error)
+ // Refresh gets the current/latest copy of the CRD from the cache. This is necessary to ensure the identity
+ // annotation is present when called by crdHandler.getOrCreateServingInfoFor.
+ Refresh(crd *v1.CustomResourceDefinition) (*v1.CustomResourceDefinition, error)
+}
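
To illustrate the contract, here is a trimmed-down, entirely hypothetical in-memory fake that could back these interfaces in tests; kcp provides the real implementation elsewhere:

package kcptest

import (
	"context"
	"fmt"

	"github.com/kcp-dev/logicalcluster/v3"

	v1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	"k8s.io/apiextensions-apiserver/pkg/kcp"
	"k8s.io/apimachinery/pkg/labels"
)

// fakeClusterLister serves CRDs from an in-memory map keyed by cluster name.
type fakeClusterLister struct {
	crds map[logicalcluster.Name][]*v1.CustomResourceDefinition
}

var _ kcp.ClusterAwareCRDClusterLister = &fakeClusterLister{}

func (f *fakeClusterLister) Cluster(name logicalcluster.Name) kcp.ClusterAwareCRDLister {
	return &fakeLister{crds: f.crds[name]}
}

type fakeLister struct {
	crds []*v1.CustomResourceDefinition
}

func (f *fakeLister) List(_ context.Context, selector labels.Selector) ([]*v1.CustomResourceDefinition, error) {
	var out []*v1.CustomResourceDefinition
	for _, crd := range f.crds {
		if selector.Matches(labels.Set(crd.Labels)) {
			out = append(out, crd)
		}
	}
	return out, nil
}

func (f *fakeLister) Get(_ context.Context, name string) (*v1.CustomResourceDefinition, error) {
	for _, crd := range f.crds {
		if crd.Name == name {
			return crd, nil
		}
	}
	return nil, fmt.Errorf("crd %q not found", name)
}

func (f *fakeLister) Refresh(crd *v1.CustomResourceDefinition) (*v1.CustomResourceDefinition, error) {
	return f.Get(context.Background(), crd.Name)
}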
diff --git a/staging/src/k8s.io/apiextensions-apiserver/pkg/registry/customresource/etcd.go b/staging/src/k8s.io/apiextensions-apiserver/pkg/registry/customresource/etcd.go
index edc89bb5f833f..4a429de8043bd 100644
--- a/staging/src/k8s.io/apiextensions-apiserver/pkg/registry/customresource/etcd.go
+++ b/staging/src/k8s.io/apiextensions-apiserver/pkg/registry/customresource/etcd.go
@@ -42,6 +42,10 @@ type CustomResourceStorage struct {
}
func NewStorage(resource schema.GroupResource, singularResource schema.GroupResource, kind, listKind schema.GroupVersionKind, strategy customResourceStrategy, optsGetter generic.RESTOptionsGetter, categories []string, tableConvertor rest.TableConvertor, replicasPathMapping managedfields.ResourcePathMappings) (CustomResourceStorage, error) {
+ return NewStorageWithCustomStore(resource, singularResource, kind, listKind, strategy, optsGetter, categories, tableConvertor, replicasPathMapping, nil)
+}
+
+func NewStorageWithCustomStore(resource schema.GroupResource, singularResource schema.GroupResource, kind, listKind schema.GroupVersionKind, strategy customResourceStrategy, optsGetter generic.RESTOptionsGetter, categories []string, tableConvertor rest.TableConvertor, replicasPathMapping managedfields.ResourcePathMappings, newStores NewStores) (CustomResourceStorage, error) {
var storage CustomResourceStorage
store := &genericregistry.Store{
NewFunc: func() runtime.Object {
@@ -102,7 +106,7 @@ func NewStorage(resource schema.GroupResource, singularResource schema.GroupReso
// REST implements a RESTStorage for API services against etcd
type REST struct {
- *genericregistry.Store
+ Store
categories []string
}
diff --git a/staging/src/k8s.io/apiextensions-apiserver/pkg/registry/customresource/etcd_kcp.go b/staging/src/k8s.io/apiextensions-apiserver/pkg/registry/customresource/etcd_kcp.go
new file mode 100644
index 0000000000000..846c9630cc5b4
--- /dev/null
+++ b/staging/src/k8s.io/apiextensions-apiserver/pkg/registry/customresource/etcd_kcp.go
@@ -0,0 +1,36 @@
+/*
+Copyright 2022 The KCP Authors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package customresource
+
+import (
+ "k8s.io/apimachinery/pkg/runtime/schema"
+ "k8s.io/apiserver/pkg/registry/generic"
+ "k8s.io/apiserver/pkg/registry/rest"
+)
+
+// Store is an interface used by the upstream CR registry instead of the concrete genericregistry.Store
+// in order to allow alternative implementations to be used.
+type Store interface {
+ rest.StandardStorage
+ rest.ResetFieldsStrategy
+}
+
+// NewStores is a constructor of the main and status subresource stores for custom resources.
+type NewStores func(resource schema.GroupResource, kind, listKind schema.GroupVersionKind, strategy customResourceStrategy, optsGetter generic.RESTOptionsGetter, tableConvertor rest.TableConvertor) (main Store, status Store)
+
+// CustomResourceStrategy makes customResourceStrategy public for downstream consumers.
+type CustomResourceStrategy = customResourceStrategy
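
Because Store is an interface, downstream consumers can wrap the default store and override selected verbs while inheriting everything else through embedding. A hypothetical sketch of such a decorator (the logging behavior is only an example):

package kcpstore

import (
	"context"

	"k8s.io/apiextensions-apiserver/pkg/registry/customresource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/klog/v2"
)

// loggingStore wraps another customresource.Store; embedding the interface
// inherits every method, so only the verbs we care about are overridden.
type loggingStore struct {
	customresource.Store
}

func (s *loggingStore) Get(ctx context.Context, name string, options *metav1.GetOptions) (runtime.Object, error) {
	klog.V(4).Infof("custom resource Get: %s", name)
	return s.Store.Get(ctx, name, options)
}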
diff --git a/staging/src/k8s.io/apiextensions-apiserver/pkg/registry/customresource/etcd_test.go b/staging/src/k8s.io/apiextensions-apiserver/pkg/registry/customresource/etcd_test.go
index e5df87be2c759..eecd5fe223265 100644
--- a/staging/src/k8s.io/apiextensions-apiserver/pkg/registry/customresource/etcd_test.go
+++ b/staging/src/k8s.io/apiextensions-apiserver/pkg/registry/customresource/etcd_test.go
@@ -29,12 +29,13 @@ import (
autoscalingv1 "k8s.io/api/autoscaling/v1"
apiequality "k8s.io/apimachinery/pkg/api/equality"
"k8s.io/apimachinery/pkg/api/errors"
+ apivalidation "k8s.io/apimachinery/pkg/api/validation"
metainternal "k8s.io/apimachinery/pkg/apis/meta/internalversion"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
- "k8s.io/apimachinery/pkg/util/managedfields"
+ "k8s.io/apiserver/pkg/endpoints/handlers/fieldmanager"
genericapirequest "k8s.io/apiserver/pkg/endpoints/request"
"k8s.io/apiserver/pkg/registry/generic"
registrytest "k8s.io/apiserver/pkg/registry/generic/testing"
@@ -47,6 +48,7 @@ import (
"k8s.io/apiextensions-apiserver/pkg/crdserverscheme"
"k8s.io/apiextensions-apiserver/pkg/registry/customresource"
"k8s.io/apiextensions-apiserver/pkg/registry/customresource/tableconvertor"
+ genericregistry "k8s.io/apiserver/pkg/registry/generic/registry"
)
func newStorage(t *testing.T) (customresource.CustomResourceStorage, *etcd3testing.EtcdTestServer) {
@@ -101,6 +103,7 @@ func newStorage(t *testing.T) (customresource.CustomResourceStorage, *etcd3testi
typer,
true,
kind,
+ apivalidation.NameIsDNSSubdomain,
nil,
nil,
nil,
@@ -111,7 +114,7 @@ func newStorage(t *testing.T) (customresource.CustomResourceStorage, *etcd3testi
restOptions,
[]string{"all"},
table,
- managedfields.ResourcePathMappings{},
+ fieldmanager.ResourcePathMappings{},
)
if err != nil {
t.Errorf("unexpected error: %v", err)
@@ -159,8 +162,8 @@ var validCustomResource = *validNewCustomResource()
func TestCreate(t *testing.T) {
storage, server := newStorage(t)
defer server.Terminate(t)
- defer storage.CustomResource.Store.DestroyFunc()
- test := registrytest.New(t, storage.CustomResource.Store)
+ defer storage.CustomResource.Store.(*genericregistry.Store).DestroyFunc()
+ test := registrytest.New(t, storage.CustomResource.Store.(*genericregistry.Store))
cr := validNewCustomResource()
cr.SetNamespace("")
test.TestCreate(
@@ -171,31 +174,31 @@ func TestCreate(t *testing.T) {
func TestGet(t *testing.T) {
storage, server := newStorage(t)
defer server.Terminate(t)
- defer storage.CustomResource.Store.DestroyFunc()
- test := registrytest.New(t, storage.CustomResource.Store)
+ defer storage.CustomResource.Store.(*genericregistry.Store).DestroyFunc()
+ test := registrytest.New(t, storage.CustomResource.Store.(*genericregistry.Store))
test.TestGet(validNewCustomResource())
}
func TestList(t *testing.T) {
storage, server := newStorage(t)
defer server.Terminate(t)
- defer storage.CustomResource.Store.DestroyFunc()
- test := registrytest.New(t, storage.CustomResource.Store)
+ defer storage.CustomResource.Store.(*genericregistry.Store).DestroyFunc()
+ test := registrytest.New(t, storage.CustomResource.Store.(*genericregistry.Store))
test.TestList(validNewCustomResource())
}
func TestDelete(t *testing.T) {
storage, server := newStorage(t)
defer server.Terminate(t)
- defer storage.CustomResource.Store.DestroyFunc()
- test := registrytest.New(t, storage.CustomResource.Store)
+ defer storage.CustomResource.Store.(*genericregistry.Store).DestroyFunc()
+ test := registrytest.New(t, storage.CustomResource.Store.(*genericregistry.Store))
test.TestDelete(validNewCustomResource())
}
func TestGenerationNumber(t *testing.T) {
storage, server := newStorage(t)
defer server.Terminate(t)
- defer storage.CustomResource.Store.DestroyFunc()
+ defer storage.CustomResource.Store.(*genericregistry.Store).DestroyFunc()
modifiedRno := *validNewCustomResource()
modifiedRno.SetGeneration(10)
ctx := genericapirequest.NewDefaultContext()
@@ -247,7 +250,7 @@ func TestGenerationNumber(t *testing.T) {
func TestCategories(t *testing.T) {
storage, server := newStorage(t)
defer server.Terminate(t)
- defer storage.CustomResource.Store.DestroyFunc()
+ defer storage.CustomResource.Store.(*genericregistry.Store).DestroyFunc()
expected := []string{"all"}
actual := storage.CustomResource.Categories()
@@ -260,12 +263,12 @@ func TestCategories(t *testing.T) {
func TestColumns(t *testing.T) {
storage, server := newStorage(t)
defer server.Terminate(t)
- defer storage.CustomResource.Store.DestroyFunc()
+ defer storage.CustomResource.Store.(*genericregistry.Store).DestroyFunc()
ctx := genericapirequest.WithNamespace(genericapirequest.NewContext(), metav1.NamespaceDefault)
key := "/noxus/" + metav1.NamespaceDefault + "/foo"
validCustomResource := validNewCustomResource()
- if err := storage.CustomResource.Storage.Create(ctx, key, validCustomResource, nil, 0, false); err != nil {
+ if err := storage.CustomResource.Store.(*genericregistry.Store).Storage.Create(ctx, key, validCustomResource, nil, 0, false); err != nil {
t.Fatalf("unexpected error: %v", err)
}
@@ -331,11 +334,11 @@ func TestColumns(t *testing.T) {
func TestStatusUpdate(t *testing.T) {
storage, server := newStorage(t)
defer server.Terminate(t)
- defer storage.CustomResource.Store.DestroyFunc()
+ defer storage.CustomResource.Store.(*genericregistry.Store).DestroyFunc()
ctx := genericapirequest.WithNamespace(genericapirequest.NewContext(), metav1.NamespaceDefault)
key := "/noxus/" + metav1.NamespaceDefault + "/foo"
validCustomResource := validNewCustomResource()
- if err := storage.CustomResource.Storage.Create(ctx, key, validCustomResource, nil, 0, false); err != nil {
+ if err := storage.CustomResource.Store.(*genericregistry.Store).Storage.Create(ctx, key, validCustomResource, nil, 0, false); err != nil {
t.Fatalf("unexpected error: %v", err)
}
@@ -379,14 +382,14 @@ func TestStatusUpdate(t *testing.T) {
func TestScaleGet(t *testing.T) {
storage, server := newStorage(t)
defer server.Terminate(t)
- defer storage.CustomResource.Store.DestroyFunc()
+ defer storage.CustomResource.Store.(*genericregistry.Store).DestroyFunc()
name := "foo"
var cr unstructured.Unstructured
ctx := genericapirequest.WithNamespace(genericapirequest.NewContext(), metav1.NamespaceDefault)
key := "/noxus/" + metav1.NamespaceDefault + "/" + name
- if err := storage.CustomResource.Storage.Create(ctx, key, &validCustomResource, &cr, 0, false); err != nil {
+ if err := storage.CustomResource.Store.(*genericregistry.Store).Storage.Create(ctx, key, &validCustomResource, &cr, 0, false); err != nil {
t.Fatalf("error setting new custom resource (key: %s) %v: %v", key, validCustomResource, err)
}
@@ -421,7 +424,7 @@ func TestScaleGet(t *testing.T) {
func TestScaleGetWithoutSpecReplicas(t *testing.T) {
storage, server := newStorage(t)
defer server.Terminate(t)
- defer storage.CustomResource.Store.DestroyFunc()
+ defer storage.CustomResource.Store.(*genericregistry.Store).DestroyFunc()
name := "foo"
@@ -430,7 +433,7 @@ func TestScaleGetWithoutSpecReplicas(t *testing.T) {
key := "/noxus/" + metav1.NamespaceDefault + "/" + name
withoutSpecReplicas := validCustomResource.DeepCopy()
unstructured.RemoveNestedField(withoutSpecReplicas.Object, "spec", "replicas")
- if err := storage.CustomResource.Storage.Create(ctx, key, withoutSpecReplicas, &cr, 0, false); err != nil {
+ if err := storage.CustomResource.Store.(*genericregistry.Store).Storage.Create(ctx, key, withoutSpecReplicas, &cr, 0, false); err != nil {
t.Fatalf("error setting new custom resource (key: %s) %v: %v", key, withoutSpecReplicas, err)
}
@@ -446,14 +449,14 @@ func TestScaleGetWithoutSpecReplicas(t *testing.T) {
func TestScaleUpdate(t *testing.T) {
storage, server := newStorage(t)
defer server.Terminate(t)
- defer storage.CustomResource.Store.DestroyFunc()
+ defer storage.CustomResource.Store.(*genericregistry.Store).DestroyFunc()
name := "foo"
var cr unstructured.Unstructured
ctx := genericapirequest.WithNamespace(genericapirequest.NewContext(), metav1.NamespaceDefault)
key := "/noxus/" + metav1.NamespaceDefault + "/" + name
- if err := storage.CustomResource.Storage.Create(ctx, key, &validCustomResource, &cr, 0, false); err != nil {
+ if err := storage.CustomResource.Store.(*genericregistry.Store).Storage.Create(ctx, key, &validCustomResource, &cr, 0, false); err != nil {
t.Fatalf("error setting new custom resource (key: %s) %v: %v", key, validCustomResource, err)
}
@@ -498,7 +501,7 @@ func TestScaleUpdate(t *testing.T) {
func TestScaleUpdateWithoutSpecReplicas(t *testing.T) {
storage, server := newStorage(t)
defer server.Terminate(t)
- defer storage.CustomResource.Store.DestroyFunc()
+ defer storage.CustomResource.Store.(*genericregistry.Store).DestroyFunc()
name := "foo"
@@ -507,7 +510,7 @@ func TestScaleUpdateWithoutSpecReplicas(t *testing.T) {
key := "/noxus/" + metav1.NamespaceDefault + "/" + name
withoutSpecReplicas := validCustomResource.DeepCopy()
unstructured.RemoveNestedField(withoutSpecReplicas.Object, "spec", "replicas")
- if err := storage.CustomResource.Storage.Create(ctx, key, withoutSpecReplicas, &cr, 0, false); err != nil {
+ if err := storage.CustomResource.Store.(*genericregistry.Store).Storage.Create(ctx, key, withoutSpecReplicas, &cr, 0, false); err != nil {
t.Fatalf("error setting new custom resource (key: %s) %v: %v", key, withoutSpecReplicas, err)
}
@@ -539,14 +542,14 @@ func TestScaleUpdateWithoutSpecReplicas(t *testing.T) {
func TestScaleUpdateWithoutResourceVersion(t *testing.T) {
storage, server := newStorage(t)
defer server.Terminate(t)
- defer storage.CustomResource.Store.DestroyFunc()
+ defer storage.CustomResource.Store.(*genericregistry.Store).DestroyFunc()
name := "foo"
var cr unstructured.Unstructured
ctx := genericapirequest.WithNamespace(genericapirequest.NewContext(), metav1.NamespaceDefault)
key := "/noxus/" + metav1.NamespaceDefault + "/" + name
- if err := storage.CustomResource.Storage.Create(ctx, key, &validCustomResource, &cr, 0, false); err != nil {
+ if err := storage.CustomResource.Store.(*genericregistry.Store).Storage.Create(ctx, key, &validCustomResource, &cr, 0, false); err != nil {
t.Fatalf("error setting new custom resource (key: %s) %v: %v", key, validCustomResource, err)
}
@@ -577,14 +580,14 @@ func TestScaleUpdateWithoutResourceVersion(t *testing.T) {
func TestScaleUpdateWithoutResourceVersionWithConflicts(t *testing.T) {
storage, server := newStorage(t)
defer server.Terminate(t)
- defer storage.CustomResource.Store.DestroyFunc()
+ defer storage.CustomResource.Store.(*genericregistry.Store).DestroyFunc()
name := "foo"
var cr unstructured.Unstructured
ctx := genericapirequest.WithNamespace(genericapirequest.NewContext(), metav1.NamespaceDefault)
key := "/noxus/" + metav1.NamespaceDefault + "/" + name
- if err := storage.CustomResource.Storage.Create(ctx, key, &validCustomResource, &cr, 0, false); err != nil {
+ if err := storage.CustomResource.Store.(*genericregistry.Store).Storage.Create(ctx, key, &validCustomResource, &cr, 0, false); err != nil {
t.Fatalf("error setting new custom resource (key: %s) %v: %v", key, validCustomResource, err)
}
@@ -676,14 +679,14 @@ func TestScaleUpdateWithoutResourceVersionWithConflicts(t *testing.T) {
func TestScaleUpdateWithResourceVersionWithConflicts(t *testing.T) {
storage, server := newStorage(t)
defer server.Terminate(t)
- defer storage.CustomResource.Store.DestroyFunc()
+ defer storage.CustomResource.Store.(*genericregistry.Store).DestroyFunc()
name := "foo"
var cr unstructured.Unstructured
ctx := genericapirequest.WithNamespace(genericapirequest.NewContext(), metav1.NamespaceDefault)
key := "/noxus/" + metav1.NamespaceDefault + "/" + name
- if err := storage.CustomResource.Storage.Create(ctx, key, &validCustomResource, &cr, 0, false); err != nil {
+ if err := storage.CustomResource.Store.(*genericregistry.Store).Storage.Create(ctx, key, &validCustomResource, &cr, 0, false); err != nil {
t.Fatalf("error setting new custom resource (key: %s) %v: %v", key, validCustomResource, err)
}
diff --git a/staging/src/k8s.io/apiextensions-apiserver/pkg/registry/customresource/status_strategy.go b/staging/src/k8s.io/apiextensions-apiserver/pkg/registry/customresource/status_strategy.go
index ea5a412af59fa..bde96d8541327 100644
--- a/staging/src/k8s.io/apiextensions-apiserver/pkg/registry/customresource/status_strategy.go
+++ b/staging/src/k8s.io/apiextensions-apiserver/pkg/registry/customresource/status_strategy.go
@@ -81,6 +81,7 @@ func (a statusStrategy) PrepareForUpdate(ctx context.Context, obj, old runtime.O
// set status
newCustomResourceObject.SetManagedFields(managedFields)
+
newCustomResource = newCustomResourceObject.UnstructuredContent()
if ok {
newCustomResource["status"] = status
diff --git a/staging/src/k8s.io/apiextensions-apiserver/pkg/registry/customresource/strategy.go b/staging/src/k8s.io/apiextensions-apiserver/pkg/registry/customresource/strategy.go
index 1bc57e291a9e9..5e1f8c7fc33cc 100644
--- a/staging/src/k8s.io/apiextensions-apiserver/pkg/registry/customresource/strategy.go
+++ b/staging/src/k8s.io/apiextensions-apiserver/pkg/registry/customresource/strategy.go
@@ -34,6 +34,7 @@ import (
apiextensionsfeatures "k8s.io/apiextensions-apiserver/pkg/features"
apiequality "k8s.io/apimachinery/pkg/api/equality"
"k8s.io/apimachinery/pkg/api/meta"
+ apivalidation "k8s.io/apimachinery/pkg/api/validation"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/fields"
@@ -44,6 +45,7 @@ import (
"k8s.io/apimachinery/pkg/util/validation/field"
celconfig "k8s.io/apiserver/pkg/apis/cel"
"k8s.io/apiserver/pkg/cel/common"
+ genericapirequest "k8s.io/apiserver/pkg/endpoints/request"
"k8s.io/apiserver/pkg/registry/generic"
apiserverstorage "k8s.io/apiserver/pkg/storage"
"k8s.io/apiserver/pkg/storage/names"
@@ -73,7 +75,7 @@ type selectableField struct {
err error
}
-func NewStrategy(typer runtime.ObjectTyper, namespaceScoped bool, kind schema.GroupVersionKind, schemaValidator, statusSchemaValidator validation.SchemaValidator, structuralSchema *structuralschema.Structural, status *apiextensions.CustomResourceSubresourceStatus, scale *apiextensions.CustomResourceSubresourceScale, selectableFields []v1.SelectableField) customResourceStrategy {
+func NewStrategy(typer runtime.ObjectTyper, namespaceScoped bool, kind schema.GroupVersionKind, kcpValidateName apivalidation.ValidateNameFunc, schemaValidator, statusSchemaValidator validation.SchemaValidator, structuralSchema *structuralschema.Structural, status *apiextensions.CustomResourceSubresourceStatus, scale *apiextensions.CustomResourceSubresourceScale, selectableFields []v1.SelectableField) customResourceStrategy {
var celValidator *cel.Validator
celValidator = cel.NewValidator(structuralSchema, true, celconfig.PerCallLimit) // CEL programs are compiled and cached here
@@ -84,6 +86,8 @@ func NewStrategy(typer runtime.ObjectTyper, namespaceScoped bool, kind schema.Gr
status: status,
scale: scale,
validator: customResourceValidator{
+ kcpValidateName: kcpValidateName,
+
namespaceScoped: namespaceScoped,
kind: kind,
schemaValidator: schemaValidator,
@@ -152,6 +156,10 @@ func (a customResourceStrategy) PrepareForCreate(ctx context.Context, obj runtim
}
accessor, _ := meta.Accessor(obj)
+ if _, found := accessor.GetAnnotations()[genericapirequest.ShardAnnotationKey]; found {
+ // replicated objects arrive with the generation field already set; keep it to avoid an additional UPDATE request caused by a generation mismatch
+ return
+ }
accessor.SetGeneration(1)
}
@@ -182,6 +190,11 @@ func (a customResourceStrategy) PrepareForUpdate(ctx context.Context, obj, old r
if !apiequality.Semantic.DeepEqual(newCopyContent, oldCopyContent) {
oldAccessor, _ := meta.Accessor(oldCustomResourceObject)
newAccessor, _ := meta.Accessor(newCustomResourceObject)
+ if _, found := oldAccessor.GetAnnotations()[genericapirequest.ShardAnnotationKey]; found {
+ // the presence of the annotation indicates the object is from the cache server.
+ // since the objects from the cache should not be modified in any way, just return early.
+ return
+ }
newAccessor.SetGeneration(oldAccessor.GetGeneration() + 1)
}
}
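
The net effect of the two PrepareFor* changes above is: skip the usual generation handling whenever the shard annotation marks the object as replicated from the cache server. A reduced sketch of that decision (the annotation key below is an illustrative stand-in for genericapirequest.ShardAnnotationKey):

package main

import "fmt"

// illustrative stand-in; the real key is genericapirequest.ShardAnnotationKey
const shardAnnotation = "example.dev/shard"

func nextGeneration(oldGen int64, specChanged bool, annotations map[string]string) int64 {
	if !specChanged {
		return oldGen
	}
	if _, replicated := annotations[shardAnnotation]; replicated {
		// objects replicated from the cache server must not be mutated in any way
		return oldGen
	}
	return oldGen + 1
}

func main() {
	fmt.Println(nextGeneration(3, true, nil))                                    // 4
	fmt.Println(nextGeneration(3, true, map[string]string{shardAnnotation: ""})) // 3
}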
diff --git a/staging/src/k8s.io/apiextensions-apiserver/pkg/registry/customresource/validator.go b/staging/src/k8s.io/apiextensions-apiserver/pkg/registry/customresource/validator.go
index eabb3fd572dbf..5bc087aafdcea 100644
--- a/staging/src/k8s.io/apiextensions-apiserver/pkg/registry/customresource/validator.go
+++ b/staging/src/k8s.io/apiextensions-apiserver/pkg/registry/customresource/validator.go
@@ -37,6 +37,8 @@ import (
)
type customResourceValidator struct {
+ kcpValidateName validation.ValidateNameFunc
+
namespaceScoped bool
kind schema.GroupVersionKind
schemaValidator apiextensionsvalidation.SchemaValidator
@@ -50,7 +52,7 @@ func (a customResourceValidator) Validate(ctx context.Context, obj *unstructured
var allErrs field.ErrorList
- allErrs = append(allErrs, validation.ValidateObjectMetaAccessor(obj, a.namespaceScoped, validation.NameIsDNSSubdomain, field.NewPath("metadata"))...)
+ allErrs = append(allErrs, validation.ValidateObjectMetaAccessor(obj, a.namespaceScoped, a.kcpValidateName, field.NewPath("metadata"))...)
allErrs = append(allErrs, apiextensionsvalidation.ValidateCustomResource(nil, obj.UnstructuredContent(), a.schemaValidator)...)
allErrs = append(allErrs, a.ValidateScaleSpec(ctx, obj, scale)...)
allErrs = append(allErrs, a.ValidateScaleStatus(ctx, obj, scale)...)
@@ -122,7 +124,12 @@ func (a customResourceValidator) ValidateTypeMeta(ctx context.Context, obj *unst
if typeAccessor.GetKind() != a.kind.Kind {
allErrs = append(allErrs, field.Invalid(field.NewPath("kind"), typeAccessor.GetKind(), fmt.Sprintf("must be %v", a.kind.Kind)))
}
- if typeAccessor.GetAPIVersion() != a.kind.Group+"/"+a.kind.Version {
+ // HACK: support the case when we add core resources through CRDs (KCP scenario)
+ expectedAPIVersion := a.kind.Group + "/" + a.kind.Version
+ if a.kind.Group == "" {
+ expectedAPIVersion = a.kind.Version
+ }
+ if typeAccessor.GetAPIVersion() != expectedAPIVersion {
allErrs = append(allErrs, field.Invalid(field.NewPath("apiVersion"), typeAccessor.GetAPIVersion(), fmt.Sprintf("must be %v", a.kind.Group+"/"+a.kind.Version)))
}
return allErrs
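
The apiVersion check now accounts for the legacy core group, whose apiVersion carries no group segment; a one-line sketch of the expected values:

package main

import "fmt"

// expectedAPIVersion mirrors the check above: the core group ("") yields a
// bare version string rather than "/<version>".
func expectedAPIVersion(group, version string) string {
	if group == "" {
		return version // core: "v1", not "/v1"
	}
	return group + "/" + version
}

func main() {
	fmt.Println(expectedAPIVersion("", "v1"))     // v1
	fmt.Println(expectedAPIVersion("apps", "v1")) // apps/v1
}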
diff --git a/staging/src/k8s.io/apiserver/pkg/admission/attributes.go b/staging/src/k8s.io/apiserver/pkg/admission/attributes.go
index 1d291f6b22efd..cca038d97c738 100644
--- a/staging/src/k8s.io/apiserver/pkg/admission/attributes.go
+++ b/staging/src/k8s.io/apiserver/pkg/admission/attributes.go
@@ -21,6 +21,7 @@ import (
"strings"
"sync"
+ "github.com/kcp-dev/logicalcluster/v3"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apimachinery/pkg/util/validation"
@@ -41,6 +42,8 @@ type attributesRecord struct {
oldObject runtime.Object
userInfo user.Info
+ cluster logicalcluster.Name
+
// other elements are always accessed in single goroutine.
// But ValidatingAdmissionWebhook add annotations concurrently.
annotations map[string]annotation
@@ -115,6 +118,14 @@ func (record *attributesRecord) GetUserInfo() user.Info {
return record.userInfo
}
+func (record *attributesRecord) SetCluster(cluster logicalcluster.Name) {
+ record.cluster = cluster
+}
+
+func (record *attributesRecord) GetCluster() logicalcluster.Name {
+ return record.cluster
+}
+
// getAnnotations implements privateAnnotationsGetter.It's a private method used
// by WithAudit decorator.
func (record *attributesRecord) getAnnotations(maxLevel auditinternal.Level) map[string]string {
diff --git a/staging/src/k8s.io/apiserver/pkg/admission/configuration/mutating_webhook_manager.go b/staging/src/k8s.io/apiserver/pkg/admission/configuration/mutating_webhook_manager.go
index 3ecc00b74cbd1..b472177e6b19f 100644
--- a/staging/src/k8s.io/apiserver/pkg/admission/configuration/mutating_webhook_manager.go
+++ b/staging/src/k8s.io/apiserver/pkg/admission/configuration/mutating_webhook_manager.go
@@ -27,6 +27,7 @@ import (
"k8s.io/apiserver/pkg/admission/plugin/webhook"
"k8s.io/apiserver/pkg/admission/plugin/webhook/generic"
"k8s.io/client-go/informers"
+ admissionregistrationinformers "k8s.io/client-go/informers/admissionregistration/v1"
admissionregistrationlisters "k8s.io/client-go/listers/admissionregistration/v1"
"k8s.io/client-go/tools/cache"
"k8s.io/client-go/tools/cache/synctrack"
@@ -52,6 +53,10 @@ var _ generic.Source = &mutatingWebhookConfigurationManager{}
func NewMutatingWebhookConfigurationManager(f informers.SharedInformerFactory) generic.Source {
informer := f.Admissionregistration().V1().MutatingWebhookConfigurations()
+ return NewMutatingWebhookConfigurationManagerForInformer(informer)
+}
+
+func NewMutatingWebhookConfigurationManagerForInformer(informer admissionregistrationinformers.MutatingWebhookConfigurationInformer) generic.Source {
manager := &mutatingWebhookConfigurationManager{
lister: informer.Lister(),
createMutatingWebhookAccessor: webhook.NewMutatingWebhookAccessor,
diff --git a/staging/src/k8s.io/apiserver/pkg/admission/configuration/validating_webhook_manager.go b/staging/src/k8s.io/apiserver/pkg/admission/configuration/validating_webhook_manager.go
index b423321177020..0dbcec7a8067b 100644
--- a/staging/src/k8s.io/apiserver/pkg/admission/configuration/validating_webhook_manager.go
+++ b/staging/src/k8s.io/apiserver/pkg/admission/configuration/validating_webhook_manager.go
@@ -27,6 +27,7 @@ import (
"k8s.io/apiserver/pkg/admission/plugin/webhook"
"k8s.io/apiserver/pkg/admission/plugin/webhook/generic"
"k8s.io/client-go/informers"
+ admissionregistrationinformers "k8s.io/client-go/informers/admissionregistration/v1"
admissionregistrationlisters "k8s.io/client-go/listers/admissionregistration/v1"
"k8s.io/client-go/tools/cache"
"k8s.io/client-go/tools/cache/synctrack"
@@ -52,6 +53,10 @@ var _ generic.Source = &validatingWebhookConfigurationManager{}
func NewValidatingWebhookConfigurationManager(f informers.SharedInformerFactory) generic.Source {
informer := f.Admissionregistration().V1().ValidatingWebhookConfigurations()
+ return NewValidatingWebhookConfigurationManagerForInformer(informer)
+}
+
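+// NewValidatingWebhookConfigurationManagerForInformer builds the manager from an
+// explicit informer, so that kcp can supply its own (e.g. cluster-scoped)
+// informer instead of a shared informer factory.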
+func NewValidatingWebhookConfigurationManagerForInformer(informer admissionregistrationinformers.ValidatingWebhookConfigurationInformer) generic.Source {
manager := &validatingWebhookConfigurationManager{
lister: informer.Lister(),
createValidatingWebhookAccessor: webhook.NewValidatingWebhookAccessor,
diff --git a/staging/src/k8s.io/apiserver/pkg/admission/interfaces.go b/staging/src/k8s.io/apiserver/pkg/admission/interfaces.go
index ba979c973f1c8..be735f075b884 100644
--- a/staging/src/k8s.io/apiserver/pkg/admission/interfaces.go
+++ b/staging/src/k8s.io/apiserver/pkg/admission/interfaces.go
@@ -20,6 +20,7 @@ import (
"context"
"io"
+ "github.com/kcp-dev/logicalcluster/v3"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
auditinternal "k8s.io/apiserver/pkg/apis/audit"
@@ -59,6 +60,9 @@ type Attributes interface {
// GetUserInfo is information about the requesting user
GetUserInfo() user.Info
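+	// GetCluster and SetCluster expose the logical cluster this request is
+	// addressed to; they are kcp-specific additions to the upstream interface.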
+ GetCluster() logicalcluster.Name
+ SetCluster(logicalcluster.Name)
+
// AddAnnotation sets annotation according to key-value pair. The key should be qualified, e.g., podsecuritypolicy.admission.k8s.io/admit-policy, where
// "podsecuritypolicy" is the name of the plugin, "admission.k8s.io" is the name of the organization, "admit-policy" is the key name.
// An error is returned if the format of key is invalid. When trying to overwrite annotation with a new value, an error is returned.
diff --git a/staging/src/k8s.io/apiserver/pkg/admission/plugin/namespace/lifecycle/admission.go b/staging/src/k8s.io/apiserver/pkg/admission/plugin/namespace/lifecycle/admission.go
index 936a95e45cc15..1d9a65df5d491 100644
--- a/staging/src/k8s.io/apiserver/pkg/admission/plugin/namespace/lifecycle/admission.go
+++ b/staging/src/k8s.io/apiserver/pkg/admission/plugin/namespace/lifecycle/admission.go
@@ -22,6 +22,10 @@ import (
"io"
"time"
+ kcpcache "github.com/kcp-dev/apimachinery/v2/pkg/cache"
+ kcpkubernetesinformers "github.com/kcp-dev/client-go/informers"
+ kcpkubernetesclientset "github.com/kcp-dev/client-go/kubernetes"
+ kcpcorev1listers "github.com/kcp-dev/client-go/listers/core/v1"
"k8s.io/klog/v2"
v1 "k8s.io/api/core/v1"
@@ -31,10 +35,7 @@ import (
utilcache "k8s.io/apimachinery/pkg/util/cache"
"k8s.io/apimachinery/pkg/util/sets"
"k8s.io/apiserver/pkg/admission"
- "k8s.io/apiserver/pkg/admission/initializer"
- "k8s.io/client-go/informers"
- "k8s.io/client-go/kubernetes"
- corelisters "k8s.io/client-go/listers/core/v1"
+ genericapirequest "k8s.io/apiserver/pkg/endpoints/request"
"k8s.io/utils/clock"
)
@@ -62,16 +63,16 @@ func Register(plugins *admission.Plugins) {
// It enforces life-cycle constraints around a Namespace depending on its Phase
type Lifecycle struct {
*admission.Handler
- client kubernetes.Interface
+ client kcpkubernetesclientset.ClusterInterface
immortalNamespaces sets.String
- namespaceLister corelisters.NamespaceLister
+ namespaceLister kcpcorev1listers.NamespaceClusterLister
// forceLiveLookupCache holds a list of entries for namespaces that we have a strong reason to believe are stale in our local cache.
// if a namespace is in this cache, then we will ignore our local state and always fetch latest from api server.
forceLiveLookupCache *utilcache.LRUExpireCache
}
-var _ = initializer.WantsExternalKubeInformerFactory(&Lifecycle{})
-var _ = initializer.WantsExternalKubeClientSet(&Lifecycle{})
+//var _ = initializer.WantsExternalKubeInformerFactory(&Lifecycle{})
+//var _ = initializer.WantsExternalKubeClientSet(&Lifecycle{})
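+// The upstream initializer interface assertions above no longer hold: the setters
+// below now take kcp cluster-aware types instead of the upstream ones.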
// Admit makes an admission decision based on the request attributes
func (l *Lifecycle) Admit(ctx context.Context, a admission.Attributes, o admission.ObjectInterfaces) error {
@@ -85,13 +86,19 @@ func (l *Lifecycle) Admit(ctx context.Context, a admission.Attributes, o admissi
return nil
}
+ clusterName, err := genericapirequest.ClusterNameFrom(ctx)
+ if err != nil {
+ return errors.NewInternalError(err)
+ }
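+	// Build a cluster-aware cache key (of the form "<cluster>|<name>" for
+	// cluster-scoped objects) so that namespaces with the same name in different
+	// logical clusters do not collide in the force-live-lookup cache.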
+ namespaceKey := kcpcache.ToClusterAwareKey(clusterName.String(), "", a.GetName())
+
if a.GetKind().GroupKind() == v1.SchemeGroupVersion.WithKind("Namespace").GroupKind() {
// if a namespace is deleted, we want to prevent all further creates into it
// while it is undergoing termination. to reduce incidences where the cache
// is slow to update, we add the namespace into a force live lookup list to ensure
// we are not looking at stale state.
if a.GetOperation() == admission.Delete {
- l.forceLiveLookupCache.Add(a.GetName(), true, forceLiveLookupTTL)
+ l.forceLiveLookupCache.Add(namespaceKey, true, forceLiveLookupTTL)
}
// allow all operations to namespaces
return nil
@@ -114,10 +121,9 @@ func (l *Lifecycle) Admit(ctx context.Context, a admission.Attributes, o admissi
var (
exists bool
- err error
)
- namespace, err := l.namespaceLister.Get(a.GetNamespace())
+ namespace, err := l.namespaceLister.Cluster(clusterName).Get(a.GetNamespace())
if err != nil {
if !errors.IsNotFound(err) {
return errors.NewInternalError(err)
@@ -130,7 +136,7 @@ func (l *Lifecycle) Admit(ctx context.Context, a admission.Attributes, o admissi
// give the cache time to observe the namespace before rejecting a create.
// this helps when creating a namespace and immediately creating objects within it.
time.Sleep(missingNamespaceWait)
- namespace, err = l.namespaceLister.Get(a.GetNamespace())
+ namespace, err = l.namespaceLister.Cluster(clusterName).Get(a.GetNamespace())
switch {
case errors.IsNotFound(err):
// no-op
@@ -146,7 +152,7 @@ func (l *Lifecycle) Admit(ctx context.Context, a admission.Attributes, o admissi
// forceLiveLookup if true will skip looking at local cache state and instead always make a live call to server.
forceLiveLookup := false
- if _, ok := l.forceLiveLookupCache.Get(a.GetNamespace()); ok {
+ if _, ok := l.forceLiveLookupCache.Get(namespaceKey); ok {
// we think the namespace was marked for deletion, but our current local cache says otherwise, we will force a live lookup.
forceLiveLookup = exists && namespace.Status.Phase == v1.NamespaceActive
}
@@ -154,7 +160,7 @@ func (l *Lifecycle) Admit(ctx context.Context, a admission.Attributes, o admissi
// refuse to operate on non-existent namespaces
if !exists || forceLiveLookup {
// as a last resort, make a call directly to storage
- namespace, err = l.client.CoreV1().Namespaces().Get(context.TODO(), a.GetNamespace(), metav1.GetOptions{})
+ namespace, err = l.client.Cluster(clusterName.Path()).CoreV1().Namespaces().Get(ctx, a.GetNamespace(), metav1.GetOptions{})
switch {
case errors.IsNotFound(err):
return err
@@ -200,14 +206,14 @@ func newLifecycleWithClock(immortalNamespaces sets.String, clock utilcache.Clock
}
// SetExternalKubeInformerFactory implements the WantsExternalKubeInformerFactory interface.
-func (l *Lifecycle) SetExternalKubeInformerFactory(f informers.SharedInformerFactory) {
+func (l *Lifecycle) SetExternalKubeInformerFactory(f kcpkubernetesinformers.SharedInformerFactory) {
namespaceInformer := f.Core().V1().Namespaces()
l.namespaceLister = namespaceInformer.Lister()
l.SetReadyFunc(namespaceInformer.Informer().HasSynced)
}
// SetExternalKubeClientSet implements the WantsExternalKubeClientSet interface.
-func (l *Lifecycle) SetExternalKubeClientSet(client kubernetes.Interface) {
+func (l *Lifecycle) SetExternalKubeClientSet(client kcpkubernetesclientset.ClusterInterface) {
l.client = client
}
diff --git a/staging/src/k8s.io/apiserver/pkg/admission/plugin/policy/generic/accessor.go b/staging/src/k8s.io/apiserver/pkg/admission/plugin/policy/generic/accessor.go
index 515634f00628a..09e25c5c51d61 100644
--- a/staging/src/k8s.io/apiserver/pkg/admission/plugin/policy/generic/accessor.go
+++ b/staging/src/k8s.io/apiserver/pkg/admission/plugin/policy/generic/accessor.go
@@ -24,6 +24,7 @@ import (
type PolicyAccessor interface {
GetName() string
GetNamespace() string
+ GetCluster() string
GetParamKind() *v1.ParamKind
GetMatchConstraints() *v1.MatchResources
GetFailurePolicy() *v1.FailurePolicyType
@@ -32,6 +33,7 @@ type PolicyAccessor interface {
type BindingAccessor interface {
GetName() string
GetNamespace() string
+ GetCluster() string
// GetPolicyName returns the name of the (Validating/Mutating)AdmissionPolicy,
// which is cluster-scoped, so namespace is usually left blank.
diff --git a/staging/src/k8s.io/apiserver/pkg/admission/plugin/policy/generic/plugin.go b/staging/src/k8s.io/apiserver/pkg/admission/plugin/policy/generic/plugin.go
index 03aebdd58ac10..c48aff21716c7 100644
--- a/staging/src/k8s.io/apiserver/pkg/admission/plugin/policy/generic/plugin.go
+++ b/staging/src/k8s.io/apiserver/pkg/admission/plugin/policy/generic/plugin.go
@@ -21,6 +21,7 @@ import (
"errors"
"fmt"
+ "github.com/kcp-dev/logicalcluster/v3"
admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
"k8s.io/apimachinery/pkg/api/meta"
"k8s.io/apimachinery/pkg/runtime/schema"
@@ -32,12 +33,13 @@ import (
"k8s.io/apiserver/pkg/authorization/authorizer"
"k8s.io/client-go/dynamic"
"k8s.io/client-go/informers"
+ coreinformers "k8s.io/client-go/informers/core/v1"
"k8s.io/client-go/kubernetes"
)
// H is the Hook type generated by the source and consumed by the dispatcher.
// !TODO: Just pass in a Plugin[H] with accessors to all this information
-type sourceFactory[H any] func(informers.SharedInformerFactory, kubernetes.Interface, dynamic.Interface, meta.RESTMapper) Source[H]
+type sourceFactory[H any] func(informers.SharedInformerFactory, kubernetes.Interface, dynamic.Interface, meta.RESTMapper, logicalcluster.Name) Source[H]
type dispatcherFactory[H any] func(authorizer.Authorizer, *matching.Matcher, kubernetes.Interface) Dispatcher[H]
// admissionResources is the list of resources related to CEL-based admission
@@ -69,6 +71,9 @@ type Plugin[H any] struct {
stopCh <-chan struct{}
authorizer authorizer.Authorizer
enabled bool
+
+ namespaceInformer coreinformers.NamespaceInformer
+ clusterName logicalcluster.Name
}
var (
@@ -98,7 +103,7 @@ func NewPlugin[H any](
}
func (c *Plugin[H]) SetExternalKubeInformerFactory(f informers.SharedInformerFactory) {
- c.informerFactory = f
+ c.namespaceInformer = f.Core().V1().Namespaces()
}
func (c *Plugin[H]) SetExternalKubeClientSet(client kubernetes.Interface) {
@@ -143,8 +148,8 @@ func (c *Plugin[H]) ValidateInitialization() error {
if c.Handler == nil {
return errors.New("missing handler")
}
- if c.informerFactory == nil {
- return errors.New("missing informer factory")
+ if c.namespaceInformer == nil {
+ return errors.New("missing namespace informer")
}
if c.client == nil {
return errors.New("missing kubernetes client")
@@ -163,14 +168,14 @@ func (c *Plugin[H]) ValidateInitialization() error {
}
// Use default matcher
- namespaceInformer := c.informerFactory.Core().V1().Namespaces()
+ namespaceInformer := c.namespaceInformer
c.matcher = matching.NewMatcher(namespaceInformer.Lister(), c.client)
if err := c.matcher.ValidateInitialization(); err != nil {
return err
}
- c.source = c.sourceFactory(c.informerFactory, c.client, c.dynamicClient, c.restMapper)
+ c.source = c.sourceFactory(c.informerFactory, c.client, c.dynamicClient, c.restMapper, c.clusterName)
c.dispatcher = c.dispatcherFactory(c.authorizer, c.matcher, c.client)
pluginContext, pluginContextCancel := context.WithCancel(context.Background())
diff --git a/staging/src/k8s.io/apiserver/pkg/admission/plugin/policy/generic/plugin_kcp.go b/staging/src/k8s.io/apiserver/pkg/admission/plugin/policy/generic/plugin_kcp.go
new file mode 100644
index 0000000000000..020c562a9c4d7
--- /dev/null
+++ b/staging/src/k8s.io/apiserver/pkg/admission/plugin/policy/generic/plugin_kcp.go
@@ -0,0 +1,23 @@
+package generic
+
+import (
+ "github.com/kcp-dev/logicalcluster/v3"
+ "k8s.io/client-go/informers"
+ coreinformers "k8s.io/client-go/informers/core/v1"
+)
+
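+// The setters below are kcp-specific injection points: they let the server wire a
+// (cluster-scoped) namespace informer, an informer factory, a source factory, and
+// the logical cluster name into the plugin directly, without going through the
+// upstream admission initializer interfaces.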
+func (c *Plugin[H]) SetNamespaceInformer(i coreinformers.NamespaceInformer) {
+ c.namespaceInformer = i
+}
+
+func (c *Plugin[H]) SetInformerFactory(f informers.SharedInformerFactory) {
+ c.informerFactory = f
+}
+
+func (c *Plugin[H]) SetSourceFactory(s sourceFactory[H]) {
+ c.sourceFactory = s
+}
+
+func (c *Plugin[H]) SetClusterName(clusterName logicalcluster.Name) {
+ c.clusterName = clusterName
+}
diff --git a/staging/src/k8s.io/apiserver/pkg/admission/plugin/policy/generic/policy_source.go b/staging/src/k8s.io/apiserver/pkg/admission/plugin/policy/generic/policy_source.go
index ca6cdc884fc77..e07eb1f839eda 100644
--- a/staging/src/k8s.io/apiserver/pkg/admission/plugin/policy/generic/policy_source.go
+++ b/staging/src/k8s.io/apiserver/pkg/admission/plugin/policy/generic/policy_source.go
@@ -39,6 +39,8 @@ import (
"k8s.io/client-go/informers"
"k8s.io/client-go/tools/cache"
"k8s.io/klog/v2"
+
+ "github.com/kcp-dev/logicalcluster/v3"
)
// Interval for refreshing policies.
@@ -113,11 +115,12 @@ func NewPolicySource[P runtime.Object, B runtime.Object, E Evaluator](
paramInformerFactory informers.SharedInformerFactory,
dynamicClient dynamic.Interface,
restMapper meta.RESTMapper,
+ clusterName logicalcluster.Name,
) Source[PolicyHook[P, B, E]] {
res := &policySource[P, B, E]{
compiler: compiler,
- policyInformer: generic.NewInformer[P](policyInformer),
- bindingInformer: generic.NewInformer[B](bindingInformer),
+ policyInformer: generic.NewInformer[P](policyInformer, clusterName),
+ bindingInformer: generic.NewInformer[B](bindingInformer, clusterName),
compiledPolicies: map[types.NamespacedName]compiledPolicyEntry[E]{},
newPolicyAccessor: newPolicyAccessor,
newBindingAccessor: newBindingAccessor,
@@ -469,6 +472,7 @@ func (s *policySource[P, B, E]) compilePolicyLocked(policySpec P) E {
var emptyEvaluator E
return emptyEvaluator
}
+
key := types.NamespacedName{
Namespace: policyMeta.GetNamespace(),
Name: policyMeta.GetName(),
diff --git a/staging/src/k8s.io/apiserver/pkg/admission/plugin/policy/generic/policy_test_context.go b/staging/src/k8s.io/apiserver/pkg/admission/plugin/policy/generic/policy_test_context.go
index 964f2d904fd22..86bf479fd54c2 100644
--- a/staging/src/k8s.io/apiserver/pkg/admission/plugin/policy/generic/policy_test_context.go
+++ b/staging/src/k8s.io/apiserver/pkg/admission/plugin/policy/generic/policy_test_context.go
@@ -21,6 +21,7 @@ import (
"fmt"
"time"
+ "github.com/kcp-dev/logicalcluster/v3"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/api/meta"
@@ -179,7 +180,7 @@ func NewPolicyTestContext[P, B runtime.Object, E Evaluator](
var source Source[PolicyHook[P, B, E]]
plugin := NewPlugin[PolicyHook[P, B, E]](
admission.NewHandler(admission.Connect, admission.Create, admission.Delete, admission.Update),
- func(sif informers.SharedInformerFactory, i1 kubernetes.Interface, i2 dynamic.Interface, r meta.RESTMapper) Source[PolicyHook[P, B, E]] {
+ func(sif informers.SharedInformerFactory, i1 kubernetes.Interface, i2 dynamic.Interface, r meta.RESTMapper, c logicalcluster.Name) Source[PolicyHook[P, B, E]] {
source = NewPolicySource[P, B, E](
policyInformer,
bindingInformer,
@@ -189,6 +190,7 @@ func NewPolicyTestContext[P, B runtime.Object, E Evaluator](
sif,
i2,
r,
+ c,
)
return source
}, dispatcher)
diff --git a/staging/src/k8s.io/apiserver/pkg/admission/plugin/policy/internal/generic/informer.go b/staging/src/k8s.io/apiserver/pkg/admission/plugin/policy/internal/generic/informer.go
index acb6316ec3a2a..fa9d88544085a 100644
--- a/staging/src/k8s.io/apiserver/pkg/admission/plugin/policy/internal/generic/informer.go
+++ b/staging/src/k8s.io/apiserver/pkg/admission/plugin/policy/internal/generic/informer.go
@@ -17,6 +17,7 @@ limitations under the License.
package generic
import (
+ "github.com/kcp-dev/logicalcluster/v3"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/client-go/tools/cache"
)
@@ -32,9 +33,9 @@ type informer[T runtime.Object] struct {
// It is incumbent on the caller to ensure that the generic type argument is
// consistent with the type of the objects stored inside the SharedIndexInformer
// as they will be casted.
-func NewInformer[T runtime.Object](informe cache.SharedIndexInformer) Informer[T] {
+func NewInformer[T runtime.Object](informe cache.SharedIndexInformer, clusterName logicalcluster.Name) Informer[T] {
return informer[T]{
SharedIndexInformer: informe,
- lister: NewLister[T](informe.GetIndexer()),
+ lister: NewLister[T](informe.GetIndexer(), clusterName),
}
}
diff --git a/staging/src/k8s.io/apiserver/pkg/admission/plugin/policy/internal/generic/lister.go b/staging/src/k8s.io/apiserver/pkg/admission/plugin/policy/internal/generic/lister.go
index aa6b090324c06..ddca740687789 100644
--- a/staging/src/k8s.io/apiserver/pkg/admission/plugin/policy/internal/generic/lister.go
+++ b/staging/src/k8s.io/apiserver/pkg/admission/plugin/policy/internal/generic/lister.go
@@ -25,17 +25,21 @@ import (
"k8s.io/apimachinery/pkg/labels"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/client-go/tools/cache"
+
+ kcpcache "github.com/kcp-dev/apimachinery/v2/pkg/cache"
+ "github.com/kcp-dev/logicalcluster/v3"
)
var _ Lister[runtime.Object] = lister[runtime.Object]{}
type namespacedLister[T runtime.Object] struct {
- indexer cache.Indexer
- namespace string
+ indexer cache.Indexer
+ namespace string
+ clusterName logicalcluster.Name
}
func (w namespacedLister[T]) List(selector labels.Selector) (ret []T, err error) {
- err = cache.ListAllByNamespace(w.indexer, w.namespace, selector, func(m interface{}) {
+ err = kcpcache.ListAllByClusterAndNamespace(w.indexer, w.clusterName, w.namespace, selector, func(m interface{}) {
ret = append(ret, m.(T))
})
return ret, err
@@ -44,7 +48,9 @@ func (w namespacedLister[T]) List(selector labels.Selector) (ret []T, err error)
func (w namespacedLister[T]) Get(name string) (T, error) {
var result T
- obj, exists, err := w.indexer.GetByKey(w.namespace + "/" + name)
+ key := kcpcache.ToClusterAwareKey(w.clusterName.String(), w.namespace, name)
+
+ obj, exists, err := w.indexer.GetByKey(key)
if err != nil {
return result, err
}
@@ -61,11 +67,12 @@ func (w namespacedLister[T]) Get(name string) (T, error) {
}
type lister[T runtime.Object] struct {
- indexer cache.Indexer
+ indexer cache.Indexer
+ clusterName logicalcluster.Name
}
func (w lister[T]) List(selector labels.Selector) (ret []T, err error) {
- err = cache.ListAll(w.indexer, selector, func(m interface{}) {
+ err = kcpcache.ListAllByCluster(w.indexer, w.clusterName, selector, func(m interface{}) {
ret = append(ret, m.(T))
})
return ret, err
@@ -74,7 +81,9 @@ func (w lister[T]) List(selector labels.Selector) (ret []T, err error) {
func (w lister[T]) Get(name string) (T, error) {
var result T
- obj, exists, err := w.indexer.GetByKey(name)
+ key := kcpcache.ToClusterAwareKey(w.clusterName.String(), "", name)
+
+ obj, exists, err := w.indexer.GetByKey(key)
if err != nil {
return result, err
}
@@ -95,6 +104,6 @@ func (w lister[T]) Namespaced(namespace string) NamespacedLister[T] {
-	return namespacedLister[T]{namespace: namespace, indexer: w.indexer}
+	return namespacedLister[T]{namespace: namespace, indexer: w.indexer, clusterName: w.clusterName}
}
-func NewLister[T runtime.Object](indexer cache.Indexer) lister[T] {
- return lister[T]{indexer: indexer}
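+// NewLister returns a lister whose List and Get only see objects that belong to
+// the given logical cluster.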
+func NewLister[T runtime.Object](indexer cache.Indexer, clusterName logicalcluster.Name) lister[T] {
+ return lister[T]{indexer: indexer, clusterName: clusterName}
}
diff --git a/staging/src/k8s.io/apiserver/pkg/admission/plugin/policy/mutating/accessor.go b/staging/src/k8s.io/apiserver/pkg/admission/plugin/policy/mutating/accessor.go
index e5ef242fa371f..f83b01cd7afc7 100644
--- a/staging/src/k8s.io/apiserver/pkg/admission/plugin/policy/mutating/accessor.go
+++ b/staging/src/k8s.io/apiserver/pkg/admission/plugin/policy/mutating/accessor.go
@@ -17,6 +17,7 @@ limitations under the License.
package mutating
import (
+ "github.com/kcp-dev/logicalcluster/v3"
v1 "k8s.io/api/admissionregistration/v1"
"k8s.io/api/admissionregistration/v1alpha1"
"k8s.io/apimachinery/pkg/types"
@@ -47,6 +48,10 @@ func (v *mutatingAdmissionPolicyAccessor) GetName() string {
return v.Name
}
+func (v *mutatingAdmissionPolicyAccessor) GetCluster() string {
+ return logicalcluster.From(v.Policy).String()
+}
+
func (v *mutatingAdmissionPolicyAccessor) GetParamKind() *v1.ParamKind {
pk := v.Spec.ParamKind
if pk == nil {
@@ -86,6 +91,10 @@ func (v *mutatingAdmissionPolicyBindingAccessor) GetName() string {
return v.Name
}
+func (v *mutatingAdmissionPolicyBindingAccessor) GetCluster() string {
+ return logicalcluster.From(v.PolicyBinding).String()
+}
+
func (v *mutatingAdmissionPolicyBindingAccessor) GetPolicyName() types.NamespacedName {
return types.NamespacedName{
Namespace: "",
diff --git a/staging/src/k8s.io/apiserver/pkg/admission/plugin/policy/mutating/plugin.go b/staging/src/k8s.io/apiserver/pkg/admission/plugin/policy/mutating/plugin.go
index 527bc6a53c0ff..6429e9ef3015c 100644
--- a/staging/src/k8s.io/apiserver/pkg/admission/plugin/policy/mutating/plugin.go
+++ b/staging/src/k8s.io/apiserver/pkg/admission/plugin/policy/mutating/plugin.go
@@ -21,6 +21,7 @@ import (
celgo "github.com/google/cel-go/cel"
"io"
+ "github.com/kcp-dev/logicalcluster/v3"
"k8s.io/api/admissionregistration/v1alpha1"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/api/meta"
@@ -94,7 +95,7 @@ func NewPlugin(_ io.Reader) *Plugin {
res := &Plugin{}
res.Plugin = generic.NewPlugin(
handler,
- func(f informers.SharedInformerFactory, client kubernetes.Interface, dynamicClient dynamic.Interface, restMapper meta.RESTMapper) generic.Source[PolicyHook] {
+ func(f informers.SharedInformerFactory, client kubernetes.Interface, dynamicClient dynamic.Interface, restMapper meta.RESTMapper, clusterName logicalcluster.Name) generic.Source[PolicyHook] {
return generic.NewPolicySource(
f.Admissionregistration().V1alpha1().MutatingAdmissionPolicies().Informer(),
f.Admissionregistration().V1alpha1().MutatingAdmissionPolicyBindings().Informer(),
@@ -106,6 +107,7 @@ func NewPlugin(_ io.Reader) *Plugin {
f,
dynamicClient,
restMapper,
+ clusterName,
)
},
func(a authorizer.Authorizer, m *matching.Matcher, client kubernetes.Interface) generic.Dispatcher[PolicyHook] {
diff --git a/staging/src/k8s.io/apiserver/pkg/admission/plugin/policy/validating/accessor.go b/staging/src/k8s.io/apiserver/pkg/admission/plugin/policy/validating/accessor.go
index 628e3a6532953..05e0548995c80 100644
--- a/staging/src/k8s.io/apiserver/pkg/admission/plugin/policy/validating/accessor.go
+++ b/staging/src/k8s.io/apiserver/pkg/admission/plugin/policy/validating/accessor.go
@@ -17,6 +17,7 @@ limitations under the License.
package validating
import (
+ "github.com/kcp-dev/logicalcluster/v3"
"k8s.io/api/admissionregistration/v1"
"k8s.io/apimachinery/pkg/types"
"k8s.io/apiserver/pkg/admission/plugin/policy/generic"
@@ -46,6 +47,10 @@ func (v *validatingAdmissionPolicyAccessor) GetName() string {
return v.Name
}
+func (v *validatingAdmissionPolicyAccessor) GetCluster() string {
+ return logicalcluster.From(v.ValidatingAdmissionPolicy).String()
+}
+
func (v *validatingAdmissionPolicyAccessor) GetParamKind() *v1.ParamKind {
return v.Spec.ParamKind
}
@@ -70,6 +75,10 @@ func (v *validatingAdmissionPolicyBindingAccessor) GetName() string {
return v.Name
}
+func (v *validatingAdmissionPolicyBindingAccessor) GetCluster() string {
+ return logicalcluster.From(v.ValidatingAdmissionPolicyBinding).String()
+}
+
func (v *validatingAdmissionPolicyBindingAccessor) GetPolicyName() types.NamespacedName {
return types.NamespacedName{
Namespace: "",
diff --git a/staging/src/k8s.io/apiserver/pkg/admission/plugin/policy/validating/plugin.go b/staging/src/k8s.io/apiserver/pkg/admission/plugin/policy/validating/plugin.go
index 85db23cd8a67c..2eedd2287006d 100644
--- a/staging/src/k8s.io/apiserver/pkg/admission/plugin/policy/validating/plugin.go
+++ b/staging/src/k8s.io/apiserver/pkg/admission/plugin/policy/validating/plugin.go
@@ -21,6 +21,8 @@ import (
"io"
"sync"
+ "github.com/kcp-dev/logicalcluster/v3"
+
v1 "k8s.io/api/admissionregistration/v1"
"k8s.io/apimachinery/pkg/api/meta"
"k8s.io/apiserver/pkg/admission"
@@ -100,16 +102,17 @@ func NewPlugin(_ io.Reader) *Plugin {
p := &Plugin{
Plugin: generic.NewPlugin(
handler,
- func(f informers.SharedInformerFactory, client kubernetes.Interface, dynamicClient dynamic.Interface, restMapper meta.RESTMapper) generic.Source[PolicyHook] {
+ func(f informers.SharedInformerFactory, client kubernetes.Interface, dynamicClient dynamic.Interface, restMapper meta.RESTMapper, clusterName logicalcluster.Name) generic.Source[PolicyHook] {
return generic.NewPolicySource(
f.Admissionregistration().V1().ValidatingAdmissionPolicies().Informer(),
f.Admissionregistration().V1().ValidatingAdmissionPolicyBindings().Informer(),
NewValidatingAdmissionPolicyAccessor,
NewValidatingAdmissionPolicyBindingAccessor,
- compilePolicy,
- f,
+ CompilePolicy,
+ nil, // TODO(embik): this was done in accordance with d0a7ccbaac22d32f219b4a2c4944e72e507c3d14.
dynamicClient,
restMapper,
+ clusterName,
)
},
func(a authorizer.Authorizer, m *matching.Matcher, client kubernetes.Interface) generic.Dispatcher[PolicyHook] {
@@ -126,7 +129,7 @@ func (a *Plugin) Validate(ctx context.Context, attr admission.Attributes, o admi
return a.Plugin.Dispatch(ctx, attr, o)
}
-func compilePolicy(policy *Policy) Validator {
+func CompilePolicy(policy *Policy) Validator {
hasParam := false
if policy.Spec.ParamKind != nil {
hasParam = true
diff --git a/staging/src/k8s.io/apiserver/pkg/admission/plugin/resourcequota/admission.go b/staging/src/k8s.io/apiserver/pkg/admission/plugin/resourcequota/admission.go
index 5455b414eda66..4e0194a7b78c8 100644
--- a/staging/src/k8s.io/apiserver/pkg/admission/plugin/resourcequota/admission.go
+++ b/staging/src/k8s.io/apiserver/pkg/admission/plugin/resourcequota/admission.go
@@ -160,10 +160,11 @@ func (a *QuotaAdmission) ValidateInitialization() error {
// Validate makes admission decisions while enforcing quota
func (a *QuotaAdmission) Validate(ctx context.Context, attr admission.Attributes, o admission.ObjectInterfaces) (err error) {
- // ignore all operations that are not namespaced or creation of namespaces
- if attr.GetNamespace() == "" || isNamespaceCreation(attr) {
- return nil
- }
+	// kcp edit: allow quota on cluster-scoped resources, so don't skip them here.
+	// The upstream check ("ignore all operations that are not namespaced or
+	// creation of namespaces") is intentionally disabled:
+ // if attr.GetNamespace() == "" || isNamespaceCreation(attr) {
+ // return nil
+ // }
return a.evaluator.Evaluate(attr)
}
diff --git a/staging/src/k8s.io/apiserver/pkg/admission/plugin/resourcequota/admission_kcp.go b/staging/src/k8s.io/apiserver/pkg/admission/plugin/resourcequota/admission_kcp.go
new file mode 100644
index 0000000000000..2ecd590eee461
--- /dev/null
+++ b/staging/src/k8s.io/apiserver/pkg/admission/plugin/resourcequota/admission_kcp.go
@@ -0,0 +1,32 @@
+/*
+Copyright 2022 The KCP Authors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package resourcequota
+
+import (
+ corev1listers "k8s.io/client-go/listers/core/v1"
+ cache "k8s.io/client-go/tools/cache"
+)
+
+// SetResourceQuotaLister sets the lister on the quotaAccessor. This is used by kcp to inject a lister
+// that is scoped to a single logical cluster. Together with SetResourceQuotaInformer below, this
+// replaces the need to use a.SetExternalKubeInformerFactory().
+func (a *QuotaAdmission) SetResourceQuotaLister(lister corev1listers.ResourceQuotaLister) {
+ a.quotaAccessor.lister = lister
+}
+
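+// SetResourceQuotaInformer is used by kcp to wire the informer's HasSynced into the
+// quotaAccessor, preserving readiness checks without the shared informer factory.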
+func (a *QuotaAdmission) SetResourceQuotaInformer(informer cache.SharedIndexInformer) {
+ a.quotaAccessor.hasSynced = informer.HasSynced
+}
diff --git a/staging/src/k8s.io/apiserver/pkg/admission/plugin/resourcequota/resource_access.go b/staging/src/k8s.io/apiserver/pkg/admission/plugin/resourcequota/resource_access.go
index fd4c102e6d527..86839ceb6d499 100644
--- a/staging/src/k8s.io/apiserver/pkg/admission/plugin/resourcequota/resource_access.go
+++ b/staging/src/k8s.io/apiserver/pkg/admission/plugin/resourcequota/resource_access.go
@@ -19,6 +19,7 @@ package resourcequota
import (
"context"
"fmt"
+ "strconv"
"time"
"golang.org/x/sync/singleflight"
@@ -107,7 +108,42 @@ func (e *quotaAccessor) checkCache(quota *corev1.ResourceQuota) *corev1.Resource
return cachedQuota
}
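+// kcp: ResourceQuota objects that live in the "admin" namespace and carry the
+// experimental.quota.kcp.io/cluster-scoped="true" annotation are treated as
+// applying to the entire logical cluster; GetQuotas folds them into the result
+// for every namespace.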
+const (
+ kcpClusterScopedQuotaNamespace = "admin"
+ kcpExperimentalClusterScopedQuotaAnnotationKey = "experimental.quota.kcp.io/cluster-scoped"
+)
+
func (e *quotaAccessor) GetQuotas(namespace string) ([]corev1.ResourceQuota, error) {
+ possibleClusterScopedQuotas, err := e.lister.ResourceQuotas(kcpClusterScopedQuotaNamespace).List(labels.Everything())
+ if err != nil {
+ return nil, fmt.Errorf("error getting ResourceQuotas from namespace %q: %w", kcpClusterScopedQuotaNamespace, err)
+ }
+ // if there are no items held in our indexer, check our live-lookup LRU, if that misses, do the live lookup to prime it.
+ if len(possibleClusterScopedQuotas) == 0 {
+ lruItemObj, ok := e.liveLookupCache.Get(kcpClusterScopedQuotaNamespace)
+ if !ok || lruItemObj.(liveLookupEntry).expiry.Before(time.Now()) {
+ // TODO: If there are multiple operations at the same time and cache has just expired,
+ // this may cause multiple List operations being issued at the same time.
+ // If there is already in-flight List() for a given namespace, we should wait until
+ // it is finished and cache is updated instead of doing the same, also to avoid
+ // throttling - see #22422 for details.
+ liveList, err := e.client.CoreV1().ResourceQuotas(kcpClusterScopedQuotaNamespace).List(context.TODO(), metav1.ListOptions{})
+ if err != nil {
+ return nil, err
+ }
+ newEntry := liveLookupEntry{expiry: time.Now().Add(e.liveTTL)}
+ for i := range liveList.Items {
+ newEntry.items = append(newEntry.items, &liveList.Items[i])
+ }
+ e.liveLookupCache.Add(kcpClusterScopedQuotaNamespace, newEntry)
+ lruItemObj = newEntry
+ }
+ lruEntry := lruItemObj.(liveLookupEntry)
+ for i := range lruEntry.items {
+ possibleClusterScopedQuotas = append(possibleClusterScopedQuotas, lruEntry.items[i])
+ }
+ }
+
// determine if there are any quotas in this namespace
// if there are no quotas, we don't need to do anything
items, err := e.lister.ResourceQuotas(namespace).List(labels.Everything())
@@ -143,6 +179,19 @@ func (e *quotaAccessor) GetQuotas(namespace string) ([]corev1.ResourceQuota, err
}
}
+ for i := range possibleClusterScopedQuotas {
+ candidate := possibleClusterScopedQuotas[i]
+
+ a := candidate.Annotations[kcpExperimentalClusterScopedQuotaAnnotationKey]
+ if a == "" {
+ continue
+ }
+
+ if clusterScoped, _ := strconv.ParseBool(a); clusterScoped {
+ items = append(items, candidate)
+ }
+ }
+
resourceQuotas := []corev1.ResourceQuota{}
for i := range items {
quota := items[i]
diff --git a/staging/src/k8s.io/apiserver/pkg/admission/plugin/webhook/generic/webhook_kcp.go b/staging/src/k8s.io/apiserver/pkg/admission/plugin/webhook/generic/webhook_kcp.go
new file mode 100644
index 0000000000000..c37e349e951b8
--- /dev/null
+++ b/staging/src/k8s.io/apiserver/pkg/admission/plugin/webhook/generic/webhook_kcp.go
@@ -0,0 +1,19 @@
+package generic
+
+import (
+ coreinformers "k8s.io/client-go/informers/core/v1"
+)
+
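+// These setters are kcp-specific injection points: they let the server supply a
+// (cluster-scoped) namespace informer, a hook source, and a matching ready func
+// directly, instead of relying on SetExternalKubeInformerFactory.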
+func (a *Webhook) SetNamespaceInformer(namespaceInformer coreinformers.NamespaceInformer) {
+ a.namespaceMatcher.NamespaceLister = namespaceInformer.Lister()
+}
+
+func (a *Webhook) SetHookSource(hookSource Source) {
+ a.hookSource = hookSource
+}
+
+func (a *Webhook) SetReadyFuncFromKCP(namespaceInformer coreinformers.NamespaceInformer) {
+ a.SetReadyFunc(func() bool {
+ return namespaceInformer.Informer().HasSynced() && a.hookSource.HasSynced()
+ })
+}
diff --git a/staging/src/k8s.io/apiserver/pkg/admission/plugin/webhook/mutating/dispatcher_kcp.go b/staging/src/k8s.io/apiserver/pkg/admission/plugin/webhook/mutating/dispatcher_kcp.go
new file mode 100644
index 0000000000000..32571bbc77bff
--- /dev/null
+++ b/staging/src/k8s.io/apiserver/pkg/admission/plugin/webhook/mutating/dispatcher_kcp.go
@@ -0,0 +1,27 @@
+/*
+Copyright 2022 The Kubernetes Authors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package mutating
+
+import (
+ "k8s.io/apiserver/pkg/admission/plugin/webhook/generic"
+ webhookutil "k8s.io/apiserver/pkg/util/webhook"
+)
+
+// NewMutatingDispatcher makes newMutatingDispatcher public for external use.
+func NewMutatingDispatcher(p *Plugin) func(cm *webhookutil.ClientManager) generic.Dispatcher {
+ return newMutatingDispatcher(p)
+}
diff --git a/staging/src/k8s.io/apiserver/pkg/admission/plugin/webhook/validating/dispatcher_kcp.go b/staging/src/k8s.io/apiserver/pkg/admission/plugin/webhook/validating/dispatcher_kcp.go
new file mode 100644
index 0000000000000..5788c731d29f6
--- /dev/null
+++ b/staging/src/k8s.io/apiserver/pkg/admission/plugin/webhook/validating/dispatcher_kcp.go
@@ -0,0 +1,27 @@
+/*
+Copyright 2022 The Kubernetes Authors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package validating
+
+import (
+ "k8s.io/apiserver/pkg/admission/plugin/webhook/generic"
+ webhookutil "k8s.io/apiserver/pkg/util/webhook"
+)
+
+// NewValidatingDispatcher makes newValidatingDispatcher public for external use.
+func NewValidatingDispatcher(p *Plugin) func(cm *webhookutil.ClientManager) generic.Dispatcher {
+ return newValidatingDispatcher(p)
+}
diff --git a/staging/src/k8s.io/apiserver/pkg/authentication/serviceaccount/util.go b/staging/src/k8s.io/apiserver/pkg/authentication/serviceaccount/util.go
index dd11efbde5c53..be196f4055aab 100644
--- a/staging/src/k8s.io/apiserver/pkg/authentication/serviceaccount/util.go
+++ b/staging/src/k8s.io/apiserver/pkg/authentication/serviceaccount/util.go
@@ -20,6 +20,8 @@ import (
"fmt"
"strings"
+ "github.com/kcp-dev/logicalcluster/v3"
+
v1 "k8s.io/api/core/v1"
apimachineryvalidation "k8s.io/apimachinery/pkg/api/validation"
"k8s.io/apiserver/pkg/authentication/user"
@@ -48,6 +50,8 @@ const (
// NodeUIDKey is the key used in a user's "extra" to specify the node UID of
// the authenticating request.
NodeUIDKey = "authentication.kubernetes.io/node-uid"
+	// ClusterNameKey is the key used in a user's "extra" to record the name of the
+	// logical cluster the service account comes from.
+ ClusterNameKey = "authentication.kubernetes.io/cluster-name"
)
// MakeUsername generates a username from the given namespace and ServiceAccount name.
@@ -114,15 +118,17 @@ func MakeNamespaceGroupName(namespace string) string {
}
// UserInfo returns a user.Info interface for the given namespace, service account name and UID
-func UserInfo(namespace, name, uid string) user.Info {
+func UserInfo(clusterName logicalcluster.Name, namespace, name, uid string) user.Info {
return (&ServiceAccountInfo{
- Name: name,
- Namespace: namespace,
- UID: uid,
+ ClusterName: clusterName,
+ Name: name,
+ Namespace: namespace,
+ UID: uid,
}).UserInfo()
}
type ServiceAccountInfo struct {
+ ClusterName logicalcluster.Name
Name, Namespace, UID string
PodName, PodUID string
CredentialID string
@@ -160,6 +166,10 @@ func (sa *ServiceAccountInfo) UserInfo() user.Info {
}
}
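+	// kcp: always record the originating logical cluster under ClusterNameKey so it
+	// can be recovered from the authenticated user info later.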
+ if info.Extra == nil {
+ info.Extra = map[string][]string{}
+ }
+ info.Extra[ClusterNameKey] = []string{sa.ClusterName.String()}
return info
}
diff --git a/staging/src/k8s.io/apiserver/pkg/clientsethack/adapter.go b/staging/src/k8s.io/apiserver/pkg/clientsethack/adapter.go
new file mode 100644
index 0000000000000..dfadf9d1a2064
--- /dev/null
+++ b/staging/src/k8s.io/apiserver/pkg/clientsethack/adapter.go
@@ -0,0 +1,333 @@
+/*
+Copyright 2022 The KCP Authors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+// +kcp-code-generator:skip
+
+package clientsethack
+
+import (
+ kcpkubernetesclientset "github.com/kcp-dev/client-go/kubernetes"
+
+ "k8s.io/client-go/discovery"
+ "k8s.io/client-go/kubernetes"
+ admissionregistrationv1 "k8s.io/client-go/kubernetes/typed/admissionregistration/v1"
+ admissionregistrationv1alpha1 "k8s.io/client-go/kubernetes/typed/admissionregistration/v1alpha1"
+ admissionregistrationv1beta1 "k8s.io/client-go/kubernetes/typed/admissionregistration/v1beta1"
+ internalv1alpha1 "k8s.io/client-go/kubernetes/typed/apiserverinternal/v1alpha1"
+ appsv1 "k8s.io/client-go/kubernetes/typed/apps/v1"
+ appsv1beta1 "k8s.io/client-go/kubernetes/typed/apps/v1beta1"
+ appsv1beta2 "k8s.io/client-go/kubernetes/typed/apps/v1beta2"
+ authenticationv1 "k8s.io/client-go/kubernetes/typed/authentication/v1"
+ "k8s.io/client-go/kubernetes/typed/authentication/v1alpha1"
+ authenticationv1beta1 "k8s.io/client-go/kubernetes/typed/authentication/v1beta1"
+ authorizationv1 "k8s.io/client-go/kubernetes/typed/authorization/v1"
+ authorizationv1beta1 "k8s.io/client-go/kubernetes/typed/authorization/v1beta1"
+ autoscalingv1 "k8s.io/client-go/kubernetes/typed/autoscaling/v1"
+ autoscalingv2 "k8s.io/client-go/kubernetes/typed/autoscaling/v2"
+ autoscalingv2beta1 "k8s.io/client-go/kubernetes/typed/autoscaling/v2beta1"
+ autoscalingv2beta2 "k8s.io/client-go/kubernetes/typed/autoscaling/v2beta2"
+ batchv1 "k8s.io/client-go/kubernetes/typed/batch/v1"
+ batchv1beta1 "k8s.io/client-go/kubernetes/typed/batch/v1beta1"
+ certificatesv1 "k8s.io/client-go/kubernetes/typed/certificates/v1"
+ certificatesv1alpha1 "k8s.io/client-go/kubernetes/typed/certificates/v1alpha1"
+ certificatesv1beta1 "k8s.io/client-go/kubernetes/typed/certificates/v1beta1"
+ coordinationv1 "k8s.io/client-go/kubernetes/typed/coordination/v1"
+ coordinationv1alpha2 "k8s.io/client-go/kubernetes/typed/coordination/v1alpha2"
+ coordinationv1beta1 "k8s.io/client-go/kubernetes/typed/coordination/v1beta1"
+ corev1 "k8s.io/client-go/kubernetes/typed/core/v1"
+ discoveryv1 "k8s.io/client-go/kubernetes/typed/discovery/v1"
+ discoveryv1beta1 "k8s.io/client-go/kubernetes/typed/discovery/v1beta1"
+ eventsv1 "k8s.io/client-go/kubernetes/typed/events/v1"
+ eventsv1beta1 "k8s.io/client-go/kubernetes/typed/events/v1beta1"
+ extensionsv1beta1 "k8s.io/client-go/kubernetes/typed/extensions/v1beta1"
+ flowcontrolv1 "k8s.io/client-go/kubernetes/typed/flowcontrol/v1"
+ flowcontrolv1beta1 "k8s.io/client-go/kubernetes/typed/flowcontrol/v1beta1"
+ flowcontrolv1beta2 "k8s.io/client-go/kubernetes/typed/flowcontrol/v1beta2"
+ flowcontrolv1beta3 "k8s.io/client-go/kubernetes/typed/flowcontrol/v1beta3"
+ networkingv1 "k8s.io/client-go/kubernetes/typed/networking/v1"
+ networkingv1alpha1 "k8s.io/client-go/kubernetes/typed/networking/v1alpha1"
+ networkingv1beta1 "k8s.io/client-go/kubernetes/typed/networking/v1beta1"
+ nodev1 "k8s.io/client-go/kubernetes/typed/node/v1"
+ nodev1alpha1 "k8s.io/client-go/kubernetes/typed/node/v1alpha1"
+ nodev1beta1 "k8s.io/client-go/kubernetes/typed/node/v1beta1"
+ policyv1 "k8s.io/client-go/kubernetes/typed/policy/v1"
+ policyv1beta1 "k8s.io/client-go/kubernetes/typed/policy/v1beta1"
+ rbacv1 "k8s.io/client-go/kubernetes/typed/rbac/v1"
+ rbacv1alpha1 "k8s.io/client-go/kubernetes/typed/rbac/v1alpha1"
+ rbacv1beta1 "k8s.io/client-go/kubernetes/typed/rbac/v1beta1"
+ resourcev1alpha3 "k8s.io/client-go/kubernetes/typed/resource/v1alpha3"
+ resourcev1beta1 "k8s.io/client-go/kubernetes/typed/resource/v1beta1"
+ resourcev1beta2 "k8s.io/client-go/kubernetes/typed/resource/v1beta2"
+ schedulingv1 "k8s.io/client-go/kubernetes/typed/scheduling/v1"
+ schedulingv1alpha1 "k8s.io/client-go/kubernetes/typed/scheduling/v1alpha1"
+ schedulingv1beta1 "k8s.io/client-go/kubernetes/typed/scheduling/v1beta1"
+ storagev1 "k8s.io/client-go/kubernetes/typed/storage/v1"
+ storagev1alpha1 "k8s.io/client-go/kubernetes/typed/storage/v1alpha1"
+ storagev1beta1 "k8s.io/client-go/kubernetes/typed/storage/v1beta1"
+ storagemigrationv1alpha1 "k8s.io/client-go/kubernetes/typed/storagemigration/v1alpha1"
+)
+
+// Interface allows us to hold onto a strongly-typed, cluster-aware clientset here, while
+// passing a cluster-unaware (but non-functional) clientset to k8s libraries. We export this type so that we
+// can get the cluster-aware clientset back using casting in admission plugin initialization.
+type Interface interface {
+ kubernetes.Interface
+ ClusterAware() kcpkubernetesclientset.ClusterInterface
+}
+
+var _ Interface = (*hack)(nil)
+
+// Wrap adapts a cluster-aware clientset to a cluster-unaware wrapper that can divulge it after casting.
+func Wrap(clusterAware kcpkubernetesclientset.ClusterInterface) Interface {
+ return &hack{clusterAware: clusterAware}
+}
+
+// Unwrap extracts a cluster-aware clientset from the cluster-unaware wrapper, or panics if we get the wrong input.
+func Unwrap(clusterUnaware kubernetes.Interface) kcpkubernetesclientset.ClusterInterface {
+ return clusterUnaware.(Interface).ClusterAware()
+}
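+
+// A minimal sketch of the intended round-trip (variable names are illustrative):
+//
+//	var clusterClient kcpkubernetesclientset.ClusterInterface // the real, cluster-aware clientset
+//	wrapped := Wrap(clusterClient)                            // satisfies kubernetes.Interface
+//	recovered := Unwrap(wrapped)                              // yields clusterClient again
+//
+// Any call on the upstream interface itself panics, as the methods below make explicit.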
+
+type hack struct {
+ clusterAware kcpkubernetesclientset.ClusterInterface
+}
+
+func (h *hack) AdmissionregistrationV1alpha1() admissionregistrationv1alpha1.AdmissionregistrationV1alpha1Interface {
+ panic("programmer error: using a cluster-unaware clientset, need to cast this to use the cluster-aware one!")
+}
+
+func (h *hack) AuthenticationV1alpha1() v1alpha1.AuthenticationV1alpha1Interface {
+ panic("programmer error: using a cluster-unaware clientset, need to cast this to use the cluster-aware one!")
+}
+
+func (h *hack) NetworkingV1alpha1() networkingv1alpha1.NetworkingV1alpha1Interface {
+ panic("programmer error: using a cluster-unaware clientset, need to cast this to use the cluster-aware one!")
+}
+
+func (h *hack) ResourceV1alpha3() resourcev1alpha3.ResourceV1alpha3Interface {
+ panic("programmer error: using a cluster-unaware clientset, need to cast this to use the cluster-aware one!")
+}
+
+func (h *hack) ResourceV1beta1() resourcev1beta1.ResourceV1beta1Interface {
+ panic("programmer error: using a cluster-unaware clientset, need to cast this to use the cluster-aware one!")
+}
+
+func (h *hack) ResourceV1beta2() resourcev1beta2.ResourceV1beta2Interface {
+ panic("programmer error: using a cluster-unaware clientset, need to cast this to use the cluster-aware one!")
+}
+
+func (h *hack) AdmissionregistrationV1() admissionregistrationv1.AdmissionregistrationV1Interface {
+ panic("programmer error: using a cluster-unaware clientset, need to cast this to use the cluster-aware one!")
+}
+
+func (h *hack) AdmissionregistrationV1beta1() admissionregistrationv1beta1.AdmissionregistrationV1beta1Interface {
+ panic("programmer error: using a cluster-unaware clientset, need to cast this to use the cluster-aware one!")
+}
+
+func (h *hack) InternalV1alpha1() internalv1alpha1.InternalV1alpha1Interface {
+ panic("programmer error: using a cluster-unaware clientset, need to cast this to use the cluster-aware one!")
+}
+
+func (h *hack) AppsV1() appsv1.AppsV1Interface {
+ panic("programmer error: using a cluster-unaware clientset, need to cast this to use the cluster-aware one!")
+}
+
+func (h *hack) AppsV1beta1() appsv1beta1.AppsV1beta1Interface {
+ panic("programmer error: using a cluster-unaware clientset, need to cast this to use the cluster-aware one!")
+}
+
+func (h *hack) AppsV1beta2() appsv1beta2.AppsV1beta2Interface {
+ panic("programmer error: using a cluster-unaware clientset, need to cast this to use the cluster-aware one!")
+}
+
+func (h *hack) AuthenticationV1() authenticationv1.AuthenticationV1Interface {
+ panic("programmer error: using a cluster-unaware clientset, need to cast this to use the cluster-aware one!")
+}
+
+func (h *hack) AuthenticationV1beta1() authenticationv1beta1.AuthenticationV1beta1Interface {
+ panic("programmer error: using a cluster-unaware clientset, need to cast this to use the cluster-aware one!")
+}
+
+func (h *hack) AuthorizationV1() authorizationv1.AuthorizationV1Interface {
+ panic("programmer error: using a cluster-unaware clientset, need to cast this to use the cluster-aware one!")
+}
+
+func (h *hack) AuthorizationV1beta1() authorizationv1beta1.AuthorizationV1beta1Interface {
+ panic("programmer error: using a cluster-unaware clientset, need to cast this to use the cluster-aware one!")
+}
+
+func (h *hack) AutoscalingV1() autoscalingv1.AutoscalingV1Interface {
+ panic("programmer error: using a cluster-unaware clientset, need to cast this to use the cluster-aware one!")
+}
+
+func (h *hack) AutoscalingV2() autoscalingv2.AutoscalingV2Interface {
+ panic("programmer error: using a cluster-unaware clientset, need to cast this to use the cluster-aware one!")
+}
+
+func (h *hack) AutoscalingV2beta1() autoscalingv2beta1.AutoscalingV2beta1Interface {
+ panic("programmer error: using a cluster-unaware clientset, need to cast this to use the cluster-aware one!")
+}
+
+func (h *hack) AutoscalingV2beta2() autoscalingv2beta2.AutoscalingV2beta2Interface {
+ panic("programmer error: using a cluster-unaware clientset, need to cast this to use the cluster-aware one!")
+}
+
+func (h *hack) BatchV1() batchv1.BatchV1Interface {
+ panic("programmer error: using a cluster-unaware clientset, need to cast this to use the cluster-aware one!")
+}
+
+func (h *hack) BatchV1beta1() batchv1beta1.BatchV1beta1Interface {
+ panic("programmer error: using a cluster-unaware clientset, need to cast this to use the cluster-aware one!")
+}
+
+func (h *hack) CertificatesV1() certificatesv1.CertificatesV1Interface {
+ panic("programmer error: using a cluster-unaware clientset, need to cast this to use the cluster-aware one!")
+}
+
+func (h *hack) CertificatesV1alpha1() certificatesv1alpha1.CertificatesV1alpha1Interface {
+ panic("programmer error: using a cluster-unaware clientset, need to cast this to use the cluster-aware one!")
+}
+
+func (h *hack) CertificatesV1beta1() certificatesv1beta1.CertificatesV1beta1Interface {
+ panic("programmer error: using a cluster-unaware clientset, need to cast this to use the cluster-aware one!")
+}
+
+func (h *hack) CoordinationV1alpha2() coordinationv1alpha2.CoordinationV1alpha2Interface {
+ panic("programmer error: using a cluster-unaware clientset, need to cast this to use the cluster-aware one!")
+}
+
+func (h *hack) CoordinationV1beta1() coordinationv1beta1.CoordinationV1beta1Interface {
+ panic("programmer error: using a cluster-unaware clientset, need to cast this to use the cluster-aware one!")
+}
+
+func (h *hack) CoordinationV1() coordinationv1.CoordinationV1Interface {
+ panic("programmer error: using a cluster-unaware clientset, need to cast this to use the cluster-aware one!")
+}
+
+func (h *hack) CoreV1() corev1.CoreV1Interface {
+ panic("programmer error: using a cluster-unaware clientset, need to cast this to use the cluster-aware one!")
+}
+
+func (h *hack) DiscoveryV1() discoveryv1.DiscoveryV1Interface {
+ panic("programmer error: using a cluster-unaware clientset, need to cast this to use the cluster-aware one!")
+}
+
+func (h *hack) DiscoveryV1beta1() discoveryv1beta1.DiscoveryV1beta1Interface {
+ panic("programmer error: using a cluster-unaware clientset, need to cast this to use the cluster-aware one!")
+}
+
+func (h *hack) EventsV1() eventsv1.EventsV1Interface {
+ panic("programmer error: using a cluster-unaware clientset, need to cast this to use the cluster-aware one!")
+}
+
+func (h *hack) EventsV1beta1() eventsv1beta1.EventsV1beta1Interface {
+ panic("programmer error: using a cluster-unaware clientset, need to cast this to use the cluster-aware one!")
+}
+
+func (h *hack) ExtensionsV1beta1() extensionsv1beta1.ExtensionsV1beta1Interface {
+ panic("programmer error: using a cluster-unaware clientset, need to cast this to use the cluster-aware one!")
+}
+
+func (h *hack) FlowcontrolV1beta1() flowcontrolv1beta1.FlowcontrolV1beta1Interface {
+ panic("programmer error: using a cluster-unaware clientset, need to cast this to use the cluster-aware one!")
+}
+
+func (h *hack) FlowcontrolV1beta2() flowcontrolv1beta2.FlowcontrolV1beta2Interface {
+ panic("programmer error: using a cluster-unaware clientset, need to cast this to use the cluster-aware one!")
+}
+
+func (h *hack) FlowcontrolV1beta3() flowcontrolv1beta3.FlowcontrolV1beta3Interface {
+ panic("programmer error: using a cluster-unaware clientset, need to cast this to use the cluster-aware one!")
+}
+
+func (h *hack) FlowcontrolV1() flowcontrolv1.FlowcontrolV1Interface {
+ panic("programmer error: using a cluster-unaware clientset, need to cast this to use the cluster-aware one!")
+}
+
+func (h *hack) NetworkingV1() networkingv1.NetworkingV1Interface {
+ panic("programmer error: using a cluster-unaware clientset, need to cast this to use the cluster-aware one!")
+}
+
+func (h *hack) NetworkingV1beta1() networkingv1beta1.NetworkingV1beta1Interface {
+ panic("programmer error: using a cluster-unaware clientset, need to cast this to use the cluster-aware one!")
+}
+
+func (h *hack) NodeV1() nodev1.NodeV1Interface {
+ panic("programmer error: using a cluster-unaware clientset, need to cast this to use the cluster-aware one!")
+}
+
+func (h *hack) NodeV1alpha1() nodev1alpha1.NodeV1alpha1Interface {
+ panic("programmer error: using a cluster-unaware clientset, need to cast this to use the cluster-aware one!")
+}
+
+func (h *hack) NodeV1beta1() nodev1beta1.NodeV1beta1Interface {
+ panic("programmer error: using a cluster-unaware clientset, need to cast this to use the cluster-aware one!")
+}
+
+func (h *hack) PolicyV1() policyv1.PolicyV1Interface {
+ panic("programmer error: using a cluster-unaware clientset, need to cast this to use the cluster-aware one!")
+}
+
+func (h *hack) PolicyV1beta1() policyv1beta1.PolicyV1beta1Interface {
+ panic("programmer error: using a cluster-unaware clientset, need to cast this to use the cluster-aware one!")
+}
+
+func (h *hack) RbacV1() rbacv1.RbacV1Interface {
+ panic("programmer error: using a cluster-unaware clientset, need to cast this to use the cluster-aware one!")
+}
+
+func (h *hack) RbacV1beta1() rbacv1beta1.RbacV1beta1Interface {
+ panic("programmer error: using a cluster-unaware clientset, need to cast this to use the cluster-aware one!")
+}
+
+func (h *hack) RbacV1alpha1() rbacv1alpha1.RbacV1alpha1Interface {
+ panic("programmer error: using a cluster-unaware clientset, need to cast this to use the cluster-aware one!")
+}
+
+func (h *hack) SchedulingV1alpha1() schedulingv1alpha1.SchedulingV1alpha1Interface {
+ panic("programmer error: using a cluster-unaware clientset, need to cast this to use the cluster-aware one!")
+}
+
+func (h *hack) SchedulingV1beta1() schedulingv1beta1.SchedulingV1beta1Interface {
+ panic("programmer error: using a cluster-unaware clientset, need to cast this to use the cluster-aware one!")
+}
+
+func (h *hack) SchedulingV1() schedulingv1.SchedulingV1Interface {
+ panic("programmer error: using a cluster-unaware clientset, need to cast this to use the cluster-aware one!")
+}
+
+func (h *hack) StorageV1beta1() storagev1beta1.StorageV1beta1Interface {
+ panic("programmer error: using a cluster-unaware clientset, need to cast this to use the cluster-aware one!")
+}
+
+func (h *hack) StorageV1() storagev1.StorageV1Interface {
+ panic("programmer error: using a cluster-unaware clientset, need to cast this to use the cluster-aware one!")
+}
+
+func (h *hack) StorageV1alpha1() storagev1alpha1.StorageV1alpha1Interface {
+ panic("programmer error: using a cluster-unaware clientset, need to cast this to use the cluster-aware one!")
+}
+
+func (h *hack) StoragemigrationV1alpha1() storagemigrationv1alpha1.StoragemigrationV1alpha1Interface {
+ panic("programmer error: using a cluster-unaware clientset, need to cast this to use the cluster-aware one!")
+}
+
+func (h *hack) Discovery() discovery.DiscoveryInterface {
+ panic("programmer error: using a cluster-unaware clientset, need to cast this to use the cluster-aware one!")
+}
+
+func (h *hack) ClusterAware() kcpkubernetesclientset.ClusterInterface {
+ return h.clusterAware
+}
diff --git a/staging/src/k8s.io/apiserver/pkg/dynamichack/adapter.go b/staging/src/k8s.io/apiserver/pkg/dynamichack/adapter.go
new file mode 100644
index 0000000000000..a14946bfa44b0
--- /dev/null
+++ b/staging/src/k8s.io/apiserver/pkg/dynamichack/adapter.go
@@ -0,0 +1,59 @@
+/*
+Copyright 2023 The KCP Authors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+// +kcp-code-generator:skip
+
+package dynamichack
+
+import (
+ kcpdynamic "github.com/kcp-dev/client-go/dynamic"
+ "k8s.io/apimachinery/pkg/runtime/schema"
+ "k8s.io/client-go/dynamic"
+)
+
+// Interface allows us to hold onto a strongly-typed, cluster-aware dynamic client here, while
+// passing a cluster-unaware (but non-functional) client to k8s libraries. We export this type so that we
+// can get the cluster-aware client back using casting in admission plugin initialization.
+type Interface interface {
+ dynamic.Interface
+ ClusterAware() kcpdynamic.ClusterInterface
+}
+
+var _ Interface = (*hack)(nil)
+
+// Wrap adapts a cluster-aware dynamic client to a cluster-unaware wrapper that can divulge it after casting.
+func Wrap(clusterAware kcpdynamic.ClusterInterface) Interface {
+ return &hack{clusterAware: clusterAware}
+}
+
+// Unwrap extracts a cluster-aware dynamic client from the cluster-unaware wrapper, or panics if we get the wrong input.
+func Unwrap(clusterUnaware dynamic.Interface) kcpdynamic.ClusterInterface {
+ return clusterUnaware.(Interface).ClusterAware()
+}
+
+type hack struct {
+ clusterAware kcpdynamic.ClusterInterface
+}
+
+func (h hack) Resource(resource schema.GroupVersionResource) dynamic.NamespaceableResourceInterface {
+	panic("programmer error: using a cluster-unaware dynamic client, need to cast this to use the cluster-aware one!")
+}
+
+func (h hack) ClusterAware() kcpdynamic.ClusterInterface {
+	return h.clusterAware
+}
diff --git a/staging/src/k8s.io/apiserver/pkg/endpoints/discovery/storageversionhash.go b/staging/src/k8s.io/apiserver/pkg/endpoints/discovery/storageversionhash.go
index f47e9632b71c7..eedef8515918f 100644
--- a/staging/src/k8s.io/apiserver/pkg/endpoints/discovery/storageversionhash.go
+++ b/staging/src/k8s.io/apiserver/pkg/endpoints/discovery/storageversionhash.go
@@ -19,14 +19,19 @@ package discovery
import (
"crypto/sha256"
"encoding/base64"
+
+ "github.com/kcp-dev/logicalcluster/v3"
)
// StorageVersionHash calculates the storage version hash for a
// tuple.
// WARNING: this function is subject to change. Clients shouldn't depend on
// this function.
-func StorageVersionHash(group, version, kind string) string {
- gvk := group + "/" + version + "/" + kind
+func StorageVersionHash(clusterName logicalcluster.Name, group, version, kind string) string {
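+	// kcp: the logical cluster name is folded into the hash input because the same
+	// group/version/kind may be backed by a different schema in each logical cluster.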
+ gvk := clusterName.String() + "/" + group + "/" + version + "/" + kind
+ if gvk == "" {
+ return ""
+ }
bytes := sha256.Sum256([]byte(gvk))
// Assuming there are N kinds in the cluster, and the hash is X-byte long,
// the chance of colliding hash P(N,X) approximates to 1-e^(-(N^2)/2^(8X+1)).
diff --git a/staging/src/k8s.io/apiserver/pkg/endpoints/handlers/patch.go b/staging/src/k8s.io/apiserver/pkg/endpoints/handlers/patch.go
index acfff1961c930..0fc462ab1e41d 100644
--- a/staging/src/k8s.io/apiserver/pkg/endpoints/handlers/patch.go
+++ b/staging/src/k8s.io/apiserver/pkg/endpoints/handlers/patch.go
@@ -57,6 +57,7 @@ import (
"k8s.io/apiserver/pkg/util/dryrun"
utilfeature "k8s.io/apiserver/pkg/util/feature"
"k8s.io/component-base/tracing"
+ "k8s.io/kube-openapi/pkg/util/proto"
)
const (
@@ -441,6 +442,7 @@ type smpPatcher struct {
// Schema
schemaReferenceObj runtime.Object
fieldManager *managedfields.FieldManager
+ openapiModel proto.Schema
}
func (p *smpPatcher) applyPatchToCurrentObject(requestContext context.Context, currentObject runtime.Object) (runtime.Object, error) {
@@ -454,7 +456,7 @@ func (p *smpPatcher) applyPatchToCurrentObject(requestContext context.Context, c
if err != nil {
return nil, err
}
- if err := strategicPatchObject(requestContext, p.defaulter, currentVersionedObject, p.patchBytes, versionedObjToUpdate, p.schemaReferenceObj, p.validationDirective); err != nil {
+ if err := strategicPatchObject(requestContext, p.defaulter, currentVersionedObject, p.patchBytes, versionedObjToUpdate, p.schemaReferenceObj, p.validationDirective, p.openapiModel); err != nil {
return nil, err
}
// Convert the object back to the hub version
@@ -550,7 +552,17 @@ func strategicPatchObject(
objToUpdate runtime.Object,
schemaReferenceObj runtime.Object,
validationDirective string,
+ openapiModel proto.Schema,
) error {
+ // kcp: because we support using CRDs to represent built-in Kubernetes types (e.g. deployments) that do support
+ // strategic patch, we have to make sure we deep copy originalObject if it's already unstructured.Unstructured,
+ // because runtime.DefaultUnstructuredConverter.ToUnstructured returns the underlying map from an Unstructured
+ // without copying it, meaning that the call to applyPatchToObject below mutates the original Unstructured
+ // content unless we've first made a deep copy.
+ if _, ok := originalObject.(runtime.Unstructured); ok {
+ copiedOriginal := originalObject.DeepCopyObject()
+ originalObject = copiedOriginal
+ }
originalObjMap, err := runtime.DefaultUnstructuredConverter.ToUnstructured(originalObject)
if err != nil {
return err
@@ -569,7 +581,7 @@ func strategicPatchObject(
}
}
- if err := applyPatchToObject(requestContext, defaulter, originalObjMap, patchMap, objToUpdate, schemaReferenceObj, strictErrs, validationDirective); err != nil {
+ if err := applyPatchToObject(requestContext, defaulter, originalObjMap, patchMap, objToUpdate, schemaReferenceObj, strictErrs, validationDirective, openapiModel); err != nil {
return err
}
return nil
@@ -663,10 +675,17 @@ func (p *patcher) patchResource(ctx context.Context, scope *RequestScope) (runti
if err != nil {
return nil, false, err
}
+
+	var openapiSchema proto.Schema
+	modelsByGKV := scope.OpenapiModels
+	if modelsByGKV != nil {
+		openapiSchema = modelsByGKV[p.kind]
+	}
p.mechanism = &smpPatcher{
patcher: p,
schemaReferenceObj: schemaReferenceObj,
fieldManager: scope.FieldManager,
+		openapiModel:       openapiSchema,
}
// this case is unreachable if ServerSideApply is not enabled because we will have already rejected the content type
case types.ApplyYAMLPatchType:
@@ -741,8 +760,12 @@ func applyPatchToObject(
schemaReferenceObj runtime.Object,
strictErrs []error,
validationDirective string,
+ openapiModel proto.Schema,
) error {
patchedObjMap, err := strategicpatch.StrategicMergeMapPatch(originalMap, patchMap, schemaReferenceObj)
+ if err == mergepatch.ErrUnsupportedStrategicMergePatchFormat && openapiModel != nil {
+ patchedObjMap, err = strategicpatch.StrategicMergeMapPatchUsingLookupPatchMeta(originalMap, patchMap, strategicpatch.NewPatchMetaFromOpenAPI(openapiModel))
+ }
if err != nil {
return interpretStrategicMergePatchError(err)
}
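
The fallback above only triggers when the struct-based lookup reports `ErrUnsupportedStrategicMergePatchFormat`, i.e. for unstructured CRD-backed objects. A sketch of the same call pattern in isolation, with the OpenAPI model assumed to come from `scope.OpenapiModels`:

```go
package example

import (
	"k8s.io/apimachinery/pkg/util/strategicpatch"
	"k8s.io/kube-openapi/pkg/util/proto"
)

// smpViaOpenAPI applies a strategic merge patch using patch metadata
// (patchStrategy/patchMergeKey) read from an OpenAPI schema instead of
// Go struct tags, which unstructured objects do not have.
func smpViaOpenAPI(original, patch map[string]interface{}, model proto.Schema) (map[string]interface{}, error) {
	lookup := strategicpatch.NewPatchMetaFromOpenAPI(model)
	return strategicpatch.StrategicMergeMapPatchUsingLookupPatchMeta(original, patch, lookup)
}
```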
diff --git a/staging/src/k8s.io/apiserver/pkg/endpoints/handlers/response.go b/staging/src/k8s.io/apiserver/pkg/endpoints/handlers/response.go
index 3f7ad6121c5bd..53829d56f67e0 100644
--- a/staging/src/k8s.io/apiserver/pkg/endpoints/handlers/response.go
+++ b/staging/src/k8s.io/apiserver/pkg/endpoints/handlers/response.go
@@ -268,10 +268,10 @@ func doTransformObject(ctx context.Context, obj runtime.Object, opts interface{}
return obj, nil
case target.Kind == "PartialObjectMetadata":
- return asPartialObjectMetadata(obj, target.GroupVersion())
+ return asPartialObjectMetadata(ctx, obj, target.GroupVersion())
case target.Kind == "PartialObjectMetadataList":
- return asPartialObjectMetadataList(obj, target.GroupVersion())
+ return asPartialObjectMetadataList(ctx, obj, target.GroupVersion())
case target.Kind == "Table":
options, ok := opts.(*metav1.TableOptions)
@@ -419,7 +419,7 @@ func asTable(ctx context.Context, result runtime.Object, opts *metav1.TableOptio
return table, nil
}
-func asPartialObjectMetadata(result runtime.Object, groupVersion schema.GroupVersion) (runtime.Object, error) {
+func asPartialObjectMetadata(ctx context.Context, result runtime.Object, groupVersion schema.GroupVersion) (runtime.Object, error) {
if meta.IsListType(result) {
err := newNotAcceptableError(fmt.Sprintf("you requested PartialObjectMetadata, but the requested object is a list (%T)", result))
return nil, err
@@ -435,10 +435,11 @@ func asPartialObjectMetadata(result runtime.Object, groupVersion schema.GroupVer
}
partial := meta.AsPartialObjectMetadata(m)
partial.GetObjectKind().SetGroupVersionKind(groupVersion.WithKind("PartialObjectMetadata"))
+ setKCPOriginalAPIVersionAnnotation(ctx, result, partial)
return partial, nil
}
-func asPartialObjectMetadataList(result runtime.Object, groupVersion schema.GroupVersion) (runtime.Object, error) {
+func asPartialObjectMetadataList(ctx context.Context, result runtime.Object, groupVersion schema.GroupVersion) (runtime.Object, error) {
li, ok := result.(metav1.ListInterface)
if !ok {
return nil, newNotAcceptableError(fmt.Sprintf("you requested PartialObjectMetadataList, but the requested object is not a list (%T)", result))
@@ -455,6 +456,7 @@ func asPartialObjectMetadataList(result runtime.Object, groupVersion schema.Grou
}
partial := meta.AsPartialObjectMetadata(m)
partial.GetObjectKind().SetGroupVersionKind(gvk)
+ setKCPOriginalAPIVersionAnnotation(ctx, obj, partial)
list.Items = append(list.Items, *partial)
return nil
})
@@ -475,6 +477,7 @@ func asPartialObjectMetadataList(result runtime.Object, groupVersion schema.Grou
}
partial := meta.AsPartialObjectMetadata(m)
partial.GetObjectKind().SetGroupVersionKind(gvk)
+ setKCPOriginalAPIVersionAnnotation(ctx, obj, partial)
list.Items = append(list.Items, *partial)
return nil
})
@@ -507,15 +510,20 @@ type watchListTransformer struct {
targetGVK *schema.GroupVersionKind
negotiatedEncoder runtime.Encoder
buffer runtime.Splice
+
+ // kcp: needed for setKCPOriginalAPIVersionAnnotation().
+	// It expects a context with the cluster context key set.
+ ctx context.Context
}
// createWatchListTransformerIfRequested returns a transformer function for watchlist bookmark event.
-func newWatchListTransformer(initialEventsListBlueprint runtime.Object, targetGVK *schema.GroupVersionKind, negotiatedEncoder runtime.Encoder) *watchListTransformer {
+func newWatchListTransformer(ctx context.Context, initialEventsListBlueprint runtime.Object, targetGVK *schema.GroupVersionKind, negotiatedEncoder runtime.Encoder) *watchListTransformer {
return &watchListTransformer{
initialEventsListBlueprint: initialEventsListBlueprint,
targetGVK: targetGVK,
negotiatedEncoder: negotiatedEncoder,
buffer: runtime.NewSpliceBuffer(),
+ ctx: ctx,
}
}
@@ -565,7 +573,7 @@ func (e *watchListTransformer) encodeInitialEventsListBlueprint(object runtime.O
func (e *watchListTransformer) transformInitialEventsListBlueprint() (runtime.Object, error) {
if e.targetGVK != nil && e.targetGVK.Kind == "PartialObjectMetadata" {
- return asPartialObjectMetadataList(e.initialEventsListBlueprint, e.targetGVK.GroupVersion())
+ return asPartialObjectMetadataList(e.ctx, e.initialEventsListBlueprint, e.targetGVK.GroupVersion())
}
return e.initialEventsListBlueprint, nil
}
diff --git a/staging/src/k8s.io/apiserver/pkg/endpoints/handlers/response_kcp.go b/staging/src/k8s.io/apiserver/pkg/endpoints/handlers/response_kcp.go
new file mode 100644
index 0000000000000..b5f92b4bc6fa1
--- /dev/null
+++ b/staging/src/k8s.io/apiserver/pkg/endpoints/handlers/response_kcp.go
@@ -0,0 +1,62 @@
+/*
+Copyright 2023 The KCP Authors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package handlers
+
+import (
+ "context"
+ "fmt"
+
+ "k8s.io/apimachinery/pkg/api/meta"
+ metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+ "k8s.io/apiserver/pkg/endpoints/request"
+)
+
+const KCPOriginalAPIVersionAnnotation = "kcp.io/original-api-version"
+
+// setKCPOriginalAPIVersionAnnotation sets the annotation kcp.io/original-api-version on partial indicating the actual
+// API version of the original object. This is necessary for kcp with wildcard partial metadata list/watch requests.
+// For example, if the request is for /clusters/*/apis/kcp.io/v1/widgets, and it's a partial metadata request, the
+// server returns ALL widgets, regardless of their API version. But because this is a partial metadata request, the
+// API version of the returned object is always meta.k8s.io/$version (could be v1 or v1beta1). Any client needing to
+// modify or delete the returned object must know its exact API version. Therefore, we set this annotation with the
+// actual original API version of the object. Clients can use it when constructing dynamic clients to guarantee they
+// are using the correct API version.
+func setKCPOriginalAPIVersionAnnotation(ctx context.Context, original any, partial *metav1.PartialObjectMetadata) {
+	if cluster := request.ClusterFrom(ctx); cluster == nil || !cluster.Wildcard {
+ return
+ }
+ annotations := partial.GetAnnotations()
+
+ if annotations[KCPOriginalAPIVersionAnnotation] != "" {
+ // Do not overwrite the annotation if it is present. It is set by the kcpWildcardPartialMetadataConverter
+ // during the conversion process so we don't lose the original API version. Changing it here would lead to
+ // an incorrect value.
+ return
+ }
+
+ if annotations == nil {
+ annotations = make(map[string]string)
+ }
+
+ t, err := meta.TypeAccessor(original)
+ if err != nil {
+ panic(fmt.Errorf("unable to get a TypeAccessor for %T: %w", original, err))
+ }
+
+ annotations[KCPOriginalAPIVersionAnnotation] = t.GetAPIVersion()
+ partial.SetAnnotations(annotations)
+}
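
On the consuming side, a client receiving PartialObjectMetadata items from a wildcard list can recover the real group/version before constructing a dynamic client for follow-up writes. A hedged sketch (the `resource` argument, e.g. "widgets", is supplied by the caller):

```go
package example

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/apiserver/pkg/endpoints/handlers"
)

// resourceFor derives the GroupVersionResource to use for follow-up requests
// against a partial-metadata object returned by a wildcard list. An empty
// annotation yields an empty GroupVersion, which the caller should treat as
// "not a wildcard response".
func resourceFor(partial *metav1.PartialObjectMetadata, resource string) (schema.GroupVersionResource, error) {
	apiVersion := partial.Annotations[handlers.KCPOriginalAPIVersionAnnotation]
	gv, err := schema.ParseGroupVersion(apiVersion)
	if err != nil {
		return schema.GroupVersionResource{}, err
	}
	return gv.WithResource(resource), nil
}
```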
diff --git a/staging/src/k8s.io/apiserver/pkg/endpoints/handlers/rest.go b/staging/src/k8s.io/apiserver/pkg/endpoints/handlers/rest.go
index 7f6756e7845b9..d3a890a825a96 100644
--- a/staging/src/k8s.io/apiserver/pkg/endpoints/handlers/rest.go
+++ b/staging/src/k8s.io/apiserver/pkg/endpoints/handlers/rest.go
@@ -44,6 +44,7 @@ import (
requestmetrics "k8s.io/apiserver/pkg/endpoints/handlers/metrics"
"k8s.io/apiserver/pkg/endpoints/handlers/responsewriters"
"k8s.io/apiserver/pkg/endpoints/metrics"
+ "k8s.io/apiserver/pkg/endpoints/openapi"
"k8s.io/apiserver/pkg/endpoints/request"
"k8s.io/apiserver/pkg/registry/rest"
"k8s.io/apiserver/pkg/warning"
@@ -107,6 +108,8 @@ type RequestScope struct {
HubGroupVersion schema.GroupVersion
MaxRequestBodyBytes int64
+
+ OpenapiModels openapi.ModelsByGKV
}
func (scope *RequestScope) err(err error, w http.ResponseWriter, req *http.Request) {
diff --git a/staging/src/k8s.io/apiserver/pkg/endpoints/handlers/rest_test.go b/staging/src/k8s.io/apiserver/pkg/endpoints/handlers/rest_test.go
index 2f9412073c2dc..b6a667f3506a8 100644
--- a/staging/src/k8s.io/apiserver/pkg/endpoints/handlers/rest_test.go
+++ b/staging/src/k8s.io/apiserver/pkg/endpoints/handlers/rest_test.go
@@ -102,7 +102,7 @@ func TestPatchAnonymousField(t *testing.T) {
}
actual := &testPatchType{}
- err := strategicPatchObject(context.TODO(), defaulter, original, []byte(patch), actual, &testPatchType{}, "")
+ err := strategicPatchObject(context.TODO(), defaulter, original, []byte(patch), actual, &testPatchType{}, "", nil)
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
@@ -212,7 +212,7 @@ func TestStrategicMergePatchInvalid(t *testing.T) {
expectedError := "invalid character 'b' looking for beginning of value"
actual := &testPatchType{}
- err := strategicPatchObject(context.TODO(), defaulter, original, []byte(patch), actual, &testPatchType{}, "")
+ err := strategicPatchObject(context.TODO(), defaulter, original, []byte(patch), actual, &testPatchType{}, "", nil)
if !apierrors.IsBadRequest(err) {
t.Errorf("expected HTTP status: BadRequest, got: %#v", apierrors.ReasonForError(err))
}
@@ -303,7 +303,7 @@ func TestPatchCustomResource(t *testing.T) {
expectedError := "strategic merge patch format is not supported"
actual := &unstructured.Unstructured{}
- err := strategicPatchObject(context.TODO(), defaulter, original, []byte(patch), actual, &unstructured.Unstructured{}, "")
+ err := strategicPatchObject(context.TODO(), defaulter, original, []byte(patch), actual, &unstructured.Unstructured{}, "", nil)
if !apierrors.IsBadRequest(err) {
t.Errorf("expected HTTP status: BadRequest, got: %#v", apierrors.ReasonForError(err))
}
@@ -626,7 +626,7 @@ func TestNumberConversion(t *testing.T) {
patchJS := []byte(`{"spec":{"terminationGracePeriodSeconds":42,"activeDeadlineSeconds":120}}`)
- err := strategicPatchObject(context.TODO(), defaulter, currentVersionedObject, patchJS, versionedObjToUpdate, schemaReferenceObj, "")
+ err := strategicPatchObject(context.TODO(), defaulter, currentVersionedObject, patchJS, versionedObjToUpdate, schemaReferenceObj, "", nil)
if err != nil {
t.Fatal(err)
}
diff --git a/staging/src/k8s.io/apiserver/pkg/endpoints/handlers/watch.go b/staging/src/k8s.io/apiserver/pkg/endpoints/handlers/watch.go
index c239d1f7abe8f..b7ba983a5d3bd 100644
--- a/staging/src/k8s.io/apiserver/pkg/endpoints/handlers/watch.go
+++ b/staging/src/k8s.io/apiserver/pkg/endpoints/handlers/watch.go
@@ -167,7 +167,7 @@ func serveWatchHandler(watcher watch.Interface, scope *RequestScope, mediaTypeOp
Encoder: encoder,
EmbeddedEncoder: embeddedEncoder,
- watchListTransformerFn: newWatchListTransformer(initialEventsListBlueprint, mediaTypeOptions.Convert, negotiatedEncoder).transform,
+ watchListTransformerFn: newWatchListTransformer(ctx, initialEventsListBlueprint, mediaTypeOptions.Convert, negotiatedEncoder).transform,
MemoryAllocator: memoryAllocator,
TimeoutFactory: &realTimeoutFactory{timeout},
diff --git a/staging/src/k8s.io/apiserver/pkg/endpoints/installer.go b/staging/src/k8s.io/apiserver/pkg/endpoints/installer.go
index f9dec903184a8..1f40402998b2f 100644
--- a/staging/src/k8s.io/apiserver/pkg/endpoints/installer.go
+++ b/staging/src/k8s.io/apiserver/pkg/endpoints/installer.go
@@ -26,6 +26,7 @@ import (
"unicode"
restful "github.com/emicklei/go-restful/v3"
+ "github.com/kcp-dev/logicalcluster/v3"
"sigs.k8s.io/structured-merge-diff/v4/fieldpath"
apidiscoveryv2 "k8s.io/api/apidiscovery/v2"
@@ -491,7 +492,7 @@ func (a *APIInstaller) registerResourceHandlers(path string, storage rest.Storag
if err != nil {
return nil, nil, err
}
- apiResource.StorageVersionHash = discovery.StorageVersionHash(gvk.Group, gvk.Version, gvk.Kind)
+ apiResource.StorageVersionHash = discovery.StorageVersionHash(logicalcluster.Name(""), gvk.Group, gvk.Version, gvk.Kind)
}
// Get the list of actions for the given scope.
diff --git a/staging/src/k8s.io/apiserver/pkg/endpoints/openapi/openapi.go b/staging/src/k8s.io/apiserver/pkg/endpoints/openapi/openapi.go
index e61f444399e56..36733b23c5a1d 100644
--- a/staging/src/k8s.io/apiserver/pkg/endpoints/openapi/openapi.go
+++ b/staging/src/k8s.io/apiserver/pkg/endpoints/openapi/openapi.go
@@ -18,6 +18,7 @@ package openapi
import (
"bytes"
+ "errors"
"fmt"
"reflect"
"sort"
@@ -30,6 +31,7 @@ import (
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/kube-openapi/pkg/util"
+ "k8s.io/kube-openapi/pkg/util/proto"
"k8s.io/kube-openapi/pkg/validation/spec"
)
@@ -189,3 +191,86 @@ func (d *DefinitionNamer) GetDefinitionName(name string) (string, spec.Extension
}
return friendlyName(name), nil
}
+
+type ModelsByGKV map[schema.GroupVersionKind]proto.Schema
+
+// GetModelsByGKV indexes the given OpenAPI models by the GroupVersionKind they describe.
+func GetModelsByGKV(models proto.Models) (ModelsByGKV, error) {
+ result := map[schema.GroupVersionKind]proto.Schema{}
+ for _, modelName := range models.ListModels() {
+ model := models.LookupModel(modelName)
+ if model == nil {
+			return map[schema.GroupVersionKind]proto.Schema{}, errors.New("ListModels returned a model that cannot be looked up")
+ }
+ gvkList := parseGroupVersionKind(model)
+ for _, gvk := range gvkList {
+ if len(gvk.Kind) > 0 {
+ key := schema.GroupVersionKind{Group: gvk.Group, Version: gvk.Version, Kind: gvk.Kind}
+ if key.Group == "core" {
+ key.Group = ""
+ }
+ result[key] = model
+ }
+ }
+ }
+
+ return result, nil
+}
+
+// parseGroupVersionKind extracts and parses the GroupVersionKind from the schema's extensions.
+// It returns an empty list if none is present.
+func parseGroupVersionKind(s proto.Schema) []schema.GroupVersionKind {
+ extensions := s.GetExtensions()
+
+ gvkListResult := []schema.GroupVersionKind{}
+
+ // Get the extensions
+ gvkExtension, ok := extensions[extensionGVK]
+ if !ok {
+ return []schema.GroupVersionKind{}
+ }
+
+ // gvk extension must be a list of at least 1 element.
+ gvkList, ok := gvkExtension.([]interface{})
+ if !ok {
+ return []schema.GroupVersionKind{}
+ }
+
+ for _, gvk := range gvkList {
+ // gvk extension list must be a map with group, version, and
+ // kind fields
+ gvkMap, ok := gvk.(map[interface{}]interface{})
+ if !ok {
+ // OpenAPI v3 seems to place string maps there
+ gvkStringMap, ok := gvk.(map[string]interface{})
+ if !ok {
+ continue
+ }
+ gvkMap = map[interface{}]interface{}{}
+ for k, v := range gvkStringMap {
+ gvkMap[k] = v
+ }
+		}
+
+ group, ok := gvkMap["group"].(string)
+ if !ok {
+ continue
+ }
+ version, ok := gvkMap["version"].(string)
+ if !ok {
+ continue
+ }
+ kind, ok := gvkMap["kind"].(string)
+ if !ok {
+ continue
+ }
+
+ gvkListResult = append(gvkListResult, schema.GroupVersionKind{
+ Group: group,
+ Version: version,
+ Kind: kind,
+ })
+ }
+
+ return gvkListResult
+}
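
A sketch of the intended lookup path, assuming a `proto.Models` already parsed from the server's OpenAPI v2 document (e.g. via `proto.NewOpenAPIData`):

```go
package example

import (
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/apiserver/pkg/endpoints/openapi"
	"k8s.io/kube-openapi/pkg/util/proto"
)

// deploymentSchema returns the OpenAPI schema for apps/v1 Deployment,
// or nil if the spec does not describe it.
func deploymentSchema(models proto.Models) (proto.Schema, error) {
	byGVK, err := openapi.GetModelsByGKV(models)
	if err != nil {
		return nil, err
	}
	return byGVK[schema.GroupVersionKind{Group: "apps", Version: "v1", Kind: "Deployment"}], nil
}
```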
diff --git a/staging/src/k8s.io/apiserver/pkg/endpoints/request/context_cluster_kcp.go b/staging/src/k8s.io/apiserver/pkg/endpoints/request/context_cluster_kcp.go
new file mode 100644
index 0000000000000..efa4517ecfdd8
--- /dev/null
+++ b/staging/src/k8s.io/apiserver/pkg/endpoints/request/context_cluster_kcp.go
@@ -0,0 +1,110 @@
+/*
+Copyright 2023 The Kubernetes Authors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package request
+
+import (
+ "context"
+ "errors"
+ "fmt"
+
+ "github.com/kcp-dev/logicalcluster/v3"
+)
+
+type clusterKey int
+
+const (
+	// clusterContextKey is the context key for the request's logical cluster.
+ clusterContextKey clusterKey = iota
+)
+
+type Cluster struct {
+ // Name holds a cluster name. This is empty for wildcard requests.
+ Name logicalcluster.Name
+
+ // If true the query applies to all clusters. Name is empty if this is true.
+ Wildcard bool
+
+ // PartialMetadataRequest indicates if the incoming request is for partial metadata. This is set by the kcp
+ // server handlers and is necessary to get the right plumbing in place for wildcard partial metadata requests for
+ // custom resources.
+ PartialMetadataRequest bool
+}
+
+// WithCluster returns a context derived from parent that carries the given cluster.
+func WithCluster(parent context.Context, cluster Cluster) context.Context {
+ return context.WithValue(parent, clusterContextKey, cluster)
+}
+
+// ClusterFrom returns the value of the cluster key on the ctx, or nil if there
+// is no cluster key.
+func ClusterFrom(ctx context.Context) *Cluster {
+ cluster, ok := ctx.Value(clusterContextKey).(Cluster)
+ if !ok {
+ return nil
+ }
+ return &cluster
+}
+
+func buildClusterError(message string, ctx context.Context) error {
+ if ri, ok := RequestInfoFrom(ctx); ok {
+ message = message + fmt.Sprintf(" - RequestInfo: %#v", ri)
+ }
+ return errors.New(message)
+}
+
+// ValidClusterFrom returns the value of the cluster key on the ctx.
+// If there's no cluster key, or if the cluster name is empty
+// and it's not a wildcard context, then return an error.
+func ValidClusterFrom(ctx context.Context) (*Cluster, error) {
+ cluster := ClusterFrom(ctx)
+ if cluster == nil {
+ return nil, buildClusterError("no cluster in the request context", ctx)
+ }
+ if cluster.Name.Empty() && !cluster.Wildcard {
+ return nil, buildClusterError("cluster path is empty in the request context", ctx)
+ }
+ return cluster, nil
+}
+
+// ClusterNameOrWildcardFrom returns the cluster name from the value of the cluster
+// key on the ctx, along with a boolean that is true for wildcard requests.
+func ClusterNameOrWildcardFrom(ctx context.Context) (logicalcluster.Name, bool, error) {
+ cluster, err := ValidClusterFrom(ctx)
+ if err != nil {
+ return "", false, err
+ }
+ if cluster.Name.Empty() && !cluster.Wildcard {
+ return "", false, buildClusterError("cluster name is empty in the request context", ctx)
+ }
+ return cluster.Name, cluster.Wildcard, nil
+}
+
+// ClusterNameFrom returns a cluster.Name from the value of the cluster key on the ctx.
+// If the cluster name is not present or cannot be constructed, then return an error.
+func ClusterNameFrom(ctx context.Context) (logicalcluster.Name, error) {
+ cluster, err := ValidClusterFrom(ctx)
+ if err != nil {
+ return "", err
+ }
+ if cluster.Wildcard {
+ return "", buildClusterError("wildcard not supported", ctx)
+ }
+ if cluster.Name.Empty() {
+ return "", buildClusterError("cluster name is empty in the request context", ctx)
+ }
+ return cluster.Name, nil
+}
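
A round-trip sketch of the request-context plumbing these helpers provide (the workspace path is illustrative):

```go
package example

import (
	"context"

	"github.com/kcp-dev/logicalcluster/v3"
	genericapirequest "k8s.io/apiserver/pkg/endpoints/request"
)

func clusterRoundTrip() (logicalcluster.Name, error) {
	// Scope the context to a single logical cluster, as the handler chain does
	// after parsing the request URL suffix or header.
	ctx := genericapirequest.WithCluster(context.Background(),
		genericapirequest.Cluster{Name: logicalcluster.Name("root:org:ws")})
	// Downstream storage code rejects wildcard contexts via ClusterNameFrom.
	return genericapirequest.ClusterNameFrom(ctx)
}
```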
diff --git a/staging/src/k8s.io/apiserver/pkg/endpoints/request/context_shard_kcp.go b/staging/src/k8s.io/apiserver/pkg/endpoints/request/context_shard_kcp.go
new file mode 100644
index 0000000000000..93cc615dd941d
--- /dev/null
+++ b/staging/src/k8s.io/apiserver/pkg/endpoints/request/context_shard_kcp.go
@@ -0,0 +1,64 @@
+/*
+Copyright 2022 The KCP Authors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package request
+
+import (
+ "context"
+)
+
+type shardKey int
+
+const (
+	// shardContextKey is the context key for the request's shard.
+ shardContextKey shardKey = iota
+
+ // ShardAnnotationKey is the name of the annotation key used to denote an object's shard name.
+ ShardAnnotationKey = "kcp.io/shard"
+)
+
+// Shard identifies a kcp shard by name.
+type Shard string
+
+// Empty returns true if the name of the shard is empty.
+func (s Shard) Empty() bool {
+ return s == ""
+}
+
+// Wildcard checks if the given shard name matches wildcard.
+// If true the query applies to all shards.
+func (s Shard) Wildcard() bool {
+ return s == "*"
+}
+
+// String casts Shard to string type
+func (s Shard) String() string {
+ return string(s)
+}
+
+// WithShard returns a context that holds the given shard.
+func WithShard(parent context.Context, shard Shard) context.Context {
+ return context.WithValue(parent, shardContextKey, shard)
+}
+
+// ShardFrom returns the value of the shard key in the context, or an empty value if there is no shard key.
+func ShardFrom(ctx context.Context) Shard {
+ shard, ok := ctx.Value(shardContextKey).(Shard)
+ if !ok {
+ return ""
+ }
+ return shard
+}
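
And the analogous round trip for shards, where `*` selects all shards:

```go
package example

import (
	"context"

	genericapirequest "k8s.io/apiserver/pkg/endpoints/request"
)

func isWildcardShard() bool {
	ctx := genericapirequest.WithShard(context.Background(), genericapirequest.Shard("*"))
	return genericapirequest.ShardFrom(ctx).Wildcard() // true
}
```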
diff --git a/staging/src/k8s.io/apiserver/pkg/informerfactoryhack/adapter.go b/staging/src/k8s.io/apiserver/pkg/informerfactoryhack/adapter.go
new file mode 100644
index 0000000000000..8b4ca55865a02
--- /dev/null
+++ b/staging/src/k8s.io/apiserver/pkg/informerfactoryhack/adapter.go
@@ -0,0 +1,191 @@
+/*
+Copyright 2022 The KCP Authors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+// +kcp-code-generator:skip
+
+package informerfactoryhack
+
+import (
+ "reflect"
+
+ kcpkubernetesinformers "github.com/kcp-dev/client-go/informers"
+ "k8s.io/client-go/informers/resource"
+
+ "k8s.io/apimachinery/pkg/runtime"
+ "k8s.io/apimachinery/pkg/runtime/schema"
+ "k8s.io/client-go/informers"
+ "k8s.io/client-go/informers/admissionregistration"
+ "k8s.io/client-go/informers/apiserverinternal"
+ "k8s.io/client-go/informers/apps"
+ "k8s.io/client-go/informers/autoscaling"
+ "k8s.io/client-go/informers/batch"
+ "k8s.io/client-go/informers/certificates"
+ "k8s.io/client-go/informers/coordination"
+ "k8s.io/client-go/informers/core"
+ "k8s.io/client-go/informers/discovery"
+ "k8s.io/client-go/informers/events"
+ "k8s.io/client-go/informers/extensions"
+ "k8s.io/client-go/informers/flowcontrol"
+ "k8s.io/client-go/informers/internalinterfaces"
+ "k8s.io/client-go/informers/networking"
+ "k8s.io/client-go/informers/node"
+ "k8s.io/client-go/informers/policy"
+ "k8s.io/client-go/informers/rbac"
+ "k8s.io/client-go/informers/scheduling"
+ "k8s.io/client-go/informers/storage"
+ "k8s.io/client-go/informers/storagemigration"
+ "k8s.io/client-go/tools/cache"
+)
+
+// Interface allows us to hold onto a strongly-typed cluster-aware informer factory here, while
+// passing in a cluster-unaware (but non-functional) factory to k8s libraries. We export this type so that we
+// can get the cluster-aware factory back using casting in admission plugin initialization.
+type Interface interface {
+ informers.SharedInformerFactory
+ ClusterAware() kcpkubernetesinformers.SharedInformerFactory
+}
+
+var _ Interface = (*hack)(nil)
+
+// Wrap adapts a cluster-aware informer factory to a cluster-unaware wrapper that can divulge it after casting.
+func Wrap(clusterAware kcpkubernetesinformers.SharedInformerFactory) Interface {
+ return &hack{clusterAware: clusterAware}
+}
+
+// Unwrap extracts a cluster-aware informer factory from the cluster-unaware wrapper, or panics if we get the wrong input.
+func Unwrap(clusterUnaware informers.SharedInformerFactory) kcpkubernetesinformers.SharedInformerFactory {
+ return clusterUnaware.(Interface).ClusterAware()
+}
+
+type hack struct {
+ clusterAware kcpkubernetesinformers.SharedInformerFactory
+}
+
+func (s *hack) Shutdown() {
+ panic("not implemented yet")
+}
+
+func (s *hack) Resource() resource.Interface {
+ panic("programmer error: using a cluster-unaware informer factory, need to cast this to use the cluster-aware one!")
+}
+
+func (s *hack) Start(stopCh <-chan struct{}) {
+ panic("programmer error: using a cluster-unaware informer factory, need to cast this to use the cluster-aware one!")
+}
+
+func (s *hack) InformerFor(obj runtime.Object, newFunc internalinterfaces.NewInformerFunc) cache.SharedIndexInformer {
+ panic("programmer error: using a cluster-unaware informer factory, need to cast this to use the cluster-aware one!")
+}
+
+func (s *hack) ExtraClusterScopedIndexers() cache.Indexers {
+ panic("programmer error: using a cluster-unaware informer factory, need to cast this to use the cluster-aware one!")
+}
+
+func (s *hack) ExtraNamespaceScopedIndexers() cache.Indexers {
+ panic("programmer error: using a cluster-unaware informer factory, need to cast this to use the cluster-aware one!")
+}
+
+func (s *hack) KeyFunction() cache.KeyFunc {
+ panic("programmer error: using a cluster-unaware informer factory, need to cast this to use the cluster-aware one!")
+}
+
+func (s *hack) ForResource(resource schema.GroupVersionResource) (informers.GenericInformer, error) {
+ panic("programmer error: using a cluster-unaware informer factory, need to cast this to use the cluster-aware one!")
+}
+
+func (s *hack) WaitForCacheSync(stopCh <-chan struct{}) map[reflect.Type]bool {
+ panic("programmer error: using a cluster-unaware informer factory, need to cast this to use the cluster-aware one!")
+}
+
+func (s *hack) Admissionregistration() admissionregistration.Interface {
+ panic("programmer error: using a cluster-unaware informer factory, need to cast this to use the cluster-aware one!")
+}
+
+func (s *hack) Internal() apiserverinternal.Interface {
+ panic("programmer error: using a cluster-unaware informer factory, need to cast this to use the cluster-aware one!")
+}
+
+func (s *hack) Apps() apps.Interface {
+ panic("programmer error: using a cluster-unaware informer factory, need to cast this to use the cluster-aware one!")
+}
+
+func (s *hack) Autoscaling() autoscaling.Interface {
+ panic("programmer error: using a cluster-unaware informer factory, need to cast this to use the cluster-aware one!")
+}
+
+func (s *hack) Batch() batch.Interface {
+ panic("programmer error: using a cluster-unaware informer factory, need to cast this to use the cluster-aware one!")
+}
+
+func (s *hack) Certificates() certificates.Interface {
+ panic("programmer error: using a cluster-unaware informer factory, need to cast this to use the cluster-aware one!")
+}
+
+func (s *hack) Coordination() coordination.Interface {
+ panic("programmer error: using a cluster-unaware informer factory, need to cast this to use the cluster-aware one!")
+}
+
+func (s *hack) Core() core.Interface {
+ panic("programmer error: using a cluster-unaware informer factory, need to cast this to use the cluster-aware one!")
+}
+
+func (s *hack) Discovery() discovery.Interface {
+ panic("programmer error: using a cluster-unaware informer factory, need to cast this to use the cluster-aware one!")
+}
+
+func (s *hack) Events() events.Interface {
+ panic("programmer error: using a cluster-unaware informer factory, need to cast this to use the cluster-aware one!")
+}
+
+func (s *hack) Extensions() extensions.Interface {
+ panic("programmer error: using a cluster-unaware informer factory, need to cast this to use the cluster-aware one!")
+}
+
+func (s *hack) Flowcontrol() flowcontrol.Interface {
+ panic("programmer error: using a cluster-unaware informer factory, need to cast this to use the cluster-aware one!")
+}
+
+func (s *hack) Networking() networking.Interface {
+ panic("programmer error: using a cluster-unaware informer factory, need to cast this to use the cluster-aware one!")
+}
+
+func (s *hack) Node() node.Interface {
+ panic("programmer error: using a cluster-unaware informer factory, need to cast this to use the cluster-aware one!")
+}
+
+func (s *hack) Policy() policy.Interface {
+ panic("programmer error: using a cluster-unaware informer factory, need to cast this to use the cluster-aware one!")
+}
+
+func (s *hack) Rbac() rbac.Interface {
+ panic("programmer error: using a cluster-unaware informer factory, need to cast this to use the cluster-aware one!")
+}
+
+func (s *hack) Scheduling() scheduling.Interface {
+ panic("programmer error: using a cluster-unaware informer factory, need to cast this to use the cluster-aware one!")
+}
+
+func (s *hack) Storage() storage.Interface {
+ panic("programmer error: using a cluster-unaware informer factory, need to cast this to use the cluster-aware one!")
+}
+
+func (s *hack) Storagemigration() storagemigration.Interface {
+ panic("programmer error: using a cluster-unaware informer factory, need to cast this to use the cluster-aware one!")
+}
+
+func (s *hack) ClusterAware() kcpkubernetesinformers.SharedInformerFactory {
+ return s.clusterAware
+}
diff --git a/staging/src/k8s.io/apiserver/pkg/kcp/crd_context.go b/staging/src/k8s.io/apiserver/pkg/kcp/crd_context.go
new file mode 100644
index 0000000000000..13ed9c106487a
--- /dev/null
+++ b/staging/src/k8s.io/apiserver/pkg/kcp/crd_context.go
@@ -0,0 +1,43 @@
+/*
+Copyright 2022 The KCP Authors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package kcp
+
+import "context"
+
+type key int
+
+var crdKey key
+
+// WithCustomResourceIndicator wraps ctx and returns a new context.Context that indicates the current request is for a
+// CustomResource. This is required to support wildcard (cross-cluster) partial metadata requests, as the keys in
+// storage for built-in types and custom resources differ in format. Built-in types have the format
+// /registry/$group/$resource/$cluster/[$namespace]/$name, whereas custom resources have the format
+// /registry/$group/$resource/$identity/$cluster/[$namespace]/$name.
+func WithCustomResourceIndicator(ctx context.Context) context.Context {
+ return context.WithValue(ctx, crdKey, true)
+}
+
+// CustomResourceIndicatorFrom returns true if this is a custom resource request.
+func CustomResourceIndicatorFrom(ctx context.Context) bool {
+ v := ctx.Value(crdKey)
+
+ if v == nil {
+ return false
+ }
+
+ return v.(bool)
+}
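
A minimal sketch of the indicator's use; the storage layer consults it to decide which of the two key formats to parse:

```go
package example

import (
	"context"

	"k8s.io/apiserver/pkg/kcp"
)

func keyHasIdentitySegment(ctx context.Context) bool {
	// Mark the request as CRD-backed, as the kcp CRD handler does.
	ctx = kcp.WithCustomResourceIndicator(ctx)
	// Storage can now expect /registry/$group/$resource/$identity/$cluster/... keys.
	return kcp.CustomResourceIndicatorFrom(ctx)
}
```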
diff --git a/staging/src/k8s.io/apiserver/pkg/registry/generic/registry/storage_factory.go b/staging/src/k8s.io/apiserver/pkg/registry/generic/registry/storage_factory.go
index d36dd263ff716..d79476f02e085 100644
--- a/staging/src/k8s.io/apiserver/pkg/registry/generic/registry/storage_factory.go
+++ b/staging/src/k8s.io/apiserver/pkg/registry/generic/registry/storage_factory.go
@@ -17,6 +17,7 @@ limitations under the License.
package registry
import (
+ "context"
"fmt"
"sync"
@@ -37,7 +38,7 @@ func StorageWithCacher() generic.StorageDecorator {
return func(
storageConfig *storagebackend.ConfigForResource,
resourcePrefix string,
- keyFunc func(obj runtime.Object) (string, error),
+ keyFunc func(ctx context.Context, obj runtime.Object) (string, error),
newFunc func() runtime.Object,
newListFunc func() runtime.Object,
getAttrsFunc storage.AttrFunc,
@@ -66,6 +67,8 @@ func StorageWithCacher() generic.StorageDecorator {
IndexerFuncs: triggerFuncs,
Indexers: indexers,
Codec: storageConfig.Codec,
+
+ KcpExtraStorageMetadata: storageConfig.KcpExtraStorageMetadata,
}
cacher, err := cacherstorage.NewCacherFromConfig(cacherConfig)
if err != nil {
diff --git a/staging/src/k8s.io/apiserver/pkg/registry/generic/registry/store.go b/staging/src/k8s.io/apiserver/pkg/registry/generic/registry/store.go
index 56e1720f76430..90376efb0dd7b 100644
--- a/staging/src/k8s.io/apiserver/pkg/registry/generic/registry/store.go
+++ b/staging/src/k8s.io/apiserver/pkg/registry/generic/registry/store.go
@@ -263,10 +263,32 @@ const (
resourceCountPollPeriodJitter = 1.2
)
+// NoNamespaceKeyRootFunc is the default function for constructing storage path roots,
+// prefixing them with the shard and logical cluster taken from the request context.
+// The namespaced key functions build on top of it by appending the namespace.
+ key := prefix
+ shard := genericapirequest.ShardFrom(ctx)
+ if shard.Wildcard() {
+ return key
+ }
+ cluster, err := genericapirequest.ValidClusterFrom(ctx)
+ if err != nil {
+ klog.Errorf("invalid context cluster value: %v", err)
+ return key
+ }
+ if !shard.Empty() {
+ key += "/" + shard.String()
+ }
+ if !cluster.Wildcard {
+ key += "/" + cluster.Name.String()
+ }
+ return key
+}
+
// NamespaceKeyRootFunc is the default function for constructing storage paths
// to resource directories enforcing namespace rules.
func NamespaceKeyRootFunc(ctx context.Context, prefix string) string {
- key := prefix
+ key := NoNamespaceKeyRootFunc(ctx, prefix)
ns, ok := genericapirequest.NamespaceFrom(ctx)
if ok && len(ns) > 0 {
key = key + "/" + ns
@@ -278,7 +300,6 @@ func NamespaceKeyRootFunc(ctx context.Context, prefix string) string {
// a resource relative to the given prefix enforcing namespace rules. If the
// context does not contain a namespace, it errors.
func NamespaceKeyFunc(ctx context.Context, prefix string, name string) (string, error) {
- key := NamespaceKeyRootFunc(ctx, prefix)
ns, ok := genericapirequest.NamespaceFrom(ctx)
if !ok || len(ns) == 0 {
return "", apierrors.NewBadRequest("Namespace parameter required.")
@@ -289,8 +310,7 @@ func NamespaceKeyFunc(ctx context.Context, prefix string, name string) (string,
if msgs := path.IsValidPathSegmentName(name); len(msgs) != 0 {
return "", apierrors.NewBadRequest(fmt.Sprintf("Name parameter invalid: %q: %s", name, strings.Join(msgs, ";")))
}
- key = key + "/" + name
- return key, nil
+ return NoNamespaceKeyRootFunc(ctx, prefix) + "/" + ns + "/" + name, nil
}
// NoNamespaceKeyFunc is the default function for constructing storage paths
@@ -302,8 +322,7 @@ func NoNamespaceKeyFunc(ctx context.Context, prefix string, name string) (string
if msgs := path.IsValidPathSegmentName(name); len(msgs) != 0 {
return "", apierrors.NewBadRequest(fmt.Sprintf("Name parameter invalid: %q: %s", name, strings.Join(msgs, ";")))
}
- key := prefix + "/" + name
- return key, nil
+ return NoNamespaceKeyRootFunc(ctx, prefix) + "/" + name, nil
}
// New implements RESTStorage.New.
@@ -478,7 +497,8 @@ func (e *Store) create(ctx context.Context, obj runtime.Object, createValidation
var finishCreate FinishFunc = finishNothing
// Init metadata as early as possible.
- if objectMeta, err := meta.Accessor(obj); err != nil {
+ objectMeta, err := meta.Accessor(obj)
+ if err != nil {
return nil, err
} else {
rest.FillObjectMetaSystemFields(objectMeta)
@@ -501,6 +521,12 @@ func (e *Store) create(ctx context.Context, obj runtime.Object, createValidation
if err := rest.BeforeCreate(e.CreateStrategy, ctx, obj); err != nil {
return nil, err
}
+
+ if _, found := objectMeta.GetAnnotations()[genericapirequest.ShardAnnotationKey]; found {
+ // Remove the shard annotation so it is not persisted
+ delete(objectMeta.GetAnnotations(), genericapirequest.ShardAnnotationKey)
+ }
+
// at this point we have a fully formed object. It is time to call the validators that the apiserver
// handling chain wants to enforce.
if createValidation != nil {
@@ -1068,6 +1094,13 @@ func (e *Store) updateForGracefulDeletionAndFinalizers(ctx context.Context, name
if err != nil {
return nil, err
}
+
+ // the following annotation key indicates that the request is from the cache server
+ // in that case we've decided not to require finalization, the object will be deleted immediately
+ if _, hasShardAnnotation := existingAccessor.GetAnnotations()[genericapirequest.ShardAnnotationKey]; hasShardAnnotation {
+ return existing, nil
+ }
+
needsUpdate, newFinalizers := deletionFinalizersForGarbageCollection(ctx, e, existingAccessor, options)
if needsUpdate {
existingAccessor.SetFinalizers(newFinalizers)
@@ -1563,6 +1596,13 @@ func (e *Store) CompleteWithOptions(options *generic.StoreOptions) error {
return fmt.Errorf("store for %s has an invalid prefix %q", e.DefaultQualifiedResource.String(), opts.ResourcePrefix)
}
+ if e.KeyFunc != nil {
+ panic(fmt.Sprintf("KeyFunc is illegal for %v: %T", e.DefaultQualifiedResource, e.KeyFunc))
+ }
+ if e.KeyRootFunc != nil {
+ panic(fmt.Sprintf("KeyRootFunc is illegal for %v: %T", e.DefaultQualifiedResource, e.KeyRootFunc))
+ }
+
// Set the default behavior for storage key generation
if e.KeyRootFunc == nil && e.KeyFunc == nil {
if isNamespaced {
@@ -1574,7 +1614,7 @@ func (e *Store) CompleteWithOptions(options *generic.StoreOptions) error {
}
} else {
e.KeyRootFunc = func(ctx context.Context) string {
- return prefix
+ return NoNamespaceKeyRootFunc(ctx, prefix)
}
e.KeyFunc = func(ctx context.Context, name string) (string, error) {
return NoNamespaceKeyFunc(ctx, prefix, name)
@@ -1584,17 +1624,17 @@ func (e *Store) CompleteWithOptions(options *generic.StoreOptions) error {
// We adapt the store's keyFunc so that we can use it with the StorageDecorator
// without making any assumptions about where objects are stored in etcd
- keyFunc := func(obj runtime.Object) (string, error) {
+ keyFunc := func(ctx context.Context, obj runtime.Object) (string, error) {
accessor, err := meta.Accessor(obj)
if err != nil {
return "", err
}
if isNamespaced {
- return e.KeyFunc(genericapirequest.WithNamespace(genericapirequest.NewContext(), accessor.GetNamespace()), accessor.GetName())
+ return e.KeyFunc(genericapirequest.WithNamespace(ctx, accessor.GetNamespace()), accessor.GetName())
}
- return e.KeyFunc(genericapirequest.NewContext(), accessor.GetName())
+ return e.KeyFunc(ctx, accessor.GetName())
}
if e.DeleteCollectionWorkers == 0 {
@@ -1658,7 +1698,7 @@ func (e *Store) CompleteWithOptions(options *generic.StoreOptions) error {
// startObservingCount starts monitoring given prefix and periodically updating metrics. It returns a function to stop collection.
func (e *Store) startObservingCount(period time.Duration, objectCountTracker flowcontrolrequest.StorageObjectCountTracker) func() {
- prefix := e.KeyRootFunc(genericapirequest.NewContext())
+ prefix := e.KeyRootFunc(genericapirequest.WithCluster(genericapirequest.NewContext(), genericapirequest.Cluster{Wildcard: true}))
resourceName := e.DefaultQualifiedResource.String()
klog.V(2).InfoS("Monitoring resource count at path", "resource", resourceName, "path", "/"+prefix)
stopCh := make(chan struct{})
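
Putting the new root functions together, a hedged sketch of the key shape they produce for a request scoped to shard `amber` and logical cluster `root:org` (both names illustrative):

```go
package example

import (
	"context"

	"github.com/kcp-dev/logicalcluster/v3"
	genericapirequest "k8s.io/apiserver/pkg/endpoints/request"
	genericregistry "k8s.io/apiserver/pkg/registry/generic/registry"
)

func exampleKey() (string, error) {
	ctx := genericapirequest.WithShard(
		genericapirequest.WithCluster(context.Background(),
			genericapirequest.Cluster{Name: logicalcluster.Name("root:org")}),
		genericapirequest.Shard("amber"))
	ctx = genericapirequest.WithNamespace(ctx, "default")
	// Yields "/registry/configmaps/amber/root:org/default/cm-1".
	return genericregistry.NamespaceKeyFunc(ctx, "/registry/configmaps", "cm-1")
}
```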
diff --git a/staging/src/k8s.io/apiserver/pkg/registry/generic/storage_decorator.go b/staging/src/k8s.io/apiserver/pkg/registry/generic/storage_decorator.go
index 4c2b2fc0ed55c..acaa17a7c0574 100644
--- a/staging/src/k8s.io/apiserver/pkg/registry/generic/storage_decorator.go
+++ b/staging/src/k8s.io/apiserver/pkg/registry/generic/storage_decorator.go
@@ -17,6 +17,8 @@ limitations under the License.
package generic
import (
+ "context"
+
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apiserver/pkg/storage"
"k8s.io/apiserver/pkg/storage/storagebackend"
@@ -29,7 +31,7 @@ import (
type StorageDecorator func(
config *storagebackend.ConfigForResource,
resourcePrefix string,
- keyFunc func(obj runtime.Object) (string, error),
+ keyFunc func(ctx context.Context, obj runtime.Object) (string, error),
newFunc func() runtime.Object,
newListFunc func() runtime.Object,
getAttrsFunc storage.AttrFunc,
@@ -41,7 +43,7 @@ type StorageDecorator func(
func UndecoratedStorage(
config *storagebackend.ConfigForResource,
resourcePrefix string,
- keyFunc func(obj runtime.Object) (string, error),
+ keyFunc func(ctx context.Context, obj runtime.Object) (string, error),
newFunc func() runtime.Object,
newListFunc func() runtime.Object,
getAttrsFunc storage.AttrFunc,
diff --git a/staging/src/k8s.io/apiserver/pkg/registry/generic/testing/tester.go b/staging/src/k8s.io/apiserver/pkg/registry/generic/testing/tester.go
index 78cc15bf151e5..227ca4f173fa4 100644
--- a/staging/src/k8s.io/apiserver/pkg/registry/generic/testing/tester.go
+++ b/staging/src/k8s.io/apiserver/pkg/registry/generic/testing/tester.go
@@ -27,6 +27,7 @@ import (
"k8s.io/apimachinery/pkg/fields"
"k8s.io/apimachinery/pkg/labels"
"k8s.io/apimachinery/pkg/runtime"
+ genericapirequest "k8s.io/apiserver/pkg/endpoints/request"
genericregistry "k8s.io/apiserver/pkg/registry/generic/registry"
"k8s.io/apiserver/pkg/registry/rest"
"k8s.io/apiserver/pkg/registry/rest/resttest"
@@ -143,11 +144,11 @@ func (t *Tester) TestWatch(valid runtime.Object, labelsPass, labelsFail []labels
// Helper functions
func (t *Tester) getObject(ctx context.Context, obj runtime.Object) (runtime.Object, error) {
+ ctx = genericapirequest.WithCluster(ctx, genericapirequest.Cluster{Name: t.tester.TestCluster()})
accessor, err := meta.Accessor(obj)
if err != nil {
return nil, err
}
-
result, err := t.storage.Get(ctx, accessor.GetName(), &metav1.GetOptions{})
if err != nil {
return nil, err
@@ -156,6 +157,7 @@ func (t *Tester) getObject(ctx context.Context, obj runtime.Object) (runtime.Obj
}
func (t *Tester) createObject(ctx context.Context, obj runtime.Object) error {
+ ctx = genericapirequest.WithCluster(ctx, genericapirequest.Cluster{Name: t.tester.TestCluster()})
accessor, err := meta.Accessor(obj)
if err != nil {
return err
diff --git a/staging/src/k8s.io/apiserver/pkg/registry/rest/meta.go b/staging/src/k8s.io/apiserver/pkg/registry/rest/meta.go
index fc4fc81e13845..882ba668b0f87 100644
--- a/staging/src/k8s.io/apiserver/pkg/registry/rest/meta.go
+++ b/staging/src/k8s.io/apiserver/pkg/registry/rest/meta.go
@@ -21,6 +21,7 @@ import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apimachinery/pkg/util/uuid"
+ genericapirequest "k8s.io/apiserver/pkg/endpoints/request"
)
// metav1Now returns metav1.Now(), but allows override for unit testing
@@ -28,6 +29,10 @@ var metav1Now = func() metav1.Time { return metav1.Now() }
// WipeObjectMetaSystemFields erases fields that are managed by the system on ObjectMeta.
func WipeObjectMetaSystemFields(meta metav1.Object) {
+ if _, found := meta.GetAnnotations()[genericapirequest.ShardAnnotationKey]; found {
+ // Do not wipe system fields if we are storing a cached object
+ return
+ }
meta.SetCreationTimestamp(metav1.Time{})
meta.SetUID("")
meta.SetDeletionTimestamp(nil)
@@ -37,6 +42,12 @@ func WipeObjectMetaSystemFields(meta metav1.Object) {
// FillObjectMetaSystemFields populates fields that are managed by the system on ObjectMeta.
func FillObjectMetaSystemFields(meta metav1.Object) {
+ if _, found := meta.GetAnnotations()[genericapirequest.ShardAnnotationKey]; found {
+		// In general the shard annotation is not attached to objects; the storage layer assigns it on the fly.
+		// Replicated objects from the cache server already have creationTimestamp and UID set, and overwriting
+		// them here would force an additional UPDATE request due to the mismatch, so simply return early.
+ return
+ }
meta.SetCreationTimestamp(metav1Now())
meta.SetUID(uuid.NewUUID())
}
diff --git a/staging/src/k8s.io/apiserver/pkg/registry/rest/resttest/resttest.go b/staging/src/k8s.io/apiserver/pkg/registry/rest/resttest/resttest.go
index f4f3519b521f3..fcb3b5423f199 100644
--- a/staging/src/k8s.io/apiserver/pkg/registry/rest/resttest/resttest.go
+++ b/staging/src/k8s.io/apiserver/pkg/registry/rest/resttest/resttest.go
@@ -25,6 +25,7 @@ import (
"testing"
"time"
+ "github.com/kcp-dev/logicalcluster/v3"
apiequality "k8s.io/apimachinery/pkg/api/equality"
"k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/api/meta"
@@ -102,10 +103,14 @@ func (t *Tester) TestNamespace() string {
return "test"
}
+func (t *Tester) TestCluster() logicalcluster.Name {
+	return logicalcluster.Name("root")
+}
+
// TestContext returns a namespaced context that will be used when making storage calls.
// Namespace is determined by TestNamespace()
func (t *Tester) TestContext() context.Context {
- return genericapirequest.WithNamespace(genericapirequest.NewContext(), t.TestNamespace())
+ return genericapirequest.WithCluster(genericapirequest.WithNamespace(genericapirequest.NewContext(), t.TestNamespace()), genericapirequest.Cluster{Name: t.TestCluster()})
}
func (t *Tester) getObjectMetaOrFail(obj runtime.Object) metav1.Object {
@@ -314,6 +319,7 @@ func (t *Tester) testCreateDryRunEquals(obj runtime.Object) {
createdFakeMeta.SetResourceVersion("")
createdMeta.SetResourceVersion("")
createdMeta.SetUID(createdFakeMeta.GetUID())
+ createdMeta.SetZZZ_DeprecatedClusterName(createdFakeMeta.GetZZZ_DeprecatedClusterName())
if e, a := created, createdFake; !apiequality.Semantic.DeepEqual(e, a) {
t.Errorf("unexpected obj: %#v, expected %#v", e, a)
@@ -341,6 +347,7 @@ func (t *Tester) testCreateEquals(obj runtime.Object, getFn GetFunc) {
createdMeta := t.getObjectMetaOrFail(created)
gotMeta := t.getObjectMetaOrFail(got)
createdMeta.SetResourceVersion(gotMeta.GetResourceVersion())
+ createdMeta.SetZZZ_DeprecatedClusterName(gotMeta.GetZZZ_DeprecatedClusterName())
if e, a := created, got; !apiequality.Semantic.DeepEqual(e, a) {
t.Errorf("unexpected obj: %#v, expected %#v", e, a)
@@ -402,7 +409,7 @@ func (t *Tester) testCreateHasMetadata(valid runtime.Object) {
func (t *Tester) testCreateIgnoresContextNamespace(valid runtime.Object, opts metav1.CreateOptions) {
// Ignore non-empty namespace in context
- ctx := genericapirequest.WithNamespace(genericapirequest.NewContext(), "not-default2")
+ ctx := genericapirequest.WithNamespace(genericapirequest.WithCluster(genericapirequest.NewContext(), genericapirequest.Cluster{Name: t.TestCluster()}), "not-default2")
// Ideally, we'd get an error back here, but at least verify the namespace wasn't persisted
created, err := t.storage.(rest.Creater).Create(ctx, valid.DeepCopyObject(), rest.ValidateAllObjectFunc, &opts)
@@ -421,7 +428,7 @@ func (t *Tester) testCreateIgnoresMismatchedNamespace(valid runtime.Object, opts
// Ignore non-empty namespace in object meta
objectMeta.SetNamespace("not-default")
- ctx := genericapirequest.WithNamespace(genericapirequest.NewContext(), "not-default2")
+ ctx := genericapirequest.WithNamespace(genericapirequest.WithCluster(genericapirequest.NewContext(), genericapirequest.Cluster{Name: t.TestCluster()}), "not-default2")
// Ideally, we'd get an error back here, but at least verify the namespace wasn't persisted
created, err := t.storage.(rest.Creater).Create(ctx, valid.DeepCopyObject(), rest.ValidateAllObjectFunc, &opts)
@@ -1164,14 +1171,14 @@ func (t *Tester) testGetDifferentNamespace(obj runtime.Object) {
objMeta := t.getObjectMetaOrFail(obj)
objMeta.SetName(t.namer(5))
- ctx1 := genericapirequest.WithNamespace(genericapirequest.NewContext(), "bar3")
+ ctx1 := genericapirequest.WithNamespace(genericapirequest.WithCluster(genericapirequest.NewContext(), genericapirequest.Cluster{Name: t.TestCluster()}), "bar3")
objMeta.SetNamespace(genericapirequest.NamespaceValue(ctx1))
_, err := t.storage.(rest.Creater).Create(ctx1, obj, rest.ValidateAllObjectFunc, &metav1.CreateOptions{})
if err != nil {
t.Errorf("unexpected error: %v", err)
}
- ctx2 := genericapirequest.WithNamespace(genericapirequest.NewContext(), "bar4")
+ ctx2 := genericapirequest.WithNamespace(genericapirequest.WithCluster(genericapirequest.NewContext(), genericapirequest.Cluster{Name: t.TestCluster()}), "bar4")
objMeta.SetNamespace(genericapirequest.NamespaceValue(ctx2))
_, err = t.storage.(rest.Creater).Create(ctx2, obj, rest.ValidateAllObjectFunc, &metav1.CreateOptions{})
if err != nil {
@@ -1225,8 +1232,8 @@ func (t *Tester) testGetFound(obj runtime.Object) {
}
func (t *Tester) testGetMimatchedNamespace(obj runtime.Object) {
- ctx1 := genericapirequest.WithNamespace(genericapirequest.NewContext(), "bar1")
- ctx2 := genericapirequest.WithNamespace(genericapirequest.NewContext(), "bar2")
+ ctx1 := genericapirequest.WithNamespace(genericapirequest.WithCluster(genericapirequest.NewContext(), genericapirequest.Cluster{Name: t.TestCluster()}), "bar1")
+ ctx2 := genericapirequest.WithNamespace(genericapirequest.WithCluster(genericapirequest.NewContext(), genericapirequest.Cluster{Name: t.TestCluster()}), "bar2")
objMeta := t.getObjectMetaOrFail(obj)
objMeta.SetName(t.namer(4))
objMeta.SetNamespace(genericapirequest.NamespaceValue(ctx1))
@@ -1319,7 +1326,7 @@ func (t *Tester) testListMatchLabels(obj runtime.Object, assignFn AssignFunc) {
foo4Meta.SetNamespace(genericapirequest.NamespaceValue(ctx))
foo4Meta.SetLabels(testLabels)
- objs := ([]runtime.Object{foo3, foo4})
+ objs := []runtime.Object{foo3, foo4}
assignFn(objs)
filtered := []runtime.Object{objs[1]}
@@ -1373,7 +1380,7 @@ func (t *Tester) testListTableConversion(obj runtime.Object, assignFn AssignFunc
foo4Meta.SetNamespace(genericapirequest.NamespaceValue(ctx))
foo4Meta.SetLabels(testLabels)
- objs := ([]runtime.Object{foo3, foo4})
+ objs := []runtime.Object{foo3, foo4}
assignFn(objs)
diff --git a/staging/src/k8s.io/apiserver/pkg/registry/rest/update.go b/staging/src/k8s.io/apiserver/pkg/registry/rest/update.go
index dc63caf0b5cbc..3cc63159f4a0d 100644
--- a/staging/src/k8s.io/apiserver/pkg/registry/rest/update.go
+++ b/staging/src/k8s.io/apiserver/pkg/registry/rest/update.go
@@ -124,7 +124,12 @@ func BeforeUpdate(strategy RESTUpdateStrategy, ctx context.Context, obj, old run
if err != nil {
return err
}
- objectMeta.SetGeneration(oldMeta.GetGeneration())
+ if len(oldMeta.GetAnnotations()[genericapirequest.ShardAnnotationKey]) == 0 || objectMeta.GetGeneration() == 0 {
+		// The absence of the annotation indicates the object is NOT from the cache server; in that case,
+		// or when the incoming object has no generation set, carry the generation over from the old object.
+		// Otherwise we are dealing with a cache-server object that brings its own generation.
+ objectMeta.SetGeneration(oldMeta.GetGeneration())
+ }
strategy.PrepareForUpdate(ctx, obj, old)
diff --git a/staging/src/k8s.io/apiserver/pkg/server/config.go b/staging/src/k8s.io/apiserver/pkg/server/config.go
index 3e8a6b8c0685e..b7ae354f00184 100644
--- a/staging/src/k8s.io/apiserver/pkg/server/config.go
+++ b/staging/src/k8s.io/apiserver/pkg/server/config.go
@@ -35,6 +35,7 @@ import (
"github.com/google/uuid"
"golang.org/x/crypto/cryptobyte"
jsonpatch "gopkg.in/evanphx/json-patch.v4"
+ "k8s.io/apiserver/pkg/informerfactoryhack"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
@@ -903,7 +904,7 @@ func (c completedConfig) New(name string, delegationTarget DelegationTarget) (*G
if c.SharedInformerFactory != nil {
if !s.isPostStartHookRegistered(genericApiServerHookName) {
err := s.AddPostStartHook(genericApiServerHookName, func(hookContext PostStartHookContext) error {
- c.SharedInformerFactory.Start(hookContext.Done())
+ informerfactoryhack.Unwrap(c.SharedInformerFactory).Start(hookContext.Done())
return nil
})
if err != nil {
@@ -911,7 +912,7 @@ func (c completedConfig) New(name string, delegationTarget DelegationTarget) (*G
}
}
// TODO: Once we get rid of /healthz consider changing this to post-start-hook.
- err := s.AddReadyzChecks(healthz.NewInformerSyncHealthz(c.SharedInformerFactory))
+ err := s.AddReadyzChecks(healthz.NewInformerSyncHealthz(informerfactoryhack.Unwrap(c.SharedInformerFactory)))
if err != nil {
return nil, err
}
@@ -1008,16 +1009,34 @@ func (c completedConfig) New(name string, delegationTarget DelegationTarget) (*G
func BuildHandlerChainWithStorageVersionPrecondition(apiHandler http.Handler, c *Config) http.Handler {
// WithStorageVersionPrecondition needs the WithRequestInfo to run first
handler := genericapifilters.WithStorageVersionPrecondition(apiHandler, c.StorageVersionManager, c.Serializer)
- return DefaultBuildHandlerChain(handler, c)
+ return DefaultBuildHandlerChainFromAuthzToCompletion(handler, c)
}
+// DefaultBuildHandlerChain is split into three segments, allowing additional handlers to be
+// inserted between them:
+//   DefaultBuildHandlerChainFromStartToBeforeImpersonation
+//   DefaultBuildHandlerChainFromImpersonationToAuthz
+//   DefaultBuildHandlerChainFromAuthzToCompletion
+// Note that the segments are composed inside-out: the authz-to-completion segment is applied
+// first and ends up innermost, so it runs last on each request.
+
func DefaultBuildHandlerChain(apiHandler http.Handler, c *Config) http.Handler {
- handler := apiHandler
+ handler := DefaultBuildHandlerChainFromAuthzToCompletion(apiHandler, c)
+ handler = DefaultBuildHandlerChainFromImpersonationToAuthz(handler, c)
+ handler = DefaultBuildHandlerChainFromStartToBeforeImpersonation(handler, c)
+ return handler
+}
+// DefaultBuildHandlerChainFromAuthzToCompletion builds the handler chain from authorization to completion. It is the last part of the chain.
+func DefaultBuildHandlerChainFromAuthzToCompletion(apiHandler http.Handler, c *Config) http.Handler {
+ handler := apiHandler
handler = filterlatency.TrackCompleted(handler)
handler = genericapifilters.WithAuthorization(handler, c.Authorization.Authorizer, c.Serializer)
handler = filterlatency.TrackStarted(handler, c.TracerProvider, "authorization")
+ return handler
+}
+// DefaultBuildHandlerChainFromImpersonationToAuthz builds the handler chain from impersonation to authorization. It is the middle part of the chain.
+func DefaultBuildHandlerChainFromImpersonationToAuthz(apiHandler http.Handler, c *Config) http.Handler {
+ handler := apiHandler
if c.FlowControl != nil {
workEstimatorCfg := flowcontrolrequest.DefaultWorkEstimatorConfig()
requestWorkEstimator := flowcontrolrequest.NewWorkEstimator(
@@ -1033,6 +1052,12 @@ func DefaultBuildHandlerChain(apiHandler http.Handler, c *Config) http.Handler {
handler = genericapifilters.WithImpersonation(handler, c.Authorization.Authorizer, c.Serializer)
handler = filterlatency.TrackStarted(handler, c.TracerProvider, "impersonation")
+ return handler
+}
+
+// DefaultBuildHandlerChainFromStartToBeforeImpersonation builds the handler chain from the start to just before impersonation. It is the first part of the chain.
+func DefaultBuildHandlerChainFromStartToBeforeImpersonation(apiHandler http.Handler, c *Config) http.Handler {
+ handler := apiHandler
handler = filterlatency.TrackCompleted(handler)
handler = genericapifilters.WithAudit(handler, c.AuditBackend, c.AuditPolicyRuleEvaluator, c.LongRunningFunc)
handler = filterlatency.TrackStarted(handler, c.TracerProvider, "audit")
diff --git a/staging/src/k8s.io/apiserver/pkg/server/genericapiserver.go b/staging/src/k8s.io/apiserver/pkg/server/genericapiserver.go
index a0ff71b9b8e85..f09939c056303 100644
--- a/staging/src/k8s.io/apiserver/pkg/server/genericapiserver.go
+++ b/staging/src/k8s.io/apiserver/pkg/server/genericapiserver.go
@@ -47,6 +47,8 @@ import (
genericapi "k8s.io/apiserver/pkg/endpoints"
"k8s.io/apiserver/pkg/endpoints/discovery"
discoveryendpoint "k8s.io/apiserver/pkg/endpoints/discovery/aggregated"
+ genericrequest "k8s.io/apiserver/pkg/endpoints/request"
"k8s.io/apiserver/pkg/features"
"k8s.io/apiserver/pkg/registry/rest"
"k8s.io/apiserver/pkg/server/healthz"
@@ -59,7 +61,6 @@ import (
"k8s.io/klog/v2"
openapibuilder3 "k8s.io/kube-openapi/pkg/builder3"
openapicommon "k8s.io/kube-openapi/pkg/common"
- "k8s.io/kube-openapi/pkg/handler"
"k8s.io/kube-openapi/pkg/handler3"
openapiutil "k8s.io/kube-openapi/pkg/util"
"k8s.io/kube-openapi/pkg/validation/spec"
@@ -170,7 +171,7 @@ type GenericAPIServer struct {
// OpenAPIVersionedService controls the /openapi/v2 endpoint, and can be used to update the served spec.
// It is set during PrepareRun if `openAPIConfig` is non-nil unless `skipOpenAPIInstallation` is true.
- OpenAPIVersionedService *handler.OpenAPIService
+ OpenAPIVersionedService routes.OpenAPIServiceProvider
// OpenAPIV3VersionedService controls the /openapi/v3 endpoint and can be used to update the served spec.
// It is set during PrepareRun if `openAPIConfig` is non-nil unless `skipOpenAPIInstallation` is true.
@@ -314,7 +315,7 @@ type DelegationTarget interface {
HealthzChecks() []healthz.HealthChecker
// ListedPaths returns the paths for supporting an index
- ListedPaths() []string
+ ListedPaths(cluster *genericrequest.Cluster) []string
// NextDelegate returns the next delegationTarget in the chain of delegations
NextDelegate() DelegationTarget
@@ -344,8 +345,8 @@ func (s *GenericAPIServer) PreShutdownHooks() map[string]preShutdownHookEntry {
func (s *GenericAPIServer) HealthzChecks() []healthz.HealthChecker {
return s.healthzRegistry.checks
}
-func (s *GenericAPIServer) ListedPaths() []string {
- return s.listedPathProvider.ListedPaths()
+func (s *GenericAPIServer) ListedPaths(cluster *genericrequest.Cluster) []string {
+ return s.listedPathProvider.ListedPaths(cluster)
}
func (s *GenericAPIServer) NextDelegate() DelegationTarget {
@@ -411,7 +412,7 @@ func (s emptyDelegate) PreShutdownHooks() map[string]preShutdownHookEntry {
func (s emptyDelegate) HealthzChecks() []healthz.HealthChecker {
return []healthz.HealthChecker{}
}
-func (s emptyDelegate) ListedPaths() []string {
+func (s emptyDelegate) ListedPaths(cluster *genericrequest.Cluster) []string {
return []string{}
}
func (s emptyDelegate) NextDelegate() DelegationTarget {
diff --git a/staging/src/k8s.io/apiserver/pkg/server/handler.go b/staging/src/k8s.io/apiserver/pkg/server/handler.go
index b829ade745cc1..7436820b10bee 100644
--- a/staging/src/k8s.io/apiserver/pkg/server/handler.go
+++ b/staging/src/k8s.io/apiserver/pkg/server/handler.go
@@ -25,12 +25,14 @@ import (
"strings"
"github.com/emicklei/go-restful/v3"
+ "github.com/kcp-dev/logicalcluster/v3"
"k8s.io/klog/v2"
apierrors "k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apiserver/pkg/endpoints/handlers/responsewriters"
+ genericrequest "k8s.io/apiserver/pkg/endpoints/request"
"k8s.io/apiserver/pkg/server/mux"
)
@@ -64,6 +66,8 @@ type APIServerHandler struct {
// we should consider completely removing gorestful.
// Other servers should only use this opaquely to delegate to an API server.
Director http.Handler
+
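+ // PathValidForCluster, when set, filters the paths listed at the index (/) per logical
+ // cluster: only paths for which it returns true are reported for the given cluster.
+ // A nil value keeps the upstream behavior of listing every registered path.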
+ PathValidForCluster func(path string, clusterName logicalcluster.Name) bool
}
// HandlerChainBuilderFn is used to wrap the GoRestfulContainer handler using the provided handler chain.
@@ -100,13 +104,18 @@ func NewAPIServerHandler(name string, s runtime.NegotiatedSerializer, handlerCha
}
// ListedPaths returns the paths that should be shown under /
-func (a *APIServerHandler) ListedPaths() []string {
+func (a *APIServerHandler) ListedPaths(cluster *genericrequest.Cluster) []string {
var handledPaths []string
// Extract the paths handled using restful.WebService
for _, ws := range a.GoRestfulContainer.RegisteredWebServices() {
handledPaths = append(handledPaths, ws.RootPath())
}
- handledPaths = append(handledPaths, a.NonGoRestfulMux.ListedPaths()...)
+
+ for _, path := range a.NonGoRestfulMux.ListedPaths(logicalcluster.Name("")) {
+ if a.PathValidForCluster == nil || a.PathValidForCluster(path, cluster.Name) {
+ handledPaths = append(handledPaths, path)
+ }
+ }
sort.Strings(handledPaths)
return handledPaths
diff --git a/staging/src/k8s.io/apiserver/pkg/server/mux/pathrecorder.go b/staging/src/k8s.io/apiserver/pkg/server/mux/pathrecorder.go
index 3ed92d96c6d29..29bee9bfe03a3 100644
--- a/staging/src/k8s.io/apiserver/pkg/server/mux/pathrecorder.go
+++ b/staging/src/k8s.io/apiserver/pkg/server/mux/pathrecorder.go
@@ -27,6 +27,7 @@ import (
"k8s.io/klog/v2"
+ "github.com/kcp-dev/logicalcluster/v3"
utilruntime "k8s.io/apimachinery/pkg/util/runtime"
"k8s.io/apimachinery/pkg/util/sets"
)
@@ -95,7 +96,7 @@ func NewPathRecorderMux(name string) *PathRecorderMux {
}
// ListedPaths returns the registered handler exposedPaths.
-func (m *PathRecorderMux) ListedPaths() []string {
+func (m *PathRecorderMux) ListedPaths(clusterName logicalcluster.Name) []string {
m.lock.Lock()
handledPaths := append([]string{}, m.exposedPaths...)
m.lock.Unlock()
diff --git a/staging/src/k8s.io/apiserver/pkg/server/mux/pathrecorder_test.go b/staging/src/k8s.io/apiserver/pkg/server/mux/pathrecorder_test.go
index 65c8b028f8ea5..697ca27205994 100644
--- a/staging/src/k8s.io/apiserver/pkg/server/mux/pathrecorder_test.go
+++ b/staging/src/k8s.io/apiserver/pkg/server/mux/pathrecorder_test.go
@@ -21,6 +21,7 @@ import (
"net/http/httptest"
"testing"
+ "github.com/kcp-dev/logicalcluster/v3"
"github.com/stretchr/testify/assert"
)
@@ -28,8 +29,8 @@ func TestSecretHandlers(t *testing.T) {
c := NewPathRecorderMux("test")
c.UnlistedHandleFunc("/secret", func(http.ResponseWriter, *http.Request) {})
c.HandleFunc("/nonswagger", func(http.ResponseWriter, *http.Request) {})
- assert.NotContains(t, c.ListedPaths(), "/secret")
- assert.Contains(t, c.ListedPaths(), "/nonswagger")
+ assert.NotContains(t, c.ListedPaths(logicalcluster.New("")), "/secret")
+ assert.Contains(t, c.ListedPaths(logicalcluster.New("")), "/nonswagger")
}
func TestUnregisterHandlers(t *testing.T) {
@@ -44,15 +45,15 @@ func TestUnregisterHandlers(t *testing.T) {
c.HandleFunc("/nonswagger", func(http.ResponseWriter, *http.Request) {
first = first + 1
})
- assert.NotContains(t, c.ListedPaths(), "/secret")
- assert.Contains(t, c.ListedPaths(), "/nonswagger")
+ assert.NotContains(t, c.ListedPaths(logicalcluster.New("")), "/secret")
+ assert.Contains(t, c.ListedPaths(logicalcluster.New("")), "/nonswagger")
resp, _ := http.Get(s.URL + "/nonswagger")
assert.Equal(t, 1, first)
assert.Equal(t, http.StatusOK, resp.StatusCode)
c.Unregister("/nonswagger")
- assert.NotContains(t, c.ListedPaths(), "/nonswagger")
+ assert.NotContains(t, c.ListedPaths(logicalcluster.New("")), "/nonswagger")
resp, _ = http.Get(s.URL + "/nonswagger")
assert.Equal(t, 1, first)
@@ -61,7 +62,7 @@ func TestUnregisterHandlers(t *testing.T) {
c.HandleFunc("/nonswagger", func(http.ResponseWriter, *http.Request) {
second = second + 1
})
- assert.Contains(t, c.ListedPaths(), "/nonswagger")
+ assert.Contains(t, c.ListedPaths(logicalcluster.New("")), "/nonswagger")
resp, _ = http.Get(s.URL + "/nonswagger")
assert.Equal(t, 1, first)
assert.Equal(t, 1, second)
@@ -98,8 +99,8 @@ func TestPrefixHandlers(t *testing.T) {
fallThroughCount = fallThroughCount + 1
}))
- assert.NotContains(t, c.ListedPaths(), "/secretPrefix/")
- assert.Contains(t, c.ListedPaths(), "/publicPrefix/")
+ assert.NotContains(t, c.ListedPaths(logicalcluster.New("")), "/secretPrefix/")
+ assert.Contains(t, c.ListedPaths(logicalcluster.New("")), "/publicPrefix/")
resp, _ := http.Get(s.URL + "/fallthrough")
assert.Equal(t, 1, fallThroughCount)
diff --git a/staging/src/k8s.io/apiserver/pkg/server/options/admission.go b/staging/src/k8s.io/apiserver/pkg/server/options/admission.go
index 6b4669e450637..7c200d9f3b032 100644
--- a/staging/src/k8s.io/apiserver/pkg/server/options/admission.go
+++ b/staging/src/k8s.io/apiserver/pkg/server/options/admission.go
@@ -19,14 +19,12 @@ package options
import (
"fmt"
"strings"
- "time"
"github.com/spf13/pflag"
"k8s.io/apimachinery/pkg/runtime"
utilruntime "k8s.io/apimachinery/pkg/util/runtime"
"k8s.io/apimachinery/pkg/util/sets"
- utilwait "k8s.io/apimachinery/pkg/util/wait"
"k8s.io/apiserver/pkg/admission"
"k8s.io/apiserver/pkg/admission/initializer"
admissionmetrics "k8s.io/apiserver/pkg/admission/metrics"
@@ -39,11 +37,9 @@ import (
apiserverapiv1 "k8s.io/apiserver/pkg/apis/apiserver/v1"
apiserverapiv1alpha1 "k8s.io/apiserver/pkg/apis/apiserver/v1alpha1"
"k8s.io/apiserver/pkg/server"
- cacheddiscovery "k8s.io/client-go/discovery/cached/memory"
"k8s.io/client-go/dynamic"
"k8s.io/client-go/informers"
"k8s.io/client-go/kubernetes"
- "k8s.io/client-go/restmapper"
"k8s.io/component-base/featuregate"
)
@@ -151,24 +147,11 @@ func (a *AdmissionOptions) ApplyTo(
return fmt.Errorf("failed to read plugin config: %v", err)
}
- discoveryClient := cacheddiscovery.NewMemCacheClient(kubeClient.Discovery())
- discoveryRESTMapper := restmapper.NewDeferredDiscoveryRESTMapper(discoveryClient)
genericInitializer := initializer.New(kubeClient, dynamicClient, informers, c.Authorization.Authorizer, features,
- c.DrainedNotify(), discoveryRESTMapper)
+ c.DrainedNotify(), nil)
initializersChain := admission.PluginInitializers{genericInitializer}
initializersChain = append(initializersChain, pluginInitializers...)
- admissionPostStartHook := func(hookContext server.PostStartHookContext) error {
- discoveryRESTMapper.Reset()
- go utilwait.Until(discoveryRESTMapper.Reset, 30*time.Second, hookContext.Done())
- return nil
- }
-
- err = c.AddPostStartHook("start-apiserver-admission-initializer", admissionPostStartHook)
- if err != nil {
- return fmt.Errorf("failed to add post start hook for policy admission: %w", err)
- }
-
admissionChain, err := a.Plugins.NewFromPlugins(pluginNames, pluginsConfigProvider, initializersChain, a.Decorators)
if err != nil {
return err
diff --git a/staging/src/k8s.io/apiserver/pkg/server/routes/index.go b/staging/src/k8s.io/apiserver/pkg/server/routes/index.go
index 14075798867cc..47572a4e3a6a7 100644
--- a/staging/src/k8s.io/apiserver/pkg/server/routes/index.go
+++ b/staging/src/k8s.io/apiserver/pkg/server/routes/index.go
@@ -22,23 +22,24 @@ import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/util/sets"
"k8s.io/apiserver/pkg/endpoints/handlers/responsewriters"
+ genericapirequest "k8s.io/apiserver/pkg/endpoints/request"
"k8s.io/apiserver/pkg/server/mux"
)
// ListedPathProvider is an interface for providing paths that should be reported at /.
type ListedPathProvider interface {
// ListedPaths is an alphabetically sorted list of paths to be reported at /.
- ListedPaths() []string
+ ListedPaths(cluster *genericapirequest.Cluster) []string
}
// ListedPathProviders is a convenient way to combine multiple ListedPathProviders
type ListedPathProviders []ListedPathProvider
// ListedPaths unions and sorts the included paths.
-func (p ListedPathProviders) ListedPaths() []string {
+func (p ListedPathProviders) ListedPaths(cluster *genericapirequest.Cluster) []string {
ret := sets.String{}
for _, provider := range p {
- for _, path := range provider.ListedPaths() {
+ for _, path := range provider.ListedPaths(cluster) {
ret.Insert(path)
}
}
@@ -65,5 +66,6 @@ type IndexLister struct {
// ServeHTTP serves the available paths.
func (i IndexLister) ServeHTTP(w http.ResponseWriter, r *http.Request) {
- responsewriters.WriteRawJSON(i.StatusCode, metav1.RootPaths{Paths: i.PathProvider.ListedPaths()}, w)
+ cluster := genericapirequest.ClusterFrom(r.Context())
+ responsewriters.WriteRawJSON(i.StatusCode, metav1.RootPaths{Paths: i.PathProvider.ListedPaths(cluster)}, w)
}
diff --git a/staging/src/k8s.io/apiserver/pkg/server/routes/openapi.go b/staging/src/k8s.io/apiserver/pkg/server/routes/openapi.go
index 12c8b1ad9100b..f24b88ea7d809 100644
--- a/staging/src/k8s.io/apiserver/pkg/server/routes/openapi.go
+++ b/staging/src/k8s.io/apiserver/pkg/server/routes/openapi.go
@@ -17,12 +17,18 @@ limitations under the License.
package routes
import (
- restful "github.com/emicklei/go-restful/v3"
+ "net/http"
+
+ "github.com/emicklei/go-restful/v3"
+
+ "github.com/kcp-dev/logicalcluster/v3"
"k8s.io/klog/v2"
+ genericapirequest "k8s.io/apiserver/pkg/endpoints/request"
"k8s.io/apiserver/pkg/server/mux"
builder2 "k8s.io/kube-openapi/pkg/builder"
"k8s.io/kube-openapi/pkg/builder3"
+ "k8s.io/kube-openapi/pkg/cached"
"k8s.io/kube-openapi/pkg/common"
"k8s.io/kube-openapi/pkg/common/restfuladapter"
"k8s.io/kube-openapi/pkg/handler"
@@ -36,17 +42,129 @@ type OpenAPI struct {
V3Config *common.OpenAPIV3Config
}
+// OpenAPIServiceProvider is a hacky way to
+// replace a single OpenAPIService with a provider that serves
+// a distinct OpenAPIService per logical cluster.
+// This is required to implement CRD tenancy and keep the OpenAPI
+// models consistent with the current logical cluster.
+//
+// However this is just a first step, since a better way
+// would be to completely avoid the need to register an OpenAPIService
+// for each logical cluster. See the additional comments below.
+type OpenAPIServiceProvider interface {
+ ForCluster(clusterName logicalcluster.Name) *handler.OpenAPIService
+ AddCuster(clusterName logicalcluster.Name)
+ RemoveCuster(clusterName logicalcluster.Name)
+ UpdateSpecLazy(swagger cached.Value[*spec.Swagger])
+}
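+
+// Illustrative usage sketch (assumed, not the actual wiring): a publisher maintaining
+// per-cluster specs would be expected to do something like
+//
+//	provider.AddCuster(clusterName)
+//	if svc := provider.ForCluster(clusterName); svc != nil {
+//		svc.UpdateSpec(clusterSpec) // hypothetical per-cluster spec (error handling omitted)
+//	}
+//	// ... and RemoveCuster(clusterName) once the logical cluster goes away.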
+
+type clusterAwarePathHandler struct {
+ clusterName logicalcluster.Name
+ addHandlerForCluster func(clusterName logicalcluster.Name, handler http.Handler)
+}
+
+func (c *clusterAwarePathHandler) Handle(path string, handler http.Handler) {
+ c.addHandlerForCluster(c.clusterName, handler)
+}
+
+// HACK: This is the implementation of OpenAPIServiceProvider
+// that allows supporting several logical clusters for CRD tenancy.
+//
+// However this should be considered a temporary step to cope with the
+// current design of OpenAPI publishing: having to register every logical
+// cluster makes creating logical clusters more expensive.
+// Instead, we expect to slowly refactor the OpenAPI generation code so
+// that it can be used dynamically, with time- or size-limited caches
+// serving the calculated specs.
+// Finally, two development principles for the logical cluster prototype:
+// - don't do static registration of logical clusters
+// - do lazy instantiation wherever possible so that starting a new logical cluster remains as cheap as possible
+type openAPIServiceProvider struct {
+ staticSpec *spec.Swagger
+ defaultOpenAPIServiceHandler http.Handler
+ defaultOpenAPIService *handler.OpenAPIService
+ openAPIServices map[logicalcluster.Name]*handler.OpenAPIService
+ handlers map[logicalcluster.Name]http.Handler
+ path string
+ mux *mux.PathRecorderMux
+}
+
+var _ OpenAPIServiceProvider = (*openAPIServiceProvider)(nil)
+
+func (p *openAPIServiceProvider) ForCluster(clusterName logicalcluster.Name) *handler.OpenAPIService {
+ return p.openAPIServices[clusterName]
+}
+
+func (p *openAPIServiceProvider) AddCuster(clusterName logicalcluster.Name) {
+ if _, found := p.openAPIServices[clusterName]; !found {
+ openAPIVersionedService := handler.NewOpenAPIService(p.staticSpec)
+
+ openAPIVersionedService.RegisterOpenAPIVersionedService(p.path, &clusterAwarePathHandler{
+ clusterName: clusterName,
+ addHandlerForCluster: func(clusterName logicalcluster.Name, handler http.Handler) {
+ p.handlers[clusterName] = handler
+ },
+ })
+
+ p.openAPIServices[clusterName] = openAPIVersionedService
+ }
+}
+
+func (p *openAPIServiceProvider) RemoveCuster(clusterName logicalcluster.Name) {
+ delete(p.openAPIServices, clusterName)
+ delete(p.handlers, clusterName)
+}
+
+func (p *openAPIServiceProvider) ServeHTTP(resp http.ResponseWriter, req *http.Request) {
+ cluster := genericapirequest.ClusterFrom(req.Context())
+ if cluster == nil {
+ p.defaultOpenAPIServiceHandler.ServeHTTP(resp, req)
+ return
+ }
+ handler, found := p.handlers[cluster.Name]
+ if !found {
+ resp.WriteHeader(http.StatusNotFound)
+ return
+ }
+ handler.ServeHTTP(resp, req)
+}
+
+func (p *openAPIServiceProvider) UpdateSpecLazy(openapiSpec cached.Value[*spec.Swagger]) {
+ p.defaultOpenAPIService.UpdateSpecLazy(openapiSpec)
+}
+
+func (p *openAPIServiceProvider) Register() {
+ defaultOpenAPIService := handler.NewOpenAPIService(p.staticSpec)
+
+ defaultOpenAPIService.RegisterOpenAPIVersionedService(p.path, &clusterAwarePathHandler{
+ addHandlerForCluster: func(clusterName logicalcluster.Name, handler http.Handler) {
+ p.defaultOpenAPIServiceHandler = handler
+ },
+ })
+
+ p.defaultOpenAPIService = defaultOpenAPIService
+ p.mux.Handle(p.path, p)
+}
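+
+// Note on dispatch: Register installs the provider itself as the handler for p.path;
+// per-request routing then happens in ServeHTTP above, keyed by the cluster found in
+// the request context, falling back to the default handler when none is present.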
+
// Install adds the SwaggerUI webservice to the given mux.
-func (oa OpenAPI) InstallV2(c *restful.Container, mux *mux.PathRecorderMux) (*handler.OpenAPIService, *spec.Swagger) {
- spec, err := builder2.BuildOpenAPISpecFromRoutes(restfuladapter.AdaptWebServices(c.RegisteredWebServices()), oa.Config)
+func (oa OpenAPI) InstallV2(c *restful.Container, mux *mux.PathRecorderMux) (OpenAPIServiceProvider, *spec.Swagger) {
+ spec, err := builder2.BuildOpenAPISpec(c.RegisteredWebServices(), oa.Config)
if err != nil {
klog.Fatalf("Failed to build open api spec for root: %v", err)
}
spec.Definitions = handler.PruneDefaults(spec.Definitions)
- openAPIVersionedService := handler.NewOpenAPIService(spec)
- openAPIVersionedService.RegisterOpenAPIVersionedService("/openapi/v2", mux)
- return openAPIVersionedService, spec
+ provider := &openAPIServiceProvider{
+ mux: mux,
+ staticSpec: spec,
+ openAPIServices: map[logicalcluster.Name]*handler.OpenAPIService{},
+ handlers: map[logicalcluster.Name]http.Handler{},
+ path: "/openapi/v2",
+ }
+
+ provider.Register()
+
+ return provider, spec
}
// InstallV3 adds the static group/versions defined in the RegisteredWebServices to the OpenAPI v3 spec
diff --git a/staging/src/k8s.io/apiserver/pkg/server/storage/storage_factory.go b/staging/src/k8s.io/apiserver/pkg/server/storage/storage_factory.go
index f4ccc62f65c9f..1059388ef0232 100644
--- a/staging/src/k8s.io/apiserver/pkg/server/storage/storage_factory.go
+++ b/staging/src/k8s.io/apiserver/pkg/server/storage/storage_factory.go
@@ -71,6 +71,10 @@ type DefaultStorageFactory struct {
DefaultResourcePrefixes map[schema.GroupResource]string
+ // LegacyUseResourceAsPrefixDefault applies the legacy behavior of defaulting the
+ // prefix to the lowercased resource name when no override is set, instead of the
+ // new group/resource form.
+
// DefaultMediaType is the media type used to store resources. If it is not set, "application/json" is used.
DefaultMediaType string
@@ -365,8 +369,18 @@ func (s *DefaultStorageFactory) ResourcePrefix(groupResource schema.GroupResourc
etcdResourcePrefix = exactResourceOverride.etcdResourcePrefix
}
if len(etcdResourcePrefix) == 0 {
- etcdResourcePrefix = strings.ToLower(chosenStorageResource.Resource)
+ if s.LegacyUseResourceAsPrefixDefault {
+ etcdResourcePrefix = strings.ToLower(chosenStorageResource.Resource)
+ } else {
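+ // e.g. configmaps (core group) -> "core/configmaps", deployments in "apps" ->
+ // "apps/deployments", so resources with the same name in different groups no
+ // longer collide on one etcd prefix (illustrative examples).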
+ groupName := chosenStorageResource.Group
+ if len(groupName) == 0 {
+ groupName = "core"
+ }
+ etcdResourcePrefix = groupName + "/" + chosenStorageResource.Resource
+ }
}
+ klog.V(6).Infof("prefix for %s=%s", groupResource, etcdResourcePrefix)
+
return etcdResourcePrefix
}
diff --git a/staging/src/k8s.io/apiserver/pkg/storage/cacher/cacher.go b/staging/src/k8s.io/apiserver/pkg/storage/cacher/cacher.go
index 5d18d83133e51..9d377cdf9db37 100644
--- a/staging/src/k8s.io/apiserver/pkg/storage/cacher/cacher.go
+++ b/staging/src/k8s.io/apiserver/pkg/storage/cacher/cacher.go
@@ -21,6 +21,7 @@ import (
"fmt"
"net/http"
"reflect"
+ "strconv"
"strings"
"sync"
"time"
@@ -40,12 +41,15 @@ import (
"k8s.io/apimachinery/pkg/watch"
"k8s.io/apiserver/pkg/audit"
"k8s.io/apiserver/pkg/endpoints/request"
+ genericapirequest "k8s.io/apiserver/pkg/endpoints/request"
"k8s.io/apiserver/pkg/features"
+ kcpapi "k8s.io/apiserver/pkg/kcp"
"k8s.io/apiserver/pkg/storage"
"k8s.io/apiserver/pkg/storage/cacher/delegator"
"k8s.io/apiserver/pkg/storage/cacher/metrics"
"k8s.io/apiserver/pkg/storage/cacher/progress"
etcdfeature "k8s.io/apiserver/pkg/storage/feature"
+ "k8s.io/apiserver/pkg/storage/storagebackend"
utilfeature "k8s.io/apiserver/pkg/util/feature"
"k8s.io/client-go/tools/cache"
"k8s.io/component-base/tracing"
@@ -97,7 +101,7 @@ type Config struct {
ResourcePrefix string
// KeyFunc is used to get a key in the underlying storage for a given object.
- KeyFunc func(runtime.Object) (string, error)
+ KeyFunc func(context.Context, runtime.Object) (string, error)
// GetAttrsFunc is used to get object labels, fields
GetAttrsFunc func(runtime.Object) (label labels.Set, field fields.Set, err error)
@@ -120,6 +124,9 @@ type Config struct {
Codec runtime.Codec
Clock clock.WithTicker
+
+ // KcpExtraStorageMetadata holds metadata used by the watchCache's reflector to instruct the storage layer how to assign/extract the cluster name
+ KcpExtraStorageMetadata *storagebackend.KcpStorageMetadata
}
type watchersMap map[int]*cacheWatcher
@@ -422,11 +429,18 @@ func NewCacherFromConfig(config Config) (*Cacher, error) {
return nil, fmt.Errorf("config.EventsHistoryWindow (%v) must not be below %v", eventFreshDuration, DefaultEventFreshDuration)
}
+ // Empty storage metadata usually indicates built-in resources;
+ // for those we require only a wildcard cluster to be present in the ctx.
+ if config.KcpExtraStorageMetadata == nil {
+ config.KcpExtraStorageMetadata = &storagebackend.KcpStorageMetadata{Cluster: genericapirequest.Cluster{Wildcard: true}}
+ }
+
progressRequester := progress.NewConditionalProgressRequester(config.Storage.RequestWatchProgress, config.Clock, contextMetadata)
watchCache := newWatchCache(
config.KeyFunc, cacher.processEvent, config.GetAttrsFunc, config.Versioner, config.Indexers,
config.Clock, eventFreshDuration, config.GroupResource, progressRequester)
- listerWatcher := NewListerWatcher(config.Storage, config.ResourcePrefix, config.NewListFunc, contextMetadata)
+ listerWatcher := NewListerWatcher(config.Storage, config.ResourcePrefix, config.NewListFunc, contextMetadata, config.KcpExtraStorageMetadata)
+
reflectorName := "storage/cacher.go:" + config.ResourcePrefix
reflector := cache.NewNamedReflector(reflectorName, listerWatcher, obj, watchCache, 0)
@@ -1196,6 +1210,50 @@ func (c *Cacher) LastSyncResourceVersion() (uint64, error) {
return c.versioner.ParseResourceVersion(resourceVersion)
}
+// getCurrentResourceVersionFromStorage gets the current resource version from the underlying storage engine.
+// This method issues an empty list request and reads only the ResourceVersion from the list metadata.
+func (c *Cacher) getCurrentResourceVersionFromStorage(ctx context.Context) (uint64, error) {
+ if c.newListFunc == nil {
+ return 0, fmt.Errorf("newListFunction wasn't provided for %v", c.objectType)
+ }
+ emptyList := c.newListFunc()
+ pred := storage.SelectionPredicate{
+ Label: labels.Everything(),
+ Field: fields.Everything(),
+ Limit: 1, // just in case we actually hit something
+ }
+
+ err := c.storage.GetList(ctx, c.resourcePrefix, storage.ListOptions{Predicate: pred}, emptyList)
+ if err != nil {
+ return 0, err
+ }
+ emptyListAccessor, err := meta.ListAccessor(emptyList)
+ if err != nil {
+ return 0, err
+ }
+ if emptyListAccessor == nil {
+ return 0, fmt.Errorf("unable to extract a list accessor from %T", emptyList)
+ }
+
+ currentResourceVersion, err := strconv.Atoi(emptyListAccessor.GetResourceVersion())
+ if err != nil {
+ return 0, err
+ }
+
+ if currentResourceVersion == 0 {
+ return 0, fmt.Errorf("the current resource version must be greater than 0")
+ }
+ return uint64(currentResourceVersion), nil
+}
+
+// cacherListerWatcher opaques storage.Interface to expose cache.ListerWatcher.
+type cacherListerWatcher struct {
+ storage storage.Interface
+ resourcePrefix string
+ newListFunc func() runtime.Object
+ kcpExtraStorageMetadata *storagebackend.KcpStorageMetadata
+}
+
// getBookmarkAfterResourceVersionLockedFunc returns a function that
// spits a ResourceVersion after which the bookmark event will be delivered.
//
@@ -1297,6 +1355,13 @@ func (c *Cacher) Ready() bool {
return err == nil
}
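+// createKCPClusterAwareContext derives the context used by the cacher's reflector (see
+// lister_watcher.go): list/watch requests issued against storage then carry the cluster
+// and, for CRDs, the custom-resource indicator expected by the kcp-aware storage layer.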
+func createKCPClusterAwareContext(ctx context.Context, meta *storagebackend.KcpStorageMetadata) context.Context {
+ if meta.IsCRD {
+ ctx = kcpapi.WithCustomResourceIndicator(ctx)
+ }
+ return genericapirequest.WithCluster(ctx, meta.Cluster)
+}
+
// errWatcher implements watch.Interface to return a single error
type errWatcher struct {
result chan watch.Event
diff --git a/staging/src/k8s.io/apiserver/pkg/storage/cacher/cacher_kcp.go b/staging/src/k8s.io/apiserver/pkg/storage/cacher/cacher_kcp.go
new file mode 100644
index 0000000000000..bc9b2dfd1e48d
--- /dev/null
+++ b/staging/src/k8s.io/apiserver/pkg/storage/cacher/cacher_kcp.go
@@ -0,0 +1,27 @@
+/*
+Copyright 2023 The KCP Authors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package cacher
+
+func (lw *listerWatcher) kcpAwareResourcePrefix() string {
+ if lw.kcpExtraStorageMetadata.Cluster.Wildcard {
+ return lw.resourcePrefix
+ }
+
+ // This is a request for normal (non-bound) CRs outside of system:system-crds. Make sure we only list in the
+ // specific logical cluster.
+ return lw.resourcePrefix + "/" + lw.kcpExtraStorageMetadata.Cluster.Name.String()
+}
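+
+// For illustration (hypothetical prefix): with resourcePrefix "/registry/widgets.example.dev/widgets",
+// a wildcard request keeps that prefix and lists across all logical clusters, while a request
+// scoped to root:org:ws lists under "/registry/widgets.example.dev/widgets/root:org:ws".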
diff --git a/staging/src/k8s.io/apiserver/pkg/storage/cacher/lister_watcher.go b/staging/src/k8s.io/apiserver/pkg/storage/cacher/lister_watcher.go
index 2817a93dd0cc2..5f8a064c887eb 100644
--- a/staging/src/k8s.io/apiserver/pkg/storage/cacher/lister_watcher.go
+++ b/staging/src/k8s.io/apiserver/pkg/storage/cacher/lister_watcher.go
@@ -27,24 +27,27 @@ import (
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/watch"
"k8s.io/apiserver/pkg/storage"
+ "k8s.io/apiserver/pkg/storage/storagebackend"
"k8s.io/client-go/tools/cache"
)
// listerWatcher opaques storage.Interface to expose cache.ListerWatcher.
type listerWatcher struct {
- storage storage.Interface
- resourcePrefix string
- newListFunc func() runtime.Object
- contextMetadata metadata.MD
+ storage storage.Interface
+ resourcePrefix string
+ newListFunc func() runtime.Object
+ contextMetadata metadata.MD
+ kcpExtraStorageMetadata *storagebackend.KcpStorageMetadata
}
// NewListerWatcher returns a storage.Interface backed ListerWatcher.
-func NewListerWatcher(storage storage.Interface, resourcePrefix string, newListFunc func() runtime.Object, contextMetadata metadata.MD) cache.ListerWatcher {
+func NewListerWatcher(storage storage.Interface, resourcePrefix string, newListFunc func() runtime.Object, contextMetadata metadata.MD, kcpStorageMetadata *storagebackend.KcpStorageMetadata) cache.ListerWatcher {
return &listerWatcher{
- storage: storage,
- resourcePrefix: resourcePrefix,
- newListFunc: newListFunc,
- contextMetadata: contextMetadata,
+ storage: storage,
+ resourcePrefix: resourcePrefix,
+ newListFunc: newListFunc,
+ contextMetadata: contextMetadata,
+ kcpExtraStorageMetadata: kcpStorageMetadata,
}
}
@@ -67,7 +70,10 @@ func (lw *listerWatcher) List(options metav1.ListOptions) (runtime.Object, error
if lw.contextMetadata != nil {
ctx = metadata.NewOutgoingContext(ctx, lw.contextMetadata)
}
- if err := lw.storage.GetList(ctx, lw.resourcePrefix, storageOpts, list); err != nil {
+ if lw.kcpExtraStorageMetadata != nil {
+ ctx = createKCPClusterAwareContext(ctx, lw.kcpExtraStorageMetadata)
+ }
+ if err := lw.storage.GetList(ctx, lw.kcpAwareResourcePrefix(), storageOpts, list); err != nil {
return nil, err
}
return list, nil
@@ -85,5 +91,8 @@ func (lw *listerWatcher) Watch(options metav1.ListOptions) (watch.Interface, err
if lw.contextMetadata != nil {
ctx = metadata.NewOutgoingContext(ctx, lw.contextMetadata)
}
- return lw.storage.Watch(ctx, lw.resourcePrefix, opts)
+ if lw.kcpExtraStorageMetadata != nil {
+ ctx = createKCPClusterAwareContext(ctx, lw.kcpExtraStorageMetadata)
+ }
+ return lw.storage.Watch(ctx, lw.kcpAwareResourcePrefix(), opts)
}
diff --git a/staging/src/k8s.io/apiserver/pkg/storage/cacher/watch_cache.go b/staging/src/k8s.io/apiserver/pkg/storage/cacher/watch_cache.go
index 967a60c9b1d68..7b915cac622c8 100644
--- a/staging/src/k8s.io/apiserver/pkg/storage/cacher/watch_cache.go
+++ b/staging/src/k8s.io/apiserver/pkg/storage/cacher/watch_cache.go
@@ -101,7 +101,7 @@ type watchCache struct {
lowerBoundCapacity int
// keyFunc is used to get a key in the underlying storage for a given object.
- keyFunc func(runtime.Object) (string, error)
+ keyFunc func(context.Context, runtime.Object) (string, error)
// getAttrsFunc is used to get labels and fields of an object.
getAttrsFunc func(runtime.Object) (labels.Set, fields.Set, error)
@@ -159,7 +159,7 @@ type watchCache struct {
}
func newWatchCache(
- keyFunc func(runtime.Object) (string, error),
+ keyFunc func(context.Context, runtime.Object) (string, error),
eventHandler func(*watchCacheEvent),
getAttrsFunc func(runtime.Object) (labels.Set, fields.Set, error),
versioner storage.Versioner,
@@ -274,10 +274,11 @@ func (w *watchCache) objectToVersionedRuntimeObject(obj interface{}) (runtime.Ob
func (w *watchCache) processEvent(event watch.Event, resourceVersion uint64, updateFunc func(*storeElement) error) error {
metrics.EventsReceivedCounter.WithLabelValues(w.groupResource.String()).Inc()
- key, err := w.keyFunc(event.Object)
+ key, err := w.keyFunc(createClusterAwareContext(event.Object), event.Object)
if err != nil {
return fmt.Errorf("couldn't compute key: %v", err)
}
+
elem := &storeElement{Key: key, Object: event.Object}
elem.Labels, elem.Fields, err = w.getAttrsFunc(event.Object)
if err != nil {
@@ -638,7 +639,7 @@ func (w *watchCache) Get(obj interface{}) (interface{}, bool, error) {
if !ok {
return nil, false, fmt.Errorf("obj does not implement runtime.Object interface: %v", obj)
}
- key, err := w.keyFunc(object)
+ key, err := w.keyFunc(createClusterAwareContext(object), object)
if err != nil {
return nil, false, fmt.Errorf("couldn't compute key: %v", err)
}
@@ -664,7 +665,7 @@ func (w *watchCache) Replace(objs []interface{}, resourceVersion string) error {
if !ok {
return fmt.Errorf("didn't get runtime.Object for replace: %#v", obj)
}
- key, err := w.keyFunc(object)
+ key, err := w.keyFunc(createClusterAwareContext(object), object)
if err != nil {
return fmt.Errorf("couldn't compute key: %v", err)
}
diff --git a/staging/src/k8s.io/apiserver/pkg/storage/cacher/watch_cache_kcp.go b/staging/src/k8s.io/apiserver/pkg/storage/cacher/watch_cache_kcp.go
new file mode 100644
index 0000000000000..648ffc2a6f39c
--- /dev/null
+++ b/staging/src/k8s.io/apiserver/pkg/storage/cacher/watch_cache_kcp.go
@@ -0,0 +1,38 @@
+package cacher
+
+import (
+ "context"
+
+ "github.com/kcp-dev/logicalcluster/v3"
+ "k8s.io/apimachinery/pkg/runtime"
+
+ genericapirequest "k8s.io/apiserver/pkg/endpoints/request"
+ "k8s.io/klog/v2"
+)
+
+// createClusterAwareContext extracts the clusterName from the given object and puts it into a context.
+// The context is used by the key function to compute the key under which the object will be stored.
+//
+// Background:
+//
+// Resources in the db are stored without the clusterName, since the reflector used by the cache uses logicalcluster.Wildcard;
+// the clusterName is assigned to the object by the storage layer upon retrieval.
+// We need to take this into consideration and change the key to contain the clusterName,
+// because this is how clients are going to retrieve data from the cache.
+func createClusterAwareContext(object runtime.Object) context.Context {
+ var clusterName logicalcluster.Name
+
+ o, ok := object.(logicalcluster.Object)
+ if !ok {
+ klog.Warningf("unknown object, could not get a clusterName and a namespace from: %T", object)
+ return context.Background()
+ }
+
+ clusterName = logicalcluster.From(o)
+ if clusterName.Empty() {
+ klog.Warningf("unknown object, could not get a clusterName and a namespace from: %T", object)
+ return context.Background()
+ }
+
+ return genericapirequest.WithCluster(context.Background(), genericapirequest.Cluster{Name: clusterName})
+}
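+
+// For illustration: an object carrying the logicalcluster.AnnotationKey annotation with value
+// "root:org:ws" yields a context scoped to that cluster, so the key function computes a key
+// under the root:org:ws prefix, matching how clients will look the object up in the cache.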
diff --git a/staging/src/k8s.io/apiserver/pkg/storage/cacher/watch_cache_kcp_test.go b/staging/src/k8s.io/apiserver/pkg/storage/cacher/watch_cache_kcp_test.go
new file mode 100644
index 0000000000000..d97955bbfa48d
--- /dev/null
+++ b/staging/src/k8s.io/apiserver/pkg/storage/cacher/watch_cache_kcp_test.go
@@ -0,0 +1,107 @@
+/*
+Copyright 2022 The KCP Authors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package cacher
+
+import (
+ "testing"
+
+ "github.com/kcp-dev/logicalcluster/v3"
+
+ v1 "k8s.io/api/core/v1"
+ metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+ "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
+ "k8s.io/apimachinery/pkg/runtime"
+ genericapirequest "k8s.io/apiserver/pkg/endpoints/request"
+)
+
+func TestCreateClusterAwareContext(t *testing.T) {
+ scenarios := []struct {
+ name string
+ existingObject runtime.Object
+ expectedCluster genericapirequest.Cluster
+ }{
+ {
+ name: "scoped, built-in type",
+ existingObject: makePod("pod1", "root:org:abc", "default"),
+ expectedCluster: genericapirequest.Cluster{Name: logicalcluster.Name("root:org:abc")},
+ },
+ {
+ name: "cluster wide, built-in type",
+ existingObject: func() *v1.Pod {
+ p := makePod("pod1", "root:org:abc", "default")
+ p.Namespace = ""
+ return p
+ }(),
+ expectedCluster: genericapirequest.Cluster{Name: logicalcluster.Name("root:org:abc")},
+ },
+ {
+ name: "scoped, identity, built-in type",
+ existingObject: makePod("pod1", "root:org:abc", "default"),
+ expectedCluster: genericapirequest.Cluster{Name: logicalcluster.Name("root:org:abc")},
+ },
+
+ {
+ name: "scoped, unstructured type",
+ existingObject: makeUnstructured("root:org:abc", "default"),
+ expectedCluster: genericapirequest.Cluster{Name: logicalcluster.Name("root:org:abc")},
+ },
+ {
+ name: "cluster wide, unstructured type",
+ existingObject: makeUnstructured("root:org:abc", ""),
+ expectedCluster: genericapirequest.Cluster{Name: logicalcluster.Name("root:org:abc")},
+ },
+ {
+ name: "scoped, identity, unstructured type",
+ existingObject: makeUnstructured("root:org:abc", "default"),
+ expectedCluster: genericapirequest.Cluster{Name: logicalcluster.Name("root:org:abc")},
+ },
+ }
+
+ for _, scenario := range scenarios {
+ t.Run(scenario.name, func(t *testing.T) {
+ actualCtx := createClusterAwareContext(scenario.existingObject)
+ actualCluster, err := genericapirequest.ValidClusterFrom(actualCtx)
+ if err != nil {
+ t.Fatal(err)
+ }
+ if *actualCluster != scenario.expectedCluster {
+ t.Errorf("expected %v, got %v", scenario.expectedCluster, actualCluster)
+ }
+ })
+ }
+}
+
+func makePod(name, clusterName, ns string) *v1.Pod {
+ return &v1.Pod{
+ ObjectMeta: metav1.ObjectMeta{
+ Namespace: ns,
+ Name: name,
+ Annotations: map[string]string{
+ logicalcluster.AnnotationKey: clusterName,
+ },
+ },
+ }
+}
+
+func makeUnstructured(clusterName, ns string) *unstructured.Unstructured {
+ obj := &unstructured.Unstructured{}
+ obj.SetAnnotations(map[string]string{
+ logicalcluster.AnnotationKey: clusterName,
+ })
+ obj.SetNamespace(ns)
+ return obj
+}
diff --git a/staging/src/k8s.io/apiserver/pkg/storage/cacher/watch_cache_test.go b/staging/src/k8s.io/apiserver/pkg/storage/cacher/watch_cache_test.go
index 20a76bbec694e..cc7dac5b10146 100644
--- a/staging/src/k8s.io/apiserver/pkg/storage/cacher/watch_cache_test.go
+++ b/staging/src/k8s.io/apiserver/pkg/storage/cacher/watch_cache_test.go
@@ -114,7 +114,7 @@ func (w *testWatchCache) getCacheIntervalForEvents(resourceVersion uint64, opts
// newTestWatchCache just adds a fake clock.
func newTestWatchCache(capacity int, eventFreshDuration time.Duration, indexers *cache.Indexers) *testWatchCache {
- keyFunc := func(obj runtime.Object) (string, error) {
+ keyFunc := func(_ context.Context, obj runtime.Object) (string, error) {
return storage.NamespaceKeyFunc("prefix", obj)
}
getAttrsFunc := func(obj runtime.Object) (labels.Set, fields.Set, error) {
diff --git a/staging/src/k8s.io/apiserver/pkg/storage/etcd3/store.go b/staging/src/k8s.io/apiserver/pkg/storage/etcd3/store.go
index ee5f3d676827b..40320e2537b14 100644
--- a/staging/src/k8s.io/apiserver/pkg/storage/etcd3/store.go
+++ b/staging/src/k8s.io/apiserver/pkg/storage/etcd3/store.go
@@ -26,10 +26,12 @@ import (
"strings"
"time"
+ "github.com/kcp-dev/logicalcluster/v3"
"go.etcd.io/etcd/api/v3/mvccpb"
clientv3 "go.etcd.io/etcd/client/v3"
"go.etcd.io/etcd/client/v3/kubernetes"
"go.opentelemetry.io/otel/attribute"
+ "k8s.io/apiserver/pkg/kcp"
apierrors "k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/api/meta"
@@ -41,6 +43,7 @@ import (
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apimachinery/pkg/watch"
"k8s.io/apiserver/pkg/audit"
+ endpointsrequest "k8s.io/apiserver/pkg/endpoints/request"
"k8s.io/apiserver/pkg/features"
"k8s.io/apiserver/pkg/storage"
"k8s.io/apiserver/pkg/storage/etcd3/metrics"
@@ -211,6 +214,11 @@ func (s *store) Versioner() storage.Versioner {
// Get implements storage.Interface.Get.
func (s *store) Get(ctx context.Context, key string, opts storage.GetOptions, out runtime.Object) error {
+ clusterName, err := endpointsrequest.ClusterNameFrom(ctx)
+ if err != nil {
+ klog.Errorf("No cluster defined in Get action for key %s : %s", key, err.Error())
+ }
+
preparedKey, err := s.prepareKey(key)
if err != nil {
return err
@@ -237,7 +245,8 @@ func (s *store) Get(ctx context.Context, key string, opts storage.GetOptions, ou
return storage.NewInternalError(err)
}
- err = s.decoder.Decode(data, out, getResp.KV.ModRevision)
+ shardName := endpointsrequest.ShardFrom(ctx)
+ err = decode(s.codec, s.versioner, data, out, getResp.KV.ModRevision, clusterName, shardName)
if err != nil {
recordDecodeError(s.groupResourceString, preparedKey)
return err
@@ -247,6 +256,11 @@ func (s *store) Get(ctx context.Context, key string, opts storage.GetOptions, ou
// Create implements storage.Interface.Create.
func (s *store) Create(ctx context.Context, key string, obj, out runtime.Object, ttl uint64) error {
+ clusterName, err := endpointsrequest.ClusterNameFrom(ctx)
+ if err != nil {
+ klog.Errorf("No cluster defined in Create action for key %s : %s", key, err.Error())
+ }
+
preparedKey, err := s.prepareKey(key)
if err != nil {
return err
@@ -301,7 +315,8 @@ func (s *store) Create(ctx context.Context, key string, obj, out runtime.Object,
}
if out != nil {
- err = s.decoder.Decode(data, out, txnResp.Revision)
+ shardName := endpointsrequest.ShardFrom(ctx)
+ err = decode(s.codec, s.versioner, data, out, txnResp.Revision, clusterName, shardName)
if err != nil {
span.AddEvent("decode failed", attribute.Int("len", len(data)), attribute.String("err", err.Error()))
recordDecodeError(s.groupResourceString, preparedKey)
@@ -335,10 +350,15 @@ func (s *store) Delete(
func (s *store) conditionalDelete(
ctx context.Context, key string, out runtime.Object, v reflect.Value, preconditions *storage.Preconditions,
validateDeletion storage.ValidateObjectFunc, cachedExistingObject runtime.Object, skipTransformDecode bool) error {
- getCurrentState := s.getCurrentState(ctx, key, v, false, skipTransformDecode)
+ clusterName, err := endpointsrequest.ClusterNameFrom(ctx)
+ if err != nil {
+ klog.Errorf("No cluster defined in conditionalDelete action for key %s : %s", key, err.Error())
+ }
+ shardName := endpointsrequest.ShardFrom(ctx)
+
+ getCurrentState := s.getCurrentState(ctx, key, v, false, skipTransformDecode, clusterName, shardName)
var origState *objState
- var err error
var origStateIsCurrent bool
if cachedExistingObject != nil {
origState, err = s.getStateFromObject(cachedExistingObject)
@@ -414,7 +434,7 @@ func (s *store) conditionalDelete(
}
if !txnResp.Succeeded {
klog.V(4).Infof("deletion of %s failed because of a conflict, going to retry", key)
- origState, err = s.getState(ctx, txnResp.KV, key, v, false, skipTransformDecode)
+ origState, err = s.getState(ctx, txnResp.KV, key, v, false, skipTransformDecode, clusterName, shardName)
if err != nil {
return err
}
@@ -423,13 +443,13 @@ func (s *store) conditionalDelete(
}
if !skipTransformDecode {
- err = s.decoder.Decode(origState.data, out, txnResp.Revision)
+ err = decode(s.codec, s.versioner, origState.data, out, txnResp.Revision, clusterName, shardName)
if err != nil {
recordDecodeError(s.groupResourceString, key)
return err
}
}
return nil
}
}
@@ -437,6 +457,12 @@ func (s *store) conditionalDelete(
func (s *store) GuaranteedUpdate(
ctx context.Context, key string, destination runtime.Object, ignoreNotFound bool,
preconditions *storage.Preconditions, tryUpdate storage.UpdateFunc, cachedExistingObject runtime.Object) error {
+ clusterName, err := endpointsrequest.ClusterNameFrom(ctx)
+ if err != nil {
+ klog.Errorf("No cluster defined in GuaranteedUpdate action for key %s : %s", key, err.Error())
+ }
+ shardName := endpointsrequest.ShardFrom(ctx)
+
preparedKey, err := s.prepareKey(key)
if err != nil {
return err
@@ -454,7 +480,7 @@ func (s *store) GuaranteedUpdate(
}
skipTransformDecode := false
- getCurrentState := s.getCurrentState(ctx, preparedKey, v, ignoreNotFound, skipTransformDecode)
+ getCurrentState := s.getCurrentState(ctx, preparedKey, v, ignoreNotFound, skipTransformDecode, clusterName, shardName)
var origState *objState
var origStateIsCurrent bool
@@ -540,7 +566,7 @@ func (s *store) GuaranteedUpdate(
}
// recheck that the data from etcd is not stale before short-circuiting a write
if !origState.stale {
- err = s.decoder.Decode(origState.data, destination, origState.rev)
+ err = decode(s.codec, s.versioner, origState.data, destination, origState.rev, clusterName, shardName)
if err != nil {
recordDecodeError(s.groupResourceString, preparedKey)
return err
@@ -580,7 +606,7 @@ func (s *store) GuaranteedUpdate(
span.AddEvent("Transaction committed")
if !txnResp.Succeeded {
klog.V(4).Infof("GuaranteedUpdate of %s failed because of a conflict, going to retry", preparedKey)
- origState, err = s.getState(ctx, txnResp.KV, preparedKey, v, ignoreNotFound, skipTransformDecode)
+ origState, err = s.getState(ctx, txnResp.KV, preparedKey, v, ignoreNotFound, skipTransformDecode, clusterName, shardName)
if err != nil {
return err
}
@@ -589,7 +615,7 @@ func (s *store) GuaranteedUpdate(
continue
}
- err = s.decoder.Decode(data, destination, txnResp.Revision)
+ err = decode(s.codec, s.versioner, data, destination, txnResp.Revision, clusterName, shardName)
if err != nil {
span.AddEvent("decode failed", attribute.Int("len", len(data)), attribute.String("err", err.Error()))
recordDecodeError(s.groupResourceString, preparedKey)
@@ -716,6 +742,15 @@ func (s *store) GetList(ctx context.Context, key string, opts storage.ListOption
return err
}
+ // kcp
+ cluster, err := endpointsrequest.ValidClusterFrom(ctx)
+ if err != nil {
+ return storage.NewInternalError(fmt.Errorf("unable to get cluster for list key %q: %v", keyPrefix, err))
+ }
+ shard := endpointsrequest.ShardFrom(ctx)
+ crdIndicator := kcp.CustomResourceIndicatorFrom(ctx)
+ // end kcp
+
// loop until we have filled the requested limit from etcd or there are no more results
var lastKey []byte
var hasMore bool
@@ -792,7 +827,10 @@ func (s *store) GetList(ctx context.Context, key string, opts storage.ListOption
default:
}
- obj, err := s.decoder.DecodeListItem(ctx, data, uint64(kv.ModRevision), newItemFunc)
+ // kcp
+ clusterName := adjustClusterNameIfWildcard(shard, cluster, crdIndicator, keyPrefix, string(kv.Key))
+ shardName := adjustShardNameIfWildcard(shard, keyPrefix, string(kv.Key))
+ obj, err := decodeListItem(ctx, data, uint64(kv.ModRevision), s.codec, s.versioner, newItemFunc, clusterName, shardName)
if err != nil {
recordDecodeError(s.groupResourceString, string(kv.Key))
if done := aggregator.Aggregate(string(kv.Key), err); done {
@@ -909,7 +947,7 @@ func (s *store) Watch(ctx context.Context, key string, opts storage.ListOptions)
if err != nil {
return nil, err
}
- return s.watcher.Watch(s.watchContext(ctx), preparedKey, int64(rev), opts)
+ return s.watcher.Watch(s.watchContext(ctx), preparedKey, int64(rev), opts, nil)
}
func (s *store) watchContext(ctx context.Context) context.Context {
@@ -923,7 +961,7 @@ func (s *store) watchContext(ctx context.Context) context.Context {
return clientv3.WithRequireLeader(ctx)
}
-func (s *store) getCurrentState(ctx context.Context, key string, v reflect.Value, ignoreNotFound bool, skipTransformDecode bool) func() (*objState, error) {
+func (s *store) getCurrentState(ctx context.Context, key string, v reflect.Value, ignoreNotFound bool, skipTransformDecode bool, clusterName logicalcluster.Name, shardName endpointsrequest.Shard) func() (*objState, error) {
return func() (*objState, error) {
startTime := time.Now()
getResp, err := s.client.Kubernetes.Get(ctx, key, kubernetes.GetOptions{})
@@ -931,7 +969,7 @@ func (s *store) getCurrentState(ctx context.Context, key string, v reflect.Value
if err != nil {
return nil, err
}
- return s.getState(ctx, getResp.KV, key, v, ignoreNotFound, skipTransformDecode)
+ return s.getState(ctx, getResp.KV, key, v, ignoreNotFound, skipTransformDecode, clusterName, shardName)
}
}
@@ -941,7 +979,7 @@ func (s *store) getCurrentState(ctx context.Context, key string, v reflect.Value
// storage will be transformed and decoded.
// NOTE: when skipTransformDecode is true, the 'data', and the 'obj' fields
// of the objState will be nil, and 'stale' will be set to true.
-func (s *store) getState(ctx context.Context, kv *mvccpb.KeyValue, key string, v reflect.Value, ignoreNotFound bool, skipTransformDecode bool) (*objState, error) {
+func (s *store) getState(ctx context.Context, kv *mvccpb.KeyValue, key string, v reflect.Value, ignoreNotFound bool, skipTransformDecode bool, clusterName logicalcluster.Name, shardName endpointsrequest.Shard) (*objState, error) {
state := &objState{
meta: &storage.ResponseMeta{},
}
@@ -977,8 +1015,7 @@ func (s *store) getState(ctx context.Context, kv *mvccpb.KeyValue, key string, v
state.data = data
state.stale = stale
-
- if err := s.decoder.Decode(state.data, state.obj, state.rev); err != nil {
+ if err := decode(s.codec, s.versioner, state.data, state.obj, state.rev, clusterName, shardName); err != nil {
recordDecodeError(s.groupResourceString, key)
return nil, err
}
@@ -1072,6 +1109,49 @@ func (s *store) prepareKey(key string) (string, error) {
return s.pathPrefix + key[startIndex:], nil
}
+// decode decodes value of bytes into object. It will also set the object resource version to rev.
+// On success, objPtr would be set to the object.
+func decode(codec runtime.Codec, versioner storage.Versioner, value []byte, objPtr runtime.Object, rev int64, clusterName logicalcluster.Name, shardName endpointsrequest.Shard) error {
+ if _, err := conversion.EnforcePtr(objPtr); err != nil {
+ return fmt.Errorf("unable to convert output object to pointer: %v", err)
+ }
+ _, _, err := codec.Decode(value, nil, objPtr)
+ if err != nil {
+ return err
+ }
+ // being unable to set the version does not prevent the object from being extracted
+ if err := versioner.UpdateObject(objPtr, uint64(rev)); err != nil {
+ klog.Errorf("failed to update object version: %v", err)
+ }
+
+ // kcp: apply clusterName to the decoded object, as the name is not persisted in storage.
+ annotateDecodedObjectWith(objPtr, clusterName, shardName)
+
+ return nil
+}
+
+// decodeListItem decodes bytes value in array into object.
+func decodeListItem(ctx context.Context, data []byte, rev uint64, codec runtime.Codec, versioner storage.Versioner, newItemFunc func() runtime.Object, clusterName logicalcluster.Name, shardName endpointsrequest.Shard) (runtime.Object, error) {
+ startedAt := time.Now()
+ defer func() {
+ endpointsrequest.TrackDecodeLatency(ctx, time.Since(startedAt))
+ }()
+
+ obj, _, err := codec.Decode(data, nil, newItemFunc())
+ if err != nil {
+ return nil, err
+ }
+
+ if err := versioner.UpdateObject(obj, rev); err != nil {
+ klog.Errorf("failed to update object version: %v", err)
+ }
+
+ // kcp: apply clusterName and shardName to the decoded object, as they are not persisted in storage.
+ annotateDecodedObjectWith(obj, clusterName, shardName)
+
+ return obj, nil
+}
+
// recordDecodeError record decode error split by object type.
func recordDecodeError(resource string, key string) {
metrics.RecordDecodeError(resource)
diff --git a/staging/src/k8s.io/apiserver/pkg/storage/etcd3/store_kcp.go b/staging/src/k8s.io/apiserver/pkg/storage/etcd3/store_kcp.go
new file mode 100644
index 0000000000000..e77464a061af8
--- /dev/null
+++ b/staging/src/k8s.io/apiserver/pkg/storage/etcd3/store_kcp.go
@@ -0,0 +1,116 @@
+/*
+Copyright 2022 The KCP Authors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package etcd3
+
+import (
+ "strings"
+
+ "github.com/kcp-dev/logicalcluster/v3"
+
+ metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+ genericapirequest "k8s.io/apiserver/pkg/endpoints/request"
+ "k8s.io/klog/v2"
+)
+
+// adjustClusterNameIfWildcard determines the logical cluster name. If this is not a cluster-wildcard list/watch request,
+// the cluster name is returned unmodified. Otherwise, the cluster name is extracted from the key based on whether it is
+// - a shard-wildcard request: /shardName/clusterName/
+// - CR partial metadata request: /identity/clusterName/
+// - any other request: /clusterName/.
+func adjustClusterNameIfWildcard(shard genericapirequest.Shard, cluster *genericapirequest.Cluster, crdRequest bool, keyPrefix, key string) logicalcluster.Name {
+ if !cluster.Wildcard {
+ return cluster.Name
+ }
+
+ keyWithoutPrefix := strings.TrimPrefix(key, keyPrefix)
+ parts := strings.SplitN(keyWithoutPrefix, "/", 3)
+
+ extract := func(minLen, i int) logicalcluster.Name {
+ if len(parts) < minLen {
+ klog.Warningf("shard=%s cluster=%s invalid key=%s had %d parts, not %d", shard, cluster, keyWithoutPrefix, len(parts), minLen)
+ return ""
+ }
+ return logicalcluster.Name(parts[i])
+ }
+
+ switch {
+ case cluster.PartialMetadataRequest && crdRequest:
+ // expecting 2699f4d273d342adccdc8a32663408226ecf66de7d191113ed3d4dc9bccec2f2/root:org:ws/
+ // OR customresources/root:org:ws/
+ return extract(3, 1)
+ case shard.Wildcard():
+ // expecting shardName/clusterName/
+ return extract(3, 1)
+ default:
+ // expecting root:org:ws/
+ return extract(2, 0)
+ }
+}
+
+// adjustShardNameIfWildcard determines a shard name. If this is not a shard-wildcard request,
+// the shard name is returned unmodified. Otherwise, the shard name is extracted from the storage key.
+func adjustShardNameIfWildcard(shard genericapirequest.Shard, keyPrefix, key string) genericapirequest.Shard {
+ if !shard.Empty() && !shard.Wildcard() {
+ return shard
+ }
+
+ if !shard.Wildcard() {
+ // no-op: we can only assign shard names
+ // to a request that explicitly asked for it
+ return ""
+ }
+
+ keyWithoutPrefix := strings.TrimPrefix(key, keyPrefix)
+ parts := strings.SplitN(keyWithoutPrefix, "/", 3)
+ if len(parts) < 3 {
+ klog.Warningf("unable to extract a shard name, invalid key=%s had %d parts, not %d", keyWithoutPrefix, len(parts), 3)
+ return ""
+ }
+ return genericapirequest.Shard(parts[0])
+}
+
+// annotateDecodedObjectWith applies clusterName and shardName to an object.
+// This is necessary because we don't store the cluster name and the shard name in the objects in storage.
+// Instead, they are derived from the storage key, and then applied after retrieving the object from storage.
+func annotateDecodedObjectWith(obj interface{}, clusterName logicalcluster.Name, shardName genericapirequest.Shard) {
+ var s nameSetter
+
+ switch t := obj.(type) {
+ case metav1.ObjectMetaAccessor:
+ s = t.GetObjectMeta()
+ case nameSetter:
+ s = t
+ default:
+ klog.Warningf("Could not set ClusterName %s, ShardName %s on object: %T", clusterName, shardName, obj)
+ return
+ }
+
+ annotations := s.GetAnnotations()
+ if annotations == nil {
+ annotations = make(map[string]string)
+ }
+ annotations[logicalcluster.AnnotationKey] = clusterName.String()
+ if !shardName.Empty() {
+ annotations[genericapirequest.ShardAnnotationKey] = shardName.String()
+ }
+ s.SetAnnotations(annotations)
+}
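+
+// For illustration: an object read back from key "/registry/core/configmaps/root:org:ws/ns/name"
+// ends up with the logicalcluster.AnnotationKey annotation set to "root:org:ws" (plus the
+// ShardAnnotationKey annotation for shard-wildcard requests), even though neither value is
+// persisted in etcd.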
+
+type nameSetter interface {
+ GetAnnotations() map[string]string
+ SetAnnotations(a map[string]string)
+}
diff --git a/staging/src/k8s.io/apiserver/pkg/storage/etcd3/store_kcp_test.go b/staging/src/k8s.io/apiserver/pkg/storage/etcd3/store_kcp_test.go
new file mode 100644
index 0000000000000..46ba04b2b0f7f
--- /dev/null
+++ b/staging/src/k8s.io/apiserver/pkg/storage/etcd3/store_kcp_test.go
@@ -0,0 +1,131 @@
+/*
+Copyright 2022 The KCP Authors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package etcd3
+
+import (
+ "testing"
+
+ "github.com/kcp-dev/logicalcluster/v3"
+ genericapirequest "k8s.io/apiserver/pkg/endpoints/request"
+)
+
+func TestAdjustClusterNameIfWildcard(t *testing.T) {
+ tests := map[string]struct {
+ wildcard bool
+ partialMetadata bool
+ prefix string
+ builtInType bool
+ }{
+ "not clusterWildcard": {
+ prefix: "/registry/group/resource/identity/",
+ },
+ "clusterWildcard, not partial": {
+ wildcard: true,
+ prefix: "/registry/group/resource/identity/",
+ },
+ "clusterWildcard, partial": {
+ wildcard: true,
+ partialMetadata: true,
+ prefix: "/registry/group/resource/",
+ },
+ "clusterWildcard, partial, built-in type": {
+ wildcard: true,
+ partialMetadata: true,
+ prefix: "/registry/core/configmaps/",
+ builtInType: true,
+ },
+ }
+
+ for name, tc := range tests {
+ t.Run(name, func(t *testing.T) {
+ cluster := &genericapirequest.Cluster{
+ Name: logicalcluster.New("root:org:ws"),
+ PartialMetadataRequest: tc.partialMetadata,
+ }
+
+ if tc.wildcard {
+ cluster.Name = logicalcluster.Wildcard
+ }
+
+ key := "/registry/group/resource/identity/root:org:ws/somename"
+ if tc.builtInType {
+ key = "/registry/core/configmaps/root:org:ws/somename"
+ }
+ expected := "root:org:ws"
+
+ clusterName := adjustClusterNameIfWildcard(genericapirequest.Shard(""), cluster, !tc.builtInType, tc.prefix, key)
+ if e, a := expected, clusterName.String(); e != a {
+ t.Errorf("expected: %q, actual %q", e, a)
+ }
+ })
+ }
+}
+
+func TestAdjustClusterNameIfWildcardWithShardSupport(t *testing.T) {
+ tests := map[string]struct {
+ cluster genericapirequest.Cluster
+ shard genericapirequest.Shard
+ key string
+ keyPrefix string
+ expectedClusterName string
+ }{
+ "not wildcard": {
+ cluster: genericapirequest.Cluster{Name: logicalcluster.New("root:org:ws")},
+ shard: "amber",
+ key: "/registry/group/resource:identity/amber/root:org:ws/somename",
+ keyPrefix: "/registry/group/resource:identity/",
+ expectedClusterName: "root:org:ws",
+ },
+ "both wildcard": {
+ cluster: genericapirequest.Cluster{Name: logicalcluster.Wildcard},
+ shard: "*",
+ key: "/registry/group/resource:identity/amber/root:org:ws/somename",
+ keyPrefix: "/registry/group/resource:identity/",
+ expectedClusterName: "root:org:ws",
+ },
+ "both wildcard, built-in type": {
+ cluster: genericapirequest.Cluster{Name: logicalcluster.Wildcard},
+ shard: "*",
+ key: "/registry/core/configmaps/amber/root:org:ws/somename",
+ keyPrefix: "/registry/core/configmaps/",
+ expectedClusterName: "root:org:ws",
+ },
+ "only cluster wildcard": {
+ cluster: genericapirequest.Cluster{Name: logicalcluster.Wildcard},
+ shard: "amber",
+ key: "/registry/group/resource:identity/amber/root:org:ws/somename",
+ keyPrefix: "/registry/group/resource:identity/amber/",
+ expectedClusterName: "root:org:ws",
+ },
+ "only shard wildcard": {
+ cluster: genericapirequest.Cluster{Name: logicalcluster.New("root:org:ws")},
+ shard: "*",
+ key: "/registry/core/configmaps/amber/root:org:ws/somename",
+ keyPrefix: "/registry/group/resource:identity/",
+ expectedClusterName: "root:org:ws",
+ },
+ }
+
+ for name, tc := range tests {
+ t.Run(name, func(t *testing.T) {
+ clusterName := adjustClusterNameIfWildcard(tc.shard, &tc.cluster, false, tc.keyPrefix, tc.key)
+ if tc.expectedClusterName != clusterName.String() {
+ t.Errorf("expected: %q, actual %q", tc.expectedClusterName, clusterName)
+ }
+ })
+ }
+}
diff --git a/staging/src/k8s.io/apiserver/pkg/storage/etcd3/store_test.go b/staging/src/k8s.io/apiserver/pkg/storage/etcd3/store_test.go
index 4a4ef819c9d4a..ba8d6f20847a4 100644
--- a/staging/src/k8s.io/apiserver/pkg/storage/etcd3/store_test.go
+++ b/staging/src/k8s.io/apiserver/pkg/storage/etcd3/store_test.go
@@ -33,6 +33,9 @@ import (
"go.etcd.io/etcd/client/v3/kubernetes"
"go.etcd.io/etcd/server/v3/embed"
"google.golang.org/grpc/grpclog"
+ genericapirequest "k8s.io/apiserver/pkg/endpoints/request"
+
+ "github.com/kcp-dev/logicalcluster/v3"
"k8s.io/apimachinery/pkg/api/apitesting"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
@@ -469,6 +472,7 @@ func TestLeaseMaxObjectCount(t *testing.T) {
ReuseDurationSeconds: defaultLeaseReuseDurationSeconds,
MaxObjectCount: 2,
}))
+ ctx = genericapirequest.WithCluster(ctx, genericapirequest.Cluster{Name: logicalcluster.Name("root")})
obj := &example.Pod{ObjectMeta: metav1.ObjectMeta{Name: "foo"}}
out := &example.Pod{}
diff --git a/staging/src/k8s.io/apiserver/pkg/storage/etcd3/watcher.go b/staging/src/k8s.io/apiserver/pkg/storage/etcd3/watcher.go
index e2141395b3014..51a431ed83b6d 100644
--- a/staging/src/k8s.io/apiserver/pkg/storage/etcd3/watcher.go
+++ b/staging/src/k8s.io/apiserver/pkg/storage/etcd3/watcher.go
@@ -35,7 +35,9 @@ import (
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apimachinery/pkg/util/wait"
"k8s.io/apimachinery/pkg/watch"
+ genericapirequest "k8s.io/apiserver/pkg/endpoints/request"
"k8s.io/apiserver/pkg/features"
+ "k8s.io/apiserver/pkg/kcp"
"k8s.io/apiserver/pkg/storage"
"k8s.io/apiserver/pkg/storage/etcd3/metrics"
"k8s.io/apiserver/pkg/storage/value"
@@ -92,6 +94,11 @@ type watchChan struct {
incomingEventChan chan *event
resultChan chan watch.Event
errChan chan error
+
+ // kcp
+ cluster *genericapirequest.Cluster
+ shard genericapirequest.Shard
+ crdRequest bool
}
// Watch watches on a key and returns a watch.Interface that transfers relevant notifications.
@@ -101,7 +108,13 @@ type watchChan struct {
// If opts.Recursive is false, it watches on given key.
// If opts.Recursive is true, it watches any children and directories under the key, excluding the root key itself.
// pred must be non-nil. Only if opts.Predicate matches the change, it will be returned.
-func (w *watcher) Watch(ctx context.Context, key string, rev int64, opts storage.ListOptions) (watch.Interface, error) {
+func (w *watcher) Watch(ctx context.Context, key string, rev int64, opts storage.ListOptions, transformer value.Transformer) (watch.Interface, error) {
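+	// kcp: capture the cluster and shard from the request context; prepareObjs
+	// later uses them to re-derive the cluster and shard names of decoded
+	// objects from their storage keys.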
+ cluster, err := genericapirequest.ValidClusterFrom(ctx)
+ if err != nil {
+ return nil, err
+ }
+ shard := genericapirequest.ShardFrom(ctx)
+
if opts.Recursive && !strings.HasSuffix(key, "/") {
key += "/"
}
@@ -112,7 +125,7 @@ func (w *watcher) Watch(ctx context.Context, key string, rev int64, opts storage
if err != nil {
return nil, err
}
- wc := w.createWatchChan(ctx, key, startWatchRV, opts.Recursive, opts.ProgressNotify, opts.Predicate)
+ wc := w.createWatchChan(ctx, key, startWatchRV, shard, cluster, opts.Recursive, opts.ProgressNotify, opts.Predicate, transformer)
go wc.run(isInitialEventsEndBookmarkRequired(opts), areInitialEventsRequired(rev, opts))
// For etcd watch we don't have an easy way to answer whether the watch
@@ -125,7 +138,7 @@ func (w *watcher) Watch(ctx context.Context, key string, rev int64, opts storage
return wc, nil
}
-func (w *watcher) createWatchChan(ctx context.Context, key string, rev int64, recursive, progressNotify bool, pred storage.SelectionPredicate) *watchChan {
+func (w *watcher) createWatchChan(ctx context.Context, key string, rev int64, shard genericapirequest.Shard, cluster *genericapirequest.Cluster, recursive, progressNotify bool, pred storage.SelectionPredicate, transformer value.Transformer) *watchChan {
wc := &watchChan{
watcher: w,
key: key,
@@ -136,6 +149,11 @@ func (w *watcher) createWatchChan(ctx context.Context, key string, rev int64, re
incomingEventChan: make(chan *event, incomingBufSize),
resultChan: make(chan watch.Event, outgoingBufSize),
errChan: make(chan error, 1),
+
+ // kcp
+ cluster: cluster,
+ shard: shard,
+ crdRequest: kcp.CustomResourceIndicatorFrom(ctx),
}
if pred.Empty() {
// The filter doesn't filter out any object.
@@ -686,6 +704,11 @@ func (wc *watchChan) prepareObjs(e *event) (curObj runtime.Object, oldObj runtim
if err != nil {
return nil, nil, err
}
+
+		// kcp: apply the clusterName and shardName to the decoded object, as they are not persisted in storage.
+ clusterName := adjustClusterNameIfWildcard(wc.shard, wc.cluster, wc.crdRequest, wc.key, e.key)
+ shardName := adjustShardNameIfWildcard(wc.shard, wc.key, e.key)
+ annotateDecodedObjectWith(curObj, clusterName, shardName)
}
// We need to decode prevValue, only if this is deletion event or
// the underlying filter doesn't accept all objects (otherwise we
@@ -703,6 +726,11 @@ func (wc *watchChan) prepareObjs(e *event) (curObj runtime.Object, oldObj runtim
if err != nil {
return nil, nil, wc.watcher.transformIfCorruptObjectError(e, err)
}
+
+		// kcp: apply the clusterName and shardName to the decoded object, as they are not persisted in storage.
+ clusterName := adjustClusterNameIfWildcard(wc.shard, wc.cluster, wc.crdRequest, wc.key, e.key)
+ shardName := adjustShardNameIfWildcard(wc.shard, wc.key, e.key)
+ annotateDecodedObjectWith(oldObj, clusterName, shardName)
}
return curObj, oldObj, nil
}
diff --git a/staging/src/k8s.io/apiserver/pkg/storage/etcd3/watcher_test.go b/staging/src/k8s.io/apiserver/pkg/storage/etcd3/watcher_test.go
index 11afc61532cbd..c9454a66d1f7d 100644
--- a/staging/src/k8s.io/apiserver/pkg/storage/etcd3/watcher_test.go
+++ b/staging/src/k8s.io/apiserver/pkg/storage/etcd3/watcher_test.go
@@ -33,6 +33,7 @@ import (
"k8s.io/apimachinery/pkg/labels"
"k8s.io/apimachinery/pkg/watch"
"k8s.io/apiserver/pkg/apis/example"
+ genericapirequest "k8s.io/apiserver/pkg/endpoints/request"
"k8s.io/apiserver/pkg/features"
"k8s.io/apiserver/pkg/storage"
"k8s.io/apiserver/pkg/storage/etcd3/testserver"
@@ -171,7 +172,7 @@ func TestWatchErrorEventIsBlockingFurtherEvent(t *testing.T) {
func TestWatchErrResultNotBlockAfterCancel(t *testing.T) {
origCtx, store, _ := testSetup(t)
ctx, cancel := context.WithCancel(origCtx)
- w := store.watcher.createWatchChan(ctx, "/abc", 0, false, false, storage.Everything)
+ w := store.watcher.createWatchChan(ctx, "/abc", 0, genericapirequest.Shard(""), &genericapirequest.Cluster{}, false, false, storage.Everything, newTestTransformer())
// make resultChan and errChan blocking to ensure ordering.
w.resultChan = make(chan watch.Event)
w.errChan = make(chan error)
diff --git a/staging/src/k8s.io/apiserver/pkg/storage/storagebackend/config.go b/staging/src/k8s.io/apiserver/pkg/storage/storagebackend/config.go
index c948d6411647a..3ed838ad8d11c 100644
--- a/staging/src/k8s.io/apiserver/pkg/storage/storagebackend/config.go
+++ b/staging/src/k8s.io/apiserver/pkg/storage/storagebackend/config.go
@@ -102,6 +102,11 @@ type ConfigForResource struct {
// GroupResource is the relevant one
GroupResource schema.GroupResource
+
+	// The following fields hold config required by, or specific to, KCP.
+	//
+	// KcpExtraStorageMetadata holds metadata used by the watch cache's reflector
+	// to instruct the storage layer how to assign/extract the cluster name.
+ KcpExtraStorageMetadata *KcpStorageMetadata
}
// ForResource specializes to the given resource
diff --git a/staging/src/k8s.io/apiserver/pkg/storage/storagebackend/config_kcp.go b/staging/src/k8s.io/apiserver/pkg/storage/storagebackend/config_kcp.go
new file mode 100644
index 0000000000000..8359bebcb7323
--- /dev/null
+++ b/staging/src/k8s.io/apiserver/pkg/storage/storagebackend/config_kcp.go
@@ -0,0 +1,12 @@
+package storagebackend
+
+import genericrequest "k8s.io/apiserver/pkg/endpoints/request"
+
+// KcpStorageMetadata holds KCP-specific metadata that is used by the reflector to instruct the storage layer how to assign/extract the cluster name.
+type KcpStorageMetadata struct {
+	// IsCRD indicates that the storage deals with CustomResourceDefinitions
+ IsCRD bool
+
+ // Cluster holds a KCP cluster
+ Cluster genericrequest.Cluster
+}
diff --git a/staging/src/k8s.io/apiserver/pkg/util/openapi/proto.go b/staging/src/k8s.io/apiserver/pkg/util/openapi/proto.go
index e36bf452b1626..8c97876650d5f 100644
--- a/staging/src/k8s.io/apiserver/pkg/util/openapi/proto.go
+++ b/staging/src/k8s.io/apiserver/pkg/util/openapi/proto.go
@@ -20,6 +20,8 @@ import (
"encoding/json"
openapi_v2 "github.com/google/gnostic-models/openapiv2"
+ openapi_v3 "github.com/google/gnostic-models/openapiv3"
+ "k8s.io/kube-openapi/pkg/spec3"
"k8s.io/kube-openapi/pkg/util/proto"
"k8s.io/kube-openapi/pkg/validation/spec"
@@ -44,3 +46,23 @@ func ToProtoModels(openAPISpec *spec.Swagger) (proto.Models, error) {
return models, nil
}
+
+// ToProtoModelsV3 builds the proto-formatted models from an OpenAPI v3 spec.
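+//
+// A minimal usage sketch (hypothetical caller; assumes a populated *spec3.OpenAPI):
+//
+//	models, err := ToProtoModelsV3(openAPISpec)
+//	if err != nil {
+//		return err
+//	}
+//	for _, name := range models.ListModels() {
+//		_ = models.LookupModel(name) // proto.Schema for each model
+//	}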
+func ToProtoModelsV3(openAPISpec *spec3.OpenAPI) (proto.Models, error) {
+ specBytes, err := json.MarshalIndent(openAPISpec, " ", " ")
+ if err != nil {
+ return nil, err
+ }
+
+ doc, err := openapi_v3.ParseDocument(specBytes)
+ if err != nil {
+ return nil, err
+ }
+
+ models, err := proto.NewOpenAPIV3Data(doc)
+ if err != nil {
+ return nil, err
+ }
+
+ return models, nil
+}
diff --git a/staging/src/k8s.io/apiserver/plugin/pkg/authorizer/webhook/webhook.go b/staging/src/k8s.io/apiserver/plugin/pkg/authorizer/webhook/webhook.go
index 3df8e580e4422..2e74e956a5358 100644
--- a/staging/src/k8s.io/apiserver/plugin/pkg/authorizer/webhook/webhook.go
+++ b/staging/src/k8s.io/apiserver/plugin/pkg/authorizer/webhook/webhook.go
@@ -41,6 +41,7 @@ import (
"k8s.io/apiserver/pkg/authentication/user"
"k8s.io/apiserver/pkg/authorization/authorizer"
authorizationcel "k8s.io/apiserver/pkg/authorization/cel"
+ "k8s.io/apiserver/pkg/endpoints/request"
genericfeatures "k8s.io/apiserver/pkg/features"
utilfeature "k8s.io/apiserver/pkg/util/feature"
"k8s.io/apiserver/pkg/util/webhook"
@@ -54,6 +55,9 @@ import (
const (
// The maximum length of requester-controlled attributes to allow caching.
maxControlledAttrCacheSize = 10000
+
+	// ClusterNameKey is the key under which the logical cluster name a webhook
+	// request originates from is passed to the webhook as an extra attribute.
+ ClusterNameKey = "authorization.kubernetes.io/cluster-name"
)
// DefaultRetryBackoff returns the default backoff parameters for webhook retry.
@@ -196,6 +200,13 @@ func (w *WebhookAuthorizer) Authorize(ctx context.Context, attr authorizer.Attri
}
}
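+	// kcp: pass the originating logical cluster to the authorization webhook as
+	// an extra attribute, so an external authorizer can scope decisions per
+	// logical cluster.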
+ if clusterName, err := request.ClusterNameFrom(ctx); err == nil {
+ if r.Spec.Extra == nil {
+ r.Spec.Extra = map[string]authorizationv1.ExtraValue{}
+ }
+ r.Spec.Extra[ClusterNameKey] = authorizationv1.ExtraValue{clusterName.Path().String()}
+ }
+
if attr.IsResourceRequest() {
r.Spec.ResourceAttributes = resourceAttributesFrom(attr)
} else {
diff --git a/staging/src/k8s.io/code-generator/kube_codegen.sh b/staging/src/k8s.io/code-generator/kube_codegen.sh
index 478ddde11a6a4..3936f119736f6 100755
--- a/staging/src/k8s.io/code-generator/kube_codegen.sh
+++ b/staging/src/k8s.io/code-generator/kube_codegen.sh
@@ -111,7 +111,6 @@ function kube::codegen::gen_helpers() {
conversion-gen"${CODEGEN_VERSION_SPEC}"
deepcopy-gen"${CODEGEN_VERSION_SPEC}"
defaulter-gen"${CODEGEN_VERSION_SPEC}"
- validation-gen"${CODEGEN_VERSION_SPEC}"
)
# shellcheck disable=2046 # printf word-splitting is intentional
GO111MODULE=on go install $(printf "k8s.io/code-generator/cmd/%s " "${BINS[@]}")
diff --git a/staging/src/k8s.io/kube-aggregator/hack/update-codegen.sh b/staging/src/k8s.io/kube-aggregator/hack/update-codegen.sh
index a58ff7ffad362..bc3d689617052 100755
--- a/staging/src/k8s.io/kube-aggregator/hack/update-codegen.sh
+++ b/staging/src/k8s.io/kube-aggregator/hack/update-codegen.sh
@@ -44,6 +44,7 @@ kube::codegen::gen_openapi \
--boilerplate "${SCRIPT_ROOT}/hack/boilerplate.go.txt" \
"${SCRIPT_ROOT}/pkg/apis"
+# kcp: TODO(gman0) re-add `--prefers-protobuf` once kcp-dev/{client-go,kcp} support the protobuf codec.
kube::codegen::gen_client \
--with-watch \
--output-dir "${SCRIPT_ROOT}/pkg/client" \
@@ -51,5 +52,4 @@ kube::codegen::gen_client \
--clientset-name "clientset_generated" \
--versioned-name "clientset" \
--boilerplate "${SCRIPT_ROOT}/hack/boilerplate.go.txt" \
- --prefers-protobuf \
"${SCRIPT_ROOT}/pkg/apis"
diff --git a/staging/src/k8s.io/kube-aggregator/pkg/controllers/openapi/aggregator/aggregator.go b/staging/src/k8s.io/kube-aggregator/pkg/controllers/openapi/aggregator/aggregator.go
index 7d4281f23461d..0766d67e3d787 100644
--- a/staging/src/k8s.io/kube-aggregator/pkg/controllers/openapi/aggregator/aggregator.go
+++ b/staging/src/k8s.io/kube-aggregator/pkg/controllers/openapi/aggregator/aggregator.go
@@ -26,6 +26,7 @@ import (
restful "github.com/emicklei/go-restful/v3"
+ genericrequest "k8s.io/apiserver/pkg/endpoints/request"
"k8s.io/apiserver/pkg/server"
"k8s.io/klog/v2"
v1 "k8s.io/kube-aggregator/pkg/apis/apiregistration/v1"
@@ -124,7 +125,7 @@ func BuildAndRegisterAggregator(downloader *Downloader, delegationTarget server.
// ignore errors for the empty delegate we attach at the end the chain
// atm the empty delegate returns 503 when the server hasn't been fully initialized
// and the spec downloader only silences 404s
- if len(delegate.ListedPaths()) == 0 && delegate.NextDelegate() == nil {
+		if len(delegate.ListedPaths(&genericrequest.Cluster{})) == 0 && delegate.NextDelegate() == nil { // TODO(kcp-1.28): should be removed once the full rebase lands
continue
}
delegationHandlers = append(delegationHandlers, handler)
diff --git a/staging/src/k8s.io/metrics/hack/update-codegen.sh b/staging/src/k8s.io/metrics/hack/update-codegen.sh
index 23e4e3ee2419d..e4befb5340577 100755
--- a/staging/src/k8s.io/metrics/hack/update-codegen.sh
+++ b/staging/src/k8s.io/metrics/hack/update-codegen.sh
@@ -32,9 +32,9 @@ kube::codegen::gen_helpers \
--boilerplate "${SCRIPT_ROOT}/hack/boilerplate.go.txt" \
"${SCRIPT_ROOT}/pkg/apis"
+# kcp: TODO(gman0) re-add `--prefers-protobuf` once kcp-dev/{client-go,kcp} support the protobuf codec.
kube::codegen::gen_client \
--output-dir "${SCRIPT_ROOT}/pkg/client" \
--output-pkg "${THIS_PKG}/pkg/client" \
--boilerplate "${SCRIPT_ROOT}/hack/boilerplate.go.txt" \
- --prefers-protobuf \
"${SCRIPT_ROOT}/pkg/apis"
diff --git a/test/e2e/apimachinery/discovery.go b/test/e2e/apimachinery/discovery.go
index f98ce820bf31f..7def10dfebb0f 100644
--- a/test/e2e/apimachinery/discovery.go
+++ b/test/e2e/apimachinery/discovery.go
@@ -22,6 +22,7 @@ import (
"path"
"strings"
+ "github.com/kcp-dev/logicalcluster/v3"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime/schema"
utilversion "k8s.io/apimachinery/pkg/util/version"
@@ -102,7 +103,7 @@ var _ = SIGDescribe("Discovery", func() {
// is an implementation detail, which shouldn't be relied on by
// the clients. The following calculation is for test purpose
// only.
- expected := discovery.StorageVersionHash(spec.Group, storageVersion, spec.Names.Kind)
+ expected := discovery.StorageVersionHash(logicalcluster.From(testcrd.Crd), spec.Group, storageVersion, spec.Names.Kind)
for _, r := range resources.APIResources {
if r.Name == spec.Names.Plural {
diff --git a/test/e2e_node/services/namespace_controller.go b/test/e2e_node/services/namespace_controller.go
index dc3ddb16029bb..995a78c08fc4a 100644
--- a/test/e2e_node/services/namespace_controller.go
+++ b/test/e2e_node/services/namespace_controller.go
@@ -20,7 +20,9 @@ import (
"context"
"time"
+ "github.com/kcp-dev/logicalcluster/v3"
v1 "k8s.io/api/core/v1"
+ metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/client-go/informers"
clientset "k8s.io/client-go/kubernetes"
"k8s.io/client-go/metadata"
@@ -71,7 +73,9 @@ func (n *NamespaceController) Start(ctx context.Context) error {
if err != nil {
return err
}
- discoverResourcesFn := client.Discovery().ServerPreferredNamespacedResources
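+	// kcp: the namespace controller now expects a cluster-aware discovery
+	// function; this test runs against a single cluster, so the cluster name
+	// can be ignored.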
+ discoverResourcesFn := func(clusterName logicalcluster.Name) ([]*metav1.APIResourceList, error) {
+ return client.Discovery().ServerPreferredNamespacedResources()
+ }
informerFactory := informers.NewSharedInformerFactory(client, ncResyncPeriod)
nc := namespacecontroller.NewNamespaceController(
diff --git a/test/integration/namespace/ns_conditions_test.go b/test/integration/namespace/ns_conditions_test.go
index 2f8a1d405185f..e7e4f93b9b6ac 100644
--- a/test/integration/namespace/ns_conditions_test.go
+++ b/test/integration/namespace/ns_conditions_test.go
@@ -23,6 +23,10 @@ import (
"testing"
"time"
+ kcpcorev1informers "github.com/kcp-dev/client-go/informers/core/v1"
+ kcpkubernetesclientset "github.com/kcp-dev/client-go/kubernetes"
+ kcpmetadata "github.com/kcp-dev/client-go/metadata"
+ "github.com/kcp-dev/logicalcluster/v3"
appsv1 "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
@@ -32,7 +36,6 @@ import (
"k8s.io/client-go/dynamic"
"k8s.io/client-go/informers"
clientset "k8s.io/client-go/kubernetes"
- "k8s.io/client-go/metadata"
restclient "k8s.io/client-go/rest"
"k8s.io/klog/v2/ktesting"
kubeapiservertesting "k8s.io/kubernetes/cmd/kube-apiserver/app/testing"
@@ -181,26 +184,28 @@ func namespaceLifecycleSetup(t *testing.T) (context.Context, kubeapiservertestin
config := restclient.CopyConfig(server.ClientConfig)
config.QPS = 10000
config.Burst = 10000
- clientSet, err := clientset.NewForConfig(config)
+ clientSet, err := kcpkubernetesclientset.NewForConfig(config)
if err != nil {
t.Fatalf("error in create clientset: %v", err)
}
resyncPeriod := 12 * time.Hour
- informers := informers.NewSharedInformerFactory(clientset.NewForConfigOrDie(restclient.AddUserAgent(config, "deployment-informers")), resyncPeriod)
+ informers := kcpcorev1informers.NewNamespaceClusterInformer(kcpkubernetesclientset.NewForConfigOrDie(restclient.AddUserAgent(config, "deployment-informers")), resyncPeriod, nil)
- metadataClient, err := metadata.NewForConfig(config)
+ metadataClient, err := kcpmetadata.NewForConfig(config)
if err != nil {
t.Fatal(err)
}
- discoverResourcesFn := clientSet.Discovery().ServerPreferredNamespacedResources
+ discoverResourcesFn := func(clusterName logicalcluster.Path) ([]*metav1.APIResourceList, error) {
+ return clientSet.Discovery().ServerPreferredNamespacedResources()
+ }
_, ctx := ktesting.NewTestContext(t)
controller := namespace.NewNamespaceController(
ctx,
clientSet,
metadataClient,
discoverResourcesFn,
- informers.Core().V1().Namespaces(),
+ informers,
10*time.Hour,
corev1.FinalizerKubernetes)