Description
What would you like to be added:
Support for cluster-scoped resources (like Namespace, Node, ClusterRole, CustomResourceDefinition, etc.) in controller sharding.
Why is this needed:
Currently, controller sharding only works with namespace-scoped resources. However, most of the sharding infrastructure already supports cluster-scoped resources - there's just one filtering issue preventing it from working.
This would be useful for namespace management at scale (control plane clusters serving thousands of namespaces, K8s-as-a-Service providers), managing nodes in large clusters, or distributing RBAC operations across ClusterRole controllers.
Looking at the code, most components already work fine - the hash key generation already handles cluster-scoped resources, the webhook works with any GroupResource type, and the controller library handles both scoped types correctly. The only issue is in the sharder controller where cluster-scoped resources get filtered out:
```go
if !o.Namespaces.Has(obj.GetNamespace()) {
	return nil // cluster-scoped resources have empty namespace, always skipped
}
```

This requires two complementary changes. First, modify the namespace filtering to allow cluster-scoped resources:
```go
// Current implementation
if !o.Namespaces.Has(obj.GetNamespace()) {
	return nil
}

// Proposed fix: an empty namespace means the object is cluster-scoped
if obj.GetNamespace() != "" && !o.Namespaces.Has(obj.GetNamespace()) {
	return nil
}
```

Second, fix the hash key generation issue.
The current hash key format `group/kind/namespace/name` creates inconsistencies when Namespace itself is a sharding target:

```go
// Namespace "sandbox" hash key:          "//Namespace//sandbox"
// Pod controlled by Namespace "sandbox": "//Namespace/sandbox/sandbox"
```

Proposed hash key improvement using controller-runtime's official API:
```go
func ForObject(obj client.Object, scheme *runtime.Scheme, restMapper meta.RESTMapper) (string, error) {
	gvk := obj.GetObjectKind().GroupVersionKind()
	// ...existing validation...

	// Use controller-runtime's official scope detection
	isNamespaced, err := apiutil.IsObjectNamespaced(obj, scheme, restMapper)
	if err != nil {
		return "", fmt.Errorf("error determining object scope: %w", err)
	}

	if isNamespaced {
		return gvk.Group + "/" + gvk.Kind + "/" + obj.GetNamespace() + "/" + obj.GetName(), nil
	}

	// cluster-scoped: exclude namespace component
	return gvk.Group + "/" + gvk.Kind + "/" + obj.GetName(), nil
}
```

This approach uses controller-runtime's standard scope detection, works with custom resources, and fixes the hash key consistency issue: Namespace "sandbox" and the objects it controls now end up with the same key, so they hash to the same shard.
Since the project is not yet production-ready, this breaking change to hash key generation should be acceptable.
Does this align with the project's design philosophy? I see this is already in the backlog of the kubernetes-controller-sharding project, so I wanted to confirm there are no design considerations that would make this implementation inappropriate.