feat: auto tolerate daemonsets with MAP #117
base: main
config/admission-policy/binding.yaml (new file, @@ -0,0 +1,15 @@):

```yaml
---
# MutatingAdmissionPolicyBinding binds the policy to the ConfigMap parameter
apiVersion: admissionregistration.k8s.io/v1beta1
kind: MutatingAdmissionPolicyBinding
metadata:
  name: inject-daemonset-readiness-tolerations-binding
spec:
  policyName: inject-daemonset-readiness-tolerations
  # Reference the ConfigMap containing toleration data
  paramRef:
    name: readiness-taints
    namespace: nrr-system
    parameterNotFoundAction: Deny
  matchResources:
    namespaceSelector: {}
```
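Note that because the binding sets `parameterNotFoundAction: Deny`, DaemonSet admission is rejected while the referenced `readiness-taints` ConfigMap does not exist (see the review thread at the end of this PR). The rejection looks like this (output reproduced from that thread; the DaemonSet manifest name is illustrative):

```
❯ kubectl apply -f test-daemonset.yaml
Error from server (Forbidden): error when creating "test-daemonset.yaml": policy 'inject-daemonset-readiness-tolerations' with binding 'inject-daemonset-readiness-tolerations-binding' denied request: failed to configure binding: no params found for policy binding with `Deny` parameterNotFoundAction
```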
config/admission-policy/configmap.yaml (new file, @@ -0,0 +1,12 @@):

```yaml
---
# ConfigMap that stores readiness tolerations
# This will be populated/updated by the NodeReadinessRule controller
apiVersion: v1
kind: ConfigMap
metadata:
  name: readiness-taints
  namespace: nrr-system
data:
  # Comma-separated list of taint keys, for easy CEL access
  # Format: readiness.k8s.io/NetworkReady,readiness.k8s.io/StorageReady
  taint-keys: ""
```
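Once the controller has synced at least one NodeReadinessRule, `taint-keys` is populated; for example (value taken from the review discussion below, assuming NetworkReady and StorageReady rules exist):

```yaml
data:
  taint-keys: "readiness.k8s.io/NetworkReady,readiness.k8s.io/StorageReady"
```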
config/admission-policy/kustomization.yaml (new file, @@ -0,0 +1,12 @@):

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - configmap.yaml
  - policy.yaml
  - binding.yaml

labels:
  - pairs:
      app.kubernetes.io/name: nrrcontroller
      app.kubernetes.io/component: admission-policy
```
config/admission-policy/policy.yaml (new file, @@ -0,0 +1,71 @@):

```yaml
---
# MutatingAdmissionPolicy for automatic DaemonSet toleration injection
# Reads taint keys from a ConfigMap parameter resource
# Requires: MutatingAdmissionPolicy feature enabled in the cluster
apiVersion: admissionregistration.k8s.io/v1beta1
kind: MutatingAdmissionPolicy
metadata:
  name: inject-daemonset-readiness-tolerations
spec:
  failurePolicy: Fail

  # Define what this policy watches
  matchConstraints:
    resourceRules:
      - apiGroups: ["apps"]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["daemonsets"]

  # Reference the ConfigMap that contains toleration data
  paramKind:
    apiVersion: v1
    kind: ConfigMap

  # Variables for CEL expressions
  variables:
    # Check if opt-out annotation is set
    - name: optedOut
      expression: |
        has(object.metadata.annotations) &&
        object.metadata.annotations.exists(k, k == "readiness.k8s.io/auto-tolerate" && object.metadata.annotations[k] == "false")

    # Get existing tolerations (empty array if none)
    - name: existingTolerations
      expression: |
        has(object.spec.template.spec.tolerations) ?
        object.spec.template.spec.tolerations : []

    # Get taint keys from ConfigMap and parse to array
    - name: taintKeys
      expression: |
        ("taint-keys" in params.data) && params.data["taint-keys"] != "" ?
        params.data["taint-keys"].split(",") : []
```
**Member** commented on the `taintKeys` variable:

Suggested change

Currently, if someone intentionally adds an empty taint key to the ConfigMap, our MAP policy will create a matching empty-key wildcard toleration. For example:

```
❯ kubectl patch configmap readiness-taints -n nrr-system --type=merge -p '{"data":{"taint-keys":"readiness.k8s.io/NetworkReady,,readiness.k8s.io/StorageReady"}}'
configmap/readiness-taints patched

// notice the two commas, which basically mean an empty taint key
❯ kubectl get configmap -n nrr-system readiness-taints -o jsonpath='{.data.taint-keys}'
readiness.k8s.io/NetworkReady,,readiness.k8s.io/StorageReady

// the empty taint key above injected an empty-key wildcard toleration on the ds
❯ kubectl get ds test-ds -o jsonpath='{.spec.template.spec.tolerations[*]}' | jq
{
  "effect": "NoSchedule",
  "key": "readiness.k8s.io/NetworkReady",
  "operator": "Exists"
}
{
  "effect": "NoSchedule",
  "operator": "Exists"
}
{
  "effect": "NoSchedule",
  "key": "readiness.k8s.io/StorageReady",
  "operator": "Exists"
}
```
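One possible way to guard against this (a sketch only, not the suggested change from the review) is to drop empty entries when splitting the ConfigMap value in the `taintKeys` variable:

```cel
// hypothetical variant of the taintKeys expression: filter out empty strings after splitting
("taint-keys" in params.data) && params.data["taint-keys"] != "" ?
  params.data["taint-keys"].split(",").filter(k, k != "") : []
```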
policy.yaml (continued):

```yaml
    # Create tolerations from taint keys (as plain maps since CEL has issues with complex types)
    - name: tolerationsToInject
      expression: |
        variables.taintKeys
        .filter(key, !variables.existingTolerations.exists(t, t.key == key))
        .map(key, {
          "key": key,
          "operator": "Exists",
          "effect": "NoSchedule"
        })

  # Apply mutations
  mutations:
    - patchType: JSONPatch
      jsonPatch:
        expression: |
          !variables.optedOut && size(variables.tolerationsToInject) > 0 ?
          [
            JSONPatch{
              op: has(object.spec.template.spec.tolerations) ? "replace" : "add",
              path: "/spec/template/spec/tolerations",
              value: variables.existingTolerations + variables.tolerationsToInject
            }
          ] : []

  # Never reinvoke this policy
  reinvocationPolicy: Never
```
Generated RBAC role manifest (@@ -7,6 +7,7 @@ rules:) — the hunk adds `configmaps` to the existing resources list:

```yaml
- apiGroups:
  - ""
  resources:
  - configmaps
  - nodes
  verbs:
  - create
```
New documentation file (@@ -0,0 +1,54 @@):

# MutatingAdmissionPolicy for DaemonSet Toleration Injection

This document describes how to deploy and use the MutatingAdmissionPolicy-based approach for automatically injecting readiness tolerations into DaemonSets.

## Overview

The MutatingAdmissionPolicy approach uses Kubernetes's native admission control mechanism with CEL (Common Expression Language) to inject tolerations **without running a webhook server**. This provides a simpler, more declarative alternative to the webhook-based approach.
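For illustration, a DaemonSet admitted while a `readiness.k8s.io/NetworkReady` rule is active ends up with a toleration like the following (example taken from the test output in the review comments):

```yaml
tolerations:
  - key: readiness.k8s.io/NetworkReady
    operator: Exists
    effect: NoSchedule
```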
## Requirements

> [!IMPORTANT]
> MutatingAdmissionPolicy must be enabled in the cluster.

- Feature gate: `MutatingAdmissionPolicy=true`
- Runtime config: `admissionregistration.k8s.io/v1beta1=true`
- `kubectl` configured to access your cluster
- NodeReadinessRule CRDs installed

## Architecture
```
User applies DaemonSet
        ↓
API Server evaluates CEL policy
        ↓
Fetches the tolerations ConfigMap, which contains the tolerations to be injected
        ↓
Injects tolerations (if applicable)
        ↓
DaemonSet created with tolerations
```

**Contributor** commented on the "Injects tolerations (if applicable)" step: this doesn't tell what is applicable. Could we also add details on the annotation requirement?
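On the annotation question: per the `optedOut` variable in policy.yaml, injection is skipped for DaemonSets that carry the opt-out annotation, e.g. (illustrative snippet):

```yaml
metadata:
  annotations:
    readiness.k8s.io/auto-tolerate: "false"
```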
## Deployment

### Option 1: Using kustomize

```bash
# Install CRDs first
make install

# Deploy the admission policy
kubectl apply -k config/admission-policy
```

### Option 2: Direct kubectl apply

```bash
# Install CRDs first
make install

# Deploy policy and binding
kubectl apply -f config/admission-policy/policy.yaml
kubectl apply -f config/admission-policy/binding.yaml
```
**Contributor** commented on lines +44 to +54: hmm, this looks redundant. Also, don't we need to show updating the ConfigMap and how we maintain it, given the readiness-taints ConfigMap is introduced later?
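On maintaining the ConfigMap: the controller's `syncTaintsConfigMap` (added below) keeps `readiness-taints` in sync with the NodeReadinessRules, so manual edits should not normally be needed. It can be inspected with commands like these (reproduced from the review discussion):

```bash
kubectl get cm readiness-taints -n nrr-system -o jsonpath='{.data}'
kubectl get configmap -n nrr-system readiness-taints -o jsonpath='{.data.taint-keys}'
```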
Controller changes (@@ -90,6 +90,8 @@ func (r *RuleReconciler) SetupWithManager(ctx context.Context, mgr ctrl.Manager)):

```go
// +kubebuilder:rbac:groups=readiness.node.x-k8s.io,resources=nodereadinessrules,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=readiness.node.x-k8s.io,resources=nodereadinessrules/status,verbs=get;update;patch
// +kubebuilder:rbac:groups=readiness.node.x-k8s.io,resources=nodereadinessrules/finalizers,verbs=update
// +kubebuilder:rbac:groups="",resources=configmaps,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups="",resources=nodes,verbs=get;list;watch;create;update;patch;delete
```

**Contributor (author)** on the `configmaps` marker: @Priyankasaggu11929, regarding your comment above, I added these inline kubebuilder markers and regenerated the manifests.

**Contributor (author)** on the `nodes` marker: although we already had the nodes RBAC roles, the marker was missing in the inline comments, so I added it.
**Member** replied: @pehlicd, I looked into it further. Firstly, the node resource RBAC added above - node-readiness-controller/internal/controller/node_controller.go Lines 84 to 86 in 354b151

So, let's remove the redundant markers from here. What I realised is we don't need the above cluster-scoped ConfigMap RBAC either - node-readiness-controller/config/rbac/leader_election_role.yaml Lines 9 to 21 in 354b151

The only problem is that currently the (controller-runtime's informer) cache is trying to watch ConfigMaps at the cluster scope (see the error below again), but our namespaced role only provides permissions for the nrr-system namespace, so I suggest we configure the cache to only watch ConfigMaps in our required namespace. I tried the following, and it worked:

```diff
diff --git a/cmd/main.go b/cmd/main.go
index 34652e9..1527845 100644
--- a/cmd/main.go
+++ b/cmd/main.go
@@ -28,11 +28,14 @@ import (
 	_ "k8s.io/client-go/plugin/pkg/client/auth"
 	"k8s.io/client-go/rest"
 
+	corev1 "k8s.io/api/core/v1"
 	"k8s.io/apimachinery/pkg/runtime"
 	utilruntime "k8s.io/apimachinery/pkg/util/runtime"
 	"k8s.io/client-go/kubernetes"
 	clientgoscheme "k8s.io/client-go/kubernetes/scheme"
 	ctrl "sigs.k8s.io/controller-runtime"
+	"sigs.k8s.io/controller-runtime/pkg/cache"
+	"sigs.k8s.io/controller-runtime/pkg/client"
 	"sigs.k8s.io/controller-runtime/pkg/healthz"
 	"sigs.k8s.io/controller-runtime/pkg/log/zap"
 	"sigs.k8s.io/controller-runtime/pkg/metrics/filters"
@@ -107,6 +110,15 @@ func main() {
 		HealthProbeBindAddress: probeAddr,
 		LeaderElection:         enableLeaderElection,
 		LeaderElectionID:       "ba65f13e.readiness.node.x-k8s.io",
+		Cache: cache.Options{
+			ByObject: map[client.Object]cache.ByObject{
+				&corev1.ConfigMap{}: {
+					Namespaces: map[string]cache.Config{
+						"nrr-system": {},
+					},
+				},
+			},
+		},
 	})
 	if err != nil {
 		setupLog.Error(err, "unable to start manager")
```

**Member**: @ajaysundark, for your review as well ^
@@ -166,13 +168,20 @@ func (r *RuleReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error):

```go
		return ctrl.Result{RequeueAfter: time.Minute}, err
	}

	// Sync taints to ConfigMap for MutatingAdmissionPolicy
	if err := r.Controller.syncTaintsConfigMap(ctx); err != nil {
		log.Error(err, "Failed to sync taints configmap", "rule", rule.Name)
		// Don't fail reconciliation for this - log and continue
	}

	return ctrl.Result{}, nil
}

// reconcileDelete handles the rules deletion, It performs following actions
// 1. Deletes the taints associated with the rule.
// 2. Remove the rule from the cache.
// 3. Remove the finalizer from the rule.
// 4. Sync the Taints ConfigMap.
func (r *RuleReconciler) reconcileDelete(ctx context.Context, rule *readinessv1alpha1.NodeReadinessRule) (ctrl.Result, error) {
	log := ctrl.LoggerFrom(ctx)
```

@@ -196,6 +205,13 @@ func (r *RuleReconciler) reconcileDelete(ctx context.Context, rule *readinessv1alpha1.NodeReadinessRule) (ctrl.Result, error):

```go
	if err != nil {
		return ctrl.Result{}, err
	}

	// Sync taints to ConfigMap for MutatingAdmissionPolicy
	if err := r.Controller.syncTaintsConfigMap(ctx); err != nil {
		log.Error(err, "Failed to sync taints configmap", "rule", rule.Name)
		// Don't fail reconciliation for this - log and continue
	}

	return ctrl.Result{}, nil
}
```
@@ -671,6 +687,79 @@ func (r *RuleReadinessController) cleanupNodesAfterSelectorChange(ctx context.Co:

```go
	return nil
}

// syncTaintsConfigMap synchronizes readiness taints to a ConfigMap for admission policy.
func (r *RuleReadinessController) syncTaintsConfigMap(ctx context.Context) error {
	log := ctrl.LoggerFrom(ctx)

	// List all NodeReadinessRules
	var ruleList readinessv1alpha1.NodeReadinessRuleList
	if err := r.List(ctx, &ruleList); err != nil {
		return fmt.Errorf("failed to list NodeReadinessRules: %w", err)
	}

	// Extract unique taint keys with readiness.k8s.io/ prefix and NoSchedule effect
	taintKeysSet := make(map[string]struct{})
	for _, rule := range ruleList.Items {
		// Skip rules that are being deleted
		if !rule.DeletionTimestamp.IsZero() {
			continue
		}
		if rule.Spec.Taint.Key != "" &&
			strings.HasPrefix(rule.Spec.Taint.Key, "readiness.k8s.io/") &&
			rule.Spec.Taint.Effect == corev1.TaintEffectNoSchedule {
			taintKeysSet[rule.Spec.Taint.Key] = struct{}{}
		}
	}

	// Convert set to comma-separated string
	taintKeys := make([]string, 0, len(taintKeysSet))
	for key := range taintKeysSet {
		taintKeys = append(taintKeys, key)
	}
	taintKeysStr := strings.Join(taintKeys, ",")

	// Update or create ConfigMap
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "readiness-taints",
			Namespace: "nrr-system",
		},
	}

	// Try to get existing ConfigMap
	existingCM := &corev1.ConfigMap{}
	err := r.Get(ctx, client.ObjectKey{Name: "readiness-taints", Namespace: "nrr-system"}, existingCM)
	if err != nil && !apierrors.IsNotFound(err) {
		return fmt.Errorf("failed to get configmap: %w", err)
	}

	// Set data
	cm.Data = map[string]string{
		"taint-keys": taintKeysStr,
	}

	if apierrors.IsNotFound(err) {
		// Create new ConfigMap
		log.Info("Creating readiness-taints ConfigMap", "taintCount", len(taintKeys))
		if err := r.Create(ctx, cm); err != nil {
			return fmt.Errorf("failed to create configmap: %w", err)
		}
	} else {
		// Update existing ConfigMap
		log.V(1).Info("Updating readiness-taints ConfigMap", "taintCount", len(taintKeys))
		patch := client.MergeFrom(existingCM.DeepCopy())
		existingCM.Data = cm.Data
		if err := r.Patch(ctx, existingCM, patch); err != nil {
			return fmt.Errorf("failed to update configmap: %w", err)
		}
	}

	log.V(2).Info("Successfully synced taints to ConfigMap",
		"totalRules", len(ruleList.Items),
		"readinessTaints", len(taintKeys))
	return nil
}

func (r *RuleReconciler) ensureFinalizer(ctx context.Context, rule *readinessv1alpha1.NodeReadinessRule, finalizer string) (finalizerAdded bool, err error) {
	// Finalizers can only be added when the deletionTimestamp is not set.
	if !rule.GetDeletionTimestamp().IsZero() {
```
**Reviewer**: Does `parameterNotFoundAction: Deny` block DaemonSet creations in the absence of this ConfigMap param?

**Reply**: Yes, in my local testing with the PR changes, it is blocking creation of new DaemonSets. It works if a user creates node-readiness-rules first and then DaemonSets, but the other way around blocks the creation of new DaemonSets.

I tested the behaviour by disabling the empty ConfigMap file in the kustomization.yaml, because I was wondering why we need an empty ConfigMap file at all if the ConfigMap creation is taken care of by the controller itself - https://github.com/kubernetes-sigs/node-readiness-controller/pull/117/changes#diff-b857080def3b2fecd3967ca35b1bdc29564dda0b064fa902fa24efcd55471899R721-R727

But the thing is - the controller logic only kicks in when a user creates a new node-readiness-rule. So if a user creates a DaemonSet before creating the node-readiness-rules, that DaemonSet creation is blocked. The presence of the empty ConfigMap file at the same time the MAP Policy/Policy-Binding is created fixes it.

```
❯ kubectl get node nrr-test-worker -o jsonpath={.spec.taints} | jq .
[
  {
    "effect": "NoSchedule",
    "key": "readiness.k8s.io/NetworkReady",
    "value": "pending"
  }
]

// applying MAP policy/policy-binding without the empty configmap
❯ kubectl apply -k config/admission-policy
mutatingadmissionpolicy.admissionregistration.k8s.io/inject-daemonset-readiness-tolerations created
mutatingadmissionpolicybinding.admissionregistration.k8s.io/inject-daemonset-readiness-tolerations-binding created

❯ kubectl get cm -n nrr-system
NAME               DATA   AGE
kube-root-ca.crt   1      41s

// daemonset creation is blocked
❯ kubectl apply -f test-daemonset.yaml
Error from server (Forbidden): error when creating "test-daemonset.yaml": policy 'inject-daemonset-readiness-tolerations' with binding 'inject-daemonset-readiness-tolerations-binding' denied request: failed to configure binding: no params found for policy binding with `Deny` parameterNotFoundAction

// creating a new nrr rule, which in turn triggers configmap creation/patch per the PR changes
❯ kubectl apply -f examples/cni-readiness/network-readiness-rule.yaml
nodereadinessrule.readiness.node.x-k8s.io/network-readiness-rule created

❯ kubectl get cm readiness-taints -n nrr-system -o jsonpath={.data}
{"taint-keys":"readiness.k8s.io/NetworkReady"}

// now daemonset creation works
❯ kubectl apply -f test-daemonset.yaml
daemonset.apps/test-ds created
```