Adding network policies to kmm including to kmm bundle #1652
Conversation
Walkthrough

This PR restructures bundle generation to include network policies. It adds new Makefile targets (`bundle-old`, `bundle-hub-old`) while updating the active bundle targets to pass `INCLUDE_NETWORK_POLICIES=true`. The changes remove Kubernetes app labels from deployments and services across bundle manifests, add NetworkPolicy resources for the controller and webhook pods, disable label `includeSelectors` in kustomization files, add namespace specifications to test resources, and extend the generate-bundle script to conditionally copy network-policy manifests.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
Pre-merge checks: ✅ 3 checks passed.
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: TomerNewman. The full list of commands accepted by this bot can be found here; the pull request process is described here. Needs approval from an approver in each of these files. Approvers can indicate their approval by writing the approval command in a comment.
Actionable comments posted: 11
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (8)
bundle-hub/manifests/kernel-module-management-hub.clusterserviceversion.yaml (2)
228-231: Deployment selector immutability: changing matchLabels will break upgrades. `.spec.selector` is immutable on Kubernetes Deployments. Reducing it to only control-plane: controller (from a wider label set) will cause OLM’s patch to fail on upgrade with a 422 error. If the goal is to target NetworkPolicies by control-plane only, keep the previous selector keys and add the control-plane label in addition; do not remove selector keys.
Proposed fix: add back the app.kubernetes.io/* labels to matchLabels while keeping control-plane. Also add the same labels under template.metadata.labels to remain consistent.
```diff
       selector:
         matchLabels:
-          control-plane: controller
+          control-plane: controller
+          app.kubernetes.io/component: kmm-hub
+          app.kubernetes.io/name: kmm-hub
+          app.kubernetes.io/part-of: kmm
```

And align the pod template labels:

```diff
       metadata:
         annotations:
           kubectl.kubernetes.io/default-container: manager
         labels:
-          control-plane: controller
+          control-plane: controller
+          app.kubernetes.io/component: kmm-hub
+          app.kubernetes.io/name: kmm-hub
+          app.kubernetes.io/part-of: kmm
```
319-322: Same selector immutability risk for the webhook deployment. The webhook’s Deployment selector was also narrowed to control-plane only. This is equally immutable and risky.
Apply the same correction here.
```diff
       selector:
         matchLabels:
-          control-plane: webhook-server
+          control-plane: webhook-server
+          app.kubernetes.io/component: kmm-hub
+          app.kubernetes.io/name: kmm-hub
+          app.kubernetes.io/part-of: kmm
```

And align the pod template labels:

```diff
       metadata:
         annotations:
           kubectl.kubernetes.io/default-container: webhook-server
         labels:
-          control-plane: webhook-server
+          control-plane: webhook-server
+          app.kubernetes.io/component: kmm-hub
+          app.kubernetes.io/name: kmm-hub
+          app.kubernetes.io/part-of: kmm
```

bundle/manifests/kernel-module-management.clusterserviceversion.yaml (2)
304-307: Deployment selector change will block seamless OLM upgrades. Narrowing matchLabels to only control-plane: controller alters an immutable field. On upgrade, the Deployment cannot be patched and the CSV may fail to progress. Preserve the original selector keys and simply add control-plane for NetworkPolicy targeting.
```diff
       selector:
         matchLabels:
-          control-plane: controller
+          control-plane: controller
+          app.kubernetes.io/component: kmm
+          app.kubernetes.io/name: kmm
+          app.kubernetes.io/part-of: kmm
```

And align the pod template labels:

```diff
       metadata:
         annotations:
           kubectl.kubernetes.io/default-container: manager
         labels:
-          control-plane: controller
+          control-plane: controller
+          app.kubernetes.io/component: kmm
+          app.kubernetes.io/name: kmm
+          app.kubernetes.io/part-of: kmm
```
397-400: Webhook Deployment selector immutability issue (same as controller). Apply the same fix for the webhook Deployment.
```diff
       selector:
         matchLabels:
-          control-plane: webhook-server
+          control-plane: webhook-server
+          app.kubernetes.io/component: kmm
+          app.kubernetes.io/name: kmm
+          app.kubernetes.io/part-of: kmm
```

And align the pod template labels:

```diff
       metadata:
         annotations:
           kubectl.kubernetes.io/default-container: webhook-server
         labels:
-          control-plane: webhook-server
+          control-plane: webhook-server
+          app.kubernetes.io/component: kmm
+          app.kubernetes.io/name: kmm
+          app.kubernetes.io/part-of: kmm
```

config/default-hub/kustomization.yaml (1)
20-25: NetworkPolicy manifests require namespace and selector updates

All NetworkPolicy resources under `config/network-policy/` currently hardcode `namespace: system` and target pod labels (`control-plane: webhook-server`, `openshift.io/build.name`, etc.) that do not match the hub pods (which use `app.kubernetes.io` labels). To ensure the namespace transformer injects `openshift-kmm-hub` and the policies actually select the KMM Hub workloads, please:

- Remove any `metadata.namespace: system` lines so the kustomize namespace transformer applies `openshift-kmm-hub`.
- Update each policy’s `podSelector.matchLabels` (or `matchExpressions`) to reference the hub’s labels:
  - `app.kubernetes.io/component: kmm-hub`
  - `app.kubernetes.io/name: kmm-hub`
  - `app.kubernetes.io/part-of: kmm`

Affected files:

- config/network-policy/webhook.yaml
- config/network-policy/controller.yaml
- config/network-policy/build-and-sign.yaml
- config/network-policy/default-deny.yaml

Example diff for `webhook.yaml`:

```diff
--- a/config/network-policy/webhook.yaml
@@
 metadata:
-  namespace: system
 spec:
-  podSelector:
-    matchLabels:
-      control-plane: webhook-server
+  podSelector:
+    matchLabels:
+      app.kubernetes.io/component: kmm-hub
+      app.kubernetes.io/name: kmm-hub
+      app.kubernetes.io/part-of: kmm
```

Please apply similar changes across all policies to align namespace injection and selectors with the hub namespace and labels.
bundle-hub/manifests/kmm-operator-hub-default-deny_networking.k8s.io_v1_networkpolicy.yaml (2)
1-11: Hardcoded `namespace: system` — this will misplace resources or break OLM installs. Remove it.

NetworkPolicy is namespaced. In bundle manifests, hardcoding `namespace: system` can cause the policy to be created in the wrong namespace or fail installation under OLM. The bundle should omit the namespace so OLM/kustomize can place it into the operator’s target namespace (e.g., openshift-kmm-hub).

Because this file is generated from config/network-policy/default-deny.yaml, fix it at the source by removing the namespace there and regenerate. For the committed bundle artifact, also drop the namespace line:

```diff
 apiVersion: networking.k8s.io/v1
 kind: NetworkPolicy
 metadata:
   name: default-deny
-  namespace: system
 spec:
   podSelector: {}
   policyTypes:
   - Ingress
   - Egress
```

Follow-up sanity checks after regeneration:

```bash
#!/bin/bash
# Ensure no NetworkPolicy in the repo hardcodes a namespace
rg -nP -C1 'kind:\s*NetworkPolicy' | awk -F: '{print $1}' | sort -u \
  | xargs -I{} rg -n '^\s*namespace:\s*' {}
```
1-11: Add DNS egress to the webhook NetworkPolicy

The controller policy correctly allows DNS (UDP/TCP 53) and API server (TCP 6443/443) egress, and the build-and-sign policy currently allows all egress traffic. However, the webhook policy only permits egress to the API server and lacks DNS allowances, so webhook pods cannot resolve the Kubernetes API service by name.

Please update `config/network-policy/webhook.yaml` to include egress rules for DNS that mirror the controller policy:

- a namespaceSelector matching `dns.operator.openshift.io/daemonset-dns: default`
- ports UDP/TCP 53
Also confirm that:
- The controller and build-and-sign policies cover their required DNS/API traffic.
- e2e tests exercise both webhook admission and controller → API flows under NetworkPolicy enforcement.
config/network-policy/default-deny.yaml (1)
1-11: Remove hardcoded namespace from the source policy; let kustomize/OLM inject the target namespace.

Same issue as the generated bundle: `namespace: system` should not be present. Removing it here ensures both direct deploys and generated bundles are correct.

Apply:

```diff
 apiVersion: networking.k8s.io/v1
 kind: NetworkPolicy
 metadata:
   name: default-deny
-  namespace: system
 spec:
   podSelector: {}
   policyTypes:
   - Ingress
   - Egress
```

Optionally, add labels to ease auditing and selection consistency:

```diff
 metadata:
   name: default-deny
+  labels:
+    app.kubernetes.io/part-of: kmm
+    app.kubernetes.io/name: kmm
```
♻️ Duplicate comments (3)
bundle/manifests/kmm-operator-controller_networking.k8s.io_v1_networkpolicy.yaml (1)
13-19: Same scope considerations as the source policy (ingress open; egress wide).Mirror the decision from config/network-policy/controller.yaml; keep both copies aligned to avoid drift.
Also applies to: 32-36
bundle-hub/manifests/kmm-operator-hub-controller_networking.k8s.io_v1_networkpolicy.yaml (2)
13-19: Replicate the source-policy scope decision here and keep in sync.Ingress openness and egress breadth mirror the source file; keep them aligned or consider scoping as discussed.
Also applies to: 32-36
3-6: Remove hardcoded namespace in hub bundle too.

Same as the non-hub bundle: drop `metadata.namespace: system` or ensure your bundling pipeline rewrites it to the operator’s namespace.

```diff
 metadata:
   name: controller
-  namespace: system
```
🧹 Nitpick comments (21)
bundle-hub/manifests/kernel-module-management-hub.clusterserviceversion.yaml (1)
220-238: Keep redundant labels on pods for observability and policy scoping. Even if NetworkPolicies key off control-plane, retaining app.kubernetes.io/* labels on pods aids metrics, log routing, RBAC aggregation, and future policies without affecting your intended policy tightening.
Also applies to: 311-329
bundle/manifests/kernel-module-management.clusterserviceversion.yaml (1)
296-300: Operational advice: prefer additive labels over selector surgery. If the motivation was “simplify policy selectors,” you can keep control-plane as the canonical policy label without removing existing selector keys. This avoids any immutable-field changes and keeps compatibility with tooling and dashboards keyed on app.kubernetes.io/*.
Also applies to: 390-394
bundle/manifests/kmm-operator-webhook_networking.k8s.io_v1_networkpolicy.yaml (2)
13-16: Ingress allows 9443 from all peers; this is the practical choice for webhooks. Given that kube-apiserver traffic originates from hostNetwork and doesn’t match pod/namespace selectors, allowing any source on 9443 is a common pattern to avoid breaking admissions. This is fine; document the rationale so future hardening attempts don’t break the webhook.
Add a brief comment:
```diff
   ingress:
-  - ports:
+  - # apiserver originates from hostNetwork; allow from all peers on webhook port
+    ports:
```
17-22: Consider tightening egress for the webhook (443 may be unnecessary). For the webhook, egress to 6443 (Kubernetes API) is typically sufficient. Port 443 is often needed by build/sign components, not the webhook. If the webhook doesn’t call external HTTPS endpoints, drop 443 here to reduce blast radius.
```diff
   egress:
   - ports: # kube api server port
     - protocol: TCP
       port: 6443
-    - protocol: TCP
-      port: 443
```

If you do need 443 for a known destination (e.g., cluster monitoring), prefer constraining with a namespaceSelector/podSelector or ipBlock.
config/olm-hub/kustomization.yaml (1)
13-18: Safer selector handling; ensure hub resources still match Pods. Same feedback as for the non-hub variant: with includeSelectors disabled, confirm hub Deployments/Services selectors still match the Pod template labels and that NetworkPolicies selecting on these labels continue to target the intended Pods.
Optional clarity tweak:
```diff
 labels:
-- includeSelectors: false
+- includeSelectors: false
+  includeTemplates: true
   pairs:
     app.kubernetes.io/component: kmm-hub
     app.kubernetes.io/name: kmm-hub
     app.kubernetes.io/part-of: kmm
```

bundle-hub/manifests/kmm-operator-hub-webhook_networking.k8s.io_v1_networkpolicy.yaml (1)
17-22: Egress is wide open on 6443/443; consider scoping and DNS
- Today the policy allows egress to any destination on 6443 and 443. If the webhook only talks to the kube-apiserver, scope “to” by namespaceSelector as above.
- If the client uses DNS (kubernetes.default.svc), ensure DNS egress (TCP/UDP 53 to CoreDNS/openshift-dns) is permitted somewhere. Many clients resolve via env vars, but verify your client behavior.
Illustrative tightening:
```diff
 egress:
-- ports: # kube api server port
-  - protocol: TCP
-    port: 6443
-  - protocol: TCP
-    port: 443
+- to:
+  - namespaceSelector:
+      matchLabels:
+        kubernetes.io/metadata.name: kube-system
+  - namespaceSelector:
+      matchLabels:
+        kubernetes.io/metadata.name: openshift-kube-apiserver
+  ports:
+  - protocol: TCP
+    port: 6443
+- to:
+  - namespaceSelector:
+      matchLabels:
+        kubernetes.io/metadata.name: kube-system
+  - namespaceSelector:
+      matchLabels:
+        kubernetes.io/metadata.name: openshift-dns
+  ports:
+  - protocol: UDP
+    port: 53
+  - protocol: TCP
+    port: 53
```

Also please confirm whether port 443 is actually required for the webhook; if not, drop it.
config/network-policy/build-and-sign.yaml (2)
11-14: Indentation and EOF newline nits flagged by yamllint. The policyTypes item is over-indented and there’s no trailing newline.
Apply:
```diff
 policyTypes:
-    - Egress
+  - Egress
 egress:
   - {}
+
```
13-14: Egress ‘allow-all’ is pragmatic for builds but worth documenting or narrowing. `egress: [{}]` effectively disables egress filtering for matched pods. That may be intentional (pull bases, reach registries, fetch deps). If security policy allows, consider narrowing to:
- DNS (TCP/UDP 53) in kube-system/openshift-dns
- Cluster registry and any required external registries
- Optional: object storage endpoints used during builds/signing
I can help author a minimally permissive rule set once you enumerate the endpoints these pods must reach.
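To make the shape concrete, a narrowed rule set along those lines might look like the sketch below; the `openshift-dns` namespace, the build-pod selector, and the 443-only registry assumption are all illustrative, not taken from this PR.

```yaml
# Sketch only: narrowed egress for build/sign pods.
# Assumptions: cluster DNS lives in openshift-dns; registries are reached over TCP 443.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: build-and-sign-egress
spec:
  podSelector:
    matchExpressions:
    - key: openshift.io/build.name   # matches build pods, as in the existing policy
      operator: Exists
  policyTypes:
  - Egress
  egress:
  - to:                              # DNS resolution
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: openshift-dns
    ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53
  - ports:                           # registry pulls/pushes over TLS
    - protocol: TCP
      port: 443
```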
bundle/manifests/kmm-operator-default-deny_networking.k8s.io_v1_networkpolicy.yaml (1)
4-5: Minor YAML alignment cleanup

metadata fields are indented with a single space; the rest of the repo seems to prefer two-space indentation.

```diff
 metadata:
- name: default-deny
- namespace: system
+  name: default-deny
+  namespace: system
```

config/webhook-server/kustomization.yaml (1)
12-14: Labels updated, but the NetworkPolicy still matches ‘control-plane’; align labels. You’ve added app.kubernetes.io/component: webhook-server here, but the webhook NetworkPolicy matches control-plane: webhook-server. Either:
- Add control-plane: webhook-server here as well, or
- Change the NetworkPolicy to match app.kubernetes.io/component: webhook-server.
If you prefer to add the extra label via kustomize:
```diff
 labels:
 - pairs:
     app.kubernetes.io/component: webhook-server
+- pairs:
+    control-plane: webhook-server
```

This keeps the NetworkPolicy working without further changes.
ci/e2e/kustomization.yaml (1)
8-9: Heads-up: disableNameSuffixHash=true disables rollout-on-config-change. With hashes disabled, Deployments/Pods depending on these ConfigMaps/Secrets won’t auto-roll on content changes. If that’s desired for CI, fine; otherwise, consider enabling hashes and referencing them via envFrom/volumes to trigger rollouts.
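For comparison, a generator with hashes left enabled (the ConfigMap name and literals here are hypothetical) would look like:

```yaml
# Sketch: omitting disableNameSuffixHash (default false) makes kustomize append a
# content hash to the generated name, so Deployments referencing it roll on change.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
configMapGenerator:
- name: e2e-config        # hypothetical ConfigMap
  literals:
  - LOG_LEVEL=debug
```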
config/network-policy/controller.yaml (1)
13-19: Policy scope check: ingress is open to all; egress to 6443/443 is destination-unconstrained. Confirm intent.
Ingress: allowing 8443/8081 from anywhere is simplest and avoids kubelet probe issues, but it also allows any in-cluster source to hit metrics/healthz. If you want stronger isolation, scope `from:` to prometheus in openshift-monitoring for 8443 and consider leaving 8081 open to avoid kubelet pitfalls.

Egress: ports-only rules permit any destination on 6443/443. That’s common for API access on platforms where the apiserver uses hostNetwork, but it’s broader than “API servers only.”
Also applies to: 32-36
bundle-hub/manifests/kmm-operator-hub-build-and-sign_networking.k8s.io_v1_networkpolicy.yaml (1)
12-14: Security check: `egress: [{}]` allows all egress. Confirm that’s the intended posture for Build pods. This fully opens egress for selected pods (common for builds needing internet/registry access). Ensure this is acceptable with your default-deny baseline and document the rationale.
config/network-policy/webhook.yaml (3)
3-6: Namespace hard-coded to "system" — confirm OLM install behavior or drop it.

Bundled namespaced resources typically install into the target OperatorGroup namespace; hard-coding `metadata.namespace: system` in files that are copied verbatim into the bundle can be surprising and possibly ignored or cause validation noise. If the intent is “install into whichever namespace the operator is installed,” consider omitting `namespace` here and letting kustomize/OLM set it.

If you agree, apply:

```diff
 metadata:
   name: webhook
-  namespace: system
```
13-16: Ingress currently allows 9443 from all sources. Is this intended? With a default-deny policy in place, this rule opens the webhook listener on 9443 to any in-cluster peer. If the intent is to allow only kube-apiserver, please document the trade-off (hostNetwork apiserver pods are hard to select) or scope this via ipBlock/Namespace+Pod selectors where feasible in your environments.
17-22: Consider allowing DNS egress explicitly to avoid name-resolution regressions.

If the webhook (or its libraries) resolve `kubernetes.default.svc`, DNS egress (UDP/TCP 53) will be needed when egress default-deny is active. Suggested DNS rules for OpenShift and vanilla K8s:

```diff
 egress:
 - ports: # kube api server port
   - protocol: TCP
     port: 6443
   - protocol: TCP
     port: 443
+- to:
+  - namespaceSelector:
+      matchLabels:
+        kubernetes.io/metadata.name: openshift-dns
+  ports:
+  - protocol: UDP
+    port: 53
+  - protocol: TCP
+    port: 53
+- to:
+  - namespaceSelector:
+      matchLabels:
+        kubernetes.io/metadata.name: kube-system
+  ports:
+  - protocol: UDP
+    port: 53
+  - protocol: TCP
+    port: 53
```

If you’re confident client-go will always use the injected `KUBERNETES_SERVICE_HOST` IP and never DNS, feel free to skip—but please confirm via e2e.

hack/generate-bundle (1)
27-35: Make copying more robust (yml support, nullglob, and guard by kind). Minor hardening to avoid corner cases:
- Support both .yaml and .yml files.
- Enable nullglob so the loop doesn’t iterate on literal globs.
- Copy only files whose `kind` is NetworkPolicy (defensive if other YAMLs slip into the directory).

Suggested patch:

```diff
 # add network policies
-if [ "$INCLUDE_NETWORK_POLICIES" = "true" ] && [ -d "$SOURCE_DIR/config/network-policy" ]; then
-  for np_file in "$SOURCE_DIR/config/network-policy"/*.yaml; do
-    if [ -f "$np_file" ] && [ "$(basename "$np_file")" != "kustomization.yaml" ]; then
-      filename=$(basename "$np_file" .yaml)
-      cp "$np_file" "$BUNDLE_DIR/manifests/kmm-operator${SUFFIX}-$(unknown)_networking.k8s.io_v1_networkpolicy.yaml"
-    fi
-  done
-fi
+if [ "$INCLUDE_NETWORK_POLICIES" = "true" ] && [ -d "$SOURCE_DIR/config/network-policy" ]; then
+  shopt -s nullglob
+  for np_file in "$SOURCE_DIR/config/network-policy"/*.yaml "$SOURCE_DIR/config/network-policy"/*.yml; do
+    base="$(basename "$np_file")"
+    # skip kustomization files
+    if [ "$base" = "kustomization.yaml" ] || [ "$base" = "kustomization.yml" ]; then
+      continue
+    fi
+    # copy only NetworkPolicy kinds
+    if [ -f "$np_file" ] && grep -Eq '^\s*kind:\s*NetworkPolicy\b' "$np_file"; then
+      filename="${base%.*}"
+      cp "$np_file" "$BUNDLE_DIR/manifests/kmm-operator${SUFFIX}-$(unknown)_networking.k8s.io_v1_networkpolicy.yaml"
+    fi
+  done
+fi
```

Also, quick check: ensure the kustomize output doesn’t already include the same NetworkPolicies to avoid duplicates in the bundle.
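The nullglob suggestion above matters because an unmatched glob otherwise stays literal; a small standalone demonstration (scratch directory only, no repo files involved):

```shell
# Demonstrate why nullglob is needed for the loop: in an empty directory,
# an unmatched *.yaml glob expands to itself unless nullglob is set.
dir=$(mktemp -d)
shopt -u nullglob
files=("$dir"/*.yaml)
echo "without nullglob: ${#files[@]} entry (the literal pattern)"
shopt -s nullglob
files=("$dir"/*.yaml)
echo "with nullglob: ${#files[@]} entries (loop body never runs)"
rmdir "$dir"
```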
bundle/manifests/kmm-operator-build-and-sign_networking.k8s.io_v1_networkpolicy.yaml (1)
5-14: Egress is fully open; confirm intent and consider minimal tightening.
`egress: [{}]` allows all destinations and all ports. That may be acceptable for build pods (they often need to reach arbitrary registries), but it doesn’t align with “restrict to only necessary communications.” If feasible in your environments, consider scoping to:
- TCP 443 (and 80, if needed) to external registries,
- the in-cluster registry service/namespace,
- DNS (UDP/TCP 53) for name resolution.
Also, as with other manifests, verify whether `namespace: system` is correct for OLM bundles or should be omitted to let the install namespace be chosen at install time.

Makefile (3)
298-312: Main “bundle” target no longer validates the generated bundle — ensure CI compensates.

Validation was removed here. If CI doesn’t explicitly run `bundle-old` (or an equivalent validation step), regressions in bundle contents may slip through.

If you prefer to keep validation in the main flow, add back:

```diff
 	./hack/generate-bundle
+	${OPERATOR_SDK} bundle validate ./bundle
```

Alternatively, document and enforce a CI job that runs `make bundle-old` on PRs.
336-357: Hub bundle also skips validation — mirror the decision from “bundle.” Same comment as the non-hub target: either re-add validation here or guarantee that CI runs a validating path.
Potential quick fix:
```diff
 	./hack/generate-bundle
+	${OPERATOR_SDK} bundle validate ./bundle-hub
```
282-357: Reduce duplication between old/new and hub/non-hub bundle targets. Consider extracting a small Make macro/function for the common pre-steps (kustomize edits and env wiring) with parameters for SUFFIX and INCLUDE_NETWORK_POLICIES, then have two thin targets call it with/without validation. This will make future changes (like image vars or flags) one-line updates.
I can sketch a DRY’d Make fragment if you’d like.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (35)
- Makefile (3 hunks)
- PROJECT (1 hunks)
- bundle-hub/manifests/kernel-module-management-hub.clusterserviceversion.yaml (1 hunks)
- bundle-hub/manifests/kmm-operator-hub-build-and-sign_networking.k8s.io_v1_networkpolicy.yaml (1 hunks)
- bundle-hub/manifests/kmm-operator-hub-controller-metrics-service_v1_service.yaml (0 hunks)
- bundle-hub/manifests/kmm-operator-hub-controller_networking.k8s.io_v1_networkpolicy.yaml (1 hunks)
- bundle-hub/manifests/kmm-operator-hub-default-deny_networking.k8s.io_v1_networkpolicy.yaml (1 hunks)
- bundle-hub/manifests/kmm-operator-hub-webhook-service_v1_service.yaml (0 hunks)
- bundle-hub/manifests/kmm-operator-hub-webhook_networking.k8s.io_v1_networkpolicy.yaml (1 hunks)
- bundle/manifests/kernel-module-management.clusterserviceversion.yaml (1 hunks)
- bundle/manifests/kmm-operator-build-and-sign_networking.k8s.io_v1_networkpolicy.yaml (1 hunks)
- bundle/manifests/kmm-operator-controller-metrics-service_v1_service.yaml (0 hunks)
- bundle/manifests/kmm-operator-controller_networking.k8s.io_v1_networkpolicy.yaml (1 hunks)
- bundle/manifests/kmm-operator-default-deny_networking.k8s.io_v1_networkpolicy.yaml (1 hunks)
- bundle/manifests/kmm-operator-webhook-service_v1_service.yaml (0 hunks)
- bundle/manifests/kmm-operator-webhook_networking.k8s.io_v1_networkpolicy.yaml (1 hunks)
- ci/e2e/kustomization.yaml (1 hunks)
- ci/e2e/module.yaml (1 hunks)
- ci/prow/e2e-incluster-build (4 hunks)
- ci/sign-key-certs/kustomization.yaml (1 hunks)
- config/default-hub/kustomization.yaml (1 hunks)
- config/default/kustomization.yaml (1 hunks)
- config/deploy-hub/kustomization.yaml (1 hunks)
- config/deploy/kustomization.yaml (1 hunks)
- config/manifests-hub/kustomization.yaml (1 hunks)
- config/manifests/kustomization.yaml (1 hunks)
- config/network-policy/build-and-sign.yaml (1 hunks)
- config/network-policy/controller.yaml (1 hunks)
- config/network-policy/default-deny.yaml (1 hunks)
- config/network-policy/kustomization.yaml (1 hunks)
- config/network-policy/webhook.yaml (1 hunks)
- config/olm-hub/kustomization.yaml (1 hunks)
- config/olm/kustomization.yaml (1 hunks)
- config/webhook-server/kustomization.yaml (1 hunks)
- hack/generate-bundle (2 hunks)
💤 Files with no reviewable changes (4)
- bundle/manifests/kmm-operator-webhook-service_v1_service.yaml
- bundle/manifests/kmm-operator-controller-metrics-service_v1_service.yaml
- bundle-hub/manifests/kmm-operator-hub-webhook-service_v1_service.yaml
- bundle-hub/manifests/kmm-operator-hub-controller-metrics-service_v1_service.yaml
🧰 Additional context used
🪛 YAMLlint (1.37.1)
config/network-policy/kustomization.yaml
[error] 7-7: no new line character at the end of file
(new-line-at-end-of-file)
config/network-policy/build-and-sign.yaml
[warning] 12-12: wrong indentation: expected 2 but found 4
(indentation)
[error] 14-14: no new line character at the end of file
(new-line-at-end-of-file)
bundle-hub/manifests/kmm-operator-hub-build-and-sign_networking.k8s.io_v1_networkpolicy.yaml
[warning] 12-12: wrong indentation: expected 2 but found 4
(indentation)
[error] 14-14: no new line character at the end of file
(new-line-at-end-of-file)
bundle/manifests/kmm-operator-build-and-sign_networking.k8s.io_v1_networkpolicy.yaml
[warning] 12-12: wrong indentation: expected 2 but found 4
(indentation)
[error] 14-14: no new line character at the end of file
(new-line-at-end-of-file)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Analyze (go)
🔇 Additional comments (25)
bundle-hub/manifests/kernel-module-management-hub.clusterserviceversion.yaml (2)
40-40: Timestamp bump looks fine. createdAt updated to 2025-08-26T14:37:47Z. No concerns; consistent with the bundle refresh.
312-316: Selectors align with pod labels—no changes needed

All Service objects in `bundle-hub/manifests` now select on `control-plane: webhook-server` or `control-plane: controller`, which matches the `control-plane` labels present on the webhook and controller pod templates in the CSV. Likewise, each NetworkPolicy’s `podSelector` either targets the same `control-plane` labels, uses a build-name expression, or defaults to `{}` (all pods). No Service is still selecting on any removed `app.kubernetes.io/*` labels, and no NetworkPolicy is orphaned.

Since there are no mismatches, we can resolve this without further edits.
bundle/manifests/kernel-module-management.clusterserviceversion.yaml (2)
50-50: Timestamp bump acknowledged. createdAt updated to 2025-08-26T14:37:46Z. OK and consistent with the hub bundle timing.
390-395: Services and NetworkPolicies selectors verified. I’ve checked the bundled Service and NetworkPolicy manifests under bundle/manifests and confirmed that:
- Both kmm-operator-controller-metrics-service and kmm-operator-webhook-service use only control-plane: controller and control-plane: webhook-server in their selector: blocks, and the corresponding pod templates include those exact labels.
- The NetworkPolicies for controller and webhook likewise match on podSelector.matchLabels.control-plane only.
- All pod templates still include the app.kubernetes.io/* labels for compatibility, even though selectors rely solely on control-plane.
No changes are needed.
config/manifests/kustomization.yaml (1)
12-16: Disabling includeSelectors safely verified

I scanned the generated manifests across `config/` and `bundle/manifests/` and confirmed that every explicit selector—for Deployments, Services, NetworkPolicies, and ServiceMonitors—relies solely on the `control-plane` label (with values `controller` or `webhook-server`). No resource is selecting on any of the `app.kubernetes.io/*` labels, so disabling `includeSelectors` cannot break any existing selectors. This change is safe to merge.

config/default/kustomization.yaml (1)
20-24: No selector coupling lost

I’ve verified that, with `includeSelectors: false`, all Service and NetworkPolicy selectors still align exactly with the `control-plane` labels on the corresponding Pod templates:

- Services
  - config/webhook/service.yaml → `selector: control-plane: webhook-server`
  - config/webhook-hub/service.yaml → `selector: control-plane: webhook-server`
  - config/rbac-base/auth_proxy_service.yaml → `selector: control-plane: controller`
- Deployments
  - config/webhook-server/deployment.yaml → `spec.selector.matchLabels: control-plane: webhook-server` & Pod template `metadata.labels: control-plane: webhook-server`
  - config/manager-base/manager.yaml → `spec.selector.matchLabels: control-plane: controller` & Pod template `metadata.labels: control-plane: controller`
- NetworkPolicies
  - config/network-policy/webhook.yaml → `podSelector.matchLabels: control-plane: webhook-server`
  - config/network-policy/controller.yaml → `podSelector.matchLabels: control-plane: controller`

All selectors continue to target the existing `control-plane` labels on Pods. No updates to selectors were needed by the overlay, so the flip to `includeSelectors: false` is safe here.

config/manifests-hub/kustomization.yaml (1)
12-16: Hub bundle: same selector safety improvement — looks good. Mirrors the non-hub change; safer bundles with fewer surprises in selectors.
If hub selectors were narrowed to control-plane labels, re-run the scans from the non-hub path for hub resources as well.
bundle/manifests/kmm-operator-webhook_networking.k8s.io_v1_networkpolicy.yaml (2)
1-23: Default-deny NetworkPolicy present — verify apply ordering

The bundle already includes a namespace-scoped `default-deny` policy at `bundle/manifests/kmm-operator-default-deny_networking.k8s.io_v1_networkpolicy.yaml` (kind: NetworkPolicy, name: default-deny).

Next steps:

- Confirm that your deployment tooling (e.g. `kustomization.yaml`, Helm hooks, CI/CD apply jobs) applies the component-specific allow-list policies (controller, webhook, build-and-sign) before the `default-deny` policy.
- Ensure no transient gap occurs during upgrades that could block traffic to the API server or other essential control-plane services.
9-9: NetworkPolicy podSelector label verified

The `control-plane: webhook-server` label is correctly applied to the webhook Deployment’s metadata, selector, and Pod template, matching the NetworkPolicy’s podSelector:

- config/webhook-server/deployment.yaml:
  - metadata.labels (line 8)
  - spec.selector.matchLabels (line 12)
  - template.metadata.labels (line 19)
- bundle/manifests/kmm-operator-webhook_networking.k8s.io_v1_networkpolicy.yaml (line 9)
No changes required.
config/olm/kustomization.yaml (1)
13-18: Ensure Pod template labels & selectors remain in sync

Your current Kustomize labels transformer in `config/olm/kustomization.yaml` sets only `includeSelectors: false`. Because `includeTemplates` defaults to false, Pod templates will not receive the new `app.kubernetes.io/*` labels, and selectors will remain unchanged. The raw manifests below do not define those labels in their `.spec.selector` or `.spec.template.metadata.labels`:

- config/manager-base/manager.yaml
- config/webhook-server/deployment.yaml
- config/webhook/service.yaml

Without applying both selectors and template labels, NetworkPolicies and Services won’t match the intended Pods. To address this, explicitly enable template labeling:

```diff
 # config/olm/kustomization.yaml
 labels:
-- includeSelectors: false
+- includeSelectors: false  # keep selectors untouched
+  includeTemplates: true   # ensure Pod templates get these labels
   pairs:
     app.kubernetes.io/component: kmm
     app.kubernetes.io/name: kmm
     app.kubernetes.io/part-of: kmm
```

After applying, please verify:
- Each Deployment’s .spec.template.metadata.labels includes all three app.kubernetes.io/* entries.
- Each Service/Deployment .spec.selector (or .spec.selector.matchLabels) matches those labels exactly.

config/network-policy/default-deny.yaml (1)
1-11: Scope and coverage check: verify that compensating allow policies fully cover required traffic.

Ensure all KMM Pods retain:
- Ingress/egress needed for health probes, metrics scraping (if any), webhook traffic from kube-apiserver, and inter-pod communications in the namespace.
- Egress to external registries or cluster-internal registries used by build-and-sign flows.
Please confirm e2e covers:
- Module build/sign success under policies
- Webhook admission requests succeeding with default-deny in place
- DNS resolution from Pods
If gaps are found, I can draft targeted allow rules (DNS, API, intra-namespace) for each component. Want me to propose those?
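If such gaps turn up, a namespace-wide DNS allow rule is usually the first addition. The sketch below is illustrative only and is not a manifest from this PR; the openshift-dns namespace label and port 5353 follow common OpenShift conventions and should be verified against the target cluster:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns            # hypothetical name, not part of this PR
spec:
  podSelector: {}            # all pods in the operator namespace
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: openshift-dns
      ports:
        - protocol: UDP
          port: 5353         # OpenShift CoreDNS port; verify for your cluster
        - protocol: TCP
          port: 5353
```

A similar shape (podSelector plus an explicit to peer) works for API-server and intra-namespace rules.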
bundle/manifests/kmm-operator-default-deny_networking.k8s.io_v1_networkpolicy.yaml (1)
6-10: Default-deny is fine; ensure companion allow policies cover DNS and metrics paths

This blanket policy will block all traffic for pods in the namespace not matched by more specific policies. Double-check that:
- DNS egress is granted where needed (controller/webhook/build/sign as applicable).
- Any metrics scraping (Prometheus) has explicit ingress to controller/webhook if required.
I can scan the repo and list which pods would remain only subject to default-deny to help spot gaps.
ci/e2e/module.yaml (2)
6-6: LGTM: e2e Module now targets openshift-kmm

Aligns with the broader namespace scoping in this PR.
7-34: Verify all e2e assets and scripts are namespace-aligned

Confirm ServiceAccounts, Roles/RoleBindings, Secrets/ConfigMaps, and script invocations also use -n openshift-kmm consistently so e2e doesn’t flake on namespace mismatches.
If you want, I can generate a quick repo scan to highlight any lingering -n defaults or hardcoded namespaces.
config/webhook-server/kustomization.yaml (1)
12-14: includeSelectors removal: double-check intended propagation

Removing includeSelectors means kustomize will not mutate selectors. Ensure the Deployment/Service selectors already carry the labels your policies depend on and that dropping includeSelectors doesn’t unintentionally change rollout behavior.
I can grep Deployments/Services under config/webhook-server to confirm selector/label symmetry if helpful.
ci/e2e/kustomization.yaml (1)
11-14: Kustomize binary not available; please verify namespace locally

The sandbox environment lacks the kustomize CLI, so we couldn’t confirm that generated ConfigMap and Secret resources include metadata.namespace: openshift-kmm. To ensure the patch works as intended, run the build in an environment with Kustomize installed and verify that both resources have the correct namespace:

kustomize build ci/e2e | awk '/^kind: (ConfigMap|Secret)/, /^$/' | sed -n '1,12p'

If you see metadata.namespace: openshift-kmm on both objects, the refactor is correct. Otherwise, apply the diff in the original comment to move namespace: openshift-kmm to the top level of kustomization.yaml.

config/network-policy/controller.yaml (1)
20-31: DNS egress peer selection is precise (namespace + pod selector intersection). LGTM.

Selecting the openshift-dns daemonset via both namespaceSelector and podSelector is accurate and avoids accidental matches.
ci/prow/e2e-incluster-build (1)
12-12: LGTM! Consistent namespace scoping to openshift-kmm.

The changes consistently scope all KMM-related operations to the openshift-kmm namespace, which aligns with the PR's objective to add network policies and improve namespace isolation for KMM components.

Also applies to: 49-50, 55-56, 69-69, 78-78
config/deploy-hub/kustomization.yaml (1)
12-12: LGTM! Hub deployment now includes network policies.

Adding the ../network-policy resource enables network policies to be included in hub deployments, which aligns with the PR objective to add NetworkPolicy manifests for KMM pods.

config/network-policy/kustomization.yaml (1)
1-7: LGTM! Well-structured network policy kustomization.

The kustomization properly references all four network policy manifests (default-deny, controller, webhook, and build-and-sign), providing a clean organization for the NetworkPolicy resources.
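For reference, the kustomization in question is essentially a four-entry resource list along these lines (the apiVersion/kind header is assumed here; the quoted file shows only the resource entries):

```yaml
# abridged sketch of config/network-policy/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- default-deny.yaml
- controller.yaml
- webhook.yaml
- build-and-sign.yaml
```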
config/deploy/kustomization.yaml (1)
13-13: LGTM! Standard deployment now includes network policies.

Adding the ../network-policy resource enables network policies to be included in standard deployments, complementing the hub deployment configuration and supporting the PR's network security objectives.

ci/sign-key-certs/kustomization.yaml (1)
9-9: LGTM! Proper namespace scoping for signing secrets.

Adding namespace: openshift-kmm to both secret generators ensures the signing certificates and keys are created in the correct namespace, which is essential for the network policies to function properly.

Also applies to: 12-12
hack/generate-bundle (1)
11-11: Nice: feature flag for including NetworkPolicies.

INCLUDE_NETWORK_POLICIES defaulting to false is a clean toggle and keeps the generator backward compatible.

Makefile (2)
282-297: Legacy “bundle-old” target is fine as a compatibility/validation path.

Keeping the original flow (including operator-sdk bundle validate) under a distinct target is a pragmatic way to iterate on the new bundle contents.

313-335: Hub legacy target mirrors non-hub; looks good.

Parity with the non-hub “old” flow reads well and should ease debugging differences between the tracks.
| podSelector: | ||
| matchExpressions: | ||
| - key: openshift.io/build.name | ||
| operator: Exists | ||
| policyTypes: | ||
| - Egress | ||
| egress: | ||
| - {} No newline at end of file |
Fix YAML lint issues (indentation, trailing newline) and keep minimal style.
YAMLLint flagged indentation under policyTypes and missing newline at EOF. Also normalize list indentation under matchExpressions.
spec:
- podSelector:
- matchExpressions:
- - key: openshift.io/build.name
- operator: Exists
- policyTypes:
- - Egress
- egress:
- - {}
+ podSelector:
+ matchExpressions:
+ - key: openshift.io/build.name
+ operator: Exists
+ policyTypes:
+ - Egress
+ egress:
+ - {}
+

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| podSelector: | |
| matchExpressions: | |
| - key: openshift.io/build.name | |
| operator: Exists | |
| policyTypes: | |
| - Egress | |
| egress: | |
| - {} | |
| podSelector: | |
| matchExpressions: | |
| - key: openshift.io/build.name | |
| operator: Exists | |
| policyTypes: | |
| - Egress | |
| egress: | |
| - {} | |
🧰 Tools
🪛 YAMLlint (1.37.1)
[warning] 12-12: wrong indentation: expected 2 but found 4
(indentation)
[error] 14-14: no new line character at the end of file
(new-line-at-end-of-file)
🤖 Prompt for AI Agents
In
bundle-hub/manifests/kmm-operator-hub-build-and-sign_networking.k8s.io_v1_networkpolicy.yaml
around lines 7 to 14, the YAML has lint problems: incorrect indentation for the
policyTypes and list items under matchExpressions, and missing trailing newline.
Fix by aligning list items two spaces under their parent keys (ensure
matchExpressions items are indented to match podSelector and the dash items are
properly indented), place policyTypes and its list at the same indentation level
as podSelector and egress, and ensure the file ends with a single trailing
newline; keep the minimal style (no extra blank lines or comments).
bundle-hub/manifests/kmm-operator-hub-webhook_networking.k8s.io_v1_networkpolicy.yaml
| ingress: | ||
| - ports: | ||
| - protocol: TCP | ||
| port: 9443 |
🛠️ Refactor suggestion
Ingress currently allows from anywhere; restrict to API server
With an ingress rule that only lists ports and no from peers, traffic on 9443 is allowed from any source. For admission webhooks we should scope ingress to the API server namespaces to reduce blast radius.
Example tightening (covers both vanilla and OpenShift):
ingress:
- - ports:
- - protocol: TCP
- port: 9443
+ - from:
+ - namespaceSelector:
+ matchLabels:
+ kubernetes.io/metadata.name: kube-system
+ - namespaceSelector:
+ matchLabels:
+ kubernetes.io/metadata.name: openshift-kube-apiserver
+ ports:
+ - protocol: TCP
+ port: 9443If you prefer stricter scoping, we can further constrain with podSelector labels on the apiserver pods in those namespaces.
📝 Committable suggestion
| ingress: | |
| - ports: | |
| - protocol: TCP | |
| port: 9443 | |
| ingress: | |
| - from: | |
| - namespaceSelector: | |
| matchLabels: | |
| kubernetes.io/metadata.name: kube-system | |
| - namespaceSelector: | |
| matchLabels: | |
| kubernetes.io/metadata.name: openshift-kube-apiserver | |
| ports: | |
| - protocol: TCP | |
| port: 9443 |
🤖 Prompt for AI Agents
In
bundle-hub/manifests/kmm-operator-hub-webhook_networking.k8s.io_v1_networkpolicy.yaml
lines 13-16, the ingress rule lists only ports so port 9443 is open to any
source; add explicit from peers to restrict traffic to API server namespaces.
Modify the ingress stanza to include from entries for the API server namespaces
(e.g., a namespaceSelector with matchLabels: kubernetes.io/metadata.name:
kube-system and a second from entry for OpenShift API namespace like
matchLabels: kubernetes.io/metadata.name: openshift-apiserver / or
openshift-kube-apiserver as appropriate), and optionally add a podSelector
within those from entries to limit to apiserver pods; ensure multiple from
blocks are used so only requests originating from those namespaces/pods can
reach port 9443.
| policyTypes: | ||
| - Egress | ||
| egress: | ||
| - {} No newline at end of file |
Fix YAML indentation and ensure newline at EOF to satisfy linters/CI.
Yamllint flagged indentation and missing newline. Apply:
spec:
podSelector:
matchExpressions:
- - key: openshift.io/build.name
- operator: Exists
- policyTypes:
- - Egress
- egress:
- - {}
+ - key: openshift.io/build.name
+ operator: Exists
+ policyTypes:
+ - Egress
+ egress:
+ - {}
+

Committable suggestion skipped: line range outside the PR's diff.
🧰 Tools
🪛 YAMLlint (1.37.1)
[warning] 12-12: wrong indentation: expected 2 but found 4
(indentation)
[error] 14-14: no new line character at the end of file
(new-line-at-end-of-file)
🤖 Prompt for AI Agents
In
bundle/manifests/kmm-operator-build-and-sign_networking.k8s.io_v1_networkpolicy.yaml
around lines 11 to 14, the YAML has incorrect indentation for the
policyTypes/egress block and is missing a newline at EOF; fix the indentation so
that "policyTypes:" and "egress:" are top-level entries with their lists
properly indented (e.g. policyTypes: followed by an indented "- Egress" and
egress: followed by an indented "- {}" or a proper rule), and ensure the file
ends with a single trailing newline character.
| metadata: | ||
| name: controller | ||
| namespace: system | ||
| spec: |
💡 Verification agent
🧩 Analysis chain
Bundle manifest should not hardcode metadata.namespace: system.
OLM typically applies bundle objects into the operator’s install namespace. Shipping a hardcoded namespace can cause install-time failures or resources landing in the wrong namespace. Recommend dropping the field in the bundled file (or have the bundle generator strip it).
metadata:
name: controller
- namespace: system🏁 Script executed:
#!/usr/bin/env bash
# Locate the NetworkPolicy manifest and print its metadata section
manifest="bundle/manifests/kmm-operator-controller_networking.k8s.io_v1_networkpolicy.yaml"
if [ -f "$manifest" ]; then
echo "Showing lines 1-20 of $manifest:"
sed -n '1,20p' "$manifest"
else
echo "$manifest not found."
fiLength of output: 871
Remove hardcoded namespace from NetworkPolicy manifest
The bundle’s NetworkPolicy manifest explicitly sets metadata.namespace: system, which will force the resource into the system namespace regardless of the operator’s install namespace. This can lead to install-time failures or resources landing in the wrong namespace. OLM by default applies namespaced objects into the operator’s install namespace, so you should omit the namespace field.
Please update the file as follows:
• File: bundle/manifests/kmm-operator-controller_networking.k8s.io_v1_networkpolicy.yaml
• Remove or comment out the namespace: system line under metadata.
Suggested diff:
metadata:
name: controller
- namespace: system

📝 Committable suggestion
| metadata: | |
| name: controller | |
| namespace: system | |
| spec: | |
| metadata: | |
| name: controller | |
| spec: |
🤖 Prompt for AI Agents
In
bundle/manifests/kmm-operator-controller_networking.k8s.io_v1_networkpolicy.yaml
around lines 3 to 6, remove the hardcoded metadata.namespace: system line so the
NetworkPolicy does not force the resource into a specific namespace; delete or
comment out that namespace entry under metadata and leave metadata.name only so
OLM can apply the object into the operator’s install namespace.
| wait_for_pod_and_print_logs () { | ||
| # we can't exec a command nor get the logs on a pod that isn't `Running` yet. | ||
| oc wait "pod/${POD_NAME}" --for jsonpath='{.status.phase}'=Running --timeout=60s | ||
| oc wait -n openshift-kmm"pod/${POD_NAME}" --for jsonpath='{.status.phase}'=Running --timeout=60s |
Fix syntax error in the oc wait command.
There's a missing space between openshift-kmm and "pod/${POD_NAME}" which will cause a syntax error.
- oc wait -n openshift-kmm"pod/${POD_NAME}" --for jsonpath='{.status.phase}'=Running --timeout=60s
+ oc wait -n openshift-kmm "pod/${POD_NAME}" --for jsonpath='{.status.phase}'=Running --timeout=60s

📝 Committable suggestion
+ oc wait -n openshift-kmm "pod/${POD_NAME}" --for jsonpath='{.status.phase}'=Running --timeout=60s📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| oc wait -n openshift-kmm"pod/${POD_NAME}" --for jsonpath='{.status.phase}'=Running --timeout=60s | |
| oc wait -n openshift-kmm "pod/${POD_NAME}" --for jsonpath='{.status.phase}'=Running --timeout=60s |
🤖 Prompt for AI Agents
ci/prow/e2e-incluster-build around line 9: the oc wait command lacks a space
between the namespace and the resource argument, causing a syntax error; fix by
inserting a space so the -n openshift-kmm flag and the pod/${POD_NAME} argument
are separate (or alternatively use the pod/NAME form), ensuring the command
syntax is valid and the --for/--timeout flags remain unchanged.
| podSelector: | ||
| matchExpressions: | ||
| - key: openshift.io/build.name | ||
| operator: Exists |
Selector only matches OpenShift Build pods; “sign” pods may remain blocked
The selector relies on openshift.io/build.name Exists, which matches Build/BuildConfig pods but not typical KMM signing Jobs/Pods. If “sign” runs as a Job/Pod without that label, it will still be default-denied.
Two robust options:
- Split into two policies: build.yaml (current selector) and sign.yaml (match the label your signing pods carry, e.g., app.kubernetes.io/component: sign or job-name: ...).
- Or, add the missing label to signing pods and update the selector to match it.
If you want, I can draft sign.yaml once you confirm the canonical labels on signing pods.
🤖 Prompt for AI Agents
In config/network-policy/build-and-sign.yaml around lines 7-10, the podSelector
only matches pods with openshift.io/build.name which excludes signing Job/Pod
pods so “sign” traffic remains default-denied; fix by either: (A) split into two
NetworkPolicy files — build.yaml using the existing openshift.io/build.name
selector and sign.yaml with a podSelector that matches the canonical label your
signing pods carry (e.g., matchLabels: app.kubernetes.io/component: sign or
matchLabels: job-name: <sign-job-name>), or (B) update this policy’s podSelector
to include the signing label as an additional selector (add a matchLabels entry
or an additional matchExpression that matches the signing pod label), or
alternatively ensure your signing Pods/Jobs are labeled with
openshift.io/build.name so the current selector applies.
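For option (A), a hypothetical sign.yaml could look like the sketch below. The app.kubernetes.io/component: sign label is an assumption taken from the example above; substitute whatever label the signing Jobs actually carry:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: sign                 # hypothetical companion to the build policy
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/component: sign   # assumed label on signing pods
  policyTypes:
    - Egress
  egress:
    - {}   # allow all egress, mirroring the build policy
```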
| metadata: | ||
| name: controller | ||
| namespace: system |
💡 Verification agent
🧩 Analysis chain
Avoid hardcoding namespace in source manifests used for bundling.
This file is copied verbatim into the bundle, so metadata.namespace: system will ship as-is. Unless your OLM pipeline rewrites namespaces at install time, this can mis-target the resource. Prefer omitting metadata.namespace here and letting overlays/OLM inject it.
metadata:
name: controller
- namespace: system

🏁 Script executed:
#!/bin/bash
# Search for Kustomize configuration files in the repo
find . -type f -name 'kustomization.yaml'
echo
# Look for any overlays directories
find . -type d | grep -E 'overlays?'
echo
# List occurrences of 'namespace:' in the config directory
grep -R -n 'namespace:' config

Length of output: 5279
🏁 Script executed:
#!/bin/bash
# Print the first 200 lines of the network-policy kustomization to check for namespace transformers
sed -n '1,200p' config/network-policy/kustomization.yaml

Length of output: 206
🏁 Script executed:
#!/bin/bash
# Check for namespace injector in network-policy overlay
grep -n "namespace:" -C2 config/network-policy/kustomization.yaml || echo "No namespace directive in kustomization"

Length of output: 154
Remove hardcoded namespace from network-policy controller manifest
The config/network-policy/controller.yaml file is included directly by its kustomization without any namespace transformer, so the metadata.namespace: system setting will be baked into the final bundle as-is. To ensure the resource is scoped to whatever target namespace the end-user or OLM installs into, omit the namespace field here and let overlays or OLM inject the proper namespace.
• File needing update:
- config/network-policy/controller.yaml
Suggested change:
metadata:
name: controller
- namespace: system

📝 Committable suggestion
| metadata: | |
| name: controller | |
| namespace: system | |
| metadata: | |
| name: controller |
🤖 Prompt for AI Agents
In config/network-policy/controller.yaml around lines 3 to 5, remove the
hardcoded metadata.namespace: system entry so the manifest does not bake a fixed
namespace into the kustomized bundle; leave the metadata block without a
namespace field so overlays or OLM can inject the target namespace at deploy
time and verify there are no other files or transformers that rely on this
hardcoded value.
| - default-deny.yaml | ||
| - controller.yaml | ||
| - webhook.yaml | ||
| - build-and-sign.yaml No newline at end of file |
Add missing newline at end of file.
The YAML file should end with a newline character as indicated by the static analysis tool.
-- build-and-sign.yaml
+- build-and-sign.yaml
+

📝 Committable suggestion
| - build-and-sign.yaml | |
| - build-and-sign.yaml | |
🧰 Tools
🪛 YAMLlint (1.37.1)
[error] 7-7: no new line character at the end of file
(new-line-at-end-of-file)
🤖 Prompt for AI Agents
In config/network-policy/kustomization.yaml at line ~7, the file is missing a
trailing newline; update the file by ensuring there is a single newline
character at the end of the file (i.e., add an empty line after
"build-and-sign.yaml") and save so the file ends with a newline to satisfy the
linter.
We are currently waiting until OLM is stable in older versions of OCP and Konflux allows NP manifests in the bundles.
Force-pushed from 8d8c933 to b31da52 (Compare)
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
Makefile (2)
336-356: Restore bundle validation to the bundle-hub target.The refactored
bundle-hubtarget (line 336) removed the validation step that was present in the original implementation (line 334 inbundle-hub-old). Validation should be restored to catch issues with generated hub bundle manifests.Apply this diff to restore validation:
OPERATOR_SDK="${OPERATOR_SDK}" \ BUNDLE_GEN_FLAGS="${BUNDLE_GEN_FLAGS}" \ MANIFESTS_DIR=config/manifests-hub \ PKG=kernel-module-management-hub \ SOURCE_DIR=$(dir $(realpath $(lastword $(MAKEFILE_LIST)))) \ SUFFIX="-hub" \ INCLUDE_NETWORK_POLICIES=true \ ./hack/generate-bundle + + ${OPERATOR_SDK} bundle validate ./bundle-hub
298-311: Restore bundle validation to the main targets.The refactored
bundleandbundle-hubtargets removed validation that was present in the originalbundle-oldandbundle-hub-oldimplementations. Thehack/generate-bundlescript does not perform validation itself, only manifest generation and network policy conditionals. Without validation, invalid bundle manifests can be generated silently. The help text still claims "validate generated files," making this a regression. Validation must be restored:OPERATOR_SDK="${OPERATOR_SDK}" \ BUNDLE_GEN_FLAGS="${BUNDLE_GEN_FLAGS} --extra-service-accounts kmm-operator-module-loader,kmm-operator-device-plugin" \ PKG=kernel-module-management \ SOURCE_DIR=$(dir $(realpath $(lastword $(MAKEFILE_LIST)))) \ INCLUDE_NETWORK_POLICIES=true \ ./hack/generate-bundle + + ${OPERATOR_SDK} bundle validate ./bundleApply the same fix to the
bundle-hubtarget.
♻️ Duplicate comments (3)
bundle/manifests/kmm-operator-controller_networking.k8s.io_v1_networkpolicy.yaml (1)
1-36: Remove hardcoded metadata.namespace from controller NetworkPolicy bundle manifest

In bundle manifests, OLM applies namespaced resources into the operator’s install namespace. Keeping metadata.namespace: system can cause install issues or place the NetworkPolicy in the wrong namespace. Please drop the explicit namespace from this file.

 metadata:
   name: controller
-  namespace: system

config/network-policy/build-and-sign.yaml (1)
metadata.namespace: systemcan cause install issues or place the NetworkPolicy in the wrong namespace. Please drop the explicit namespace from this file.metadata: name: controller - namespace: systemconfig/network-policy/build-and-sign.yaml (1)
1-14:build-and-signpolicy only selects build pods and hardcodes namespaceThe
podSelectormatches only pods withopenshift.io/build.name, so signing pods without that label will remain under the default-deny policy and may not be able to reach required endpoints. Either split this into separatebuildandsignpolicies with their respective labels, or extend labels/selectors so signing pods are explicitly covered as well.Also consider dropping
metadata.namespace: systemhere and letting the higher-level kustomization set the namespace, keeping this base policy namespace-agnostic.bundle/manifests/kmm-operator-build-and-sign_networking.k8s.io_v1_networkpolicy.yaml (1)
1-14: Bundlebuild-and-signNetworkPolicy hardcodes namespace and likely misses sign podsThis bundled policy sets
metadata.namespace: system, which is problematic for OLM-driven installs—bundle resources should typically omitnamespaceso OLM can apply them into the operator’s chosen install namespace. Please remove the explicit namespace:metadata: name: build-and-sign - namespace: systemIn addition, the selector only matches pods with
openshift.io/build.name, so signing pods without that label will remain subject to default-deny and may not be able to egress as intended. Consider splitting this into separatebuild/signpolicies or extending labels/selectors so signing pods are explicitly covered.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (34)
- Makefile (3 hunks)
- bundle-hub/manifests/kernel-module-management-hub.clusterserviceversion.yaml (0 hunks)
- bundle-hub/manifests/kmm-operator-hub-build-and-sign_networking.k8s.io_v1_networkpolicy.yaml (1 hunks)
- bundle-hub/manifests/kmm-operator-hub-controller-metrics-service_v1_service.yaml (0 hunks)
- bundle-hub/manifests/kmm-operator-hub-controller_networking.k8s.io_v1_networkpolicy.yaml (1 hunks)
- bundle-hub/manifests/kmm-operator-hub-default-deny_networking.k8s.io_v1_networkpolicy.yaml (1 hunks)
- bundle-hub/manifests/kmm-operator-hub-webhook-service_v1_service.yaml (0 hunks)
- bundle-hub/manifests/kmm-operator-hub-webhook_networking.k8s.io_v1_networkpolicy.yaml (1 hunks)
- bundle/manifests/kernel-module-management.clusterserviceversion.yaml (0 hunks)
- bundle/manifests/kmm-operator-build-and-sign_networking.k8s.io_v1_networkpolicy.yaml (1 hunks)
- bundle/manifests/kmm-operator-controller-metrics-service_v1_service.yaml (0 hunks)
- bundle/manifests/kmm-operator-controller_networking.k8s.io_v1_networkpolicy.yaml (1 hunks)
- bundle/manifests/kmm-operator-default-deny_networking.k8s.io_v1_networkpolicy.yaml (1 hunks)
- bundle/manifests/kmm-operator-webhook-service_v1_service.yaml (0 hunks)
- bundle/manifests/kmm-operator-webhook_networking.k8s.io_v1_networkpolicy.yaml (1 hunks)
- ci/e2e/kustomization.yaml (1 hunks)
- ci/e2e/module.yaml (1 hunks)
- ci/prow/e2e-incluster-build (4 hunks)
- ci/sign-key-certs/kustomization.yaml (1 hunks)
- config/default-hub/kustomization.yaml (1 hunks)
- config/default/kustomization.yaml (1 hunks)
- config/deploy-hub/kustomization.yaml (1 hunks)
- config/deploy/kustomization.yaml (1 hunks)
- config/manifests-hub/kustomization.yaml (1 hunks)
- config/manifests/kustomization.yaml (1 hunks)
- config/network-policy/build-and-sign.yaml (1 hunks)
- config/network-policy/controller.yaml (1 hunks)
- config/network-policy/default-deny.yaml (1 hunks)
- config/network-policy/kustomization.yaml (1 hunks)
- config/network-policy/webhook.yaml (1 hunks)
- config/olm-hub/kustomization.yaml (1 hunks)
- config/olm/kustomization.yaml (1 hunks)
- config/webhook-server/kustomization.yaml (1 hunks)
- hack/generate-bundle (2 hunks)
💤 Files with no reviewable changes (6)
- bundle/manifests/kmm-operator-webhook-service_v1_service.yaml
- bundle/manifests/kmm-operator-controller-metrics-service_v1_service.yaml
- bundle/manifests/kernel-module-management.clusterserviceversion.yaml
- bundle-hub/manifests/kmm-operator-hub-webhook-service_v1_service.yaml
- bundle-hub/manifests/kernel-module-management-hub.clusterserviceversion.yaml
- bundle-hub/manifests/kmm-operator-hub-controller-metrics-service_v1_service.yaml
🚧 Files skipped from review as they are similar to previous changes (18)
- bundle-hub/manifests/kmm-operator-hub-webhook_networking.k8s.io_v1_networkpolicy.yaml
- bundle-hub/manifests/kmm-operator-hub-default-deny_networking.k8s.io_v1_networkpolicy.yaml
- config/manifests-hub/kustomization.yaml
- ci/e2e/module.yaml
- bundle-hub/manifests/kmm-operator-hub-build-and-sign_networking.k8s.io_v1_networkpolicy.yaml
- ci/prow/e2e-incluster-build
- config/network-policy/controller.yaml
- ci/sign-key-certs/kustomization.yaml
- config/network-policy/default-deny.yaml
- bundle/manifests/kmm-operator-webhook_networking.k8s.io_v1_networkpolicy.yaml
- config/deploy/kustomization.yaml
- config/olm/kustomization.yaml
- ci/e2e/kustomization.yaml
- config/manifests/kustomization.yaml
- config/network-policy/kustomization.yaml
- config/network-policy/webhook.yaml
- bundle/manifests/kmm-operator-default-deny_networking.k8s.io_v1_networkpolicy.yaml
- config/olm-hub/kustomization.yaml
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Analyze (go)
🔇 Additional comments (7)
config/deploy-hub/kustomization.yaml (1)
6-12: Including ../network-policy in hub deploy overlay looks correct

Adding ../network-policy here cleanly wires the shared NetworkPolicy kustomization into the hub deployment, consistent with the rest of the NP rollout.

config/default/kustomization.yaml (1)
19-24: Changing includeSelectors to false alters selector behavior

With includeSelectors: false, the app.kubernetes.io/* labels in this block are no longer injected into object selectors, only into metadata. Please confirm that Deployment, Service, and NetworkPolicy selectors that relied on these labels have been explicitly updated so they still match the intended pods.

config/webhook-server/kustomization.yaml (1)
12-14: Webhook-server label now only applied to metadata, not selectors

Removing includeSelectors means app.kubernetes.io/component: webhook-server is no longer propagated into selectors. Please verify that Service and NetworkPolicy selectors for the webhook now explicitly match the labels on the webhook pods.

config/default-hub/kustomization.yaml (1)
19-24: Hub includeSelectors change mirrors default; confirm selectors still match

Setting includeSelectors: false here means hub-side selectors will also lose the injected app.kubernetes.io/* labels. Please confirm that hub Deployment/Service/NetworkPolicy selectors were updated so they still target the correct pods.

hack/generate-bundle (1)
11-35: Conditional inclusion of network policies in bundles looks sound

The INCLUDE_NETWORK_POLICIES flag and directory check keep the existing bundle generation path untouched by default, and the copy loop/naming convention (kmm-operator${SUFFIX}-$(unknown)_networking.k8s.io_v1_networkpolicy.yaml from config/network-policy/*.yaml) looks correct assuming one NetworkPolicy per file.

bundle-hub/manifests/kmm-operator-hub-controller_networking.k8s.io_v1_networkpolicy.yaml (1)
1-36: Verify namespace consistency across NetworkPolicy manifests.

The policy specifies namespace system (line 5), but the PR summary mentions E2E tests targeting the openshift-kmm namespace. Confirm this is the correct namespace, or whether policies should target openshift-kmm to align with the operator deployment location.

Makefile (1)
282-296: The hack/generate-bundle script correctly handles the INCLUDE_NETWORK_POLICIES flag.

The script properly:
- Defaults the flag to false and allows override via environment variable
- Conditionally includes NetworkPolicy manifests only when INCLUDE_NETWORK_POLICIES=true AND the config/network-policy directory exists
- Safely handles missing directories with the -d check

The bundle target (line 314) correctly passes INCLUDE_NETWORK_POLICIES=true, while the bundle-old and bundle-hub-old targets do not, which is appropriate since they are deprecated versions.
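The flag-guarded copy behavior described above can be illustrated with a small, self-contained sketch. The variable names and file layout are assumptions modeled on this review's description of hack/generate-bundle, not a verbatim excerpt of the script:

```shell
#!/usr/bin/env sh
set -eu

# Throwaway demo tree standing in for config/network-policy and
# bundle/manifests (illustrative paths only).
SRC_DIR="$(mktemp -d)/network-policy"
OUT_DIR="$(mktemp -d)/manifests"
mkdir -p "${SRC_DIR}" "${OUT_DIR}"
printf 'kind: NetworkPolicy\n' > "${SRC_DIR}/default-deny.yaml"
printf 'resources: []\n'       > "${SRC_DIR}/kustomization.yaml"

# Demo default; per the review, the real script defaults this to false.
INCLUDE_NETWORK_POLICIES="${INCLUDE_NETWORK_POLICIES:-true}"
SUFFIX=""   # would be "-hub" for the hub bundle

# Copy policies into the bundle only when the flag is set and the
# source directory exists, skipping the kustomization file itself.
if [ "${INCLUDE_NETWORK_POLICIES}" = "true" ] && [ -d "${SRC_DIR}" ]; then
  for f in "${SRC_DIR}"/*.yaml; do
    name="$(basename "${f}" .yaml)"
    if [ "${name}" = "kustomization" ]; then
      continue
    fi
    cp "${f}" "${OUT_DIR}/kmm-operator${SUFFIX}-${name}_networking.k8s.io_v1_networkpolicy.yaml"
  done
fi

ls "${OUT_DIR}"
```

With SUFFIX set to "-hub", the same loop would produce the kmm-operator-hub-* file names seen under bundle-hub/manifests.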
- ports: # kube api server
  - protocol: TCP
    port: 6443
  - protocol: TCP
🧩 Analysis chain
🏁 Script executed:
find . -path "*bundle-hub*" -name "*networkpolicy.yaml" -o -name "*network_policy*"
Repository: rh-ecosystem-edge/kernel-module-management
Length of output: 452
🏁 Script executed:
cat -n ./bundle-hub/manifests/kmm-operator-hub-controller_networking.k8s.io_v1_networkpolicy.yaml
Repository: rh-ecosystem-edge/kernel-module-management
Length of output: 1169
🏁 Script executed:
cat -n ./bundle-hub/manifests/kmm-operator-hub-default-deny_networking.k8s.io_v1_networkpolicy.yaml
Repository: rh-ecosystem-edge/kernel-module-management
Length of output: 315
🏁 Script executed:
# Verify the context of how these NetworkPolicies work together
# Check if there are any other resources that might be relevant
git log --oneline -n 5 ./bundle-hub/manifests/kmm-operator-hub-controller_networking.k8s.io_v1_networkpolicy.yaml
Repository: rh-ecosystem-edge/kernel-module-management
Length of output: 125
🏁 Script executed:
# Check the commit that added these network policies
git show b31da524 --stat | head -30
Repository: rh-ecosystem-edge/kernel-module-management
Length of output: 1548
Restrict egress to API servers with a destination selector.
The egress rule for Kubernetes API servers (lines 32-36) lacks a `to` selector, permitting egress to any destination on TCP ports 6443 and 443. In a default-deny context, this rule should restrict traffic to specific API server targets. Add a destination selector to limit traffic appropriately.
Consider applying this change:
  - ports: # kube api server
    - protocol: TCP
      port: 6443
    - protocol: TCP
      port: 443
+   to:
+   - namespaceSelector:
+       matchLabels:
+         kubernetes.io/metadata.name: default
+     podSelector:
+       matchLabels:
+         component: kube-apiserver

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In
bundle-hub/manifests/kmm-operator-hub-controller_networking.k8s.io_v1_networkpolicy.yaml
around lines 32-35, the egress rule opens TCP 6443 and 443 to any destination;
update the egress to include a destination selector by adding a "to" clause that
limits targets to the API servers (for example use namespaceSelector:
matchLabels: {kubernetes.io/metadata.name: kube-system} plus a podSelector that
matches the kube-apiserver pods, or if your cluster exposes API servers via
known IPs/CIDR use an ipBlock with the API server CIDR), ensuring only those
selected endpoints are allowed egress to ports 6443 and 443.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: build-and-sign
We already shipped NP for operands, we should now only add NP manifests to the bundle for the operator itself (controllers + webhooks)
Also deleted the default deny
b31da52 to
7744ec4
Compare
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
Makefile (2)
298-311: Bundle validation step is missing from the new `bundle` target. The
`bundle-old` target includes `${OPERATOR_SDK} bundle validate ./bundle` at line 296, but the new `bundle` target does not. This means bundles generated with network policies won't be validated for OLM compliance. Was this intentional due to current tooling limitations with NetworkPolicy manifests, or should validation be added?
 	INCLUDE_NETWORK_POLICIES=true \
 	./hack/generate-bundle
+
+	${OPERATOR_SDK} bundle validate ./bundle
336-356: Bundle validation step is also missing from the `bundle-hub` target. Similar to the `bundle` target, `bundle-hub-old` includes validation at line 334 (`${OPERATOR_SDK} bundle validate ./bundle-hub`), but the new `bundle-hub` target omits it.
bundletarget,bundle-hub-oldincludes validation at line 334 (${OPERATOR_SDK} bundle validate ./bundle-hub), but the newbundle-hubtarget omits it.INCLUDE_NETWORK_POLICIES=true \ ./hack/generate-bundle + + ${OPERATOR_SDK} bundle validate ./bundle-hub
♻️ Duplicate comments (5)
bundle/manifests/kmm-operator-controller_networking.k8s.io_v1_networkpolicy.yaml (1)
3-5: Namespace issue already flagged in previous review. The
`namespace: system` concern was raised previously. OLM applies namespaced objects into the operator's install namespace, so this hardcoded value should be removed.

bundle-hub/manifests/kmm-operator-hub-build-and-sign_networking.k8s.io_v1_networkpolicy.yaml (1)
7-14: YAML formatting issues previously flagged. Past reviews identified indentation inconsistencies and a missing trailing newline in this file.

bundle-hub/manifests/kmm-operator-hub-webhook_networking.k8s.io_v1_networkpolicy.yaml (1)
bundle-hub/manifests/kmm-operator-hub-webhook_networking.k8s.io_v1_networkpolicy.yaml (1)
13-16: Ingress allows traffic from any source.The ingress rule permits TCP port 9443 from any source because no
fromclause is specified. This was previously flagged and should be restricted to API server namespaces.config/network-policy/controller.yaml (1)
3-5: Hardcoded namespace should be removed.The
metadata.namespace: systemfield was previously flagged for removal to allow OLM or overlays to inject the correct target namespace at install time.bundle-hub/manifests/kmm-operator-hub-controller_networking.k8s.io_v1_networkpolicy.yaml (1)
32-36: Egress to API servers lacks destination restrictions.This issue was previously flagged: the egress rule opens TCP ports 6443 and 443 to any destination without a
toselector.
🧹 Nitpick comments (3)
bundle/manifests/kmm-operator-controller_networking.k8s.io_v1_networkpolicy.yaml (1)
32-36: Consider restricting kube-apiserver egress to the API server endpoint.The current egress rule allows traffic to any destination on ports 6443 and 443. While this works, it's more permissive than necessary. You could tighten this by targeting the
defaultnamespace'skubernetesservice or using IP blocks for the API server.For OpenShift, you might consider:
- ports: # kube api server - protocol: TCP port: 6443 - protocol: TCP port: 443 + to: + - namespaceSelector: + matchLabels: + kubernetes.io/metadata.name: defaultAlternatively, if the API server IP is known/stable, use
ipBlock. However, if flexibility across clusters is required, the current approach may be acceptable.bundle-hub/manifests/kmm-operator-hub-webhook_networking.k8s.io_v1_networkpolicy.yaml (1)
17-22: Consider restricting egress to specific API server endpoints.The egress rule allows TCP ports 6443 and 443 to any destination. While webhooks need to reach the API server, consider adding a
toclause with namespace/pod selectors or IP blocks to limit the blast radius.Example tightening:
egress: - - ports: # kube api server port - - protocol: TCP - port: 6443 - - protocol: TCP - port: 443 + - to: + - namespaceSelector: + matchLabels: + kubernetes.io/metadata.name: default + podSelector: + matchLabels: + component: kube-apiserver + ports: + - protocol: TCP + port: 6443 + - protocol: TCP + port: 443config/network-policy/controller.yaml (1)
13-18: Consider restricting ingress sources for metrics and health endpoints.The ingress rule allows TCP ports 8443 (metrics) and 8081 (healthz) from any source. If these endpoints are accessed by specific monitoring systems (e.g., Prometheus in
openshift-monitoring), consider adding afromclause to limit access.Example for Prometheus metrics:
ingress: - - ports: - - protocol: TCP # metrics port - port: 8443 - - protocol: TCP - port: 8081 # Healthz + - from: + - namespaceSelector: + matchLabels: + kubernetes.io/metadata.name: openshift-monitoring + ports: + - protocol: TCP # metrics port + port: 8443 + - protocol: TCP + port: 8081 # Healthz
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (30)
- Makefile (3 hunks)
- bundle-hub/manifests/kernel-module-management-hub.clusterserviceversion.yaml (0 hunks)
- bundle-hub/manifests/kmm-operator-hub-build-and-sign_networking.k8s.io_v1_networkpolicy.yaml (1 hunks)
- bundle-hub/manifests/kmm-operator-hub-controller-metrics-service_v1_service.yaml (0 hunks)
- bundle-hub/manifests/kmm-operator-hub-controller_networking.k8s.io_v1_networkpolicy.yaml (1 hunks)
- bundle-hub/manifests/kmm-operator-hub-default-deny_networking.k8s.io_v1_networkpolicy.yaml (1 hunks)
- bundle-hub/manifests/kmm-operator-hub-webhook-service_v1_service.yaml (0 hunks)
- bundle-hub/manifests/kmm-operator-hub-webhook_networking.k8s.io_v1_networkpolicy.yaml (1 hunks)
- bundle/manifests/kernel-module-management.clusterserviceversion.yaml (0 hunks)
- bundle/manifests/kmm-operator-controller-metrics-service_v1_service.yaml (0 hunks)
- bundle/manifests/kmm-operator-controller_networking.k8s.io_v1_networkpolicy.yaml (1 hunks)
- bundle/manifests/kmm-operator-webhook-service_v1_service.yaml (0 hunks)
- bundle/manifests/kmm-operator-webhook_networking.k8s.io_v1_networkpolicy.yaml (1 hunks)
- ci/e2e/kustomization.yaml (1 hunks)
- ci/e2e/module.yaml (1 hunks)
- ci/prow/e2e-incluster-build (4 hunks)
- ci/sign-key-certs/kustomization.yaml (1 hunks)
- config/default-hub/kustomization.yaml (1 hunks)
- config/default/kustomization.yaml (1 hunks)
- config/deploy-hub/kustomization.yaml (1 hunks)
- config/deploy/kustomization.yaml (1 hunks)
- config/manifests-hub/kustomization.yaml (1 hunks)
- config/manifests/kustomization.yaml (1 hunks)
- config/network-policy/controller.yaml (1 hunks)
- config/network-policy/kustomization.yaml (1 hunks)
- config/network-policy/webhook.yaml (1 hunks)
- config/olm-hub/kustomization.yaml (1 hunks)
- config/olm/kustomization.yaml (1 hunks)
- config/webhook-server/kustomization.yaml (1 hunks)
- hack/generate-bundle (2 hunks)
💤 Files with no reviewable changes (6)
- bundle/manifests/kmm-operator-webhook-service_v1_service.yaml
- bundle-hub/manifests/kmm-operator-hub-webhook-service_v1_service.yaml
- bundle-hub/manifests/kernel-module-management-hub.clusterserviceversion.yaml
- bundle/manifests/kmm-operator-controller-metrics-service_v1_service.yaml
- bundle-hub/manifests/kmm-operator-hub-controller-metrics-service_v1_service.yaml
- bundle/manifests/kernel-module-management.clusterserviceversion.yaml
🚧 Files skipped from review as they are similar to previous changes (11)
- ci/prow/e2e-incluster-build
- hack/generate-bundle
- config/network-policy/webhook.yaml
- config/network-policy/kustomization.yaml
- ci/sign-key-certs/kustomization.yaml
- config/webhook-server/kustomization.yaml
- config/manifests/kustomization.yaml
- config/default/kustomization.yaml
- config/deploy-hub/kustomization.yaml
- bundle/manifests/kmm-operator-webhook_networking.k8s.io_v1_networkpolicy.yaml
- config/olm/kustomization.yaml
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Analyze (go)
🔇 Additional comments (7)
config/manifests-hub/kustomization.yaml (1)
11-16: LGTM – selector exclusion aligns with NetworkPolicy requirements. Disabling
`includeSelectors` ensures that kustomize-generated labels are only added to resource metadata and not propagated into selector fields (e.g., Deployment `spec.selector`, Service `spec.selector`). This prevents label mismatches with NetworkPolicy pod selectors that target specific control-plane labels.

config/default-hub/kustomization.yaml (1)
19-24: LGTM – consistent with the hub manifests configuration. The
`includeSelectors: false` setting here mirrors the change in `config/manifests-hub/kustomization.yaml`, ensuring deployment-time and bundle-time behavior remain aligned.

config/olm-hub/kustomization.yaml (1)
12-17: LGTM – completes the consistent selector configuration across hub kustomizations. This change aligns with the modifications in
`config/default-hub/kustomization.yaml` and `config/manifests-hub/kustomization.yaml`.

Makefile (1)
282-296: Preserving backward compatibility with `-old` targets is a good practice. The
`bundle-old` and `bundle-hub-old` targets provide a fallback for environments where NetworkPolicy support isn't yet available (per PR comments, Konflux support is expected ~Q1 2026).

config/deploy/kustomization.yaml (1)
13-13: LGTM! Network policy resource properly included. The addition of the network-policy resource reference correctly wires the NetworkPolicy manifests into the kustomize build for deployment.

bundle-hub/manifests/kmm-operator-hub-build-and-sign_networking.k8s.io_v1_networkpolicy.yaml (1)
bundle-hub/manifests/kmm-operator-hub-build-and-sign_networking.k8s.io_v1_networkpolicy.yaml (1)
13-14: Build pods have unrestricted egress access.The empty egress rule
- {}allows build-and-sign pods to reach any destination. While this may be necessary for builds to access container registries, Git repos, and other external resources, be aware this grants broad network access.Verify this aligns with your security requirements and consider whether egress can be scoped to specific registries and code repositories if they're known in advance.
ci/e2e/module.yaml (1)
6-6: All namespace references and variable substitution are correctly configured.The namespace addition at line 6 aligns with the kustomization configuration. Verification confirms:
- The
$MOD_NAMESPACEvariable at line 17 is substituted at runtime by the KMM operator with the module's namespace (openshift-kmm), as shown in the internal build logic.- The signing secrets referenced at lines 25-26 (
kmm-kmod-signing-certandkmm-kmod-signing-key) are created in theopenshift-kmmnamespace (ci/sign-key-certs/kustomization.yaml).- All referenced resources—build secrets, configmaps, and signing secrets—are consistently deployed to the
`openshift-kmm` namespace, ensuring proper resource resolution.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
  namespace: system
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
NetworkPolicy has hardcoded namespace that won't follow operator installation.
NetworkPolicies apply to pods within a specific namespace and can only select Pods that are in the same namespace as the NetworkPolicy itself. The manifest defines namespace: system on line 5, which means this policy will always be created in the "system" namespace regardless of where the operator is installed. Bundle generation is supposed to be idempotent, so any changes to manifest fields must be made to the base set of manifests, typically found in config/. Update the source manifest to use a dynamic namespace reference (via kustomize or similar templating) rather than hardcoding "system", so the NetworkPolicy applies to the correct namespace during installation.
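To make the concern concrete, a namespace-agnostic version of the same manifest would simply drop the field; this is a hedged sketch that relies on OLM or a kustomize namespace transformer to fill in the target namespace at install time:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
  # no metadata.namespace here: the install tooling injects the target namespace
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
```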
🤖 Prompt for AI Agents
In
bundle-hub/manifests/kmm-operator-hub-default-deny_networking.k8s.io_v1_networkpolicy.yaml
around lines 1 to 10, the NetworkPolicy hardcodes namespace: system which
prevents the policy from being created in the operator's install namespace;
remove the hardcoded namespace and make the manifest namespace-agnostic by
either deleting the metadata.namespace field so it is applied into the target
namespace at install time, or replace it with a kustomize variable or
placeholder (e.g. use metadata.namespace: ${NAMESPACE} or use kustomize's
namespace transformer) and update config/ base manifests to include the
corresponding kustomize patch/vars so bundle generation produces a
templated/dynamic namespace instead of "system".
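One possible shape for the kustomize-based fix suggested above, sketched against the existing `config/network-policy` layout (the chosen namespace value is an assumption, not something the PR pins down):

```yaml
# config/network-policy/kustomization.yaml (sketch)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
# The namespace transformer rewrites metadata.namespace on every resource
# below, so the manifests themselves can omit the field entirely.
namespace: openshift-kmm  # assumption: the operator's install namespace
resources:
- controller.yaml
- webhook.yaml
```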
configMapGenerator:
- name: kmm-kmod-dockerfile
  namespace: openshift-kmm
  files: [dockerfile=Dockerfile]

secretGenerator:
- name: build-secret
  namespace: openshift-kmm
  literals: [ci-build-secret=super-secret-value]
Add namespace resource or document pre-requisite namespace creation.
The configMapGenerator and secretGenerator both specify namespace: openshift-kmm, but there's no explicit Namespace resource defined in this kustomization file. Either add a Namespace resource to this file to ensure the namespace is created before the resources are applied, or update documentation to confirm the e2e test setup creates the namespace as a prerequisite step.
🤖 Prompt for AI Agents
In ci/e2e/kustomization.yaml around lines 11 to 19, the configMapGenerator and
secretGenerator specify namespace: openshift-kmm but no Namespace resource is
created; add a Namespace resource entry to this kustomization (or include a
resources: - namespace.yaml that defines metadata.name: openshift-kmm) so the
namespace is created prior to the generated configmap/secret, or alternatively
update the CI/e2E docs to state that openshift-kmm must be created as a
prerequisite before applying this kustomization.
- ports: # kube api server
  - protocol: TCP
    port: 6443
  - protocol: TCP
    port: 443
Egress to API servers lacks destination restrictions.
The egress rule permits TCP ports 6443 and 443 to any destination. In a default-deny context, this should be scoped to actual API server endpoints using namespace/pod selectors or IP blocks.
Apply this change to restrict API server access:
- - ports: # kube api server
-   - protocol: TCP
-     port: 6443
-   - protocol: TCP
-     port: 443
+ - to:
+   - namespaceSelector:
+       matchLabels:
+         kubernetes.io/metadata.name: default
+     podSelector:
+       matchLabels:
+         component: kube-apiserver
+   ports:
+   - protocol: TCP
+     port: 6443
+   - protocol: TCP
+     port: 443

📝 Committable suggestion
Due to security concerns, we need to allow the KMM operator only the necessary traffic. This commit adds network policies for each KMM operator pod: 1. controller 2. webhook. This commit also changes the e2e tests to verify the effect of the network policies, and updates the KMM bundle to include the network policy manifests.
7744ec4 to
727f38a
Compare
Actionable comments posted: 0
♻️ Duplicate comments (4)
ci/e2e/kustomization.yaml (1)
11-19: Past review concern remains valid: document or create a namespace prerequisite. The
`configMapGenerator` and `secretGenerator` both specify `namespace: openshift-kmm`, but there's no explicit Namespace resource in this kustomization file. As noted in the previous review, either add a Namespace resource to ensure the namespace exists before these resources are applied, or document that the e2e test setup creates the namespace as a prerequisite step.

#!/bin/bash
# Description: Check if namespace creation is handled elsewhere in e2e setup
# Search for namespace creation in e2e test files
rg -n "namespace.*openshift-kmm" -g "*.go" -g "*.yaml" -g "*.sh" -A 3 -B 3
# Look for namespace resource definitions
fd -e yaml | xargs rg -l "kind: Namespace" | xargs rg "name: openshift-kmm" -B 5 -A 2

config/network-policy/kustomization.yaml (1)
1-5: Verify the missing newline issue is resolved. A previous review comment flagged that this file was missing a trailing newline character. Please ensure the file ends with a newline after
`- webhook.yaml` to satisfy YAML linting requirements.

#!/bin/bash
# Description: Check if the file ends with a newline
if [ -f "config/network-policy/kustomization.yaml" ]; then
  if [ -n "$(tail -c 1 config/network-policy/kustomization.yaml)" ]; then
    echo "❌ File does NOT end with newline"
    exit 1
  else
    echo "✓ File ends with newline"
  fi
else
  echo "File not found"
fi

config/network-policy/controller.yaml (2)
3-5: Past review concern remains valid: remove the hardcoded namespace. As noted in the previous review,
`metadata.namespace: system` is hardcoded in this manifest that gets copied into the bundle. Since there's no namespace transformer in `config/network-policy/kustomization.yaml`, this hardcoded value will be baked into the final bundle, potentially mis-targeting the resource when installed via OLM. Omit the `namespace` field here and let overlays or OLM inject the proper namespace at deployment time. Apply this diff:
 metadata:
   name: controller
-  namespace: system
32-36: Past review concern remains valid: scope API server egress to specific destinations. As noted in the previous review, the egress rule permits TCP ports 6443 and 443 to any destination without restrictions. In a default-deny NetworkPolicy context, this overly permissive rule undermines the security posture. Scope this egress to actual Kubernetes API server endpoints using appropriate selectors or IP blocks.
Apply this diff to restrict API server access:
- - ports: # kube api server
-   - protocol: TCP
-     port: 6443
-   - protocol: TCP
-     port: 443
+ - to:
+   - namespaceSelector:
+       matchLabels:
+         kubernetes.io/metadata.name: default
+     podSelector:
+       matchLabels:
+         component: kube-apiserver
+   ports:
+   - protocol: TCP
+     port: 6443
+   - protocol: TCP
+     port: 443
🧹 Nitpick comments (3)
Makefile (2)
298-311: Consider adding validation to the active bundle target.The
bundletarget generates bundle manifests withINCLUDE_NETWORK_POLICIES=truebut doesn't include an explicit validation step with${OPERATOR_SDK} bundle validate ./bundlelikebundle-olddoes (line 296). Consider adding validation to ensure the generated bundle with network policies is valid.Apply this diff to add validation:
 	OPERATOR_SDK="${OPERATOR_SDK}" \
 	BUNDLE_GEN_FLAGS="${BUNDLE_GEN_FLAGS} --extra-service-accounts kmm-operator-module-loader,kmm-operator-device-plugin" \
 	PKG=kernel-module-management \
 	SOURCE_DIR=$(dir $(realpath $(lastword $(MAKEFILE_LIST)))) \
 	INCLUDE_NETWORK_POLICIES=true \
 	./hack/generate-bundle
+
+	${OPERATOR_SDK} bundle validate ./bundle
336-356: Consider adding validation to the active bundle-hub target.The
bundle-hubtarget generates hub bundle manifests withINCLUDE_NETWORK_POLICIES=truebut doesn't include an explicit validation step with${OPERATOR_SDK} bundle validate ./bundle-hublikebundle-hub-olddoes (line 334). Consider adding validation to ensure the generated hub bundle with network policies is valid.Apply this diff to add validation:
 	OPERATOR_SDK="${OPERATOR_SDK}" \
 	BUNDLE_GEN_FLAGS="${BUNDLE_GEN_FLAGS}" \
 	MANIFESTS_DIR=config/manifests-hub \
 	PKG=kernel-module-management-hub \
 	SOURCE_DIR=$(dir $(realpath $(lastword $(MAKEFILE_LIST)))) \
 	SUFFIX="-hub" \
 	INCLUDE_NETWORK_POLICIES=true \
 	./hack/generate-bundle
+
+	${OPERATOR_SDK} bundle validate ./bundle-hub
13-18: Consider documenting or restricting ingress sources.The ingress rule allows traffic on TCP ports 8443 (metrics) and 8081 (healthz) from any source. Consider whether these should be restricted to specific sources (e.g., monitoring infrastructure for metrics, only same-namespace for healthz) or document that unrestricted ingress is intentional for these ports.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (28)
- Makefile (3 hunks)
- bundle-hub/manifests/kernel-module-management-hub.clusterserviceversion.yaml (0 hunks)
- bundle-hub/manifests/kmm-operator-hub-controller-metrics-service_v1_service.yaml (0 hunks)
- bundle-hub/manifests/kmm-operator-hub-controller_networking.k8s.io_v1_networkpolicy.yaml (1 hunks)
- bundle-hub/manifests/kmm-operator-hub-webhook-service_v1_service.yaml (0 hunks)
- bundle-hub/manifests/kmm-operator-hub-webhook_networking.k8s.io_v1_networkpolicy.yaml (1 hunks)
- bundle/manifests/kernel-module-management.clusterserviceversion.yaml (0 hunks)
- bundle/manifests/kmm-operator-controller-metrics-service_v1_service.yaml (0 hunks)
- bundle/manifests/kmm-operator-controller_networking.k8s.io_v1_networkpolicy.yaml (1 hunks)
- bundle/manifests/kmm-operator-webhook-service_v1_service.yaml (0 hunks)
- bundle/manifests/kmm-operator-webhook_networking.k8s.io_v1_networkpolicy.yaml (1 hunks)
- ci/e2e/kustomization.yaml (1 hunks)
- ci/e2e/module.yaml (1 hunks)
- ci/prow/e2e-incluster-build (4 hunks)
- ci/sign-key-certs/kustomization.yaml (1 hunks)
- config/default-hub/kustomization.yaml (1 hunks)
- config/default/kustomization.yaml (1 hunks)
- config/deploy-hub/kustomization.yaml (1 hunks)
- config/deploy/kustomization.yaml (1 hunks)
- config/manifests-hub/kustomization.yaml (1 hunks)
- config/manifests/kustomization.yaml (1 hunks)
- config/network-policy/controller.yaml (1 hunks)
- config/network-policy/kustomization.yaml (1 hunks)
- config/network-policy/webhook.yaml (1 hunks)
- config/olm-hub/kustomization.yaml (1 hunks)
- config/olm/kustomization.yaml (1 hunks)
- config/webhook-server/kustomization.yaml (1 hunks)
- hack/generate-bundle (2 hunks)
💤 Files with no reviewable changes (6)
- bundle/manifests/kmm-operator-webhook-service_v1_service.yaml
- bundle/manifests/kmm-operator-controller-metrics-service_v1_service.yaml
- bundle-hub/manifests/kernel-module-management-hub.clusterserviceversion.yaml
- bundle-hub/manifests/kmm-operator-hub-webhook-service_v1_service.yaml
- bundle-hub/manifests/kmm-operator-hub-controller-metrics-service_v1_service.yaml
- bundle/manifests/kernel-module-management.clusterserviceversion.yaml
🚧 Files skipped from review as they are similar to previous changes (14)
- hack/generate-bundle
- ci/e2e/module.yaml
- ci/prow/e2e-incluster-build
- bundle/manifests/kmm-operator-webhook_networking.k8s.io_v1_networkpolicy.yaml
- config/manifests-hub/kustomization.yaml
- config/manifests/kustomization.yaml
- config/deploy/kustomization.yaml
- bundle-hub/manifests/kmm-operator-hub-controller_networking.k8s.io_v1_networkpolicy.yaml
- bundle-hub/manifests/kmm-operator-hub-webhook_networking.k8s.io_v1_networkpolicy.yaml
- config/network-policy/webhook.yaml
- config/default-hub/kustomization.yaml
- bundle/manifests/kmm-operator-controller_networking.k8s.io_v1_networkpolicy.yaml
- config/olm-hub/kustomization.yaml
- config/webhook-server/kustomization.yaml
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Analyze (go)
🔇 Additional comments (5)
config/olm/kustomization.yaml (1)
13-13: LGTM! Selector label control aligns with network policy requirements. Setting
`includeSelectors: false` prevents automatic label propagation to selectors, which is appropriate when introducing NetworkPolicy resources that need precise pod selector control. This change is consistent with the same pattern applied across other kustomization files in this PR.

ci/sign-key-certs/kustomization.yaml (1)
8-13: LGTM! Namespace scoping enhances resource isolation. Adding explicit
`namespace: openshift-kmm` to the secretGenerator entries ensures these secrets are properly scoped to the target namespace, aligning with the broader namespace scoping improvements in this PR.

config/default/kustomization.yaml (1)
20-20: LGTM! Consistent selector control. Setting
`includeSelectors: false` is consistent with the same change in other kustomization files and supports precise selector control needed for NetworkPolicy resources.

Makefile (1)
282-296: Good backward compatibility strategy. The
bundle-oldandbundle-hub-oldtargets preserves the previous bundle generation behavior without network policies, while the activebundleandbundle-hubtargets now include network policies. This provides a clear migration path and maintains backward compatibility.Also applies to: 313-334
config/deploy-hub/kustomization.yaml (1)
12-12: LGTM! Network policy integration for hub deployment. Adding
`../network-policy` to the resources list enables NetworkPolicy manifests for the hub deployment, which aligns with the PR's objective of restricting KMM operator traffic.
|
@TomerNewman: The following tests failed, say
Full PR test history. Your PR dashboard. DetailsInstructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here. |
Due to security concerns, we need to allow the KMM
operator only the necessary traffic.
This commit adds network policies for each KMM pod.
This commit also updates the KMM bundle to include the network policy manifests.
/hold
/cc @ybettan @yevgeny-shnaidman
Summary by CodeRabbit
Release Notes
New Features
Configuration Updates
✏️ Tip: You can customize this high-level summary in your review settings.