
CNTRLPLANE-2845: add service-ca operator and workload network policies #324

Merged
openshift-merge-bot[bot] merged 1 commit into openshift:main from liouk:network-policies
Apr 8, 2026

Conversation

@liouk
Member

@liouk liouk commented Mar 3, 2026

Note on manifest placement: The network policies for the operand namespace (openshift-service-ca) are placed in manifests/ as CVO-managed resources rather than in bindata/ as operator-managed resources.

This is because the vendored version of library-go in this repo does not include ApplyNetworkPolicy support in ApplyDirectly. The latest library-go on master does support it (see networking.go and the NetworkPolicy case in generic.go).

If we bump the library-go dependency in the future, we can move the controller policy to bindata/v4.0.0/controller/ and have the operator manage it directly, consistent with how other operand resources (namespace, RBAC, etc.) are managed. For now, CVO will reconcile the policies and recreate them if deleted.
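For reference, a CVO-managed default-deny manifest of the kind described here would look roughly like the following sketch (reconstructed from the review discussion; the exact annotation set and file contents are those of the PR, not this sketch):

```yaml
# Sketch of manifests/0000_10_openshift_service-ca_02_networkpolicy_default-deny-all.yaml;
# the annotation values are illustrative of the CVO release-inclusion pattern.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: openshift-service-ca
  annotations:
    include.release.openshift.io/self-managed-high-availability: "true"
    include.release.openshift.io/single-node-developer: "true"
spec:
  podSelector: {}   # empty selector: applies to every pod in the namespace
  policyTypes:      # declaring both types with no rules denies all traffic
  - Ingress
  - Egress
```

Because these files live in manifests/, the CVO applies them at install and upgrade time and recreates them if deleted, as noted above.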

@openshift-ci-robot openshift-ci-robot added the jira/valid-reference Indicates that this PR references a valid Jira ticket of any type. label Mar 3, 2026
@openshift-ci-robot
Contributor

openshift-ci-robot commented Mar 3, 2026

@liouk: This pull request references CNTRLPLANE-2845 which is a valid jira issue.

Details

In response to this:

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@openshift-ci openshift-ci bot requested review from bertinatto and p0lyn0mial March 3, 2026 15:17
@coderabbitai

coderabbitai bot commented Mar 3, 2026

No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: eafd89e6-d125-4d2e-ba0d-80c03214cb1b

📥 Commits

Reviewing files that changed from the base of the PR and between ea4488d and 769deb7.

📒 Files selected for processing (4)
  • manifests/0000_10_openshift_service-ca_01_networkpolicy.yaml
  • manifests/0000_10_openshift_service-ca_02_networkpolicy_default-deny-all.yaml
  • manifests/01_networkpolicy_00_service-ca-operator.yaml
  • manifests/01_networkpolicy_01_default-deny-all.yaml
🚧 Files skipped from review as they are similar to previous changes (2)
  • manifests/0000_10_openshift_service-ca_01_networkpolicy.yaml
  • manifests/01_networkpolicy_01_default-deny-all.yaml

Walkthrough

Adds four new Kubernetes NetworkPolicy manifests: two for the openshift-service-ca namespace and two for the openshift-service-ca-operator namespace, defining ingress (TCP 8443), egress (DNS to openshift-dns and catch-all/API access), and default-deny-all policies.

Changes

Cohort / File(s) Summary
Service-CA Namespace Policies
manifests/0000_10_openshift_service-ca_01_networkpolicy.yaml, manifests/0000_10_openshift_service-ca_02_networkpolicy_default-deny-all.yaml
Adds service-ca NetworkPolicy selecting app: service-ca allowing ingress TCP 8443, egress for DNS to openshift-dns (TCP/UDP 5353) plus a catch-all egress rule; and a default-deny-all NetworkPolicy with empty podSelector and policyTypes [Ingress, Egress]. Both include release annotations.
Service-CA-Operator Namespace Policies
manifests/01_networkpolicy_00_service-ca-operator.yaml, manifests/01_networkpolicy_01_default-deny-all.yaml
Adds service-ca-operator NetworkPolicy selecting app: service-ca-operator allowing ingress TCP 8443, egress for DNS to openshift-dns (TCP/UDP 5353) and unrestricted API access; and a default-deny-all NetworkPolicy with empty podSelector and policyTypes [Ingress, Egress]. Both include release annotations.
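Put together, the operator-namespace allow policy summarized above would look roughly like this (a sketch based on the walkthrough, not the literal file contents):

```yaml
# Sketch of manifests/01_networkpolicy_00_service-ca-operator.yaml per the summary above.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: service-ca-operator
  namespace: openshift-service-ca-operator
spec:
  podSelector:
    matchLabels:
      app: service-ca-operator
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - ports:                      # metrics scraping
    - protocol: TCP
      port: 8443
  egress:
  - to:                         # DNS resolution via openshift-dns
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: openshift-dns
    ports:
    - protocol: TCP
      port: 5353
    - protocol: UDP
      port: 5353
  - {}                          # unrestricted rule, used for API server access
```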

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~8 minutes

🚥 Pre-merge checks | ✅ 5
✅ Passed checks (5 passed)
Check name | Status | Explanation
Description Check | ✅ Passed | Check skipped - CodeRabbit’s high-level summary is enabled.
Title check | ✅ Passed | The pull request title accurately describes the main change: adding network policies for both the service-ca operator and its workload components.
Docstring Coverage | ✅ Passed | No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check.
Stable And Deterministic Test Names | ✅ Passed | This PR only adds four Kubernetes NetworkPolicy manifest files in YAML format—infrastructure configuration files, not test code, making this check not applicable.
Test Structure And Quality | ✅ Passed | PR adds only Kubernetes NetworkPolicy YAML manifest files, not test code. The test quality check is not applicable to declarative infrastructure configuration files.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
🧪 Generate unit tests (beta)
  • Create PR with unit tests
📝 Coding Plan
  • Generate coding plan for human review comments

Comment @coderabbitai help to get the list of available commands and usage tips.

@openshift-ci-robot
Contributor

openshift-ci-robot commented Mar 3, 2026

@liouk: This pull request references CNTRLPLANE-2845 which is a valid jira issue.

Details

In response to this:

Summary by CodeRabbit

  • Chores
  • Added network security policies for service-ca components to define and enforce strict ingress and egress traffic rules, restricting network communication to only essential services including DNS resolution and Kubernetes API server access. This significantly improves cluster security posture and compliance with best practices.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@openshift-ci-robot
Contributor

openshift-ci-robot commented Mar 3, 2026

@liouk: This pull request references CNTRLPLANE-2845 which is a valid jira issue.

Details

In response to this:

Note on manifest placement: The network policies for the operand namespace (openshift-service-ca) are placed in manifests/ as CVO-managed resources rather than in bindata/ as operator-managed resources.

This is because the vendored version of library-go in this repo does not include ApplyNetworkPolicy support in ApplyDirectly. The latest library-go on master does support it (see networking.go and the NetworkPolicy case in generic.go).

If we bump the library-go dependency in the future, we can move the controller policy to bindata/v4.0.0/controller/ and have the operator manage it directly, consistent with how other operand resources (namespace, RBAC, etc.) are managed. For now, CVO will reconcile the policies and recreate them if deleted.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (1)
manifests/08_networkpolicy.yaml (1)

16-21: Ingress policy lacks explicit source restrictions despite documented design intent

While the comment in lines 1-2 explains that podSelector: {} is intentional for this core operator namespace, the ingress rule on port 8443 still lacks explicit source restrictions (no from field). Although metrics scraping is controlled via the ServiceMonitor and RBAC RoleBinding for prometheus-k8s in openshift-monitoring, adding explicit network-layer controls (namespace and pod selectors in the from field) would provide defense-in-depth. Consider restricting ingress to the openshift-monitoring namespace and prometheus-k8s pod to align with the intentional scraper configuration defined in the ServiceMonitor.
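The suggested from restriction could be sketched as follows (the prometheus-k8s pod labels are an assumption here and should be checked against the actual openshift-monitoring deployment):

```yaml
ingress:
- from:
  - namespaceSelector:
      matchLabels:
        kubernetes.io/metadata.name: openshift-monitoring
    podSelector:
      matchLabels:
        app.kubernetes.io/name: prometheus   # assumed label for the prometheus-k8s pods
  ports:
  - protocol: TCP
    port: 8443
```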

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@manifests/08_networkpolicy.yaml` around lines 16 - 21, The NetworkPolicy
currently allows ingress to podSelector: {} on port 8443 with no `from` clause;
add explicit source restrictions to enforce defense-in-depth by updating the
ingress rule for port 8443 to include a `from` that limits traffic to the
openshift-monitoring namespace and the prometheus-k8s pod (use a
namespaceSelector matching metadata.name: openshift-monitoring and a podSelector
matching labels used by prometheus-k8s), ensuring the ServiceMonitor and RBAC
remain the primary controls while the NetworkPolicy explicitly restricts
network-layer access to only the prometheus scraper.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@manifests/0000_10_openshift_service-ca_01_networkpolicy.yaml`:
- Around line 36-37: The egress entry "egress: - {}" currently allows all
outbound traffic; update the NetworkPolicy to restrict egress to the Kubernetes
API server only by replacing the empty rule with an explicit rule that targets
the API server (e.g., an egress rule matching destination port 443/6443 and
either an IPBlock for the cluster API service CIDR or a peer that selects the
API server pods/namespace via namespaceSelector/podSelector); specifically,
remove the "egress: - {}" wildcard and add an egress rule that uses to: -
ipBlock: {cidr: <api-server-cidr>} ports: - protocol: TCP port: 443 (and 6443 if
used) or alternatively use to: - namespaceSelector: {matchLabels:
{kubernetes.io/metadata.name: kube-system}} + podSelector matching the
kube-apiserver label so only API-server egress is permitted.

In `@manifests/08_networkpolicy.yaml`:
- Around line 36-37: The egress rule currently uses the open matcher "- {}"
which permits all destinations; replace that with a restricted egress that
targets the Kubernetes API server using an explicit "to" entry and port (e.g.,
port 6443) or, if unrestricted egress is intended, update the comment to state
that traffic is intentionally allowed to all destinations. Locate the egress
rule with the "- {}" entry in the NetworkPolicy manifest and either replace it
with a "to" selector/ipBlock + port specification for the API server or change
the comment to reflect the open egress policy.

---

Nitpick comments:
In `@manifests/08_networkpolicy.yaml`:
- Around line 16-21: The NetworkPolicy currently allows ingress to podSelector:
{} on port 8443 with no `from` clause; add explicit source restrictions to
enforce defense-in-depth by updating the ingress rule for port 8443 to include a
`from` that limits traffic to the openshift-monitoring namespace and the
prometheus-k8s pod (use a namespaceSelector matching metadata.name:
openshift-monitoring and a podSelector matching labels used by prometheus-k8s),
ensuring the ServiceMonitor and RBAC remain the primary controls while the
NetworkPolicy explicitly restricts network-layer access to only the prometheus
scraper.

ℹ️ Review info

Configuration used: Repository: openshift/coderabbit/.coderabbit.yaml

Review profile: CHILL

Plan: Pro

Cache: Disabled due to data retention organization setting

Knowledge base: Disabled due to data retention organization setting

📥 Commits

Reviewing files that changed from the base of the PR and between 503e4f5 and 4689137.

📒 Files selected for processing (2)
  • manifests/0000_10_openshift_service-ca_01_networkpolicy.yaml
  • manifests/08_networkpolicy.yaml

Comment on lines +36 to +37
# Kubernetes API server; no port restriction since it may run on non-standard ports
- {}


⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🌐 Web query:

Kubernetes NetworkPolicy egress rule empty object {} behavior specification

💡 Result:

In a NetworkPolicy, each entry in spec.egress is an allow rule; traffic is allowed if it matches at least one egress rule across all policies selecting the pod (rules are additive). If a pod becomes “egress-isolated” (a selecting policy has policyTypes: [Egress]), then only traffic allowed by those egress rules is permitted. [1]

What egress: - {} means

An empty egress rule object ({}) has no to and no ports, so it imposes no destination or port constraints—it matches all destinations on all ports. Kubernetes documents this as the standard “allow all egress traffic” policy:

spec:
  podSelector: {}
  egress:
  - {}
  policyTypes:
  - Egress

With this in place, “no additional policy or policies can cause any outgoing connection from those pods to be denied” (because NetworkPolicy is allow-list + additive; there is no explicit “deny” rule). [2]

Contrast: “default deny egress”

If you select pods and set policyTypes: [Egress] but provide no egress rules, that creates egress isolation with zero allowed egress, i.e. deny-all egress:

spec:
  podSelector: {}
  policyTypes:
  - Egress
[2]

So: egress: - {} means allow all egress (for the selected pods), while omitting egress rules under an egress-isolating policy means deny all egress. [1][2]

Sources: Kubernetes NetworkPolicy concept docs (v1.33) [1][2].

---

🏁 Script executed:

```shell
cat -n manifests/0000_10_openshift_service-ca_01_networkpolicy.yaml | sed -n '16,37p'

Repository: openshift/service-ca-operator

Length of output: 797


egress: - {} permits unrestricted outbound traffic, not API-server-specific egress

Line 37 allows egress to any destination on any port/protocol. The comment on line 36 indicates intent to allow Kubernetes API server traffic only, but the empty egress rule {} overrides this constraint and permits all outbound destinations. Replace with an explicit rule targeting API server endpoints if that is the intended scope.
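If API-server-only egress is the intent, one possible shape is the following sketch (the CIDR is a placeholder; a real cluster would need its actual API server address range, and the original comment notes the port may be non-standard):

```yaml
egress:
- to:
  - ipBlock:
      cidr: 203.0.113.0/24   # placeholder: substitute the cluster's API server range
  ports:
  - protocol: TCP
    port: 6443               # adjust if the API server listens on a non-standard port
```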

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@manifests/0000_10_openshift_service-ca_01_networkpolicy.yaml` around lines 36
- 37, The egress entry "egress: - {}" currently allows all outbound traffic;
update the NetworkPolicy to restrict egress to the Kubernetes API server only by
replacing the empty rule with an explicit rule that targets the API server
(e.g., an egress rule matching destination port 443/6443 and either an IPBlock
for the cluster API service CIDR or a peer that selects the API server
pods/namespace via namespaceSelector/podSelector); specifically, remove the
"egress: - {}" wildcard and add an egress rule that uses to: - ipBlock: {cidr:
<api-server-cidr>} ports: - protocol: TCP port: 443 (and 6443 if used) or
alternatively use to: - namespaceSelector: {matchLabels:
{kubernetes.io/metadata.name: kube-system}} + podSelector matching the
kube-apiserver label so only API-server egress is permitted.

Comment on lines +36 to +37
# Kubernetes API server; no port restriction since it may run on non-standard ports
- {}


⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

cat -n manifests/08_networkpolicy.yaml | head -50

Repository: openshift/service-ca-operator

Length of output: 1693


🌐 Web query:

Kubernetes NetworkPolicy egress empty object behavior v1 API

💡 Result:

In the networking.k8s.io/v1 NetworkPolicy API, an “empty object” egress rule means “match everything”:

  • spec.egress: - {} creates one egress rule with no to and no ports constraints, so it matches all destinations on all ports → effectively allow all egress (for the selected pods, for policies that apply to Egress). This is the documented “allow all egress” pattern. [1]

  • Within an egress rule:

    • If to is missing or empty, the rule matches all destinations. [2]
    • If ports is missing or empty, the rule matches all ports. [3]

By contrast:

  • If a policy applies to egress (e.g., policyTypes: ["Egress"]) but has no egress rules (spec.egress omitted) or an empty egress list (spec.egress: []), then the selected pods become egress-isolated and no egress is allowed (default-deny for egress). [1]

Examples

# Allow all egress
spec:
  podSelector: {}
  policyTypes: ["Egress"]
  egress:
  - {}
# Deny all egress (egress isolation with zero allowed rules)
spec:
  podSelector: {}
  policyTypes: ["Egress"]
  # egress: []   # (equivalent intent: no allowed egress rules)
  # (or omit egress entirely)

Sources: Kubernetes NetworkPolicy docs (allow-all / default-deny examples) [1], API semantics for egress.to [2] and egress.ports [3].

[1] (v1-33.docs.kubernetes.io)
[2] (v1-31.kubernetes.ac.cn)
[3] (docs.okd.io)


- {} in egress grants unrestricted access to all destinations, not API-server-only access

The empty egress rule on line 37 allows traffic to any destination on any port. This contradicts the intent stated in the line 36 comment. Either restrict the rule to the Kubernetes API server using an explicit to selector and port, or update the comment to reflect that unrestricted egress is intentional.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@manifests/08_networkpolicy.yaml` around lines 36 - 37, The egress rule
currently uses the open matcher "- {}" which permits all destinations; replace
that with a restricted egress that targets the Kubernetes API server using an
explicit "to" entry and port (e.g., port 6443) or, if unrestricted egress is
intended, update the comment to state that traffic is intentionally allowed to
all destinations. Locate the egress rule with the "- {}" entry in the
NetworkPolicy manifest and either replace it with a "to" selector/ipBlock + port
specification for the API server or change the comment to reflect the open
egress policy.

@liouk liouk changed the title CNTRLPLANE-2845: add service-ca operator and workload network policies WIP: CNTRLPLANE-2845: add service-ca operator and workload network policies Mar 3, 2026
@openshift-ci openshift-ci bot added the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Mar 3, 2026
@liouk liouk force-pushed the network-policies branch from 4689137 to ea4488d on March 3, 2026 15:53
@liouk
Member Author

liouk commented Mar 3, 2026

/retitle CNTRLPLANE-2845: add service-ca operator and workload network policies

@openshift-ci openshift-ci bot changed the title WIP: CNTRLPLANE-2845: add service-ca operator and workload network policies CNTRLPLANE-2845: add service-ca operator and workload network policies Mar 3, 2026
@openshift-ci-robot
Contributor

openshift-ci-robot commented Mar 3, 2026

@liouk: This pull request references CNTRLPLANE-2845 which is a valid jira issue.

Details

In response to this:

Note on manifest placement: The network policies for the operand namespace (openshift-service-ca) are placed in manifests/ as CVO-managed resources rather than in bindata/ as operator-managed resources.

This is because the vendored version of library-go in this repo does not include ApplyNetworkPolicy support in ApplyDirectly. The latest library-go on master does support it (see networking.go and the NetworkPolicy case in generic.go).

If we bump the library-go dependency in the future, we can move the controller policy to bindata/v4.0.0/controller/ and have the operator manage it directly, consistent with how other operand resources (namespace, RBAC, etc.) are managed. For now, CVO will reconcile the policies and recreate them if deleted.

Summary by CodeRabbit

  • New Features
  • Added network policies to control and restrict traffic for OpenShift service-ca components, enhancing cluster security.
  • Implemented default deny-all network policies to establish a secure networking baseline for service-ca namespaces.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@openshift-ci openshift-ci bot removed the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Mar 3, 2026

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@manifests/0000_10_openshift_service-ca_01_networkpolicy.yaml`:
- Around line 7-10: The annotations block in the allow NetworkPolicy is missing
the include.release.openshift.io/ibm-cloud-managed: "true" key which makes it
inconsistent with the default-deny policy and will block traffic in IBM Cloud
managed environments; update the annotations in the manifest (the annotations
map that currently contains include.release.openshift.io/hypershift,
include.release.openshift.io/self-managed-high-availability, and
include.release.openshift.io/single-node-developer) to also include
include.release.openshift.io/ibm-cloud-managed: "true" so both policies share
the same IBM Cloud managed annotation.

In `@manifests/01_networkpolicy_00_service-ca-operator.yaml`:
- Around line 7-10: Add the missing annotation
include.release.openshift.io/ibm-cloud-managed: "true" to the annotations block
in the allow policy manifest (01_networkpolicy_00_service-ca-operator.yaml) so
it matches the default-deny policy; update the annotations map containing
include.release.openshift.io/hypershift,
include.release.openshift.io/self-managed-high-availability, and
include.release.openshift.io/single-node-developer by adding the
include.release.openshift.io/ibm-cloud-managed key with the string value "true".

ℹ️ Review info

Configuration used: Repository: openshift/coderabbit/.coderabbit.yaml

Review profile: CHILL

Plan: Pro

Cache: Disabled due to data retention organization setting

Knowledge base: Disabled due to data retention organization setting

📥 Commits

Reviewing files that changed from the base of the PR and between 4689137 and ea4488d.

📒 Files selected for processing (4)
  • manifests/0000_10_openshift_service-ca_01_networkpolicy.yaml
  • manifests/0000_10_openshift_service-ca_02_networkpolicy_default-deny-all.yaml
  • manifests/01_networkpolicy_00_service-ca-operator.yaml
  • manifests/01_networkpolicy_01_default-deny-all.yaml

Comment on lines +7 to +10
annotations:
include.release.openshift.io/hypershift: "true"
include.release.openshift.io/self-managed-high-availability: "true"
include.release.openshift.io/single-node-developer: "true"


⚠️ Potential issue | 🟡 Minor

Missing ibm-cloud-managed annotation creates inconsistency.

The default-deny policy for this namespace (0000_10_openshift_service-ca_02_networkpolicy_default-deny-all.yaml) includes include.release.openshift.io/ibm-cloud-managed: "true", but this allow policy does not. This would cause traffic to be blocked in IBM Cloud managed environments.

🔧 Proposed fix
   annotations:
     include.release.openshift.io/hypershift: "true"
+    include.release.openshift.io/ibm-cloud-managed: "true"
     include.release.openshift.io/self-managed-high-availability: "true"
     include.release.openshift.io/single-node-developer: "true"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@manifests/0000_10_openshift_service-ca_01_networkpolicy.yaml` around lines 7
- 10, The annotations block in the allow NetworkPolicy is missing the
include.release.openshift.io/ibm-cloud-managed: "true" key which makes it
inconsistent with the default-deny policy and will block traffic in IBM Cloud
managed environments; update the annotations in the manifest (the annotations
map that currently contains include.release.openshift.io/hypershift,
include.release.openshift.io/self-managed-high-availability, and
include.release.openshift.io/single-node-developer) to also include
include.release.openshift.io/ibm-cloud-managed: "true" so both policies share
the same IBM Cloud managed annotation.

Comment on lines +7 to +10
annotations:
include.release.openshift.io/hypershift: "true"
include.release.openshift.io/self-managed-high-availability: "true"
include.release.openshift.io/single-node-developer: "true"


⚠️ Potential issue | 🟡 Minor

Missing ibm-cloud-managed annotation creates inconsistency.

The default-deny policy in this namespace (01_networkpolicy_01_default-deny-all.yaml) includes include.release.openshift.io/ibm-cloud-managed: "true", but this allow policy does not. In IBM Cloud managed environments, the default-deny would be applied while this allow policy would be skipped, blocking all traffic to the operator pod.

🔧 Proposed fix
   annotations:
     include.release.openshift.io/hypershift: "true"
+    include.release.openshift.io/ibm-cloud-managed: "true"
     include.release.openshift.io/self-managed-high-availability: "true"
     include.release.openshift.io/single-node-developer: "true"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@manifests/01_networkpolicy_00_service-ca-operator.yaml` around lines 7 - 10,
Add the missing annotation include.release.openshift.io/ibm-cloud-managed:
"true" to the annotations block in the allow policy manifest
(01_networkpolicy_00_service-ca-operator.yaml) so it matches the default-deny
policy; update the annotations map containing
include.release.openshift.io/hypershift,
include.release.openshift.io/self-managed-high-availability, and
include.release.openshift.io/single-node-developer by adding the
include.release.openshift.io/ibm-cloud-managed key with the string value "true".

@liouk
Member Author

liouk commented Mar 6, 2026

/retest-required

@liouk
Member Author

liouk commented Mar 16, 2026

Holding to run payload jobs once review is completed.

/hold

@openshift-ci openshift-ci bot added the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Mar 16, 2026
@kaleemsiddiqu

kaleemsiddiqu commented Mar 17, 2026

/verified by @kaleemsiddiqu
executed test cases successfully manually

@openshift-ci-robot
Contributor

@kaleemsiddiqu: The /verified command must be used with one of the following actions: by, later, remove, or bypass. See https://docs.ci.openshift.org/docs/architecture/jira/#premerge-verification for more information.

Details

In response to this:

/verified
executed test cases successfully manually

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

name: service-ca
namespace: openshift-service-ca
annotations:
include.release.openshift.io/hypershift: "true"


Should remove the hypershift label

name: default-deny-all
namespace: openshift-service-ca
annotations:
include.release.openshift.io/hypershift: "true"


Remove hypershift label and ibm-cloud-managed

name: service-ca-operator
namespace: openshift-service-ca-operator
annotations:
include.release.openshift.io/hypershift: "true"


Remove hypershift label

name: default-deny-all
namespace: openshift-service-ca-operator
annotations:
include.release.openshift.io/hypershift: "true"


Remove hypershift label and ibm-cloud-managed

@liouk liouk force-pushed the network-policies branch from ea4488d to 769deb7 on March 18, 2026 13:15
@openshift-ci-robot
Contributor

openshift-ci-robot commented Mar 18, 2026

@liouk: This pull request references CNTRLPLANE-2845 which is a valid jira issue.

Details

In response to this:

Note on manifest placement: The network policies for the operand namespace (openshift-service-ca) are placed in manifests/ as CVO-managed resources rather than in bindata/ as operator-managed resources.

This is because the vendored version of library-go in this repo does not include ApplyNetworkPolicy support in ApplyDirectly. The latest library-go on master does support it (see networking.go and the NetworkPolicy case in generic.go).

If we bump the library-go dependency in the future, we can move the controller policy to bindata/v4.0.0/controller/ and have the operator manage it directly, consistent with how other operand resources (namespace, RBAC, etc.) are managed. For now, CVO will reconcile the policies and recreate them if deleted.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@dusk125

dusk125 commented Mar 18, 2026

/lgtm

@openshift-ci openshift-ci bot added the lgtm Indicates that a PR is ready to be merged. label Mar 18, 2026
wangke19 added a commit to wangke19/service-ca-operator that referenced this pull request Mar 23, 2026
Verify that NetworkPolicy resources introduced in PR openshift#324 are correctly
deployed in both the openshift-service-ca-operator and openshift-service-ca
namespaces. Tests check:
- existence of allow and default-deny-all policies in each namespace
- pod selector targets the correct app label
- ingress allows TCP 8443 for metrics scraping
- egress allows port 5353 for DNS resolution
- both Ingress and Egress policyTypes are declared
@wangke19
Contributor

E2E Test Results for Network Policy Changes

Tested against a live cluster with the NetworkPolicy manifests from this PR applied, using the companion test PR #337.

Test binary: service-ca-operator-tests-ext (OTE/Ginkgo)

| Test | Result | Duration |
| --- | --- | --- |
| [sig-service-ca] ... operator-namespace-network-policies should have network policies deployed in the operator namespace | ✅ passed | 678ms |
| [sig-service-ca] ... controller-namespace-network-policies should have network policies deployed in the controller namespace | ✅ passed | 674ms |
| [sig-service-ca] ... operator-network-policy-spec should allow metrics scraping on port 8443 and restrict other ingress in the operator namespace | ✅ passed | 694ms |
| [sig-service-ca] ... controller-network-policy-spec should allow metrics scraping on port 8443 and restrict other ingress in the controller namespace | ✅ passed | 892ms |

All 4 tests passed. The NetworkPolicies exist in both namespaces with correct pod selectors, ingress on TCP 8443 (metrics), egress on port 5353 (DNS), and both Ingress/Egress policyTypes declared.

@wangke19
Contributor

/verified by @wangke19

@openshift-ci-robot openshift-ci-robot added the verified Signifies that the PR passed pre-merge verification criteria label Mar 23, 2026
@openshift-ci-robot
Contributor

@wangke19: This PR has been marked as verified by @wangke19.

Details

In response to this:

/verified by @wangke19


@benluddy
Contributor

/approve

@sjenning sjenning added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Mar 25, 2026
@openshift-ci
Contributor

openshift-ci bot commented Mar 25, 2026

[APPROVALNOTIFIER] This PR is APPROVED

Approval requirements bypassed by manually added approval.

This pull-request has been approved by: benluddy, dusk125, liouk

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Details: Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@liouk
Member Author

liouk commented Mar 26, 2026

/payload 4.22 nightly blocking
/payload 4.22 nightly informing

@openshift-ci
Contributor

openshift-ci bot commented Mar 26, 2026

@liouk: trigger 14 job(s) of type blocking for the nightly release of OCP 4.22

  • periodic-ci-openshift-release-main-ci-4.22-e2e-aws-upgrade-ovn-single-node
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-upgrade-fips
  • periodic-ci-openshift-release-main-ci-4.22-e2e-azure-ovn-upgrade
  • periodic-ci-openshift-release-main-ci-4.22-upgrade-from-stable-4.21-e2e-gcp-ovn-rt-upgrade
  • periodic-ci-openshift-hypershift-release-4.22-periodics-e2e-aws-ovn-conformance
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-serial-1of2
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-serial-2of2
  • periodic-ci-openshift-release-main-ci-4.22-e2e-aws-ovn-techpreview
  • periodic-ci-openshift-release-main-ci-4.22-e2e-aws-ovn-techpreview-serial-1of3
  • periodic-ci-openshift-release-main-ci-4.22-e2e-aws-ovn-techpreview-serial-2of3
  • periodic-ci-openshift-release-main-ci-4.22-e2e-aws-ovn-techpreview-serial-3of3
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-upgrade-fips-no-nat-instance
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ipi-ovn-ipv4
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ipi-ovn-ipv6

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/6b9cf080-28f2-11f1-83d6-de0dddf695db-0

trigger 65 job(s) of type informing for the nightly release of OCP 4.22

  • periodic-ci-openshift-hypershift-release-4.22-periodics-e2e-azure-aks-ovn-conformance
  • periodic-ci-openshift-release-main-nightly-4.22-console-aws
  • periodic-ci-openshift-cluster-control-plane-machine-set-operator-release-4.22-periodics-e2e-aws
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-csi
  • periodic-ci-openshift-release-main-ci-4.22-e2e-aws-ovn
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-cgroupsv2
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-fips
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-single-node
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-single-node-csi
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-single-node-serial
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-single-node-techpreview
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-single-node-techpreview-serial
  • periodic-ci-openshift-release-main-nightly-4.22-upgrade-from-stable-4.21-e2e-aws-upgrade-ovn-single-node
  • periodic-ci-openshift-release-main-ci-4.22-e2e-aws-ovn-upgrade-out-of-change
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-upi
  • periodic-ci-openshift-cluster-control-plane-machine-set-operator-release-4.22-periodics-e2e-azure
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-azure-csi
  • periodic-ci-openshift-release-main-ci-4.22-e2e-azure-ovn
  • periodic-ci-openshift-release-main-ci-4.22-e2e-azure-ovn-serial
  • periodic-ci-openshift-release-main-ci-4.22-e2e-azure-ovn-techpreview
  • periodic-ci-openshift-release-main-ci-4.22-e2e-azure-ovn-techpreview-serial
  • periodic-ci-openshift-release-main-ci-4.22-e2e-azure-ovn-upgrade-out-of-change
  • periodic-ci-openshift-release-main-cnv-nightly-4.22-deploy-azure-kubevirt-ovn
  • periodic-ci-openshift-cluster-control-plane-machine-set-operator-release-4.22-periodics-e2e-gcp
  • periodic-ci-openshift-release-main-ci-4.22-e2e-gcp-ovn
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-gcp-ovn-csi
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-gcp-ovn-rt
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-gcp-ovn-serial
  • periodic-ci-openshift-release-main-ci-4.22-e2e-gcp-ovn-techpreview
  • periodic-ci-openshift-release-main-ci-4.22-e2e-gcp-ovn-techpreview-serial
  • periodic-ci-openshift-release-main-ci-4.22-upgrade-from-stable-4.21-e2e-gcp-ovn-upgrade
  • periodic-ci-openshift-release-main-ci-4.22-e2e-gcp-ovn-upgrade
  • periodic-ci-openshift-hypershift-release-4.22-periodics-e2e-azure-kubevirt-ovn
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ipi-ovn-dualstack
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ipi-ovn-dualstack-techpreview
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ipi-ovn-ipv6-techpreview
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ipi-ovn-serial-ipv4
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ipi-ovn-serial-virtualmedia-1of2
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ipi-ovn-serial-virtualmedia-2of2
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ipi-ovn-techpreview
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ipi-ovn-upgrade
  • periodic-ci-openshift-release-main-nightly-4.22-upgrade-from-stable-4.21-e2e-metal-ipi-ovn-upgrade
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ipi-serial-ovn-ipv6
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ipi-serial-ovn-dualstack
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ipi-upgrade-ovn-ipv6
  • periodic-ci-openshift-release-main-nightly-4.22-upgrade-from-stable-4.21-e2e-metal-ipi-upgrade-ovn-ipv6
  • periodic-ci-openshift-release-main-nightly-4.22-metal-ovn-single-node-recert-cluster-rename
  • periodic-ci-openshift-osde2e-main-nightly-4.22-osd-aws
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-osd-ccs-gcp
  • periodic-ci-openshift-osde2e-main-nightly-4.22-osd-gcp
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-proxy
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ovn-single-node-live-iso
  • periodic-ci-openshift-eng-ocp-qe-perfscale-ci-main-aws-4.22-nightly-x86-payload-control-plane-6nodes
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-telco5g
  • periodic-ci-openshift-release-main-ci-4.22-upgrade-from-stable-4.21-e2e-aws-ovn-upgrade
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-vsphere-ovn
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-vsphere-ovn-csi
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-vsphere-ovn-serial
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-vsphere-ovn-techpreview
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-vsphere-ovn-techpreview-serial
  • periodic-ci-openshift-release-main-ci-4.22-e2e-vsphere-ovn-upgrade
  • periodic-ci-openshift-release-main-ci-4.22-upgrade-from-stable-4.21-e2e-vsphere-ovn-upgrade
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-vsphere-ovn-upi
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-vsphere-ovn-upi-serial
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-vsphere-static-ovn

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/6b9cf080-28f2-11f1-83d6-de0dddf695db-1

@liouk
Member Author

liouk commented Mar 26, 2026

/payload-job periodic-ci-openshift-release-main-ci-4.22-e2e-aws-upgrade-ovn-single-node periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-upgrade-fips periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-upgrade-fips-no-nat-instance periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ipi-ovn-ipv6

@openshift-ci
Contributor

openshift-ci bot commented Mar 26, 2026

@liouk: trigger 4 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

  • periodic-ci-openshift-release-main-ci-4.22-e2e-aws-upgrade-ovn-single-node
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-upgrade-fips
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-upgrade-fips-no-nat-instance
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ipi-ovn-ipv6

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/2eca08f0-2924-11f1-9375-4c2a457bbf12-0

@liouk
Member Author

liouk commented Mar 27, 2026

/payload-job periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-upgrade-fips-no-nat-instance

@openshift-ci
Contributor

openshift-ci bot commented Mar 27, 2026

@liouk: trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

  • periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-upgrade-fips-no-nat-instance

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/c66dd730-29b6-11f1-95d2-69081b0f76a0-0

@liouk
Member Author

liouk commented Mar 27, 2026

/payload-job periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-upgrade-fips-no-nat-instance

@openshift-ci
Contributor

openshift-ci bot commented Mar 27, 2026

@liouk: trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

  • periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-upgrade-fips-no-nat-instance

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/4703eca0-29f2-11f1-9ff4-5d16cfd60a17-0

@liouk
Member Author

liouk commented Mar 31, 2026

/payload-job periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-upgrade-fips-no-nat-instance

@openshift-ci
Contributor

openshift-ci bot commented Mar 31, 2026

@liouk: trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

  • periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-upgrade-fips-no-nat-instance

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/25a085a0-2cd1-11f1-8e72-8668e4fa92ea-0

@liouk
Member Author

liouk commented Apr 7, 2026

After a few retries of the single failing blocking job, I'll cancel the hold; the job failure is unrelated to this PR's changes.

/hold cancel

@openshift-ci openshift-ci bot removed the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Apr 7, 2026
@liouk
Member Author

liouk commented Apr 7, 2026

/retest-required

@openshift-merge-bot
Contributor

/retest-required

Remaining retests: 0 against base HEAD 8aea4fc and 2 for PR HEAD 769deb7 in total

@dusk125

dusk125 commented Apr 7, 2026

/retest-required

@openshift-ci
Contributor

openshift-ci bot commented Apr 8, 2026

@liouk: all tests passed!

Full PR test history. Your PR dashboard.

Details

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

@openshift-merge-bot openshift-merge-bot bot merged commit a6f7add into openshift:main Apr 8, 2026
12 checks passed

Labels

approved Indicates a PR has been approved by an approver from all required OWNERS files. jira/valid-reference Indicates that this PR references a valid Jira ticket of any type. lgtm Indicates that a PR is ready to be merged. verified Signifies that the PR passed pre-merge verification criteria
