
Add PKO migration generated by make pko-migrate #496

Open

nephomaniac wants to merge 3 commits into openshift:master from nephomaniac:test-pko-migrate-experiment

Conversation

@nephomaniac
Contributor

Summary

This PR adds PKO (Package Operator) migration files generated using the standard make pko-migrate boilerplate target, replacing the manually created files in PR #494.

Key improvements over PR #494:

  • ✅ Uses pipelineRef to boilerplate (49 lines) instead of inline pipelineSpec (587 lines)
  • ✅ Correct path-context: deploy_pko (not .)
  • ✅ Uses skip-preflight-cert-check: true (not skip-checks: "true")
  • ✅ Includes missing resources from OLM SelectorSyncSets
  • ✅ Fixes phase ordering (cleanup before deploy)
  • ✅ Supports dual-stack monitoring (Prometheus + RHOBS)
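
The pipelineRef approach in the first bullet can be sketched roughly as follows. This is an illustrative assumption based on the paths discussed in this PR (path-context, Dockerfile.pko, and the boilerplate pipeline location), not the exact generated file:

```yaml
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  name: route-monitor-operator-pko-pull-request
spec:
  params:
    - name: path-context
      value: deploy_pko          # build context for the PKO package image
    - name: dockerfile
      value: build/Dockerfile.pko
  pipelineRef:
    resolver: git                # fetch the shared pipeline from the boilerplate repo
    params:
      - name: url
        value: https://github.com/openshift/boilerplate
      - name: revision
        value: master            # reviewers suggest pinning this to a commit SHA
      - name: pathInRepo
        value: pipelines/docker-build-oci-ta/pipeline.yaml
```

Because the pipeline definition lives in one shared repository, each consuming repo carries only a ~49-line PipelineRun instead of a 587-line inline pipelineSpec.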

Commit-by-Commit Operations

Commit 1: Boilerplate updates

  • Updated local master branch from upstream openshift/route-monitor-operator
  • Created test-pko-migrate-experiment branch
  • Ran make boilerplate-update
  • Committed boilerplate framework updates

Commit 2: Add PKO migration files generated by make pko-migrate

  • Ran make pko-migrate (automated migration script)
  • Generated PKO manifests in deploy_pko/ directory
  • Generated Tekton pipelines (.tekton/*-pko-*.yaml) with pipelineRef to boilerplate
  • Generated Dockerfile.pko and hack/pko/clusterpackage.yaml
  • Committed all generated files

Commit 3: Fix PKO phase ordering and add missing resources

  • Fixed phase ordering in manifest.yaml (cleanup-rbac, cleanup-deploy before rbac, deploy)
  • Added Namespace resource with cluster-monitoring and hypershift monitoring labels
  • Added Service with dual-port configuration (https + metrics ports)
  • Added RHOBS ServiceMonitor for HyperShift management cluster monitoring
  • Committed fixes and additional resources
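
The corrected phase ordering can be illustrated with a trimmed-down PackageManifest sketch. Only the cleanup-rbac/cleanup-deploy/rbac/deploy phase names come from this PR; the remaining fields are assumptions and the committed manifest.yaml carries more detail (probes, configuration schema):

```yaml
apiVersion: manifests.package-operator.run/v1alpha1
kind: PackageManifest
metadata:
  name: route-monitor-operator
spec:
  scopes:
    - Cluster
  phases:
    # Cleanup phases run first so OLM-managed leftovers are removed
    # before the PKO-managed RBAC and Deployment are applied.
    - name: cleanup-rbac
    - name: cleanup-deploy
    - name: rbac
    - name: deploy
```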

Generated Files

Tekton Pipelines:

  • .tekton/route-monitor-operator-pko-pull-request.yaml (49 lines)
  • .tekton/route-monitor-operator-pko-push.yaml (47 lines)

PKO Package:

  • deploy_pko/manifest.yaml (PackageManifest with correct phase ordering)
  • deploy_pko/*.yaml (14 PKO-annotated manifests)
  • deploy_pko/Cleanup-OLM-Job.yaml (OLM cleanup template)

Build and Deployment:

  • build/Dockerfile.pko (PKO package Dockerfile)
  • hack/pko/clusterpackage.yaml (ClusterPackage template)

Additional Resources (from OLM SelectorSyncSets):

  • deploy_pko/Namespace-openshift-route-monitor-operator.yaml
  • deploy_pko/Service-route-monitor-operator-metrics.yaml
  • deploy_pko/ServiceMonitor-route-monitor-operator-rhobs.yaml

Differences from PR #494

Aspect                  | This PR (make pko-migrate)   | PR #494 (Manual)
------------------------|------------------------------|--------------------------------
Pipeline structure      | pipelineRef to boilerplate   | Inline pipelineSpec
Pipeline lines          | 49 lines                     | 587 lines
path-context            | deploy_pko                   | .
skip-checks             | Not present ✅               | "true" ❌ (causes EC failures)
Phase ordering          | Correct ✅                   | Wrong ❌
Missing resources       | Added ✅                     | Missing ❌
Dual-stack monitoring   | Supported ✅                 | Not addressed

Testing Plan

  • Verify PKO package builds successfully
  • Test deployment on Classic ROSA cluster
  • Test deployment on HyperShift management cluster
  • Verify both Prometheus and RHOBS monitoring work
  • Confirm OLM cleanup job removes old resources
  • Run Enterprise Contract validation (should pass)

🤖 Generated with Claude Code

@coderabbitai

coderabbitai bot commented Mar 23, 2026

Walkthrough

This pull request introduces a comprehensive Package Operator-based deployment infrastructure for the route-monitor-operator, including new Tekton pipeline runs, Kubernetes manifests (CRDs, RBAC, deployments), and supporting configuration. Minor updates to base images, team ownership, and documentation are also included.

Changes

Cohort / File(s) Summary
Tekton Pipeline Configuration
.tekton/route-monitor-operator-pko-pull-request.yaml, .tekton/route-monitor-operator-pko-push.yaml
New PipelineRun resources for PR and push events to master branch, each with parameters for Git URL, revision, image tagging, Dockerfile path, and service account configuration referencing the openshift/boilerplate pipeline repository.
Team Ownership Updates
OWNERS_ALIASES, boilerplate/openshift/golang-osd-operator/OWNERS_ALIASES
Removed hectorakemp and added gvnnn to the srep-functional-team-orange alias list in two locations.
Boilerplate Metadata
boilerplate/_data/last-boilerplate-commit
Updated stored boilerplate commit hash from d1e9f2bab452cbc54c99302d171d41cba41a2c48 to 887730ea09ffbc486f178f8e938558e9a559d154.
E2E Test Configuration
boilerplate/openshift/golang-osd-e2e/e2e-template.yml
Added optional SLACK_CHANNEL template parameter and wired it into the test suite YAML configuration.
Python Script Enhancement
boilerplate/openshift/golang-osd-operator/olm_pko_migration.py
Updated annotate_manifests() to handle multi-document YAML using yaml.safe_load_all() and fixed JSON schema structure in get_pko_manifest().
Base Docker Images
build/Dockerfile, build/Dockerfile.olm-registry
Updated base image tag from 9.7-1771346502 to 9.7-1773939694 in UBI9 minimal images.
New PKO Dockerfile
build/Dockerfile.pko
Added new minimal scratch-based Dockerfile with OCI metadata labels for the PKO-based deployment.
PKO RBAC and Configuration
deploy_pko/ServiceAccount-route-monitor-operator-system.yaml, deploy_pko/ClusterRole-route-monitor-operator-manager-role.yaml, deploy_pko/ClusterRoleBinding-route-monitor-operator-manager-rolebinding.yaml, deploy_pko/Role-prometheus-k8s.yaml, deploy_pko/Role-route-monitor-operator-leader-election-role.yaml, deploy_pko/RoleBinding-prometheus-k8s.yaml, deploy_pko/RoleBinding-route-monitor-operator-leader-election-rolebinding.yaml
New RBAC resources defining service accounts, cluster and namespace-scoped roles with permissions for monitoring resources, leadership election, and Prometheus scraping.
PKO Custom Resource Definitions
deploy_pko/CustomResourceDefinition-clusterurlmonitors.monitoring.openshift.io.yaml, deploy_pko/CustomResourceDefinition-routemonitors.monitoring.openshift.io.yaml
New CRDs for ClusterUrlMonitor and RouteMonitor resources in the monitoring.openshift.io API group with associated OpenAPI schemas for spec and status fields.
PKO Deployment Resources
deploy_pko/Namespace-openshift-route-monitor-operator.yaml, deploy_pko/ConfigMap-route-monitor-operator-manager-config.yaml, deploy_pko/Deployment-route-monitor-operator-controller-manager.yaml.gotmpl, deploy_pko/Service-route-monitor-operator-metrics.yaml, deploy_pko/ServiceMonitor-controller-manager-metrics-monitor.yaml, deploy_pko/ServiceMonitor-route-monitor-operator-rhobs.yaml
New namespace, configuration, deployment, service, and monitoring resources for the route-monitor-operator controller manager with health/readiness probes and metrics exposure.
PKO Cleanup and Manifest
deploy_pko/Cleanup-OLM-Job.yaml, deploy_pko/manifest.yaml
New OLM cleanup job with RBAC for deletion of CSV resources, and new PackageManifest defining deployment phases, availability probes, and configuration schema for the PKO package.
PKO Cluster Configuration
hack/pko/clusterpackage.yaml
New OpenShift template defining ClusterPackage and SelectorSyncSet resources for staged route-monitor-operator deployment with image and repository parameters.
E2E Documentation
test/e2e/README.md
Replaced comprehensive documentation with shortened "locally running e2e test suite" guide including steps for building binaries, cluster deployment, and running Ginkgo tests.
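
The multi-document YAML handling described for olm_pko_migration.py above can be sketched in isolation. The snippet below uses PyYAML's safe_load_all on an inline two-document string; the manifests are made-up examples, not files from this repo:

```python
# Sketch of multi-document YAML parsing with yaml.safe_load_all.
# yaml.safe_load returns only the first document in a stream;
# safe_load_all yields every document, which "---"-separated
# manifest files require.
import yaml

MULTI_DOC = """\
apiVersion: v1
kind: ServiceAccount
metadata:
  name: example-sa
---
apiVersion: v1
kind: Service
metadata:
  name: example-svc
"""

# Filter out empty documents (a trailing "---" yields None).
manifests = [doc for doc in yaml.safe_load_all(MULTI_DOC) if doc]
kinds = [m["kind"] for m in manifests]
print(kinds)  # ['ServiceAccount', 'Service']
```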

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~30 minutes


@openshift-ci
Contributor

openshift-ci bot commented Mar 23, 2026

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: nephomaniac

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci openshift-ci bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Mar 23, 2026
@openshift-ci
Contributor

openshift-ci bot commented Mar 24, 2026

@nephomaniac: all tests passed!

Full PR test history. Your PR dashboard.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

@codecov-commenter

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 56.26%. Comparing base (0b95ccc) to head (3429d82).


@@           Coverage Diff           @@
##           master     #496   +/-   ##
=======================================
  Coverage   56.26%   56.26%           
=======================================
  Files          31       31           
  Lines        2851     2851           
=======================================
  Hits         1604     1604           
  Misses       1137     1137           
  Partials      110      110           

@nephomaniac
Contributor Author

/hold

@openshift-ci openshift-ci bot added the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Mar 24, 2026

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 13

🧹 Nitpick comments (5)
.tekton/route-monitor-operator-pko-push.yaml (1)

43-46: Pipeline reference uses unpinned master branch.

Using revision: master for the boilerplate pipeline means builds may behave differently over time as upstream changes. Consider pinning to a specific commit SHA or tag for reproducible builds.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.tekton/route-monitor-operator-pko-push.yaml around lines 43 - 46, The
pipeline step is referencing the boilerplate using an unpinned branch (revision:
master); replace the value of the revision field with a specific immutable
identifier (a commit SHA or a release tag) so the pipeline referenced by
pathInRepo (pipelines/docker-build-oci-ta/pipeline.yaml) is reproducible; update
the revision value to the chosen SHA or tag and verify the referenced pipeline
works with that revision.
deploy_pko/Cleanup-OLM-Job.yaml (1)

66-74: Consider adding ttlSecondsAfterFinished for automatic cleanup.

Without ttlSecondsAfterFinished, completed Job pods remain until manually deleted. For a one-time migration cleanup job, setting a TTL prevents stale resources from accumulating.

♻️ Proposed addition
 spec:
+  ttlSecondsAfterFinished: 86400  # Clean up after 24 hours
   template:
     metadata:
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@deploy_pko/Cleanup-OLM-Job.yaml` around lines 66 - 74, Add a
ttlSecondsAfterFinished field to the Job spec so completed pods are
garbage-collected automatically; specifically, under the Job manifest's
top-level spec (alongside existing keys like template, serviceAccountName,
priorityClassName, restartPolicy) add ttlSecondsAfterFinished with an
appropriate integer value (e.g., 3600 for one hour or another desired TTL) to
ensure the one-time olm-cleanup Job and its pods are removed after completion.
deploy_pko/ClusterRole-route-monitor-operator-manager-role.yaml (2)

64-79: Duplicate hostedcontrolplanes rule blocks.

Lines 64-71 and 72-79 define identical rules for hostedcontrolplanes with the same verbs (get, list, watch). This duplication is redundant and should be consolidated. Note that lines 201-210 define a superset of permissions (get, list, watch, update, patch) for the same resource, which is the actual working rule.

♻️ Proposed fix to remove duplicate
 - apiGroups:
   - hypershift.openshift.io
   resources:
   - hostedcontrolplanes
   verbs:
   - get
   - list
   - watch
-- apiGroups:
-  - hypershift.openshift.io
-  resources:
-  - hostedcontrolplanes
-  verbs:
-  - get
-  - list
-  - watch
 - apiGroups:
   - monitoring.coreos.com
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@deploy_pko/ClusterRole-route-monitor-operator-manager-role.yaml` around lines
64 - 79, Remove the duplicate rule block that grants verbs [get, list, watch] on
resource "hostedcontrolplanes" in the ClusterRole (the repeated apiGroups:
hypershift.openshift.io / resources: hostedcontrolplanes / verbs: get,list,watch
block); consolidate by keeping only the single, correct rule (or rely on the
existing superset rule that includes update and patch), ensuring there is just
one entry for hostedcontrolplanes so permissions are not duplicated.

18-28: Static analysis flags on secrets and services access are expected for this operator.

Trivy flags the secrets read access and services management permissions. These appear necessary for the operator's functionality:

  • Secrets: The operator likely needs to read secrets for OIDC authentication configuration
  • Services: The operator creates/manages blackbox-exporter services for monitoring

The permissions follow the pattern in the existing deploy/ manifests. However, consider scoping the apiGroups: ['*'] rules to specific API groups ("" for core) if possible, to follow least-privilege principles more strictly.

♻️ Consider scoping apiGroups
 - apiGroups:
-  - '*'
+  - ""
   resources:
   - services
   - configmaps
 - apiGroups:
-  - '*'
+  - ""
   resources:
   - secrets

Also applies to: 230-237

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@deploy_pko/ClusterRole-route-monitor-operator-manager-role.yaml` around lines
18 - 28, The ClusterRole currently uses apiGroups: ['*'] for core resources
(resources: services, configmaps, secrets) which is overly broad; change the
rule(s) that mention apiGroups: '*' to explicitly scope core resources by using
apiGroups: [''] (the core API group) for the rule containing resources:
services, configmaps, secrets and keep the same verbs (create, delete, get,
list, watch); if the operator truly needs non-core resources, add separate rules
with their specific API group names rather than '*'—apply the same replacement
for the other identical rule block that grants services/secrets access (the
block listing resources services, configmaps, secrets and verbs
create/delete/get/list/watch).
deploy_pko/Deployment-route-monitor-operator-controller-manager.yaml.gotmpl (1)

38-47: Missing --probe-tenant argument.

The deployment template includes all flags from main.go except --probe-tenant. While the code has a default value of "hcp", consider adding it explicitly for configuration parity and to allow customization via environment variable like the other RHOBS-related flags.

♻️ Proposed addition
         - --probe-api-url=$(PROBE_API_URL)
+        - --probe-tenant=$(PROBE_TENANT)
         - --oidc-client-id=$(OIDC_CLIENT_ID)

And add the corresponding environment variable:

         - name: PROBE_API_URL
           value: ''
+        - name: PROBE_TENANT
+          value: 'hcp'
         - name: OIDC_CLIENT_ID
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@deploy_pko/Deployment-route-monitor-operator-controller-manager.yaml.gotmpl`
around lines 38 - 47, Add the missing --probe-tenant flag to the container args
and expose it via an environment variable like the other RHOBS flags: add an
args entry --probe-tenant=$(PROBE_TENANT) (matching the pattern used for
--blackbox-image/--probe-api-url) and add a corresponding env var definition
PROBE_TENANT with a default value of "hcp" so this can be customized via
deployment values; ensure the flag name (--probe-tenant), env var name
(PROBE_TENANT) and the default "hcp" are used consistently.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@boilerplate/openshift/golang-osd-operator/olm_pko_migration.py`:
- Around line 550-567: The manifest switch in annotate_manifests never handles
Namespace resources, so add an explicit branch for kind == "Namespace" that
calls annotate(manifest, PHASE_NAMESPACE) and appends the result to annotated
(similar to other branches); update the block that currently checks kinds
("CustomResourceDefinition", "ClusterRole", "ServiceAccount", etc.) to include
an elif for "Namespace" before the final else so Namespace resources get the
PHASE_NAMESPACE annotation and don't fall through to the "Unhandled type" case.

In `@build/Dockerfile.pko`:
- Around line 1-13: The PKO Dockerfile currently builds from scratch and copies
artifacts but does not set a non-root user; modify the Dockerfile by adding a
USER 65532 directive after the existing COPY * /package/ line (so the image runs
as UID 65532 instead of root), ensuring the container meets non-root security
policies.

In `@deploy_pko/Cleanup-OLM-Job.yaml`:
- Around line 75-97: The container "delete-csv" is missing an explicit
securityContext; add a securityContext block under the container spec for
delete-csv to harden it: set runAsNonRoot: true and a non-zero runAsUser (e.g.,
1000), set allowPrivilegeEscalation: false, set readOnlyRootFilesystem: true,
drop all Linux capabilities (capabilities: drop: ["ALL"]), and set
seccompProfile: type: RuntimeDefault (and/or appArmor if applicable). Apply
these settings to the container spec for the Job (container name delete-csv) to
provide defense-in-depth even when the pod is scheduled with restricted SCC.

In `@deploy_pko/ConfigMap-route-monitor-operator-manager-config.yaml`:
- Around line 3-6: Delete the unused ConfigMap resource named
route-monitor-operator-manager-config (the YAML containing
controller_manager_config.yaml with apiVersion
controller-runtime.sigs.k8s.io/v1alpha1 and kind ControllerManagerConfig)
because it is not mounted by any Deployment and the manager’s LeaderElectionID
is hardcoded in main.go ("2793210b.openshift.io"); remove the ConfigMap file
from the repository and any references to it to avoid dead manifests, and if you
find any related code paths expecting controller_manager_config.yaml, either
remove those code paths or update them to use a supported config mechanism
(controller-runtime ≥v0.18) before committing.

In
`@deploy_pko/CustomResourceDefinition-clusterurlmonitors.monitoring.openshift.io.yaml`:
- Around line 106-110: The CRD description for serviceMonitorRef is still the
placeholder text; locate the Go type that defines ServiceMonitorRef (e.g., the
ServiceMonitorRef field on the ClusterURLMonitor spec/status struct, look for
the field named ServiceMonitorRef or `serviceMonitorRef` in your API types) and
replace the placeholder comment/marker with a clear description of what the
field represents and any constraints; then regenerate the CRD artifacts (run the
project's codegen/CRD generation target, e.g., make or controller-gen as your
repo uses) so the updated description is emitted into the YAML.
- Around line 65-68: Update the placeholder description for the `prefix` field
in the ClusterUrlMonitor CRD by editing the Go type definition in
ClusterUrlMonitor_types.go: locate the `prefix` field on the
ClusterUrlMonitorSpec (or the struct that defines the CRD field), replace the
boilerplate comment/`//` tag with a meaningful description, then regenerate the
CRD so CustomResourceDefinition-clusterurlmonitors.monitoring.openshift.io.yaml
is updated (the YAML's `prefix` description should no longer contain the "Foo is
an example..." text).

In
`@deploy_pko/CustomResourceDefinition-routemonitors.monitoring.openshift.io.yaml`:
- Around line 36-64: The CRD spec is missing the spec fields
insecureSkipTLSVerify (bool) and serviceMonitorType (string) defined on the
RouteMonitor Go struct; update the YAML schema under spec.properties to add
properties for insecureSkipTLSVerify (type: boolean, with a clear description
matching the Go doc) and serviceMonitorType (type: string, with description and
any kubebuilder/validation constraints reflected from
api/v1alpha1/routemonitor_types.go), so that the controller code in
routemonitor_supplement.go and hostedcontrolplane.go receives user-configured
values instead of zero-values.

In `@deploy_pko/manifest.yaml`:
- Around line 32-35: The manifest's image field currently sets default: "None"
(a string) which can cause deployment errors; update the image schema (the image
field under description/type/default) to either remove the default and add
required: true, or set default: "" (empty string) so misconfiguration is obvious
— choose one approach and apply it consistently in the image field definition.
- Around line 5-6: The annotations block contains a placeholder key/value
("annotations" with "name: annotation"); replace the placeholder with a
meaningful annotation value or remove the "name: annotation" entry entirely so
the manifest doesn't contain boilerplate—look for the annotations map and the
"name: annotation" entry and either populate it with the intended annotation
key/value or delete that line.

In `@deploy_pko/ServiceMonitor-controller-manager-metrics-monitor.yaml`:
- Around line 15-17: ServiceMonitor's selector matchLabels uses control-plane:
controller-manager which doesn't exist on the target Service
route-monitor-operator-metrics (that Service only has
app.kubernetes.io/component: route-monitor-operator) and the ServiceMonitor
lacks a namespaceSelector so it only looks in its own namespace; fix by either
adding the missing metadata label control-plane: controller-manager to the
Service resource (route-monitor-operator-metrics) or update the ServiceMonitor
(ServiceMonitor) selector to match app.kubernetes.io/component:
route-monitor-operator and add a namespaceSelector that includes the
openshift-route-monitor-operator namespace so Prometheus can discover the
service across namespaces.

In `@hack/pko/clusterpackage.yaml`:
- Around line 12-13: The IMAGE_DIGEST parameter is declared as required but
never used; either remove the IMAGE_DIGEST parameter from the parameters block
or update the deployment/template image references to consume it (e.g., append
"@${IMAGE_DIGEST}" or interpolate IMAGE_DIGEST into image fields) so the
parameter is actually referenced by the template objects (search for
IMAGE_DIGEST and image fields in the YAML and update the image strings or delete
the parameter declaration).
- Around line 30-34: The YAML indentation for the cluster selector is incorrect:
move the matchLabels block to be nested under clusterDeploymentSelector so the
structure reads clusterDeploymentSelector: matchLabels: {
api.openshift.com/managed: 'true' } and ensure resourceApplyMode remains a
sibling of clusterDeploymentSelector; update the entries around the symbols
clusterDeploymentSelector, matchLabels, and resourceApplyMode accordingly so the
YAML parses properly.

In `@test/e2e/README.md`:
- Line 7: The README currently instructs to run "go install
github.com/onsi/ginkgo/ginkgo@latest" which installs the v1 binary while the
tests import "github.com/onsi/ginkgo/v2"; update the install command to use the
v2 module path (e.g., use "go install github.com/onsi/ginkgo/v2/ginkgo@latest")
so the installed ginkgo matches the imported module version used by the E2E
tests.
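
The container hardening requested for the delete-csv container in Cleanup-OLM-Job.yaml above would look roughly like this fragment. The runAsUser value and the exact field set are the reviewer's suggestions, not the committed manifest:

```yaml
# securityContext fragment for the delete-csv container (suggested values)
securityContext:
  runAsNonRoot: true
  runAsUser: 1000                 # any non-zero UID satisfies runAsNonRoot
  allowPrivilegeEscalation: false
  readOnlyRootFilesystem: true
  capabilities:
    drop:
      - ALL
  seccompProfile:
    type: RuntimeDefault
```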


ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 11705ebd-cf42-4997-b41d-fe70acc19bf6

📥 Commits

Reviewing files that changed from the base of the PR and between 0b95ccc and 3429d82.

📒 Files selected for processing (29)
  • .tekton/route-monitor-operator-pko-pull-request.yaml
  • .tekton/route-monitor-operator-pko-push.yaml
  • OWNERS_ALIASES
  • boilerplate/_data/last-boilerplate-commit
  • boilerplate/openshift/golang-osd-e2e/e2e-template.yml
  • boilerplate/openshift/golang-osd-operator/OWNERS_ALIASES
  • boilerplate/openshift/golang-osd-operator/olm_pko_migration.py
  • build/Dockerfile
  • build/Dockerfile.olm-registry
  • build/Dockerfile.pko
  • deploy_pko/Cleanup-OLM-Job.yaml
  • deploy_pko/ClusterRole-route-monitor-operator-manager-role.yaml
  • deploy_pko/ClusterRoleBinding-route-monitor-operator-manager-rolebinding.yaml
  • deploy_pko/ConfigMap-route-monitor-operator-manager-config.yaml
  • deploy_pko/CustomResourceDefinition-clusterurlmonitors.monitoring.openshift.io.yaml
  • deploy_pko/CustomResourceDefinition-routemonitors.monitoring.openshift.io.yaml
  • deploy_pko/Deployment-route-monitor-operator-controller-manager.yaml.gotmpl
  • deploy_pko/Namespace-openshift-route-monitor-operator.yaml
  • deploy_pko/Role-prometheus-k8s.yaml
  • deploy_pko/Role-route-monitor-operator-leader-election-role.yaml
  • deploy_pko/RoleBinding-prometheus-k8s.yaml
  • deploy_pko/RoleBinding-route-monitor-operator-leader-election-rolebinding.yaml
  • deploy_pko/Service-route-monitor-operator-metrics.yaml
  • deploy_pko/ServiceAccount-route-monitor-operator-system.yaml
  • deploy_pko/ServiceMonitor-controller-manager-metrics-monitor.yaml
  • deploy_pko/ServiceMonitor-route-monitor-operator-rhobs.yaml
  • deploy_pko/manifest.yaml
  • hack/pko/clusterpackage.yaml
  • test/e2e/README.md

Comment on lines +550 to +567
if kind == "CustomResourceDefinition":
    annotated.append(annotate(manifest, PHASE_CRDS))
elif kind in ["ClusterRole", "ClusterRoleBinding"]:
    annotated.append(annotate(manifest, PHASE_RBAC))
elif kind in ["Role", "RoleBinding"]:
    annotated.append(annotate(manifest, PHASE_RBAC))
elif kind == "ServiceAccount":
    annotated.append(annotate(manifest, PHASE_RBAC))
elif kind == "Service":
    annotated.append(annotate(manifest, PHASE_DEPLOY))
elif kind == "Deployment":
    manifest = annotate(manifest, PHASE_DEPLOY)
    manifest = set_image_template(manifest)
    annotated.append(manifest)
elif kind == "ServiceMonitor":
    annotated.append(annotate(manifest, PHASE_DEPLOY))
else:
    print(f"Unhandled type: {kind}")
Copy link
Copy Markdown


⚠️ Potential issue | 🟠 Major

Add explicit Namespace phase annotation handling.

PHASE_NAMESPACE is defined and included in PackageManifest, but annotate_manifests() never assigns it. Namespace resources currently fall into “Unhandled type”, which can break intended apply ordering.

Proposed fix
                 elif kind == "ServiceAccount":
                     annotated.append(annotate(manifest, PHASE_RBAC))
+                elif kind == "Namespace":
+                    annotated.append(annotate(manifest, PHASE_NAMESPACE))
                 elif kind == "Service":
                     annotated.append(annotate(manifest, PHASE_DEPLOY))
📝 Committable suggestion (with the Namespace branch added):

if kind == "CustomResourceDefinition":
    annotated.append(annotate(manifest, PHASE_CRDS))
elif kind in ["ClusterRole", "ClusterRoleBinding"]:
    annotated.append(annotate(manifest, PHASE_RBAC))
elif kind in ["Role", "RoleBinding"]:
    annotated.append(annotate(manifest, PHASE_RBAC))
elif kind == "ServiceAccount":
    annotated.append(annotate(manifest, PHASE_RBAC))
elif kind == "Namespace":
    annotated.append(annotate(manifest, PHASE_NAMESPACE))
elif kind == "Service":
    annotated.append(annotate(manifest, PHASE_DEPLOY))
elif kind == "Deployment":
    manifest = annotate(manifest, PHASE_DEPLOY)
    manifest = set_image_template(manifest)
    annotated.append(manifest)
elif kind == "ServiceMonitor":
    annotated.append(annotate(manifest, PHASE_DEPLOY))
else:
    print(f"Unhandled type: {kind}")
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@boilerplate/openshift/golang-osd-operator/olm_pko_migration.py` around lines
550 - 567, The manifest switch in annotate_manifests never handles Namespace
resources, so add an explicit branch for kind == "Namespace" that calls
annotate(manifest, PHASE_NAMESPACE) and appends the result to annotated (similar
to other branches); update the block that currently checks kinds
("CustomResourceDefinition", "ClusterRole", "ServiceAccount", etc.) to include
an elif for "Namespace" before the final else so Namespace resources get the
PHASE_NAMESPACE annotation and don't fall through to the "Unhandled type" case.
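
For reference, the kind-to-phase dispatch described above can be sketched as a small lookup table. This is a hypothetical simplification: the real `annotate_manifests()` in `olm_pko_migration.py` uses an if/elif chain with helpers like `annotate()` and `set_image_template()`, and the `PHASE_*` values below are placeholders, not the script's actual constants.

```python
# Hypothetical sketch of the kind-to-phase mapping, including the
# Namespace branch the review proposes. PHASE_* values are placeholders.
PHASE_NAMESPACE = "namespace"
PHASE_CRDS = "crds"
PHASE_RBAC = "rbac"
PHASE_DEPLOY = "deploy"

PHASE_FOR_KIND = {
    "Namespace": PHASE_NAMESPACE,
    "CustomResourceDefinition": PHASE_CRDS,
    "ClusterRole": PHASE_RBAC,
    "ClusterRoleBinding": PHASE_RBAC,
    "Role": PHASE_RBAC,
    "RoleBinding": PHASE_RBAC,
    "ServiceAccount": PHASE_RBAC,
    "Service": PHASE_DEPLOY,
    "Deployment": PHASE_DEPLOY,
    "ServiceMonitor": PHASE_DEPLOY,
}

def phase_for(kind: str):
    """Return the apply phase for a manifest kind, or None if unhandled."""
    return PHASE_FOR_KIND.get(kind)
```

With the `"Namespace"` entry present, Namespace manifests resolve to a phase instead of falling through to the unhandled case.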

Comment on lines +1 to +13
```dockerfile
FROM scratch

LABEL com.redhat.component="openshift-route-monitor-operator" \
      io.k8s.description="route-monitor-operator Operator for OpenShift Dedicated" \
      description="route-monitor-operator Operator for OpenShift Dedicated" \
      distribution-scope="public" \
      name="openshift/route-monitor-operator" \
      url="https://github.com/openshift/route-monitor-operator" \
      vendor="Red Hat, Inc." \
      release="v0.0.0" \
      version="v0.0.0"

COPY * /package/
```

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

```shell
#!/bin/bash
set -euo pipefail
nl -ba build/Dockerfile.pko | sed -n '1,80p'
rg -nP '^\s*USER\b' build/Dockerfile.pko || true
```

Repository: openshift/route-monitor-operator

Length of output: 118

🏁 Script executed:

```shell
#!/bin/bash
set -euo pipefail
# Read the file with line numbers using cat -n which should be available
cat -n build/Dockerfile.pko
# Search for USER directive
echo "=== Searching for USER directive ==="
rg -n 'USER' build/Dockerfile.pko || echo "No USER directive found"
```

Repository: openshift/route-monitor-operator

Length of output: 715


Set an explicit non-root USER in the PKO image.

Without USER, the container defaults to root, which fails security policy checks and weakens runtime posture. Add USER 65532 after line 13 to run as a non-root user.
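
Applied to the end of the file, the suggested change would look like this (a sketch — UID 65532 is the conventional "nonroot" user, and any non-zero numeric UID satisfies the policy check):

```dockerfile
COPY * /package/

# Run as a non-root user. A numeric UID lets Kubernetes enforce
# runAsNonRoot even though a scratch image has no /etc/passwd.
USER 65532
```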

🧰 Tools
🪛 Trivy (0.69.3)

[error] 1-1: Image user should not be 'root'

Specify at least 1 USER command in Dockerfile with non-root user as argument

Rule: DS-0002

Learn more

(IaC/Dockerfile)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@build/Dockerfile.pko` around lines 1 - 13, The PKO Dockerfile currently
builds from scratch and copies artifacts but does not set a non-root user;
modify the Dockerfile by adding a USER 65532 directive after the existing COPY *
/package/ line (so the image runs as UID 65532 instead of root), ensuring the
container meets non-root security policies.

Comment on lines +75 to +97
```yaml
      containers:
        - name: delete-csv
          image: image-registry.openshift-image-registry.svc:5000/openshift/cli:latest
          imagePullPolicy: Always
          command:
            - sh
            - -c
            - |
              #!/bin/sh
              set -euxo pipefail
              # CUSTOMIZE: Update the label selector for your operator
              # Example pattern: operators.coreos.com/OPERATOR_NAME.NAMESPACE
              oc -n openshift-route-monitor-operator delete csv -l "operators.coreos.com/route-monitor-operator.openshift-route-monitor-operator" || true

              # CUSTOMIZE: Add any additional cleanup logic here
              # Examples:
              # - Delete subscriptions
              # - Delete operator groups
              # - Clean up custom resources
          resources:
            requests:
              cpu: 100m
              memory: 100Mi
```

⚠️ Potential issue | 🟡 Minor

Add explicit security context to harden the Job container.

Static analysis correctly flags that the container lacks security hardening. While the pod requests restricted-v2 SCC, explicitly setting the container's securityContext ensures defense-in-depth and makes the security posture clear.

🛡️ Proposed fix to add security context

```diff
       containers:
         - name: delete-csv
           image: image-registry.openshift-image-registry.svc:5000/openshift/cli:latest
           imagePullPolicy: Always
           command:
             - sh
             - -c
             - |
               #!/bin/sh
               set -euxo pipefail
               # CUSTOMIZE: Update the label selector for your operator
               # Example pattern: operators.coreos.com/OPERATOR_NAME.NAMESPACE
               oc -n openshift-route-monitor-operator delete csv -l "operators.coreos.com/route-monitor-operator.openshift-route-monitor-operator" || true

               # CUSTOMIZE: Add any additional cleanup logic here
               # Examples:
               # - Delete subscriptions
               # - Delete operator groups
               # - Clean up custom resources
           resources:
             requests:
               cpu: 100m
               memory: 100Mi
+          securityContext:
+            allowPrivilegeEscalation: false
+            readOnlyRootFilesystem: true
+            runAsNonRoot: true
+            capabilities:
+              drop:
+                - ALL
```
🧰 Tools
🪛 Checkov (3.2.508)

[medium] 58-97: Containers should not run with allowPrivilegeEscalation

(CKV_K8S_20)


[medium] 58-97: Minimize the admission of root containers

(CKV_K8S_23)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@deploy_pko/Cleanup-OLM-Job.yaml` around lines 75 - 97, The container
"delete-csv" is missing an explicit securityContext; add a securityContext block
under the container spec for delete-csv to harden it: set runAsNonRoot: true and
a non-zero runAsUser (e.g., 1000), set allowPrivilegeEscalation: false, set
readOnlyRootFilesystem: true, drop all Linux capabilities (capabilities: drop:
["ALL"]), and set seccompProfile: type: RuntimeDefault (and/or appArmor if
applicable). Apply these settings to the container spec for the Job (container
name delete-csv) to provide defense-in-depth even when the pod is scheduled with
restricted SCC.

Comment on lines +3 to +6
```yaml
  controller_manager_config.yaml: "apiVersion: controller-runtime.sigs.k8s.io/v1alpha1\n\
    kind: ControllerManagerConfig\nhealth:\n healthProbeBindAddress: :8081\nmetrics:\n\
    \ bindAddress: :8080\nwebhook:\n port: 9443\nleaderElection:\n leaderElect:\
    \ true\n resourceName: 431ceb26.my.domain\n"
```

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

```shell
#!/bin/bash
# Check if the ConfigMap is mounted in the Deployment template
rg -n "route-monitor-operator-manager-config" --glob "*.yaml*" deploy_pko/
```

Repository: openshift/route-monitor-operator

Length of output: 188

🏁 Script executed:

```shell
# Search for deployment files that might mount this ConfigMap
find deploy_pko -name "*Deployment*" -o -name "*deployment*" | head -20
```

Repository: openshift/route-monitor-operator

Length of output: 151

🏁 Script executed:

```shell
# Check for volume references to the ConfigMap in all YAML files
rg -B5 -A5 "route-monitor-operator-manager-config" deploy_pko/ --glob "*.yaml*"
```

Repository: openshift/route-monitor-operator

Length of output: 745

🏁 Script executed:

```shell
# Find and examine main.go for LeaderElectionID
fd "main.go" --type f | head -5
```

Repository: openshift/route-monitor-operator

Length of output: 83

🏁 Script executed:

```shell
# Check the Deployment file to see if it mounts the ConfigMap
cat deploy_pko/Deployment-route-monitor-operator-controller-manager.yaml.gotmpl | head -100
```

Repository: openshift/route-monitor-operator

Length of output: 3092

🏁 Script executed:

```shell
# Extract relevant parts from main.go around LeaderElectionID
rg -n "LeaderElectionID|leaderElection" main.go -A 3 -B 3
```

Repository: openshift/route-monitor-operator

Length of output: 288

🏁 Script executed:

```shell
# Check main.go lines around 267
sed -n '260,280p' main.go
```

Repository: openshift/route-monitor-operator

Length of output: 770

🏁 Script executed:

```shell
# Check the full ConfigMap file to see the API version
cat deploy_pko/ConfigMap-route-monitor-operator-manager-config.yaml
```

Repository: openshift/route-monitor-operator

Length of output: 581

🏁 Script executed:

```shell
# Search for any other references to this ConfigMap in the codebase
rg "route-monitor-operator-manager-config" . --type yaml --type yml
```

Repository: openshift/route-monitor-operator

Length of output: 103


🌐 Web query:

controller-runtime ControllerManagerConfig v1alpha1 v1 deprecation

💡 Result:

The ControllerManagerConfig in sigs.k8s.io/controller-runtime/pkg/config/v1alpha1 (specifically types like ControllerManagerConfiguration and ControllerManagerConfigurationSpec) was deprecated starting in controller-runtime v0.14 (PR #2165, merged Feb 2023) and more explicitly in v0.15 via PR #2149 (merged Jan 2023). It remains deprecated in current versions up to at least v0.18.x and v0.23.x as of 2026. Deprecation was announced with warnings to migrate to custom config implementations or use manager.Options fields directly (e.g., manager.Options.Controller for controller configs, introduced around v0.15 in pkg/config). The package is marked for removal in a future release (tracked in issue #895 since 2020), but removal occurred in v0.18.0 via PR #2648 (merged Apr 2024), which removed deprecated manager options including v1alpha1.ControllerManagerConfiguration. As of today's date (2026-03-24), in latest releases like v0.23.3, the pkg/config/v1alpha1 package no longer exists or is fully removed—pkg.go.dev shows it only for older versions like v0.17.6. Users must implement their own config CRDs embedding necessary fields (as shown in Kubebuilder docs) or use runtime config options like ctrl.Options{LeaderElection: ..., Metrics: ...}.


Remove unused ConfigMap—it is not mounted in the Deployment.

The ConfigMap route-monitor-operator-manager-config is not referenced or mounted by any Deployment. Since main.go hardcodes LeaderElectionID: "2793210b.openshift.io" directly in the manager options (line 265), this ConfigMap serves no purpose and should be deleted.

The embedded v1alpha1 API is also deprecated (removed in controller-runtime v0.18.0+), but this is moot since the ConfigMap is unused.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@deploy_pko/ConfigMap-route-monitor-operator-manager-config.yaml` around lines
3 - 6, Delete the unused ConfigMap resource named
route-monitor-operator-manager-config (the YAML containing
controller_manager_config.yaml with apiVersion
controller-runtime.sigs.k8s.io/v1alpha1 and kind ControllerManagerConfig)
because it is not mounted by any Deployment and the manager’s LeaderElectionID
is hardcoded in main.go ("2793210b.openshift.io"); remove the ConfigMap file
from the repository and any references to it to avoid dead manifests, and if you
find any related code paths expecting controller_manager_config.yaml, either
remove those code paths or update them to use a supported config mechanism
(controller-runtime ≥v0.18) before committing.

Comment on lines +65 to +68
```yaml
prefix:
  description: Foo is an example field of ClusterUrlMonitor. Edit ClusterUrlMonitor_types.go
    to remove/update
  type: string
```

⚠️ Potential issue | 🟡 Minor

Placeholder description in CRD.

The prefix field description contains boilerplate: "Foo is an example field of ClusterUrlMonitor. Edit ClusterUrlMonitor_types.go to remove/update". Update the description in the Go source and regenerate the CRD.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@deploy_pko/CustomResourceDefinition-clusterurlmonitors.monitoring.openshift.io.yaml`
around lines 65 - 68, Update the placeholder description for the `prefix` field
in the ClusterUrlMonitor CRD by editing the Go type definition in
ClusterUrlMonitor_types.go: locate the `prefix` field on the
ClusterUrlMonitorSpec (or the struct that defines the CRD field), replace the
boilerplate comment/`//` tag with a meaningful description, then regenerate the
CRD so CustomResourceDefinition-clusterurlmonitors.monitoring.openshift.io.yaml
is updated (the YAML's `prefix` description should no longer contain the "Foo is
an example..." text).

Comment on lines +32 to +35
```yaml
image:
  description: Operator image to deploy
  type: string
  default: None
```

⚠️ Potential issue | 🟡 Minor

Image default is string "None".

The default value None (string) may cause deployment failures if not explicitly overridden. Consider whether required: true without a default would be more appropriate, or use an empty string to make misconfiguration more obvious.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@deploy_pko/manifest.yaml` around lines 32 - 35, The manifest's image field
currently sets default: "None" (a string) which can cause deployment errors;
update the image schema (the image field under description/type/default) to
either remove the default and add required: true, or set default: "" (empty
string) so misconfiguration is obvious — choose one approach and apply it
consistently in the image field definition.

Comment on lines +15 to +17
```yaml
selector:
  matchLabels:
    control-plane: controller-manager
```

⚠️ Potential issue | 🟠 Major

Service label mismatch will break Prometheus scraping.

The ServiceMonitor selects services with control-plane: controller-manager, but the target Service (route-monitor-operator-metrics) only has app.kubernetes.io/component: route-monitor-operator in its metadata labels. The control-plane: controller-manager appears in the Service's spec.selector (for pod targeting), not in its metadata labels.

Additionally, this ServiceMonitor is in openshift-monitoring namespace but needs to discover a Service in openshift-route-monitor-operator. Without a namespaceSelector, the ServiceMonitor will only look in its own namespace.

🐛 Proposed fix: Add missing label to Service or adjust ServiceMonitor selector

Option 1: Add label to Service (in deploy_pko/Service-route-monitor-operator-metrics.yaml):

```diff
 metadata:
   name: route-monitor-operator-metrics
   namespace: openshift-route-monitor-operator
   labels:
     app.kubernetes.io/component: route-monitor-operator
+    control-plane: controller-manager
```

Option 2: Add namespaceSelector and fix selector (in this file):

```diff
 spec:
   endpoints:
   - path: /metrics
     port: https
+  namespaceSelector:
+    matchNames:
+    - openshift-route-monitor-operator
   selector:
     matchLabels:
-      control-plane: controller-manager
+      app.kubernetes.io/component: route-monitor-operator
```
📝 Committable suggestion

Suggested change

```yaml
spec:
  endpoints:
  - path: /metrics
    port: https
  namespaceSelector:
    matchNames:
    - openshift-route-monitor-operator
  selector:
    matchLabels:
      app.kubernetes.io/component: route-monitor-operator
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@deploy_pko/ServiceMonitor-controller-manager-metrics-monitor.yaml` around
lines 15 - 17, ServiceMonitor's selector matchLabels uses control-plane:
controller-manager which doesn't exist on the target Service
route-monitor-operator-metrics (that Service only has
app.kubernetes.io/component: route-monitor-operator) and the ServiceMonitor
lacks a namespaceSelector so it only looks in its own namespace; fix by either
adding the missing metadata label control-plane: controller-manager to the
Service resource (route-monitor-operator-metrics) or update the ServiceMonitor
(ServiceMonitor) selector to match app.kubernetes.io/component:
route-monitor-operator and add a namespaceSelector that includes the
openshift-route-monitor-operator namespace so Prometheus can discover the
service across namespaces.

Comment on lines +12 to +13
- name: IMAGE_DIGEST
required: true

⚠️ Potential issue | 🟡 Minor

Unused required parameter IMAGE_DIGEST.

IMAGE_DIGEST is marked required: true but is never referenced in the template objects. Either use it (e.g., for image pinning with @${IMAGE_DIGEST}) or remove the parameter.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@hack/pko/clusterpackage.yaml` around lines 12 - 13, The IMAGE_DIGEST
parameter is declared as required but never used; either remove the IMAGE_DIGEST
parameter from the parameters block or update the deployment/template image
references to consume it (e.g., append "@${IMAGE_DIGEST}" or interpolate
IMAGE_DIGEST into image fields) so the parameter is actually referenced by the
template objects (search for IMAGE_DIGEST and image fields in the YAML and
update the image strings or delete the parameter declaration).
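
A hedged sketch of the second option — consuming IMAGE_DIGEST to pin the image by digest. Everything here beyond the IMAGE_DIGEST parameter itself (the REGISTRY_IMG parameter, the ClusterPackage layout, field names) is an assumption for illustration, not taken from the actual clusterpackage.yaml:

```yaml
parameters:
  - name: REGISTRY_IMG      # hypothetical: repository part of the image reference
    required: true
  - name: IMAGE_DIGEST
    required: true
objects:
  - apiVersion: package-operator.run/v1alpha1
    kind: ClusterPackage
    metadata:
      name: route-monitor-operator
    spec:
      # Digest pinning: image@sha256:... is immutable, unlike a mutable tag
      image: ${REGISTRY_IMG}@${IMAGE_DIGEST}
```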

Comment on lines +30 to +34
```yaml
    spec:
      clusterDeploymentSelector:
      matchLabels:
        api.openshift.com/managed: 'true'
      resourceApplyMode: Sync
```
@coderabbitai coderabbitai bot Mar 24, 2026

⚠️ Potential issue | 🔴 Critical

Malformed YAML: matchLabels not nested under clusterDeploymentSelector.

The indentation is incorrect—matchLabels should be a child of clusterDeploymentSelector:

Proposed fix

```diff
     spec:
       clusterDeploymentSelector:
-      matchLabels:
-        api.openshift.com/managed: 'true'
+        matchLabels:
+          api.openshift.com/managed: 'true'
       resourceApplyMode: Sync
```
📝 Committable suggestion

Suggested change

```yaml
    spec:
      clusterDeploymentSelector:
        matchLabels:
          api.openshift.com/managed: 'true'
      resourceApplyMode: Sync
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@hack/pko/clusterpackage.yaml` around lines 30 - 34, The YAML indentation for
the cluster selector is incorrect: move the matchLabels block to be nested under
clusterDeploymentSelector so the structure reads clusterDeploymentSelector:
matchLabels: { api.openshift.com/managed: 'true' } and ensure resourceApplyMode
remains a sibling of clusterDeploymentSelector; update the entries around the
symbols clusterDeploymentSelector, matchLabels, and resourceApplyMode
accordingly so the YAML parses properly.

Contributor Author

This looks like one that should be reviewed and if addressed may represent an area in the migration script that should be updated?


🧩 Analysis chain

Good question — let me check the migration script to see if it's responsible for generating the malformed selector indentation.


🏁 Script executed:

```shell
#!/bin/bash
# Look at the migration script for SelectorSyncSet / clusterDeploymentSelector generation
cat boilerplate/openshift/golang-osd-operator/olm_pko_migration.py
```

Length of output: 31688


@nephomaniac, confirmed — the bug is in the migration script. The generated file used the extract_deployment_selector() code path (because hack/olm-registry/olm-artifacts-template.yaml exists in this repo), and there's an off-by-two indentation error there.

Root cause in boilerplate/openshift/golang-osd-operator/olm_pko_migration.py, extract_deployment_selector():

```python
# Current (wrong) — 6-space indent puts matchLabels at the same level as clusterDeploymentSelector
indented_lines = ["      " + line for line in selector_yaml.splitlines()]
```

The replacement in write_clusterpackage_template() prepends clusterDeploymentSelector:\n and then appends this block, so the result is:

```yaml
      clusterDeploymentSelector:
      matchLabels:          # ← same indent as parent, not nested
        api.openshift.com/managed: 'true'
```

Fix — use 8 spaces so matchLabels is nested under clusterDeploymentSelector:

```python
# Fixed — 8-space indent correctly nests content under clusterDeploymentSelector
indented_lines = ["        " + line for line in selector_yaml.splitlines()]
```

This produces:

```yaml
      clusterDeploymentSelector:
        matchLabels:
          api.openshift.com/managed: 'true'
```

The CLUSTERPACKAGE_TEMPLATE default (used when no OLM template is found) is already correct, so only the extract_deployment_selector() indentation needs fixing.
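
To make the off-by-two concrete, here is a small standalone check (requires PyYAML; the selector string and the render() helper are illustrative stand-ins for the script's real variables, not code from olm_pko_migration.py):

```python
import yaml  # PyYAML; third-party but widely available

selector_yaml = "matchLabels:\n  api.openshift.com/managed: 'true'"

def render(indent: int) -> str:
    # Mimics write_clusterpackage_template(): the parent key sits at a
    # 6-space indent, and each selector line gets `indent` leading spaces.
    body = "\n".join(" " * indent + line for line in selector_yaml.splitlines())
    return "spec:\n      clusterDeploymentSelector:\n" + body + "\n"

broken = yaml.safe_load(render(6))  # matchLabels becomes a sibling of the selector
fixed = yaml.safe_load(render(8))   # matchLabels is properly nested

assert broken["spec"]["clusterDeploymentSelector"] is None
assert "matchLabels" in broken["spec"]  # landed at the wrong level
assert fixed["spec"]["clusterDeploymentSelector"]["matchLabels"] == {
    "api.openshift.com/managed": "true"
}
```

Note the 6-space version still parses as valid YAML — it is silently the wrong structure, which is why the bug survived generation.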

This directory contains end-to-end tests for the Route Monitor Operator (RMO), validating its integration with the RHOBS synthetic monitoring system.

1. Run "make e2e-binary-build" to make sure e2e tests build
2. Deploy your new version of operator in a test cluster
3. Run "go install github.com/onsi/ginkgo/ginkgo@latest"
Copy link
Copy Markdown

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

```shell
#!/bin/bash
set -euo pipefail
rg -nP 'github\.com/onsi/ginkgo(/v2)?' || true
nl -ba test/e2e/README.md | sed -n '1,30p'
```

Repository: openshift/route-monitor-operator

Length of output: 2731


Update the install command to use the v2 module path.

The E2E tests import github.com/onsi/ginkgo/v2, but the README install instruction targets the v1 path. Running the current command would install the v1 binary instead of v2, causing test failures.

Suggested update

```diff
-3. Run "go install github.com/onsi/ginkgo/ginkgo@latest"
+3. Run "go install github.com/onsi/ginkgo/v2/ginkgo@latest"
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/e2e/README.md` at line 7, The README currently instructs to run "go
install github.com/onsi/ginkgo/ginkgo@latest" which installs the v1 binary while
the tests import "github.com/onsi/ginkgo/v2"; update the install command to use
the v2 module path (e.g., use "go install
github.com/onsi/ginkgo/v2/ginkgo@latest") so the installed ginkgo matches the
imported module version used by the E2E tests.

@openshift-ci openshift-ci bot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Mar 25, 2026
@openshift-ci
Copy link
Copy Markdown
Contributor

openshift-ci bot commented Mar 25, 2026

PR needs rebase.

Details

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
