Add images with >21 days deployment lag to image-updater #4139

Merged
openshift-merge-bot[bot] merged 1 commit into main from add-images-to-updater-with-lag-new
Feb 21, 2026

Conversation

@venkateshsredhat
Collaborator

The original PR (#4118) got messed up.

Why
Many images are out of date, which is a potential security concern. We need to automate keeping our component images up to date for security compliance: https://www.releases.dev.aro.azure-test.net/releases/services/Microsoft.Azure.ARO.HCP.Global/cloud/public/environment/int/images

What
This change adds 7 images with deployment lag exceeding 21 days to the image-updater configuration, enabling automated digest updates.

Images added to image-updater (all >21 days lag):

- route-monitor-operator (86d lag): quay.io/app-sre/route-monitor-operator
- blackbox-exporter (659d lag): quay.io/prometheus/blackbox-exporter
- kube-webhook-certgen (310d lag): registry.k8s.io/ingress-nginx/kube-webhook-certgen
- aks-command-runtime (1725d lag): mcr.microsoft.com/aks/command/runtime
The global-acr.bicep template deploys to all environments (dev, int, stg, prod), so these cache rules will be available across all ACRs (a CLI sketch follows the list):

- arohcpsvcdev (dev)
- arohcpsvcint (public/int)
- arohcpsvcstg (public/stg)
- arohcpsvcprod (public/prod)
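
For illustration only, here is a rough Azure CLI equivalent of the pull-through cache rules that global-acr.bicep declares (the template itself defines them declaratively in Bicep; the rule name and target repository path below are assumptions, not taken from the template):

for registry in arohcpsvcdev arohcpsvcint arohcpsvcstg arohcpsvcprod; do
  # Sketch only: the rule name and --target-repo path are hypothetical;
  # only the source repository and registry names come from this PR description.
  az acr cache create \
    --registry "$registry" \
    --name kube-webhook-certgen \
    --source-repo registry.k8s.io/ingress-nginx/kube-webhook-certgen \
    --target-repo ingress-nginx/kube-webhook-certgen
done

Under the same assumptions, az acr cache list --registry arohcpsvcdev --output table would show the rules once the template has been deployed.
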
Images not added (already in image-updater or lag <21 days):

- fluent-bit (12d lag) - already tracked as arobit-forwarder
- mdsd (18d lag) - already tracked as arobit-mdsd
- prometheus* images (16-19d lag) - already tracked, below threshold
- kube-state-metrics (69d lag) - already tracked
- msi-acrpull (56d lag) - already tracked as acrPull
- provider-azure (71d lag) - already tracked as secretSyncProvider
- controller/secrets-store-sync (195d lag) - already tracked as secretSyncController
- mise (337d lag) - source registry unknown, needs investigation
- mise-1p-container-image (72d lag) - already has MCR artifact sync configured
Related to: #4082
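
As an aside, a rough manual equivalent of the digest check that image-updater automates might look like the sketch below; the tag handling and the config.yaml key path are assumptions, not taken from the actual image-updater configuration:

#!/usr/bin/env bash
# Sketch only: compare the digest pinned in config/config.yaml against the
# digest currently published upstream for a given tag.
# The yq key path and the tag argument are hypothetical.
set -euo pipefail

IMAGE="registry.k8s.io/ingress-nginx/kube-webhook-certgen"
TAG="${1:?usage: $0 <tag-to-track>}"

UPSTREAM_DIGEST=$(skopeo inspect "docker://${IMAGE}:${TAG}" | jq -r '.Digest')
PINNED_DIGEST=$(yq '.defaults.kubeWebhookCertgen.image.digest' config/config.yaml)

if [ "${UPSTREAM_DIGEST}" != "${PINNED_DIGEST}" ]; then
  echo "kube-webhook-certgen is behind upstream: ${PINNED_DIGEST} -> ${UPSTREAM_DIGEST}"
fi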

@openshift-ci bot requested review from raelga and tony-schndr on February 18, 2026 07:00
Comment on lines 248 to 249
filePath: ../../config/config.yaml
# Kube Webhook Certgen (Ingress NGINX)

Collaborator

The blackbox-exporter image is an inherent part of the RMO bundle; it cannot be changed here. It's here only so the image mirroring works, but it cannot be managed by image-updater.
For reference, when the RMO bundle is updated, this make target

update-digests-in-config: $(YAMLFMT)
	$(eval OPERATOR_DIGEST := $(shell yq '.imageDigestOperator' ${RMO_CHART_DIR}/values-generated.yaml))
	$(eval BLACKBOX_DIGEST := $(shell yq '.imageDigestBlackbox' ${RMO_CHART_DIR}/values-generated.yaml))
	@echo "Updating config.yaml with new digests:"
	@echo "  Operator digest: ${OPERATOR_DIGEST}"
	@echo "  Blackbox digest: ${BLACKBOX_DIGEST}"
	make -s -C ../tooling/yamlwrap yamlwrap
	../tooling/yamlwrap/yamlwrap wrap --input ../config/config.yaml --no-validate-result
	yq eval '.defaults.routeMonitorOperator.operatorImage.digest = "${OPERATOR_DIGEST}"' -i ../config/config.yaml
	yq eval '.defaults.routeMonitorOperator.blackboxExporterImage.digest = "${BLACKBOX_DIGEST}"' -i ../config/config.yaml
	$(YAMLFMT) ../config/config.yaml
	../tooling/yamlwrap/yamlwrap unwrap --input ../config/config.yaml
.PHONY: update-digests-in-config
is run to update the digests in the config.
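
A minimal invocation sketch, assuming the target lives under a route-monitor-operator directory at the repository root (the commit message further down references route-monitor-operator/Makefile):

# Sketch only: run the digest update target after an RMO bundle bump.
# The directory name is an assumption based on the commit message in this PR.
make -C route-monitor-operator update-digests-in-config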

@venkateshsredhat (Collaborator, Author) commented Feb 18, 2026

So we drop RMO and blackbox out of this, then?
Also, the make target could be run as part of the tooling too, right?

Collaborator

We can't blindly update RMO; the risk of breaking the integration is high. We should drop both the blackbox and RMO images from the tool and replace RMO as soon as possible.

@avollmer-redhat force-pushed the add-images-to-updater-with-lag-new branch from be3144d to af71e7f on February 18, 2026 at 14:53
This change adds 3 images with deployment lag exceeding 21 days to the
image-updater configuration, enabling automated digest updates.

Images added to image-updater (all >21 days lag):
- kube-webhook-certgen (310d lag): registry.k8s.io/ingress-nginx/kube-webhook-certgen

Images NOT added to image-updater (have alternative update mechanisms):
- route-monitor-operator (86d lag): Updated via route-monitor-operator/Makefile 'update-digests-in-config' target
- blackbox-exporter (659d lag): Updated via route-monitor-operator/Makefile 'update-digests-in-config' target
- aks-command-runtime (1725d lag): Already tracked as aksCommandRuntime
- velero-server (114d lag): Removed from image-updater in previous commit
- velero-plugin-azure (139d lag): Removed from image-updater in previous commit
- velero-hypershift-plugin (46d lag): Removed from image-updater in previous commit

Pull-through cache setup (dev-infrastructure/templates/global-acr.bicep):
Velero ACR cache rules remain in place from previous commit to address
image deletion/overwrite issues in the konveyor registry.

Images not added (already in image-updater or lag <21 days):
- fluent-bit (12d lag) - already tracked as arobit-forwarder
- mdsd (18d lag) - already tracked as arobit-mdsd
- prometheus* images (16-19d lag) - already tracked, below threshold
- kube-state-metrics (69d lag) - already tracked
- msi-acrpull (56d lag) - already tracked as acrPull
- provider-azure (71d lag) - already tracked as secretSyncProvider
- controller/secrets-store-sync (195d lag) - already tracked as secretSyncController
- mise (337d lag) - source registry unknown, needs investigation
- mise-1p-container-image (72d lag) - already has MCR artifact sync configured

Related to: #4082
@avollmer-redhat force-pushed the add-images-to-updater-with-lag-new branch from af71e7f to 53571b8 on February 18, 2026 at 16:30
@mmazur
Collaborator

mmazur commented Feb 18, 2026

/lgtm

@raelga (Collaborator) left a comment

/lgtm

@openshift-ci

openshift-ci bot commented Feb 18, 2026

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: mmazur, raelga, venkateshsredhat

The full list of commands accepted by this bot can be found here.

The pull request process is described here.

Details: Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@mmazur
Collaborator

mmazur commented Feb 19, 2026

/retest

@janboll
Collaborator

janboll commented Feb 20, 2026

/retest-required

@stevekuznetsov
Contributor

/override ci/prow/integration
/override ci/prow/e2e-parallel

@openshift-ci

openshift-ci bot commented Feb 21, 2026

@stevekuznetsov: Overrode contexts on behalf of stevekuznetsov: ci/prow/e2e-parallel, ci/prow/integration

Details

In response to this:

/override ci/prow/integration
/override ci/prow/e2e-parallel

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@openshift-merge-bot bot merged commit be70fc6 into main on Feb 21, 2026
15 checks passed
@openshift-merge-bot bot deleted the add-images-to-updater-with-lag-new branch on February 21, 2026 19:24