Add images with >21 days deployment lag to image-updater#4139
openshift-merge-bot[bot] merged 1 commit into main.
Conversation
tooling/image-updater/config.yaml
filePath: ../../config/config.yaml
# Kube Webhook Certgen (Ingress NGINX)
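For readers unfamiliar with the tool, a hypothetical sketch of what an image-updater entry might look like. Only the `filePath` key is visible in the diff context above; every other key name here is an assumption for illustration, not the tool's actual schema:

```yaml
# Hypothetical entry shape -- key names other than filePath are assumptions.
kubeWebhookCertgen:
  source: registry.k8s.io/ingress-nginx/kube-webhook-certgen  # registry from the PR description
  targets:
    - filePath: ../../config/config.yaml
      # Kube Webhook Certgen (Ingress NGINX)
```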
The blackbox-exporter image is an inherent part of the RMO bundle and cannot be changed here. It's listed only so that image mirroring works; it cannot be managed by image-updater.
For reference, when the RMO bundle is updated, this make target is used:
ARO-HCP/route-monitor-operator/Makefile, lines 55 to 67 (at 5f7333f)
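The referenced target isn't reproduced in this thread; per the commit message elsewhere in this PR it is `update-digests-in-config`. A hedged sketch of what such a target might do — the tool choices (`skopeo`, `yq`), the image tag, the config key, and the config path are all assumptions, not the real Makefile:

```makefile
# Hypothetical sketch; the real target is route-monitor-operator/Makefile, lines 55-67.
.PHONY: update-digests-in-config
update-digests-in-config:
	# Resolve the current digest of the bundled blackbox-exporter image
	# and pin it in the config file (key name is illustrative).
	digest=$$(skopeo inspect --format '{{.Digest}}' \
		docker://quay.io/prometheus/blackbox-exporter:master); \
	yq -i ".blackboxExporterImage = \"quay.io/prometheus/blackbox-exporter@$$digest\"" \
		config/config.yaml
```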
So we drop RMO and blackbox out of this, then? And the make target could be run as part of the tooling too, right?
We can't blindly update RMO; the risk of breaking the integration is high. We should drop both the blackbox and RMO images from the tool and replace RMO as soon as possible.
Force-pushed from be3144d to af71e7f.
This change adds 3 images with deployment lag exceeding 21 days to the image-updater configuration, enabling automated digest updates.

Images added to image-updater (all >21 days lag):
- kube-webhook-certgen (310d lag): registry.k8s.io/ingress-nginx/kube-webhook-certgen

Images NOT added to image-updater (have alternative update mechanisms):
- route-monitor-operator (86d lag): updated via route-monitor-operator/Makefile 'update-digests-in-config' target
- blackbox-exporter (659d lag): updated via route-monitor-operator/Makefile 'update-digests-in-config' target
- aks-command-runtime (1725d lag): already tracked as aksCommandRuntime
- velero-server (114d lag): removed from image-updater in previous commit
- velero-plugin-azure (139d lag): removed from image-updater in previous commit
- velero-hypershift-plugin (46d lag): removed from image-updater in previous commit

Pull-through cache setup (dev-infrastructure/templates/global-acr.bicep):
Velero ACR cache rules remain in place from the previous commit to address image deletion/overwrite issues in the konveyor registry.

Images not added (already in image-updater or lag <21 days):
- fluent-bit (12d lag) - already tracked as arobit-forwarder
- mdsd (18d lag) - already tracked as arobit-mdsd
- prometheus* images (16-19d lag) - already tracked, below threshold
- kube-state-metrics (69d lag) - already tracked
- msi-acrpull (56d lag) - already tracked as acrPull
- provider-azure (71d lag) - already tracked as secretSyncProvider
- controller/secrets-store-sync (195d lag) - already tracked as secretSyncController
- mise (337d lag) - source registry unknown, needs investigation
- mise-1p-container-image (72d lag) - already has MCR artifact sync configured

Related to: #4082
Force-pushed from af71e7f to 53571b8.
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: mmazur, raelga, venkateshsredhat. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
/retest
/retest-required
/override ci/prow/integration
@stevekuznetsov: Overrode contexts on behalf of stevekuznetsov: ci/prow/e2e-parallel, ci/prow/integration. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
The original PR, #4118, got messed up.
Why
Many images are out of date, which is a potential security concern. We need to automate keeping our component images up to date for security compliance. https://www.releases.dev.aro.azure-test.net/releases/services/Microsoft.Azure.ARO.HCP.Global/cloud/public/environment/int/images
What
This change adds 7 images with deployment lag exceeding 21 days to the image-updater configuration, enabling automated digest updates.
Images added to image-updater (all >21 days lag):
- route-monitor-operator (86d lag): quay.io/app-sre/route-monitor-operator
- blackbox-exporter (659d lag): quay.io/prometheus/blackbox-exporter
- kube-webhook-certgen (310d lag): registry.k8s.io/ingress-nginx/kube-webhook-certgen
- aks-command-runtime (1725d lag): mcr.microsoft.com/aks/command/runtime
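The ">21 days lag" cutoff used throughout this PR is a simple threshold filter. A minimal sketch of that selection logic — lag values are taken from this description, while the function name `needs_tracking` and the list shape are my own for illustration:

```python
from datetime import timedelta

# Images qualify for image-updater once their deployment lag exceeds 21 days.
LAG_THRESHOLD = timedelta(days=21)

# (name, deployment lag in days) pairs taken from the PR description.
images = [
    ("kube-webhook-certgen", 310),
    ("fluent-bit", 12),
    ("mdsd", 18),
    ("kube-state-metrics", 69),
]

def needs_tracking(lag_days: int) -> bool:
    """Return True when the image's lag strictly exceeds the 21-day threshold."""
    return timedelta(days=lag_days) > LAG_THRESHOLD

stale = [name for name, lag in images if needs_tracking(lag)]
print(stale)  # ['kube-webhook-certgen', 'kube-state-metrics']
```

Note that images at exactly 21 days of lag would not qualify, since the comparison is strict.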
The global-acr.bicep template deploys to all environments (dev, int, stg, prod), so these cache rules will be available across all ACRs:
- arohcpsvcdev (dev)
- arohcpsvcint (public/int)
- arohcpsvcstg (public/stg)
- arohcpsvcprod (public/prod)
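A hedged sketch of what one such pull-through cache rule could look like in global-acr.bicep, using the standard Microsoft.ContainerRegistry `cacheRules` child resource. The resource symbol, target repository path, API version, and the assumption of an existing `acr` registry resource are all illustrative, not the template's actual contents:

```bicep
// Hypothetical cache rule; the real rules live in dev-infrastructure/templates/global-acr.bicep.
resource kubeWebhookCertgenCache 'Microsoft.ContainerRegistry/registries/cacheRules@2023-01-01-preview' = {
  parent: acr // the service ACR declared elsewhere in the template (assumption)
  name: 'kube-webhook-certgen'
  properties: {
    sourceRepository: 'registry.k8s.io/ingress-nginx/kube-webhook-certgen'
    targetRepository: 'ingress-nginx/kube-webhook-certgen'
  }
}
```

With a rule like this in place, pulls from the ACR's target repository transparently fetch and cache the upstream image, which mitigates upstream deletion/overwrite issues.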
Images not added (already in image-updater or lag <21 days):
- fluent-bit (12d lag) - already tracked as arobit-forwarder
- mdsd (18d lag) - already tracked as arobit-mdsd
- prometheus* images (16-19d lag) - already tracked, below threshold
- kube-state-metrics (69d lag) - already tracked
- msi-acrpull (56d lag) - already tracked as acrPull
- provider-azure (71d lag) - already tracked as secretSyncProvider
- controller/secrets-store-sync (195d lag) - already tracked as secretSyncController
- mise (337d lag) - source registry unknown, needs investigation
- mise-1p-container-image (72d lag) - already has MCR artifact sync configured
Related to: #4082