Conversation

mpryc
Contributor

@mpryc mpryc commented Aug 7, 2025

Fixes: OADP-6168

The CM for the Repository Maintenance job does not match the CM layout for the Node Agent. The upstream documentation describes the Repository Maintenance CM as identical to the Node Agent CM, but the implementation differs.

Upstream issue: vmware-tanzu/velero#9159

Why the changes were made

To fix: OADP-6168

How to test the changes made

Aligning the ConfigMap for the Repository Maintenance job with the Velero implementation rather than the Velero documentation.
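
For context, a minimal sketch of the ConfigMap layout produced with this change (names assume the velero-sample DPA in the openshift-adp namespace; the loadAffinity value matches the data observed in the test output later in this PR, and the podResources keys are assumed to serialize with the same names as the DPA fields):

apiVersion: v1
kind: ConfigMap
metadata:
  name: repository-maintenance-velero-sample
  namespace: openshift-adp
data:
  global: '{"loadAffinity":[{"nodeSelector":{"matchLabels":{"label.io/location":"EU"}}}],"podResources":{"cpuRequest":"400m","cpuLimit":"800m","memoryRequest":"100Mi","memoryLimit":"600Mi"}}'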

Tests performed:

  1. Labeled one node with the EU label.
  2. Created DPA with:
spec:
  [...]
  configuration:
    [...]
    repositoryMaintenance:
      global:
        loadAffinity:
          - nodeSelector:
              matchLabels:
                label.io/location: EU
        podResources:
          cpuLimit: 800m
          cpuRequest: 400m
          memoryLimit: 600Mi
          memoryRequest: 100Mi          
  3. Ensured the ConfigMap was created with the proper data.
  4. Ensured the ConfigMap name is passed to the velero pod as an argument (example for the velero-sample DPA name): '--repo-maintenance-job-configmap=repository-maintenance-velero-sample' (see the verification sketch after this list).
  5. Ensured the maintenance job pod ran with the configured podResources and affinity:
kind: Pod
apiVersion: v1
metadata:
 generateName: test-backup-default-kopia-hzpgr-maintain-job-1754564576225-
 [...]
 managedFields:
   [...]
 namespace: openshift-adp
 ownerReferences:
   [...]
 labels:
   [...]
   velero.io/repo-name: test-backup-default-kopia-hzpgr
spec:
 restartPolicy: Never
 serviceAccountName: velero
 imagePullSecrets:
   - name: velero-dockercfg-nns42
 priority: 0
 schedulerName: default-scheduler
 enableServiceLinks: true
 affinity:
   nodeAffinity:
     requiredDuringSchedulingIgnoredDuringExecution:
       nodeSelectorTerms:
         - matchExpressions:
             - key: label.io/location
               operator: In
               values:
                 - EU
 [...]
 containers:
   - resources:
       limits:
         cpu: 800m
         memory: 600Mi
       requests:
         cpu: 400m
         memory: 100Mi
     [...]
     args:
       - repo-maintenance
       - '--repo-name=test-backup'
       - '--repo-type=kopia'
       - '--backup-storage-location=default'
       - '--log-level=debug'
       - '--log-format=text'
 [...]
  6. Ensured the maintenance job pod was scheduled only on the node with the proper label.
  7. Ran the unit tests, which have been modified to align with the Velero implementation of the CM.
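
A quick way to verify points 3 and 4 above on a live cluster (a sketch; it assumes the openshift-adp namespace, the velero-sample DPA name from the example, and that the Velero deployment is named velero):

# Inspect the generated ConfigMap data
oc -n openshift-adp get configmap repository-maintenance-velero-sample -o yaml

# Confirm the Velero server was started with the ConfigMap argument
oc -n openshift-adp get deployment velero -o yaml | grep repo-maintenance-job-configmap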

The CM for the Repository Maintenance does not match the
CM layout for the Node Agent. In the upstream documentation
the CM for the Repository Maintenance is the same as for the
Node Agent, but the implementation is different.

Upstream issue: vmware-tanzu/velero#9159

Signed-off-by: Michal Pryc <[email protected]>
@openshift-ci-robot openshift-ci-robot added the jira/valid-reference Indicates that this PR references a valid Jira ticket of any type. label Aug 7, 2025
@openshift-ci-robot

openshift-ci-robot commented Aug 7, 2025

@mpryc: This pull request references OADP-6168 which is a valid jira issue.

In response to this:

Fixes: OADP-6168

The CM for the Repository Maintenance job does not match the CM layout for the Node Agent. The upstream documentation describes the Repository Maintenance CM as identical to the Node Agent CM, but the implementation differs.

Upstream issue: vmware-tanzu/velero#9159


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@openshift-ci openshift-ci bot requested review from mrnold and sseago August 7, 2025 11:12
@openshift-ci openshift-ci bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Aug 7, 2025
@mpryc
Contributor Author

mpryc commented Aug 7, 2025

/cherry-pick oadp-1.5

@openshift-cherrypick-robot
Contributor

@mpryc: once the present PR merges, I will cherry-pick it on top of oadp-1.5 in a new PR and assign it to you.

In response to this:

/cherry-pick oadp-1.5

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@weshayutin
Contributor

just waiting on a maintenance job :)

@shubham-pampattiwar
Member

/lgtm

@openshift-ci openshift-ci bot added the lgtm Indicates that a PR is ready to be merged. label Aug 19, 2025
@weshayutin
Contributor

oc get node ip-10-0-15-237.us-west-2.compute.internal --show-labels
NAME                                        STATUS   ROLES    AGE   VERSION   LABELS
ip-10-0-15-237.us-west-2.compute.internal   Ready    worker   24h   v1.33.2   beta.kubernetes.io/arch=arm64,beta.kubernetes.io/instance-type=m6g.xlarge,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=us-west-2,failure-domain.beta.kubernetes.io/zone=us-west-2a,kubernetes.io/arch=arm64,kubernetes.io/hostname=ip-10-0-15-237.us-west-2.compute.internal,kubernetes.io/os=linux,label.io/location=EU,node-role.kubernetes.io/worker=,node.kubernetes.io/instance-type=m6g.xlarge,node.openshift.io/os_id=rhel,topology.ebs.csi.aws.com/zone=us-west-2a,topology.k8s.aws/zone-id=usw2-az1,topology.kubernetes.io/region=us-west-2,topology.kubernetes.io/zone=us-west-2a
spec:
  backupLocations:
    - velero:
        config:
          profile: default
          region: us-west-2
        credential:
          key: cloud
          name: cloud-credentials
        default: true
        objectStorage:
          bucket: cvpbucket3uswest2
          prefix: velero
        provider: aws
  configuration:
    nodeAgent:
      enable: true
      uploaderType: kopia
    repositoryMaintenance:
      global:
        loadAffinity:
          - nodeSelector:
              matchLabels:
                label.io/location: EU
    velero:
      defaultPlugins:
        - kubevirt
        - csi
        - openshift
        - aws
        - hypershift
      disableFsBackup: false
  logFormat: text
  nonAdmin:
    enable: false
  snapshotLocations:
    - velero:
        config:
          profile: default
          region: us-west-2
        provider: aws
status:
  conditions:
    - lastTransitionTime: '2025-08-19T16:56:59Z'
      message: Reconcile complete
      reason: Complete
oc get cm repository-maintenance-dpa-sample -o yaml
apiVersion: v1
data:
  global: '{"loadAffinity":[{"nodeSelector":{"matchLabels":{"label.io/location":"EU"}}}]}'
kind: ConfigMap
metadata:
  creationTimestamp: "2025-08-19T17:04:02Z"
  labels:
    app.kubernetes.io/component: repository-maintenance-config
    app.kubernetes.io/instance: dpa-sample
    app.kubernetes.io/managed-by: oadp-operator
    openshift.io/oadp: "True"
  name: repository-maintenance-dpa-sample
  namespace: openshift-adp
  ownerReferences:
  - apiVersion: oadp.openshift.io/v1alpha1
    blockOwnerDeletion: true
    controller: true
    kind: DataProtectionApplication
    name: dpa-sample
    uid: c33ae177-b035-4e81-bc15-744ac93552dc
  resourceVersion: "405591"
  uid: b6da504e-0266-44e6-8e14-9117ca1eb170
oc get pod/repo-maintain-job-1755626364242-dfppr -o yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    k8s.ovn.org/pod-networks: '{"default":{"ip_addresses":["10.128.2.145/23"],"mac_address":"0a:58:0a:80:02:91","gateway_ips":["10.128.2.1"],"routes":[{"dest":"10.128.0.0/14","nextHop":"10.128.2.1"},{"dest":"172.30.0.0/16","nextHop":"10.128.2.1"},{"dest":"169.254.0.5/32","nextHop":"10.128.2.1"},{"dest":"100.64.0.0/16","nextHop":"10.128.2.1"}],"ip_address":"10.128.2.145/23","gateway_ip":"10.128.2.1","role":"primary"}}'
    k8s.v1.cni.cncf.io/network-status: |-
      [{
          "name": "ovn-kubernetes",
          "interface": "eth0",
          "ips": [
              "10.128.2.145"
          ],
          "mac": "0a:58:0a:80:02:91",
          "default": true,
          "dns": {}
      }]
    openshift.io/scc: restricted-v2
    seccomp.security.alpha.kubernetes.io/pod: runtime/default
    security.openshift.io/validated-scc-subject-type: user
  creationTimestamp: "2025-08-19T17:59:24Z"
  generateName: repo-maintain-job-1755626364242-
  generation: 1
  labels:
    batch.kubernetes.io/controller-uid: 9e19ae9e-5468-4dd1-a77c-c2e1a4eade05
    batch.kubernetes.io/job-name: repo-maintain-job-1755626364242
    controller-uid: 9e19ae9e-5468-4dd1-a77c-c2e1a4eade05
    job-name: repo-maintain-job-1755626364242
    velero.io/repo-name: minimal-3csivol-dpa-sample-1-kopia-sthmr
  name: repo-maintain-job-1755626364242-dfppr
  namespace: openshift-adp
  ownerReferences:
  - apiVersion: batch/v1
    blockOwnerDeletion: true
    controller: true
    kind: Job
    name: repo-maintain-job-1755626364242
    uid: 9e19ae9e-5468-4dd1-a77c-c2e1a4eade05
  resourceVersion: "420604"
  uid: 48d604db-78d0-4f0a-aa2b-f2e7e6b9d9ea
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: label.io/location
            operator: In
            values:
            - EU
  containers:
  - args:
    - repo-maintenance
    - --repo-name=minimal-3csivol
    - --repo-type=kopia
    - --backup-storage-location=dpa-sample-1
    - --log-level=info
    - --log-format=text
    command:
    - /velero
    env:
    - name: VELERO_SCRATCH_DIR
      value: /scratch
    - name: VELERO_NAMESPACE
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.namespace
    - name: LD_LIBRARY_PATH
      value: /plugins
    - name: OPENSHIFT_IMAGESTREAM_BACKUP
      value: "true"
    - name: AWS_SHARED_CREDENTIALS_FILE
      value: /credentials/cloud
    image: quay.io/konveyor/velero:latest
    imagePullPolicy: IfNotPresent
    name: velero-repo-maintenance-container
    resources: {}
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - ALL
      runAsNonRoot: true
      runAsUser: 1000740000
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: FallbackToLogsOnError
    volumeMounts:
    - mountPath: /plugins
      name: plugins
      readOnly: true
    - mountPath: /scratch
      name: scratch
    - mountPath: /etc/ssl/certs
      name: certs
    - mountPath: /var/run/secrets/openshift/serviceaccount
      name: bound-sa-token
      readOnly: true
    - mountPath: /tmp
      name: tmp
    - mountPath: /home/velero
      name: home
    - mountPath: /credentials
      name: cloud-credentials
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-wnf59
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  imagePullSecrets:
  - name: velero-dockercfg-849pw
  nodeName: ip-10-0-15-237.us-west-2.compute.internal
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Never
  schedulerName: default-scheduler
  securityContext:
    fsGroup: 1000740000
    seLinuxOptions:
      level: s0:c27,c19
    seccompProfile:
      type: RuntimeDefault
  serviceAccount: velero
  serviceAccountName: velero
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoSchedule
    key: os
    operator: Equal
    value: windows
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - emptyDir: {}
    name: plugins
  - emptyDir: {}
    name: scratch
  - emptyDir: {}
    name: certs
  - emptyDir: {}
    name: tmp
  - emptyDir: {}
    name: home
  - name: bound-sa-token
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          audience: openshift
          expirationSeconds: 3600
          path: token
  - name: cloud-credentials
    secret:
      defaultMode: 288
      secretName: cloud-credentials
  - name: kube-api-access-wnf59
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          expirationSeconds: 3607
          path: token
      - configMap:
          items:
          - key: ca.crt
            path: ca.crt
          name: kube-root-ca.crt
      - downwardAPI:
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
            path: namespace
      - configMap:
          items:
          - key: service-ca.crt
            path: service-ca.crt
          name: openshift-service-ca.crt
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2025-08-19T17:59:29Z"
    status: "False"
    type: PodReadyToStartContainers
  - lastProbeTime: null
    lastTransitionTime: "2025-08-19T17:59:24Z"
    reason: PodCompleted
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2025-08-19T17:59:28Z"
    reason: PodCompleted
    status: "False"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2025-08-19T17:59:28Z"
    reason: PodCompleted
    status: "False"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2025-08-19T17:59:24Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: cri-o://cabfeb3eb83c95bf9ac627f0f88b7c35b07746332cc89b7f23d56ca9d01944c1
    image: quay.io/konveyor/velero:latest
    imageID: quay.io/konveyor/velero@sha256:66804ae39d92591db8c87176ce29cf18a314155a9177e0d35c7059ed0346ea72
    lastState: {}
    name: velero-repo-maintenance-container
    ready: false
    resources: {}
    restartCount: 0
    started: false
    state:
      terminated:
        containerID: cri-o://cabfeb3eb83c95bf9ac627f0f88b7c35b07746332cc89b7f23d56ca9d01944c1
        exitCode: 0
        finishedAt: "2025-08-19T17:59:27Z"
        reason: Completed
        startedAt: "2025-08-19T17:59:24Z"
    user:
      linux:
        gid: 0
        supplementalGroups:
        - 0
        - 1000740000
        uid: 1000740000
    volumeMounts:
    - mountPath: /plugins
      name: plugins
      readOnly: true
      recursiveReadOnly: Disabled
    - mountPath: /scratch
      name: scratch
    - mountPath: /etc/ssl/certs
      name: certs
    - mountPath: /var/run/secrets/openshift/serviceaccount
      name: bound-sa-token
      readOnly: true
      recursiveReadOnly: Disabled
    - mountPath: /tmp
      name: tmp
    - mountPath: /home/velero
      name: home
    - mountPath: /credentials
      name: cloud-credentials
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-wnf59
      readOnly: true
      recursiveReadOnly: Disabled
  hostIP: 10.0.15.237
  hostIPs:
  - ip: 10.0.15.237
  phase: Succeeded
  podIP: 10.128.2.145
  podIPs:
  - ip: 10.128.2.145
  qosClass: BestEffort
  startTime: "2025-08-19T17:59:24Z"
oc get pod/repo-maintain-job-1755627264041-dmttw -o yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    k8s.ovn.org/pod-networks: '{"default":{"ip_addresses":["10.128.2.147/23"],"mac_address":"0a:58:0a:80:02:93","gateway_ips":["10.128.2.1"],"routes":[{"dest":"10.128.0.0/14","nextHop":"10.128.2.1"},{"dest":"172.30.0.0/16","nextHop":"10.128.2.1"},{"dest":"169.254.0.5/32","nextHop":"10.128.2.1"},{"dest":"100.64.0.0/16","nextHop":"10.128.2.1"}],"ip_address":"10.128.2.147/23","gateway_ip":"10.128.2.1","role":"primary"}}'
    k8s.v1.cni.cncf.io/network-status: |-
      [{
          "name": "ovn-kubernetes",
          "interface": "eth0",
          "ips": [
              "10.128.2.147"
          ],
          "mac": "0a:58:0a:80:02:93",
          "default": true,
          "dns": {}
      }]
    openshift.io/scc: restricted-v2
    seccomp.security.alpha.kubernetes.io/pod: runtime/default
    security.openshift.io/validated-scc-subject-type: user
  creationTimestamp: "2025-08-19T18:14:24Z"
  generateName: repo-maintain-job-1755627264041-
  generation: 1
  labels:
    batch.kubernetes.io/controller-uid: 2d5fe4e6-f0cd-4fec-b5dc-b82d7f71f8e9
    batch.kubernetes.io/job-name: repo-maintain-job-1755627264041
    controller-uid: 2d5fe4e6-f0cd-4fec-b5dc-b82d7f71f8e9
    job-name: repo-maintain-job-1755627264041
    velero.io/repo-name: mysql-persistent-dpa-sample-1-kopia-nmdsx
  name: repo-maintain-job-1755627264041-dmttw
  namespace: openshift-adp
  ownerReferences:
  - apiVersion: batch/v1
    blockOwnerDeletion: true
    controller: true
    kind: Job
    name: repo-maintain-job-1755627264041
    uid: 2d5fe4e6-f0cd-4fec-b5dc-b82d7f71f8e9
  resourceVersion: "424347"
  uid: fdadb2dc-7f59-4542-8a14-4280322fada2
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: label.io/location
            operator: In
            values:
            - EU
  containers:
  - args:
    - repo-maintenance
    - --repo-name=mysql-persistent
    - --repo-type=kopia
    - --backup-storage-location=dpa-sample-1
    - --log-level=info
    - --log-format=text
    command:
    - /velero
    env:
    - name: VELERO_SCRATCH_DIR
      value: /scratch
    - name: VELERO_NAMESPACE
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.namespace
    - name: LD_LIBRARY_PATH
      value: /plugins
    - name: OPENSHIFT_IMAGESTREAM_BACKUP
      value: "true"
    - name: AWS_SHARED_CREDENTIALS_FILE
      value: /credentials/cloud
    image: quay.io/konveyor/velero:latest
    imagePullPolicy: IfNotPresent
    name: velero-repo-maintenance-container
    resources: {}
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - ALL
      runAsNonRoot: true
      runAsUser: 1000740000
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: FallbackToLogsOnError
    volumeMounts:
    - mountPath: /plugins
      name: plugins
      readOnly: true
    - mountPath: /scratch
      name: scratch
    - mountPath: /etc/ssl/certs
      name: certs
    - mountPath: /var/run/secrets/openshift/serviceaccount
      name: bound-sa-token
      readOnly: true
    - mountPath: /tmp
      name: tmp
    - mountPath: /home/velero
      name: home
    - mountPath: /credentials
      name: cloud-credentials
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-qgcwm
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  imagePullSecrets:
  - name: velero-dockercfg-849pw
  nodeName: ip-10-0-15-237.us-west-2.compute.internal
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Never
  schedulerName: default-scheduler
  securityContext:
    fsGroup: 1000740000
    seLinuxOptions:
      level: s0:c27,c19
    seccompProfile:
      type: RuntimeDefault
  serviceAccount: velero
  serviceAccountName: velero
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoSchedule
    key: os
    operator: Equal
    value: windows
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - emptyDir: {}
    name: plugins
  - emptyDir: {}
    name: scratch
  - emptyDir: {}
    name: certs
  - emptyDir: {}
    name: tmp
  - emptyDir: {}
    name: home
  - name: bound-sa-token
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          audience: openshift
          expirationSeconds: 3600
          path: token
  - name: cloud-credentials
    secret:
      defaultMode: 288
      secretName: cloud-credentials
  - name: kube-api-access-qgcwm
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          expirationSeconds: 3607
          path: token
      - configMap:
          items:
          - key: ca.crt
            path: ca.crt
          name: kube-root-ca.crt
      - downwardAPI:
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
            path: namespace
      - configMap:
          items:
          - key: service-ca.crt
            path: service-ca.crt
          name: openshift-service-ca.crt
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2025-08-19T18:14:39Z"
    status: "False"
    type: PodReadyToStartContainers
  - lastProbeTime: null
    lastTransitionTime: "2025-08-19T18:14:24Z"
    reason: PodCompleted
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2025-08-19T18:14:38Z"
    reason: PodCompleted
    status: "False"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2025-08-19T18:14:38Z"
    reason: PodCompleted
    status: "False"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2025-08-19T18:14:24Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: cri-o://d73637b7896b3f61d7d65f3a8ecd16736ef58bacb35115f044671d6f35d24a3c
    image: quay.io/konveyor/velero:latest
    imageID: quay.io/konveyor/velero@sha256:66804ae39d92591db8c87176ce29cf18a314155a9177e0d35c7059ed0346ea72
    lastState: {}
    name: velero-repo-maintenance-container
    ready: false
    resources: {}
    restartCount: 0
    started: false
    state:
      terminated:
        containerID: cri-o://d73637b7896b3f61d7d65f3a8ecd16736ef58bacb35115f044671d6f35d24a3c
        exitCode: 0
        finishedAt: "2025-08-19T18:14:37Z"
        reason: Completed
        startedAt: "2025-08-19T18:14:24Z"
    user:
      linux:
        gid: 0
        supplementalGroups:
        - 0
        - 1000740000
        uid: 1000740000
    volumeMounts:
    - mountPath: /plugins
      name: plugins
      readOnly: true
      recursiveReadOnly: Disabled
    - mountPath: /scratch
      name: scratch
    - mountPath: /etc/ssl/certs
      name: certs
    - mountPath: /var/run/secrets/openshift/serviceaccount
      name: bound-sa-token
      readOnly: true
      recursiveReadOnly: Disabled
    - mountPath: /tmp
      name: tmp
    - mountPath: /home/velero
      name: home
    - mountPath: /credentials
      name: cloud-credentials
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-qgcwm
      readOnly: true
      recursiveReadOnly: Disabled
  hostIP: 10.0.15.237
  hostIPs:
  - ip: 10.0.15.237
  phase: Succeeded
  podIP: 10.128.2.147
  podIPs:
  - ip: 10.128.2.147
  qosClass: BestEffort
  startTime: "2025-08-19T18:14:24Z"


openshift-ci bot commented Aug 19, 2025

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: Joeavaikath, mpryc, shubham-pampattiwar, sseago, weshayutin

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:
  • OWNERS [mpryc,shubham-pampattiwar,sseago]

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment


openshift-ci bot commented Aug 19, 2025

@mpryc: all tests passed!

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

@openshift-merge-bot openshift-merge-bot bot merged commit 08fecbd into openshift:oadp-dev Aug 19, 2025
11 checks passed
@openshift-cherrypick-robot
Contributor

@mpryc: new pull request created: #1917

In response to this:

/cherry-pick oadp-1.5

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
