
Conversation

@lucianvlad
Contributor

Warning

This is a public repository, ensure not to disclose:

  • personal data beyond what is necessary for interacting with this pull request, nor
  • business confidential information, such as customer names.

What kind of PR is this?

Update default containerd version to v1.7.27 to mitigate CVE-2024-40635.
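As a quick sanity check after the bump, the installed runtime on a node can be verified directly. A minimal sketch, assuming shell access to a node:

containerd --version   # should now report v1.7.27
ctr version            # client and server versions, if the ctr CLI is available (may need root to reach the containerd socket)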

Required: Mark one of the following that is applicable:

  • kind/feature
  • kind/improvement
  • kind/deprecation
  • kind/documentation
  • kind/clean-up
  • kind/bug
  • kind/other

Optional: Mark one or more of the following that are applicable:

Important

Breaking changes should be marked kind/admin-change or kind/dev-change depending on type
Critical security fixes should be marked with kind/security

  • kind/admin-change
  • kind/dev-change
  • kind/security
  • kind/adr (set-me)

What does this PR do / why do we need this PR?

...

  • Fixes #

Information to reviewers

Checklist

  • Proper commit message prefix on all commits
  • Change checks:
    • The change is transparent
    • The change is disruptive
    • The change requires no migration steps
    • The change requires migration steps
  • Documentation checks:
  • Metrics checks:
    • The metrics are still exposed and present in Grafana after the change
    • The metrics names didn't change (Grafana dashboards and Prometheus alerts required no updates)
    • The metrics names did change (Grafana dashboards and Prometheus alerts required an update)
  • Logs checks:
    • The logs do not show any errors after the change
  • PodSecurityPolicy checks:
    • Any changed Pod is covered by Kubernetes Pod Security Standards
    • Any changed Pod is covered by Gatekeeper Pod Security Policies
    • The change does not cause any Pods to be blocked by Pod Security Standards or Policies
  • NetworkPolicy checks:
    • Any changed Pod is covered by Network Policies
    • The change does not cause any dropped packets in the NetworkPolicy Dashboard
  • Audit checks:
    • The change does not cause any unnecessary Kubernetes audit events
    • The change requires changes to Kubernetes audit policy
  • Falco checks:
    • The change does not cause any alerts to be generated by Falco
  • Bug checks:
    • The bug fix is covered by regression tests

@lucianvlad lucianvlad self-assigned this May 8, 2025
@lucianvlad lucianvlad changed the base branch from main to release-2.26.0 May 9, 2025 08:07
@lucianvlad lucianvlad force-pushed the staging-2.26.0-ck8s5 branch 2 times, most recently from fbb5b99 to 17dba5c on May 12, 2025 09:12
@lucianvlad
Contributor Author

The upgrade from v2.26.0-ck8s4 to v2.26.0-ck8s5 worked without issues and containerd was upgraded as expected:

at 05/13 13:12:50 ❯ k get nodes -o wide
NAME                                       STATUS   ROLES           AGE    VERSION   INTERNAL-IP       EXTERNAL-IP   OS-IMAGE           KERNEL-VERSION     CONTAINER-RUNTIME
dr-team-2-restore-sc-k8s-control-plane-1   Ready    control-plane   113m   v1.30.4   192.121.209.46    <none>        Ubuntu 24.04 LTS   6.8.0-39-generic   containerd://1.7.21
dr-team-2-restore-sc-k8s-control-plane-2   Ready    control-plane   113m   v1.30.4   192.121.208.151   <none>        Ubuntu 24.04 LTS   6.8.0-39-generic   containerd://1.7.21
dr-team-2-restore-sc-k8s-control-plane-3   Ready    control-plane   113m   v1.30.4   192.121.208.53    <none>        Ubuntu 24.04 LTS   6.8.0-39-generic   containerd://1.7.21
dr-team-2-restore-sc-k8s-node-worker-0     Ready    <none>          112m   v1.30.4   192.121.208.136   <none>        Ubuntu 24.04 LTS   6.8.0-39-generic   containerd://1.7.21
dr-team-2-restore-sc-k8s-node-worker-1     Ready    <none>          112m   v1.30.4   192.121.209.245   <none>        Ubuntu 24.04 LTS   6.8.0-39-generic   containerd://1.7.21
dr-team-2-restore-sc-k8s-node-worker-2     Ready    <none>          112m   v1.30.4   192.121.208.224   <none>        Ubuntu 24.04 LTS   6.8.0-39-generic   containerd://1.7.21
dr-team-2-restore-sc-k8s-node-worker-3     Ready    <none>          112m   v1.30.4   192.121.208.177   <none>        Ubuntu 24.04 LTS   6.8.0-39-generic   containerd://1.7.21
dr-team-2-restore-sc-k8s-node-worker-4     Ready    <none>          112m   v1.30.4   192.121.209.57    <none>        Ubuntu 24.04 LTS   6.8.0-39-generic   containerd://1.7.21


at 05/13 14:42:04 ❯ k get nodes -o wide
NAME                                       STATUS   ROLES           AGE     VERSION   INTERNAL-IP       EXTERNAL-IP   OS-IMAGE           KERNEL-VERSION     CONTAINER-RUNTIME
dr-team-2-restore-sc-k8s-control-plane-1   Ready    control-plane   3h22m   v1.30.4   192.121.209.46    <none>        Ubuntu 24.04 LTS   6.8.0-39-generic   containerd://1.7.27
dr-team-2-restore-sc-k8s-control-plane-2   Ready    control-plane   3h22m   v1.30.4   192.121.208.151   <none>        Ubuntu 24.04 LTS   6.8.0-39-generic   containerd://1.7.27
dr-team-2-restore-sc-k8s-control-plane-3   Ready    control-plane   3h22m   v1.30.4   192.121.208.53    <none>        Ubuntu 24.04 LTS   6.8.0-39-generic   containerd://1.7.27
dr-team-2-restore-sc-k8s-node-worker-0     Ready    <none>          3h21m   v1.30.4   192.121.208.136   <none>        Ubuntu 24.04 LTS   6.8.0-39-generic   containerd://1.7.27
dr-team-2-restore-sc-k8s-node-worker-1     Ready    <none>          3h21m   v1.30.4   192.121.209.245   <none>        Ubuntu 24.04 LTS   6.8.0-39-generic   containerd://1.7.27
dr-team-2-restore-sc-k8s-node-worker-2     Ready    <none>          3h21m   v1.30.4   192.121.208.224   <none>        Ubuntu 24.04 LTS   6.8.0-39-generic   containerd://1.7.27
dr-team-2-restore-sc-k8s-node-worker-3     Ready    <none>          3h21m   v1.30.4   192.121.208.177   <none>        Ubuntu 24.04 LTS   6.8.0-39-generic   containerd://1.7.27
dr-team-2-restore-sc-k8s-node-worker-4     Ready    <none>          3h21m   v1.30.4   192.121.209.57    <none>        Ubuntu 24.04 LTS   6.8.0-39-generic   containerd://1.7.27
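For a quicker check of just the runtime versions, a jsonpath query along these lines could be used instead of the full wide output (a sketch, using the same k alias for kubectl as above):

k get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.nodeInfo.containerRuntimeVersion}{"\n"}{end}'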

@lucianvlad lucianvlad marked this pull request as ready for review May 13, 2025 14:17
Contributor

@Xartos Xartos left a comment

The kubespray changes include some v2.27 changes to the upcloud terraform that shouldn't be part of this patch, since they require the migration guide for 2.27. Specifically, the ones that change the loadbalancer to a map.

@lucianvlad
Contributor Author

The kubespray changes include some v2.27 changes to the upcloud terraform that shouldn't be part of this patch, since they require the migration guide for 2.27. Specifically, the ones that change the loadbalancer to a map.

Those changes are on 2.26.0-ck8s4, but on the branch v2.26.0-ck8s4+upcloud+extra+lb.
For some reason they were not pushed to the release-2.26.0 branch and they are not part of any patches yet.
But they are already installed on a customer env.
This is why they are now included in this patch.
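As a side note, checking which remote branches actually contain those commits is straightforward; a sketch, with the commit SHA left as a placeholder:

git fetch --all
git branch -r --contains <commit-sha>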

@lucianvlad lucianvlad requested a review from Xartos May 14, 2025 07:05
@Xartos
Contributor

Xartos commented May 14, 2025

Those changes are on 2.26.0-ck8s4, but on the branch v2.26.0-ck8s4+upcloud+extra+lb. For some reason they were not pushed to the release-2.26.0 branch and they are not part of any patches yet. But they are already installed on a customer env. This is why they are now included in this patch.

I would assume that they are on a separate branch because it was needed for some customer and we didn't want to include it in the release, since it adds a new feature that requires migration. So I'd like us to do the same here: skip it in this release and then maybe add it as a separate branch v2.26.0-ck8s5+upcloud+extra+lb if it's needed.

@Ajarmar
Contributor

Ajarmar commented May 14, 2025

For some reason they were not pushed to the release-2.26.0 branch and they are not part of any patches yet.

Just to clarify something: they were already part of the release-2.26.0-ck8s branch, because the original PR that added the functionality merged into that branch, but they were not part of a release yet.

@Ajarmar
Contributor

Ajarmar commented May 14, 2025

@Xartos We talked a bit more about this and it seems kind of tricky to fix this without doing some destructive operations, because the original PR was merged into the release-2.26.0-ck8s branch instead of into master.
If we remove the commits from release-2.26.0-ck8s, then the PR will still claim that they were merged into release-2.26.0-ck8s even though they are no longer there (I assume; not sure how GitHub handles that), and it will also no longer be possible to trace the commits back to their PR (although doing this from master is already tricky).
Some alternative approaches would be to instead add another commit onto release-2.26.0-ck8s that reverts the changes, or to create a new branch to use for this patch release, but I think neither of those options is particularly clean, so maybe the destructive operations are preferable.
Thoughts?
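For the revert alternative mentioned above, the rough shape would be something like this (a sketch only; the commit range placeholders would need to be filled in with the actual upcloud/terraform commits):

git checkout release-2.26.0-ck8s
git revert --no-edit <first-upcloud-commit>^..<last-upcloud-commit>
git push origin release-2.26.0-ck8s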

@Xartos
Contributor

Xartos commented May 14, 2025

@Xartos We talked a bit more about this and it seems kind of tricky to fix this without doing some destructive operations, because the original PR was merged into the release-2.26.0-ck8s branch instead of into master. If we remove the commits from release-2.26.0-ck8s, then the PR will still claim that they were merged into release-2.26.0-ck8s even though they are no longer there (I assume; not sure how GitHub handles that), and it will also no longer be possible to trace the commits back to their PR (although doing this from master is already tricky). Some alternative approaches would be to instead add another commit onto release-2.26.0-ck8s that reverts the changes, or to create a new branch to use for this patch release, but I think neither of those options is particularly clean, so maybe the destructive operations are preferable. Thoughts?

Aah, right 🤔 Yep, that shouldn't have been merged there. I'm not sure what's best, but rewriting history is typically not great since then all the other tags need to be updated as well and so on. But as you say, the other options might not be so nice either. 🤔 I'll need to ponder this one a bit more.

@Ajarmar
Contributor

Ajarmar commented May 15, 2025

Aah, right 🤔 Yep, that shouldn't have been merged there. I'm not sure what's best, but rewriting history is typically not great since then all the other tags need to be updated as well and so on. But as you say, the other options might not be so nice either. 🤔 I'll need to ponder this one a bit more.

I don't think these commits have been part of a tag yet; they were added after the ck8s4 patch.

@Xartos
Contributor

Xartos commented May 15, 2025

The only issue is that everyone with a local copy of the branch will need to delete it before pulling again. However, I don't expect that many people have pulled that repository, so it's not a big issue imo. In that case the destructive option might be the best.
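In git terms, the destructive option would look roughly like this (a sketch; N stands for the number of upcloud/terraform commits to drop, and a force-push like this is exactly why local copies need to be re-fetched, as noted above):

git checkout release-2.26.0-ck8s
git reset --hard HEAD~N
git push --force-with-lease origin release-2.26.0-ck8s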

@lucianvlad lucianvlad force-pushed the staging-2.26.0-ck8s5 branch from 79cd617 to a411c44 on May 15, 2025 14:18
@lucianvlad lucianvlad force-pushed the staging-2.26.0-ck8s5 branch from a411c44 to 370a328 on May 15, 2025 14:19
@lucianvlad lucianvlad merged commit a7665e9 into release-2.26.0 May 15, 2025
2 checks passed
@lucianvlad lucianvlad deleted the staging-2.26.0-ck8s5 branch May 15, 2025 14:26