OCPBUGS-62714: Temporary policy.json for PIS rpm-ostree rebasing #5345
base: main
Conversation
Skipping CI for Draft Pull Request.
Force-pushed from f398cb2 to 5d18375
Force-pushed from af4bdb7 to e15a4ae
/jira refresh
@pablintino: No Jira issue is referenced in the title of this pull request. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
@pablintino: This pull request references Jira Issue OCPBUGS-62714, which is invalid:
Comment The bug has been updated to refer to the pull request using the external bug tracker. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
Overall structure makes sense to me. It looks like we opted for Colin's suggestion in https://issues.redhat.com/browse/OCPBUGS-62714, which I think is a safe path.
Some questions/suggestions inline:
```go
// PodmanStorageConfig contains storage configuration from Podman.
type PodmanStorageConfig struct {
	GraphDriverName string `json:"graphDriverName"`
```
Out of curiosity, are these names and fields copied over from podman somewhere, or just what we need to construct the full pull spec?
The values come from machine-config-operator/templates/common/_base/files/container-storage.yaml (lines 13 to 20 at 18a0dc6):
```toml
# Default storage driver, must be set for proper operation.
driver = "overlay"
# Temporary storage location
runroot = "/run/containers/storage"
# Primary Read/Write location of container storage
graphroot = "/var/lib/containers/storage"
```
I found podman the more convenient way to get them instead of reading the file, as podman takes into consideration user overrides placed in ~/.config/containers/storage.conf.
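To make that concrete, here is a minimal sketch of reading the effective storage settings via `podman info --format json`; the struct mirrors the snippet above, while the `podmanInfo` wrapper and the helper name are illustrative assumptions, not the exact code in this PR:

```go
// Minimal sketch: read the effective storage settings via podman rather
// than parsing storage.conf by hand.
package podman

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// PodmanStorageConfig mirrors the subset of `podman info` output we need.
type PodmanStorageConfig struct {
	GraphDriverName string `json:"graphDriverName"`
	GraphRoot       string `json:"graphRoot"`
	RunRoot         string `json:"runRoot"`
}

type podmanInfo struct {
	Store PodmanStorageConfig `json:"store"`
}

// GetStorageConfig shells out to podman, which merges the system defaults
// with user overrides (e.g. ~/.config/containers/storage.conf) before
// reporting the effective configuration.
func GetStorageConfig() (*PodmanStorageConfig, error) {
	out, err := exec.Command("podman", "info", "--format", "json").Output()
	if err != nil {
		return nil, fmt.Errorf("running podman info: %w", err)
	}
	var info podmanInfo
	if err := json.Unmarshal(out, &info); err != nil {
		return nil, fmt.Errorf("decoding podman info output: %w", err)
	}
	return &info.Store, nil
}
```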
```go
	Id          string   `json:"Id"`
	Digest      string   `json:"Digest"`
	RepoDigests []string `json:"RepoDigests"`
	RepoDigest  string   `json:"-"` // Filled with matching digest from RepoDigests
}
```
I was a bit confused about this field at first, but I guess the intent here is that Id, Digest, and RepoDigests get unmarshaled from the podman images output, and instead of having a new field we pass around, we have a pre-filtered RepoDigest field we populate after the fact?
Interesting pattern that I'm not sure we employ elsewhere; it should work, so I'm not against it, just wanted to make sure I'm understanding that right.
You got it right. I did it to avoid passing a tuple everywhere, but if it's not clear enough I can always follow that other approach.
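For readers following along, a sketch of the pattern under discussion: the tagged fields are unmarshaled from podman's JSON output and RepoDigest is derived afterwards so callers don't have to carry an (info, digest) tuple. The `resolveRepoDigest` method and its matching logic are assumptions for illustration, not necessarily what the PR does:

```go
// Illustrative sketch of the pre-filtered-field pattern.
package podman

import "strings"

type PodmanImageInfo struct {
	Id          string   `json:"Id"`
	Digest      string   `json:"Digest"`
	RepoDigests []string `json:"RepoDigests"`
	RepoDigest  string   `json:"-"` // Filled with matching digest from RepoDigests
}

// resolveRepoDigest picks the RepoDigests entry whose repository matches
// the requested pull spec and stores it in RepoDigest.
func (i *PodmanImageInfo) resolveRepoDigest(pullSpec string) {
	// Entries look like "quay.io/org/image@sha256:...": strip any tag or
	// digest from the pull spec, then compare repositories.
	repo := pullSpec
	if idx := strings.Index(repo, "@"); idx != -1 {
		repo = repo[:idx]
	} else if idx := strings.LastIndex(repo, ":"); idx > strings.LastIndex(repo, "/") {
		repo = repo[:idx]
	}
	for _, rd := range i.RepoDigests {
		if strings.HasPrefix(rd, repo+"@") {
			i.RepoDigest = rd
			return
		}
	}
}
```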
```go
_, containerStoragePoliciesPresent := policy.Transports[imagePolicyTransportContainerStorage]
if (reflect.DeepEqual(policy.Default[0], signature.PolicyRequirements{signature.NewPRInsecureAcceptAnything()}) && !containerStoragePoliciesPresent) {
```
Curious, can there be other fields in policy.Default[0] if we have insecureAcceptAnything?
In theory it shouldn't; based on the reverse engineering I've done, there are many checks in the containers repo that assume the insecure policy is a single-element list.
To be 100% sure I added the length condition, because the requirements are all AND-aggregated; thus, if there are more elements I cannot guarantee the policy will accept the pull, and I prefer to patch the policy and try with our temporary entry.
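A sketch of that decision as a standalone predicate, using the containers/image/v5 signature types quoted in the diff; the helper name `needsTemporaryPolicy` is hypothetical, and the length check encodes the single-element assumption described above:

```go
package policyutil

import (
	"reflect"

	"github.com/containers/image/v5/signature"
)

const imagePolicyTransportContainerStorage = "containers-storage"

// needsTemporaryPolicy reports whether a temporary policy.json override is
// required before rpm-ostree can pull from local container storage.
func needsTemporaryPolicy(policy *signature.Policy) bool {
	_, containerStoragePoliciesPresent := policy.Transports[imagePolicyTransportContainerStorage]
	// Only a policy whose default is exactly one insecureAcceptAnything
	// requirement is known to accept the pull.
	defaultIsInsecure := len(policy.Default) == 1 &&
		reflect.DeepEqual(policy.Default, signature.PolicyRequirements{signature.NewPRInsecureAcceptAnything()})
	// If the default accepts anything and nothing more specific targets
	// containers-storage, local pulls already succeed; in every other case
	// a temporary override is written.
	return !(defaultIsInsecure && !containerStoragePoliciesPresent)
}
```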
```go
	// the local image
	isOsImagePresent := false
	var podmanImageInfo *PodmanImageInfo
	if isPisConfigured {
```
(non-blocking) Do we plan on removing the PIS requirement, or keep this functionality for the PIS use case only?
I'm not 100% sure about this one, and I don't have enough context on why this was done only for PIS. I'll ask the team.
BTW, shouldn't we remove the FeatureGate checks now that it's GAed?
```go
	assert.True(t, os.IsNotExist(err))
}

func TestIsImagePresent(t *testing.T) {
```
Do you plan to unit test the podman info commands?
I avoided doing so because we have a ton of calls to exec all over the MCD that are not tested, and I thought the ones the new podman file performs are just "a few more".
I've reworked the code to land an abstraction layer that helps testing with mocked commands. I'll create a Jira story in the Tech Debt epic to increase its usage in the MCD.
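For illustration, one common shape such an abstraction layer can take is a small command-runner interface with a real os/exec implementation and a canned fake for tests; the names below are illustrative, not the ones landed in the PR:

```go
package podman

import "os/exec"

// CommandRunner abstracts external process execution so callers can be
// unit-tested without a real podman binary on the test host.
type CommandRunner interface {
	Output(name string, args ...string) ([]byte, error)
}

// execRunner is the production implementation backed by os/exec.
type execRunner struct{}

func (execRunner) Output(name string, args ...string) ([]byte, error) {
	return exec.Command(name, args...).Output()
}

// fakeRunner returns canned output, letting tests exercise the JSON
// parsing and error paths deterministically.
type fakeRunner struct {
	out []byte
	err error
}

func (f fakeRunner) Output(string, ...string) ([]byte, error) {
	return f.out, f.err
}
```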
Force-pushed from 3373ccd to 9dc9d5e
The updated workflow looks sane to me, thanks for addressing the comments!
I didn't mean to increase the scope with my original test comment, so apologies for that; hopefully there are no conflicts on the backport.
Speaking of which, I noticed that you made this against the 4.19 branch. Should we not do this on main and backport?
One last suggestion inline as well:
Force-pushed from 9dc9d5e to 05a2848
@pablintino: This pull request references Jira Issue OCPBUGS-62714, which is invalid:
Comment In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
/jira refresh
@pablintino: This pull request references Jira Issue OCPBUGS-62714, which is valid. The bug has been moved to the POST state. 3 validation(s) were run on this bug.
Requesting review from QA contact: In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
/retest-required
@pablintino: This pull request references Jira Issue OCPBUGS-62714, which is valid. 3 validation(s) were run on this bug.
Requesting review from QA contact: In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
/retest-required
@pablintino: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests.
Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
/lgtm
I think all my concerns are addressed. Will let verification process and CI ensure we don't break anything.
/payload 4.21 nightly blocking
Just for additional safety
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: pablintino, yuqi-zhang. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
Closes: OCPBUGS-62714
- What I did
This commit addresses OCPBUGS-62714 by implementing a temporary container image policy override for rpm-ostree operations when pulling images from local container storage.
The Problem:
When using PinnedImageSets with restrictive container image policies, rpm-ostree fails to pull OS images from local storage during updates. The issue occurs because the restrictive default in the cluster's policy.json also applies to the containers-storage transport, and its signature requirements cannot be satisfied for images that only exist in local storage.
The Solution:
Create a temporary, non-invasive policy override mechanism: write a temporary policy.json that allows pulls from the containers-storage transport, hand it to rpm-ostree for the duration of the rebase, and remove it once the operation completes.
This ensures rpm-ostree can always pull from local storage when using PinnedImageSets without permanently modifying the system's security policies.
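As a sketch of that flow, assuming the override is written to a temporary file whose path is handed to the rebase step (how the path is actually plumbed into rpm-ostree is left abstract here, and the standalone policy content is illustrative rather than the PR's exact payload), cleanup can be guaranteed with a defer so the system policy.json is never modified:

```go
package daemon

import (
	"fmt"
	"os"
)

// rebaseWithTemporaryPolicy writes a standalone policy.json that allows
// pulls from local container storage, invokes the supplied rebase callback
// with its path, and always removes the file afterwards.
func rebaseWithTemporaryPolicy(runRebase func(policyPath string) error) error {
	// Policy that accepts pulls, with an explicit entry for the
	// containers-storage transport (empty scope means all scopes).
	tempPolicyJSON := []byte(`{
  "default": [{"type": "insecureAcceptAnything"}],
  "transports": {
    "containers-storage": {
      "": [{"type": "insecureAcceptAnything"}]
    }
  }
}`)

	f, err := os.CreateTemp("", "mcd-policy-*.json")
	if err != nil {
		return fmt.Errorf("creating temporary policy: %w", err)
	}
	// The override never outlives the rebase, so the system policy.json
	// stays untouched.
	defer os.Remove(f.Name())

	if _, err := f.Write(tempPolicyJSON); err != nil {
		f.Close()
		return fmt.Errorf("writing temporary policy: %w", err)
	}
	if err := f.Close(); err != nil {
		return err
	}
	return runRebase(f.Name())
}
```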
- How to verify it
- Description for the changelog
Fix rpm-ostree rebase failures from local container storage when using PinnedImageSets with restrictive image policies