CNTRLPLANE-180: check for user-based SCCs causing PSA violations #1881

Conversation
@ibihim: This pull request references CNTRLPLANE-180 which is a valid jira issue. Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the feature to target the "4.20.0" version, but no target version was set. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
The refactoring is necessary to split the condition into condition and classification, as the classification is quite tricky. If we can't determine the PSS, we shouldn't error out, but default to the global config.
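A minimal sketch of that fallback idea, assuming hypothetical helper names (`determinePSSLevel`, `classify`) and a hypothetical global default — this is not the PR's actual code, only the control flow it describes:

```go
package main

import "fmt"

// determinePSSLevel tries to derive the Pod Security Standard level for a
// namespace from its labels. An empty string signals "could not determine".
func determinePSSLevel(labels map[string]string) string {
	return labels["pod-security.kubernetes.io/enforce"]
}

// classify falls back to a global default instead of erroring out when the
// namespace itself carries no usable PSS information.
func classify(labels map[string]string, globalConfigLevel string) string {
	if level := determinePSSLevel(labels); level != "" {
		return level
	}
	return globalConfigLevel
}

func main() {
	labeled := map[string]string{"pod-security.kubernetes.io/enforce": "restricted"}
	fmt.Println(classify(labeled, "privileged"))           // label wins
	fmt.Println(classify(map[string]string{}, "privileged")) // falls back to global config
}
```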
Assigning myself for approval. /assign
```go
if runLevelZeroNamespaces.Has(ns.Name) {
	conditions.addViolatingRunLevelZero(ns)
	return nil
}

if strings.HasPrefix(ns.Name, "openshift") {
	conditions.addViolatingOpenShift(ns)
	return nil
}

if ns.Labels[labelSyncControlLabel] == "false" {
	conditions.addViolatingDisabledSyncer(ns)
	return nil
}
```
Is it possible that any of these can be true and the evaluations further below also apply? I.e., is it possible to have a user-created Pod running in a violating run-level-zero namespace, openshift namespace, or namespace where the syncer is disabled?
Would it be valuable to still continue classification even if one of these is true?
The issues above are of a different nature. They fall into the space of "someone consciously did something", while below we are in the space of exploring and figuring out the root cause.
Run-level-zero namespaces are mostly excluded a layer above, so this actually doesn't happen anymore. It occurred up until 4.14 or so.
If an openshift-* namespace is flagged, the owning team needs to evaluate what they did and set the PSA level for their namespace accordingly.
If a customer disables the syncer, we simply don't care and assume they take ownership.
> If an openshift-* namespace is flagged, the owning team needs to evaluate what they did and set the PSA level for their namespace accordingly.

Do we prevent users from creating workloads in these namespaces somehow? If not, I could imagine getting reports from a cluster where an openshift-* namespace has violations from a user-created workload, but because we don't root-cause these violations, we are oblivious to that fact.
If this is something we consider extremely rare and unlikely to affect our metrics in any significant way, I'm fine with this as-is, but it is something that I think is worth noting.
We don't prevent users from creating openshift-* namespaces. Some Red Hat departments even suggest creating openshift--prefixed namespaces when backing up etcd 😄
In most cases, violating openshift-* namespaces belong to teams that haven't yet adjusted to the PSA enforcement.
```go
const (
	PodSecurityCustomerType = "PodSecurityCustomerEvaluationConditionsDetected"
)
```
@dgrisonnet, there is no harm (except the need to adjust our evaluation of the metrics) if we "rename" it, right?
Yes, this is not what we would treat as a breaking change for a metric, as it only modifies a label value. But you are correct that evaluation rules will need to be updated accordingly.
Returning an error will put the namespace into the inconclusive condition.
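As an illustration of that behavior, here is a sketch of the control flow only — the function and condition names (`checkNamespace`, `classifyOrInconclusive`, the string buckets) are made up for the example and are not the controller's actual API:

```go
package main

import (
	"errors"
	"fmt"
)

// checkNamespace simulates a PSS evaluation that may fail.
func checkNamespace(name string) (violating bool, err error) {
	if name == "broken" {
		return false, errors.New("cannot determine PSS")
	}
	return name == "bad", nil
}

// classifyOrInconclusive maps an evaluation error to the "inconclusive"
// bucket instead of propagating it up as a controller error.
func classifyOrInconclusive(name string) string {
	violating, err := checkNamespace(name)
	if err != nil {
		return "inconclusive"
	}
	if violating {
		return "violating"
	}
	return "ok"
}

func main() {
	fmt.Println(classifyOrInconclusive("broken")) // error path -> inconclusive
	fmt.Println(classifyOrInconclusive("bad"))    // violating
	fmt.Println(classifyOrInconclusive("good"))   // ok
}
```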
/lgtm
/approve
```go
// isNamespaceViolating checks if a namespace is ready for Pod Security Admission enforcement.
// It returns true if the namespace is violating the Pod Security Admission policy, along with
// the enforce label it was tested against.
func (c *PodSecurityReadinessController) isNamespaceViolating(ctx context.Context, ns *corev1.Namespace) (bool, psapi.Level, error) {
```
I think decoupling the determineEnforceLabelForNamespace call from this function and taking the enforcement level as a parameter instead of a returned value would be cleaner. From what I can tell, you could also change determineEnforceLabelForNamespace to take a Namespace instead of an ApplyConfig
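The suggested decoupling might look roughly like this — a sketch under stated assumptions, not the actual PR code; the types, the default level, and the body of the violation check are hypothetical stand-ins:

```go
package main

import "fmt"

// namespace is a simplified stand-in for corev1.Namespace.
type namespace struct {
	name   string
	labels map[string]string
}

// determineEnforceLevel takes the namespace directly, as suggested in the
// review, rather than an apply configuration.
func determineEnforceLevel(ns namespace) string {
	if lvl, ok := ns.labels["pod-security.kubernetes.io/enforce"]; ok {
		return lvl
	}
	return "restricted" // hypothetical default for the sketch
}

// isNamespaceViolating now receives the enforcement level as a parameter and
// only answers the violation question. The check body is a placeholder; the
// real controller dry-runs workloads against the level.
func isNamespaceViolating(ns namespace, enforceLevel string) bool {
	return enforceLevel == "restricted" && ns.labels["runs-privileged"] == "true"
}

func main() {
	ns := namespace{name: "demo", labels: map[string]string{"runs-privileged": "true"}}
	level := determineEnforceLevel(ns)
	fmt.Println(isNamespaceViolating(ns, level))
}
```

The design point is that each function now has one job: one derives the level, the other evaluates against it, which also makes both easier to test in isolation.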
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: dgrisonnet, everettraven, ibihim. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Depends on openshift/origin#30159.
@ibihim: The following tests failed.
Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
What
This change helps us collect metrics for violations that are caused by user-based SCCs.
Why
When a workload uses an SCC that is granted based on the privileges of the user, the PSA label syncer doesn't honor this and potentially assigns the wrong PSS.
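A rough illustration of the failure mode, assuming a simplified model in which the syncer derives the PSS level only from SCCs granted to the workload's service account — all type and function names here are hypothetical, not the syncer's real API:

```go
package main

import "fmt"

// sccGrant records which subjects an SCC was granted to.
type sccGrant struct {
	serviceAccounts map[string]bool
	users           map[string]bool
}

// syncerVisibleLevel mimics a label syncer that only considers SCCs granted
// to the workload's service account. Grants made directly to a user are
// invisible to it, so a pod admitted via a user-based SCC can be assigned a
// level that doesn't match how the pod actually runs.
func syncerVisibleLevel(grant sccGrant, serviceAccount string) string {
	if grant.serviceAccounts[serviceAccount] {
		return "privileged"
	}
	return "restricted"
}

func main() {
	// The SCC was granted to the user "alice", not to any service account.
	userOnly := sccGrant{
		serviceAccounts: map[string]bool{},
		users:           map[string]bool{"alice": true},
	}
	// alice's pod was admitted via her user-based SCC, but the syncer,
	// seeing no service-account grant, picks the stricter level.
	fmt.Println(syncerVisibleLevel(userOnly, "default"))
}
```

Such pods then show up as PSA violations even though they were legitimately admitted, which is the gap this change tries to surface in the metrics.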
Dependency
openshift/origin#30159