ClusterNetworkPolicy: implementation and clients #306
Conversation
Kubebuilder was originally used to generate the CRDs, but for us it offers only very limited scaffolding. Kubebuilder has strong opinions on where the APIs should be stored (./api instead of the currently used ./apis), and migrating to it should be a separate effort (if we ever want to). Signed-off-by: Nadia Pinaeva <[email protected]>
Force-pushed from d1841d4 to fdedcef (Compare)
just to confirm, the only diff from v1alpha1 to v1alpha2 are
I want to get this into kube-network-policies
Yes, that is a good summary, and thank you for the implementation!
My time for implementation is limited, so I prefer to spend it only once
From what I can tell, this follows what we discussed at KubeCon.
I did notice some tweaking that needs to be done in some of the comments but it's probably better to do that as follow-up PRs to make the update less unwieldy.
LGTM from me, so we can get the CRD structure change in. We do need to go through the godoc, and just as important is updating the documentation.
@@ -1,24 +0,0 @@
domain: policy.networking.k8s.io
What is this file used for?
it is used by kubebuilder, see the comment on 9a065f3
@@ -0,0 +1,436 @@
/*
Copyright 2020 The Kubernetes Authors.
nit: Might want to update these dates at some point.
// ClusterNetworkPolicySpec defines the desired state of ClusterNetworkPolicy.
type ClusterNetworkPolicySpec struct {
// Tier is used as a first layer of rule prioritization.
// ClusterNetworkPolicy has 2 tiers: admin and baseline.
Note: I think it might be hard to describe the mental model for the Tier in a comment attached to the field itself.
Suggest:
// Tier is used as the top-level grouping for network policy prioritization.
//
// Policies are evaluated in the following order:
//
// * AdminTier
// * NetworkPolicyTier
// * BaselineTier
// * DefaultTier
//
// The AdminTier takes precedence over all other policies. Policies
// defined in this tier are used to set cluster-wide security rules that cannot
// be overridden in the other tiers. Only ClusterNetworkPolicy can be
// created in the AdminTier. An example use case would be to "block all traffic
// to a given IP CIDR block" for all Pods in the cluster.
//
// NetworkPolicyTier is the tier for the namespaced v1.NetworkPolicy. These
// policies are intended for the application developer to describe the security
// policy associated with their deployments inside of their namespace. Policies
// in this tier are applied after AdminTier.
//
// BaselineTier is a cluster-wide policy override for the default allow
// ingress/egress for Pods not selected by a namespaced v1.NetworkPolicy.
// An example use case for BaselineTier would be to set "default deny" in
// any matching namespace that does not have a v1.NetworkPolicy defined.
//
// DefaultTier is the overall policy applied if there is no matching policy
// definition in any Tier. This is (currently) not configurable and always
// set to "allow all" on both ingress and egress directions, matching the
// default behavior of Kubernetes with no network policies enabled.
I am not sure if API doc is a good place for use cases; I left them out for now, but open for discussion.
// ClusterNetworkPolicyRuleActionPass indicates that matching traffic will
// jump to the next tier evaluation. That means that all the rules with lower
// priority at the same tier will be ignored, but evaluation will continue at the next tier.
// For example, if an admin tier CNP uses Pass action, NetworkPolicy evaluation will happen next.
nit: use the constants (AdminTier)
// For example, a Pass action in the AdminTier will skip the rest of the rules
// defined in the AdminTier and continue evaluation with policies in the
// NetworkPolicyTier.
So we want this to be readable in kubectl explain, and that means AdminTier will be printed like that, while I want to say "Admin tier", because Admin is the valid value for the tier field?
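The tier-evaluation semantics discussed in this thread, including the Pass action jumping to the next tier, can be sketched in plain Go. This is a hypothetical illustration of the documented evaluation order (Admin, NetworkPolicy, Baseline, then the default allow-all), not code from this PR; the Verdict type and evaluate function are invented for the example:

```go
package main

import "fmt"

// Verdict models the outcome of evaluating one tier against a connection.
// These names are hypothetical, not part of the API.
type Verdict int

const (
	NoMatch Verdict = iota // no rule in the tier matched
	Allow
	Deny
	Pass // skip remaining rules in this tier, continue at the next tier
)

// evaluate walks the tiers in the documented order. Allow and Deny are
// terminal; Pass and NoMatch both continue to the next tier. If no tier
// decides, the default Kubernetes behavior applies: all pods can
// communicate, i.e. Allow.
func evaluate(tiers []func() Verdict) Verdict {
	for _, tier := range tiers {
		switch v := tier(); v {
		case Allow, Deny:
			return v
		}
		// Pass or NoMatch: evaluation continues at the next tier.
	}
	return Allow // default allow-all, matching a cluster with no policies
}

func main() {
	admin := func() Verdict { return Pass }  // AdminTier rule uses Pass
	netpol := func() Verdict { return Deny } // a v1.NetworkPolicy denies
	fmt.Println(evaluate([]func() Verdict{admin, netpol}) == Deny) // true
}
```

Note how Pass in the AdminTier hands the decision to the NetworkPolicyTier, which matches the "Pass action in the AdminTier will ... continue evaluation with policies in the NetworkPolicyTier" wording suggested above.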
Force-pushed from fdedcef to 467e31c (Compare)
Force-pushed from f245770 to e3ff32d (Compare)
// If a given connection wasn't allowed or denied by any of the tiers,
// the default kubernetes policy is applied, which says that
// all pods can communicate with each other.
Tier Tier `json:"tier"`
I think we haven't discussed this explicitly in the NPEP, but I am making both tier and priority fields required, so no default values. The main difference is that BANP will now have to set some priority, while before there was only 1 BANP allowed
I had assumed tier would default to Admin...
that is up for discussion. My thinking is that, considering we are squashing 2 APIs into 1, it could be easy for people to upgrade to the wrong tier if we had any defaults (also keeping in mind that we wanted to have a very explicit API)
Also LGTM on the same basis as @bowei (and Mazdak was able to do a trial integration with the clients and it all seemed to work ok). I think we may want to rephrase some of the bigger godoc or trim it down to avoid confusion, but the shape of the structs is what I was expecting, and it'd be good to get this first tranche of work in so that we can iterate on top of it.
/lgtm
//
// Baseline tier is a cluster-wide policy that can be overridden by the
// v1.NetworkPolicy. It will only be evaluated for Pods not selected by a
// namespaced v1.NetworkPolicy.
This PR doesn't need to block on having absolutely perfect documentation, but at some point we should revisit this text. eg, you point out that Baseline is only evaluated when NetworkPolicy isn't, but you don't point out that NetworkPolicy is only evaluated when ANP doesn't match or Passes.
Let me try to make it more unified
(To be clear, I was just pointing out "we need to revisit this text before the first official v1alpha2 release" not "we need to revisit this text in this PR".)
//
// If a given connection wasn't allowed or denied by any of the tiers,
// the default kubernetes policy is applied, which says that
// all pods can communicate with each other.
Is this a change? I thought the text before tried to allow the possibility that a cluster might have a deny default rather than an allow default (even though there's no standard way to do that).
We first wanted to use the concept of "Default" tier, but that was problematic (see last meeting notes), so we agreed to just explain what happens when no policies match. Not sure how important it is in this context to talk about potential default-deny considering it doesn't exist?
I just meant that I feel like in the past we've allowed for the possibility that a network plugin might have some --pod-network-deny-by-default option, whereas the wording here seems to explicitly forbid that. (OTOH I'm not sure how to word the text here in a way that allows for that possibility without getting really hung up on the fact that it's purely hypothetical. So maybe just ignore this for now...)
// Priority is a value from 0 to 1000.
// Policies with lower priority values have higher precedence,
// and are checked before policies with higher priority values.
// All Admin tier rules have higher precedence than NetworkPolicy or
// Baseline tier rules.
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Priority is a value from 0 to 1000 indicating the precedence of the policy within its tier.
Policies with lower priority values have higher precedence, and are checked before policies with higher priority values in the same tier.
All Admin tier rules have higher precedence than NetworkPolicy or Baseline tier rules.
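The precedence rule in the suggested text (tier dominates, then lower priority value wins within a tier) can be illustrated with a small sorting sketch. The policy model and names below are hypothetical, invented for the example:

```go
package main

import (
	"fmt"
	"sort"
)

// policy is a hypothetical, simplified model for illustration only.
type policy struct {
	name     string
	tier     int   // lower = evaluated earlier (Admin < NetworkPolicy < Baseline)
	priority int32 // 0-1000; lower values have higher precedence within a tier
}

// order sorts policies into evaluation order: first by tier, then by
// priority within the tier. All Admin tier rules therefore come before
// any Baseline tier rule, regardless of their priority values.
func order(ps []policy) {
	sort.SliceStable(ps, func(i, j int) bool {
		if ps[i].tier != ps[j].tier {
			return ps[i].tier < ps[j].tier
		}
		return ps[i].priority < ps[j].priority
	})
}

func main() {
	ps := []policy{
		{"baseline-0", 2, 0},  // priority 0, but Baseline tier
		{"admin-500", 0, 500}, // Admin tier, low precedence within it
		{"admin-10", 0, 10},   // Admin tier, high precedence within it
	}
	order(ps)
	for _, p := range ps {
		fmt.Println(p.name) // admin-10, admin-500, baseline-0
	}
}
```

The stable sort also reflects the open question in this thread: two policies in the same tier with the same priority have no defined relative order.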
// and are checked before policies with higher priority values.
// All Admin tier rules have higher precedence than NetworkPolicy or
// Baseline tier rules.
// If two (or more) policies with the same priority could match a connection,
"policies in the same tier with the same priority"
// +kubebuilder:validation:MaxProperties=1
// +kubebuilder:validation:MinProperties=1
type ClusterNetworkPolicyPort struct {
// Port selects a port on a pod(s) based on number.
Port selects a destination port based on protocol and port number.
// +optional
PortNumber *Port `json:"portNumber,omitempty"`

// NamedPort selects a port on a pod(s) based on name.
NamedPort selects a destination port on a pod based on the ContainerPort name.
You can't use this in a rule with Nodes or Networks peers, because they do not have named ports.
NamedPort *string `json:"namedPort,omitempty"`

// PortRange selects a port range on a pod(s) based on provided
// start and end values.
PortRange selects a destination port range based on protocol and start and end port numbers.
//
// +optional
PortRange *PortRange `json:"portRange,omitempty"`
}
should probably order that as Port, PortRange, NamedPort rather than Port, NamedPort, PortRange...
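The MinProperties=1 / MaxProperties=1 markers on this struct express an exactly-one-of constraint. A plain-Go sketch of the same check, with simplified field types and the suggested Port, PortRange, NamedPort ordering (the validate helper is hypothetical, not API code):

```go
package main

import "fmt"

// clusterNetworkPolicyPort is a simplified stand-in for the API type;
// field types are reduced to built-ins for this illustration.
type clusterNetworkPolicyPort struct {
	PortNumber *int32
	PortRange  *[2]int32 // start, end
	NamedPort  *string
}

// validate mirrors MinProperties=1 / MaxProperties=1: exactly one of the
// three selectors must be set.
func validate(p clusterNetworkPolicyPort) error {
	set := 0
	if p.PortNumber != nil {
		set++
	}
	if p.PortRange != nil {
		set++
	}
	if p.NamedPort != nil {
		set++
	}
	if set != 1 {
		return fmt.Errorf("exactly one of portNumber, portRange, namedPort must be set, got %d", set)
	}
	return nil
}

func main() {
	n := int32(443)
	fmt.Println(validate(clusterNetworkPolicyPort{PortNumber: &n}) == nil) // true
	fmt.Println(validate(clusterNetworkPolicyPort{}) == nil)               // false
}
```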
// +kubebuilder:validation:Minimum=1
// +kubebuilder:validation:Maximum=65535
//
End int32 `json:"end"`
shouldn't we have CEL validation that Start < End?
that sounds reasonable
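A sketch of what the agreed-on Start < End check could look like. The kubebuilder XValidation (CEL) marker text is an assumption, not taken from this PR, and the validate helper just reimplements the same invariant in plain Go for illustration:

```go
package main

import "fmt"

// PortRange mirrors the API type. A possible CEL marker for the
// Start < End invariant (rule and message text are assumptions) is:
//
//	+kubebuilder:validation:XValidation:rule="self.start < self.end",message="start must be less than end"
type PortRange struct {
	Start int32
	End   int32
}

// validate reimplements the bounds and ordering checks in plain Go.
func validate(r PortRange) error {
	if r.Start < 1 || r.End > 65535 {
		return fmt.Errorf("ports must be within 1-65535")
	}
	if r.Start >= r.End {
		return fmt.Errorf("start (%d) must be less than end (%d)", r.Start, r.End)
	}
	return nil
}

func main() {
	fmt.Println(validate(PortRange{Start: 80, End: 443}) == nil) // true
	fmt.Println(validate(PortRange{Start: 443, End: 80}) == nil) // false
}
```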
Signed-off-by: Nadia Pinaeva <[email protected]>
Force-pushed from e3ff32d to 77282fa (Compare)
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: danwinship, npinaeva. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files.
Approvers can indicate their approval by writing
This PR introduces the API changes discussed in the previous iteration #289 as v1alpha2. Conformance tests for the new API should probably be done in a separate PR, but it would also be nice to test before this PR is merged to avoid any mistakes in the API.
The code generation script is, as always, inspired by gateway-api: https://github.com/kubernetes-sigs/gateway-api/blob/main/hack/update-clientset.sh
Tested locally with #307 and kubernetes-sigs/kube-network-policies#199, tests seem to pass!