@@ -258,11 +258,10 @@ leveraging the library implementation.
When an `enforce` policy (or version) label is added or changed, the admission plugin will test each pod
in the namespace against the new policy. Violations are returned to the user as warnings. These
- checks have a timeout of XX seconds and a limit of YY pods, and will return a warning in the event
+ checks have a timeout of 1 second and a limit of 3,000 pods, and will return a warning in the event
that not every pod was checked. User exemptions are ignored by these checks, but runtime class
- exemptions still apply. Namespace exemptions are also ignored, but an additional warning will be
- returned when updating the policy on an exempt namespace. These checks only consider actual Pod
- resources, not [templated pods].
+ exemptions and namespace exemptions still apply when determining whether to check the new `enforce` policy
+ against existing pods in the namespace. These checks only consider actual Pod resources, not [templated pods].
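
A rough Go sketch of this flow is shown below. The `Pod` and `Evaluator` types, the `Violations` method, and `checkNamespacePods` are illustrative placeholders under assumed semantics, not the plugin's actual code; the real checks leverage the shared library implementation noted above.

```go
package podsecurity

import (
	"context"
	"fmt"
	"time"
)

// Pod and Evaluator are hypothetical stand-ins for the real types used by the
// admission plugin; they are not part of any published API.
type Pod struct {
	Name          string
	ControllerUID string // assumed: UID of the controlling ownerReference, "" if none
}

type Evaluator interface {
	// Violations returns human-readable violations of the given enforce
	// level/version for a pod, or an empty slice if the pod conforms.
	Violations(pod Pod, level, version string) []string
}

// checkNamespacePods tests existing pods against a newly enforced policy and
// returns warnings. It stops early once the pod limit or deadline is reached,
// adding a warning that not every pod was checked.
func checkNamespacePods(ctx context.Context, eval Evaluator, pods []Pod, level, version string, podLimit int, timeout time.Duration) []string {
	ctx, cancel := context.WithTimeout(ctx, timeout)
	defer cancel()

	var warnings []string
	for i, pod := range pods {
		if i >= podLimit || ctx.Err() != nil {
			warnings = append(warnings, fmt.Sprintf("checked %d of %d pods; not all pods were evaluated", i, len(pods)))
			break
		}
		for _, v := range eval.Violations(pod, level, version) {
			warnings = append(warnings, fmt.Sprintf("pod %q: %s", pod.Name, v))
		}
	}
	return warnings
}
```
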
These checks are also performed when making a dry-run request, which can be an effective way of
checking for breakages before updating a policy, for example:
@@ -271,23 +270,17 @@ checking for breakages before updating a policy, for example:
kubectl label --dry-run=server --overwrite ns --all pod-security.kubernetes.io/enforce=baseline
```
- <<[UNRESOLVED]>>
-
- _Non-blocking: can be decided on the implementing PR_
+ Evaluation of pods in a namespace is limited in the following dimensions, and a warning is emitted if not all pods are checked:
+ * max of 3,000 pods ([documented](https://github.com/kubernetes/community/blob/master/sig-scalability/configs-and-limits/thresholds.md)
+   scalability limit for per-namespace pod count)
+ * no more than 1 second or 50% of the remaining request deadline (whichever is less); see the sketch after this list.
+   * benchmarks show checking 3,000 pods takes ~0.01 second running with 100% of a 2.60GHz CPU
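
To illustrate the time bound above, one way to derive the effective budget, continuing the same hypothetical sketch (`evaluationTimeout` and `maxTimeout` are made-up names):

```go
// evaluationTimeout returns the budget for checking existing pods: the smaller
// of maxTimeout (the 1-second cap above) and half of the remaining request deadline.
func evaluationTimeout(ctx context.Context, maxTimeout time.Duration) time.Duration {
	timeout := maxTimeout
	if deadline, ok := ctx.Deadline(); ok {
		if half := time.Until(deadline) / 2; half < timeout {
			timeout = half
		}
	}
	return timeout
}
```
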
- - What should the timeout be for pod update warnings?
-   - Total is a parameter on the context (query parameter for webhooks). Cap should be
-     `min(timeout_param, hard_cap)`, where the `hard_cap` is a small number of seconds.
-   - Expect evaluation to be fast, so even 3k pods should come in well under the timeout.
- - What should the pod limit be set to?
-   - 3,000 is the
-     [documented](https://github.com/kubernetes/community/blob/master/sig-scalability/configs-and-limits/thresholds.md)
-     scalability limit for per-namespace pod count.
- - Warnings should be aggregated for large namespaces (soft cap number of warnings, hard cap number
-   of evaluations).
-
- <<[/UNRESOLVED]>>
+ If multiple pods have identical warnings, the warnings are aggregated.
+ If there are multiple pods with an ownerReference pointing to the same controller,
+ controlled pods after the first one are checked only if sufficient pod count and time remain.
+ This prioritizes checking unique pods over checking many identical replicas.
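
Continuing the hypothetical sketch, the prioritization and aggregation described above could look roughly like this (the `ControllerUID` field stands in for the UID of the controlling ownerReference and is an assumption, not a real API field):

```go
// prioritizePods reorders pods so that the first pod seen for each controller
// precedes any additional replicas of the same controller; pods without a
// controller are treated as unique.
func prioritizePods(pods []Pod) []Pod {
	seen := map[string]bool{}
	var unique, duplicates []Pod
	for _, pod := range pods {
		if pod.ControllerUID == "" || !seen[pod.ControllerUID] {
			seen[pod.ControllerUID] = true
			unique = append(unique, pod)
		} else {
			duplicates = append(duplicates, pod)
		}
	}
	return append(unique, duplicates...)
}

// aggregateWarnings collapses identical warnings emitted for multiple pods
// into a single warning annotated with the number of affected pods.
func aggregateWarnings(podWarnings map[string][]string) []string {
	counts := map[string]int{}
	for _, ws := range podWarnings {
		for _, w := range ws {
			counts[w]++
		}
	}
	out := make([]string, 0, len(counts))
	for w, n := range counts {
		out = append(out, fmt.Sprintf("%s (%d pods)", w, n))
	}
	return out
}
```
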
### Admission Configuration
@@ -1067,6 +1060,7 @@ As this feature progresses towards GA, we should think more about how it interac
provisionally accepted.
- 2021-08-04: v1.22 Alpha version released
- 2021-08-24: v1.23 Beta KEP updates
+ - 2021-11-03: v1.23 Beta version released
<!--
Major milestones in the lifecycle of a KEP should be tracked in this section.
Major milestones might include: