
Commit c8c474d

Merge pull request #31224 from nate-double-u/merged-main-dev-1.24
Merged main into dev 1.24
2 parents e0c5205 + f8847c0 commit c8c474d

File tree

203 files changed: +62785 / -8419 lines changed


CONTRIBUTING.md

Lines changed: 1 addition & 1 deletion
@@ -34,6 +34,6 @@ Note that code issues should be filed against the main kubernetes repository, wh

  ### Submitting Documentation Pull Requests

- If you're fixing an issue in the existing documentation, you should submit a PR against the master branch. Follow [these instructions to create a documentation pull request against the kubernetes.io repository](http://kubernetes.io/docs/home/contribute/create-pull-request/).
+ If you're fixing an issue in the existing documentation, you should submit a PR against the main branch. Follow [these instructions to create a documentation pull request against the kubernetes.io repository](http://kubernetes.io/docs/home/contribute/create-pull-request/).

  For more information, see [contributing to Kubernetes docs](https://kubernetes.io/docs/contribute/).

OWNERS_ALIASES

Lines changed: 5 additions & 3 deletions
@@ -2,10 +2,12 @@ aliases:
  sig-docs-blog-owners: # Approvers for blog content
  - onlydole
  - mrbobbytables
+ - sftim
  sig-docs-blog-reviewers: # Reviewers for blog content
  - mrbobbytables
  - onlydole
  - sftim
+ - nate-double-u
  sig-docs-de-owners: # Admins for German content
  - bene2k1
  - mkorbi
@@ -125,6 +127,7 @@ aliases:
  - ClaudiaJKang
  - gochist
  - ianychoi
+ - jihoon-seo
  - seokho-son
  - ysyukr
  sig-docs-ko-reviews: # PR reviews for Korean content
@@ -242,19 +245,18 @@ aliases:
  - saschagrunert # SIG Chair
  release-engineering-approvers:
  - cpanato # Release Manager
- - hasheddan # subproject owner / Release Manager
+ - palnabarun # Release Manager
  - puerco # Release Manager
  - saschagrunert # subproject owner / Release Manager
  - justaugustus # subproject owner / Release Manager
+ - Verolop # Release Manager
  - xmudrii # Release Manager
  release-engineering-reviewers:
  - ameukam # Release Manager Associate
  - jimangel # Release Manager Associate
  - markyjackson-taulia # Release Manager Associate
  - mkorbi # Release Manager Associate
- - palnabarun # Release Manager Associate
  - onlydole # Release Manager Associate
  - sethmccombs # Release Manager Associate
  - thejoycekung # Release Manager Associate
- - verolop # Release Manager Associate
  - wilsonehusin # Release Manager Associate

content/en/blog/_posts/2018-10-01-health-checking-grpc.md

Lines changed: 4 additions & 2 deletions
@@ -4,10 +4,12 @@ title: 'Health checking gRPC servers on Kubernetes'
  date: 2018-10-01
  ---

- _Built-in gRPC probes were introduced in Kubernetes 1.23. To learn more, see [Configure Liveness, Readiness and Startup Probes](/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-a-grpc-liveness-probe)._
-
  **Author**: [Ahmet Alp Balkan](https://twitter.com/ahmetb) (Google)

+ **Update (December 2021):** _Kubernetes now has built-in gRPC health probes starting in v1.23.
+ To learn more, see [Configure Liveness, Readiness and Startup Probes](/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-a-grpc-liveness-probe).
+ This article was originally written about an external tool to achieve the same task._
+
  [gRPC](https://grpc.io) is on its way to becoming the lingua franca for
  communication between cloud-native microservices. If you are deploying gRPC
  applications to Kubernetes today, you may be wondering about the best way to

content/en/blog/_posts/2020-12-02-dockershim-faq.md

Lines changed: 2 additions & 0 deletions
@@ -12,6 +12,8 @@ on the deprecation of Docker as a container runtime for Kubernetes kubelets, and
  what that means, check out the blog post
  [Don't Panic: Kubernetes and Docker](/blog/2020/12/02/dont-panic-kubernetes-and-docker/).

+ Also, you can read [check whether Dockershim deprecation affects you](/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-deprecation-affects-you/) to check whether it does.
+
  ### Why is dockershim being deprecated?

  Maintaining dockershim has become a heavy burden on the Kubernetes maintainers.

content/en/blog/_posts/2021-12-09-pod-security-admission-beta.md

Lines changed: 1 addition & 1 deletion
@@ -1,6 +1,6 @@
  ---
  layout: blog
- title: 'Pod Security Graduates to Beta'
+ title: 'Kubernetes 1.23: Pod Security Graduates to Beta'
  date: 2021-12-09
  slug: pod-security-admission-beta
  ---

content/en/blog/_posts/2021-11-18-prevent-persistentvolume-leaks-when-deleting-out-of-order.md renamed to content/en/blog/_posts/2021-12-15-prevent-persistentvolume-leaks-when-deleting-out-of-order.md

Lines changed: 1 addition & 1 deletion
@@ -1,6 +1,6 @@
  ---
  layout: blog
- title: "Kubernetes 1.23 Prevent PersistentVolume leaks when deleting out of order"
+ title: "Kubernetes 1.23: Prevent PersistentVolume leaks when deleting out of order"
  date: 2021-12-15T10:00:00-08:00
  slug: kubernetes-1-23-prevent-persistentvolume-leaks-when-deleting-out-of-order
  ---
Lines changed: 103 additions & 0 deletions
@@ -0,0 +1,103 @@ (new file; all lines added)

---
layout: blog
title: 'Kubernetes 1.23: StatefulSet PVC Auto-Deletion (alpha)'
date: 2021-12-16
slug: kubernetes-1-23-statefulset-pvc-auto-deletion
---

**Author:** Matthew Cary (Google)

Kubernetes v1.23 introduced a new, alpha-level policy for [StatefulSets](docs/concepts/workloads/controllers/statefulset/) that controls the lifetime of [PersistentVolumeClaims](docs/concepts/storage/persistent-volumes/) (PVCs) generated from the StatefulSet spec template for cases when they should be deleted automatically when the StatefulSet is deleted or pods in the StatefulSet are scaled down.

## What problem does this solve?

A StatefulSet spec can include Pod and PVC templates. When a replica is first created, the Kubernetes control plane creates a PVC for that replica if one does not already exist. The behavior before Kubernetes v1.23 was that the control plane never cleaned up the PVCs created for StatefulSets - this was left up to the cluster administrator, or to some add-on automation that you’d have to find, check suitability, and deploy. The common pattern for managing PVCs, either manually or through tools such as Helm, is that the PVCs are tracked by the tool that manages them, with explicit lifecycle. Workflows that use StatefulSets must determine on their own what PVCs are created by a StatefulSet and what their lifecycle should be.

Before this new feature, when a StatefulSet-managed replica disappears, either because the StatefulSet is reducing its replica count, or because its StatefulSet is deleted, the PVC and its backing volume remain and must be manually deleted. While this behavior is appropriate when the data is critical, in many cases the persistent data in these PVCs is either temporary, or can be reconstructed from another source. In those cases, PVCs and their backing volumes remaining after their StatefulSet or replicas have been deleted are not necessary, incur cost, and require manual cleanup.

## The new StatefulSet PVC retention policy

If you enable the alpha feature, a StatefulSet spec includes a PersistentVolumeClaim retention policy. This is used to control if and when PVCs created from a StatefulSet’s `volumeClaimTemplate` are deleted. This first iteration of the retention policy contains two situations where PVCs may be deleted.

The first situation is when the StatefulSet resource is deleted (which implies that all replicas are also deleted). This is controlled by the `whenDeleted` policy. The second situation, controlled by `whenScaled`, is when the StatefulSet is scaled down, which removes some but not all of the replicas in a StatefulSet. In both cases the policy can either be `Retain`, where the corresponding PVCs are not touched, or `Delete`, which means that PVCs are deleted. The deletion is done with a normal [object deletion](/docs/concepts/architecture/garbage-collection/), so that, for example, all retention policies for the underlying PV are respected.

This policy forms a matrix with four cases. I’ll walk through and give an example for each one.

* **`whenDeleted` and `whenScaled` are both `Retain`.** This matches the existing behavior for StatefulSets, where no PVCs are deleted. This is also the default retention policy. It’s appropriate to use when data on StatefulSet volumes may be irreplaceable and should only be deleted manually.

* **`whenDeleted` is `Delete` and `whenScaled` is `Retain`.** In this case, PVCs are deleted only when the entire StatefulSet is deleted. If the StatefulSet is scaled down, PVCs are not touched, meaning they are available to be reattached if a scale-up occurs with any data from the previous replica. This might be used for a temporary StatefulSet, such as in a CI instance or ETL pipeline, where the data on the StatefulSet is needed only during the lifetime of the StatefulSet, but while the task is running the data is not easily reconstructible. Any retained state is needed for any replicas that scale down and then up.

* **`whenDeleted` and `whenScaled` are both `Delete`.** PVCs are deleted immediately when their replica is no longer needed. Note this does not include when a Pod is deleted and a new version rescheduled, for example when a node is drained and Pods need to migrate elsewhere. The PVC is deleted only when the replica is no longer needed, as signified by a scale-down or StatefulSet deletion. This use case is for when data does not need to live beyond the life of its replica. Perhaps the data is easily reconstructable and the cost savings of deleting unused PVCs is more important than quick scale-up, or perhaps when a new replica is created, any data from a previous replica is not usable and must be reconstructed anyway.

* **`whenDeleted` is `Retain` and `whenScaled` is `Delete`.** This is similar to the previous case, when there is little benefit to keeping PVCs for fast reuse during scale-up. An example of a situation where you might use this is an Elasticsearch cluster. Typically you would scale that workload up and down to match demand, whilst ensuring a minimum number of replicas (for example: 3). When scaling down, data is migrated away from removed replicas and there is no benefit to retaining those PVCs. However, it can be useful to bring the entire Elasticsearch cluster down temporarily for maintenance. If you need to take the Elasticsearch system offline, you can do this by temporarily deleting the StatefulSet, and then bringing the Elasticsearch cluster back by recreating the StatefulSet. The PVCs holding the Elasticsearch data will still exist and the new replicas will automatically use them.

Visit the [documentation](docs/concepts/workloads/controllers/statefulset/#persistentvolumeclaim-policies) to see all the details.
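
For illustration, a minimal sketch of how the policy might appear in a StatefulSet spec, assuming the alpha `persistentVolumeClaimRetentionPolicy` field; the names and image are hypothetical placeholders. The combination shown deletes PVCs when the StatefulSet is deleted but retains them on scale-down:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: demo                 # hypothetical name
spec:
  serviceName: demo
  replicas: 3
  persistentVolumeClaimRetentionPolicy:
    whenDeleted: Delete      # remove PVCs when the StatefulSet is deleted
    whenScaled: Retain       # keep PVCs when replicas are scaled down
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
        - name: app
          image: nginx       # placeholder image
          volumeMounts:
            - name: data
              mountPath: /data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
```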

## What's next?

Enable the feature and try it out! Enable the `StatefulSetAutoDeletePVC` feature gate on a cluster, then create a StatefulSet using the new policy. Test it out and tell us what you think!
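
As a sketch of what enabling the gate could look like on a cluster where you control the control plane flags (the exact mechanics depend on how the cluster is deployed, for example static Pod manifests on kubeadm clusters), the gate needs to be on for both the API server and the controller manager:

```shell
# Hypothetical flag excerpts; add the gate to the existing component flags.
kube-apiserver --feature-gates=StatefulSetAutoDeletePVC=true ...
kube-controller-manager --feature-gates=StatefulSetAutoDeletePVC=true ...
```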

Under the hood, the new policy is implemented by setting owner references on the PVCs so that they are removed through normal Kubernetes garbage collection. I'm very curious to see if this owner reference mechanism works well in practice. For example, we realized there is no mechanism in Kubernetes for knowing who set a reference, so it’s possible that the StatefulSet controller may fight with custom controllers that set their own references. Fortunately, maintaining the existing retention behavior does not involve any new owner references, so default behavior will be compatible.

Please tag any issues you report with the label `sig/apps` and assign them to Matthew Cary ([@mattcary](https://github.com/mattcary) at GitHub).

Enjoy!
Lines changed: 148 additions & 0 deletions
@@ -0,0 +1,148 @@ (new file; all lines added)

---
layout: blog
title: "What's new in Security Profiles Operator v0.4.0"
date: 2021-12-17
slug: security-profiles-operator
---

**Authors:** Jakub Hrozek, Juan Antonio Osorio, Paulo Gomes, Sascha Grunert

---

The [Security Profiles Operator (SPO)](https://sigs.k8s.io/security-profiles-operator) is an out-of-tree Kubernetes enhancement to make the management of [seccomp](https://en.wikipedia.org/wiki/Seccomp), [SELinux](https://en.wikipedia.org/wiki/Security-Enhanced_Linux) and [AppArmor](https://en.wikipedia.org/wiki/AppArmor) profiles easier and more convenient. We're happy to announce that we recently [released v0.4.0](https://github.com/kubernetes-sigs/security-profiles-operator/releases/tag/v0.4.0) of the operator, which contains a ton of new features, fixes and usability improvements.

## What's new

It has been a while since the last [v0.3.0](https://github.com/kubernetes-sigs/security-profiles-operator/releases/tag/v0.3.0) release of the operator. We added new features, fine-tuned existing ones and reworked our documentation in 290 commits over the past half year.

One of the highlights is that we're now able to record seccomp and SELinux profiles using the operator's [log enricher](https://github.com/kubernetes-sigs/security-profiles-operator/blob/71b3915/installation-usage.md#log-enricher-based-recording). This allows us to reduce the dependencies required for profile recording to having [auditd](https://linux.die.net/man/8/auditd) or [syslog](https://en.wikipedia.org/wiki/Syslog) (as fallback) running on the nodes. All profile recordings in the operator work in the same way by using the `ProfileRecording` CRD as well as their corresponding [label selectors](/docs/concepts/overview/working-with-objects/labels). The log enricher itself can also be used to gather meaningful insights about seccomp and SELinux messages of a node. Check out the [official documentation](https://github.com/kubernetes-sigs/security-profiles-operator/blob/71b3915/installation-usage.md#using-the-log-enricher) to learn more about it.
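
As a rough sketch of what a log-enricher-based recording could look like (the API group and field names here follow the repository's examples and are assumptions that may change between releases), you label the workload you want to record and point a `ProfileRecording` at it:

```yaml
apiVersion: security-profiles-operator.x-k8s.io/v1alpha1
kind: ProfileRecording
metadata:
  name: my-recording          # hypothetical name
spec:
  kind: SeccompProfile        # or SelinuxProfile
  recorder: logs              # use the log enricher as the recording backend
  podSelector:
    matchLabels:
      app: my-workload        # pods with this label get recorded
```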

### seccomp related improvements

Besides the log enricher based recording, we now offer an alternative to record seccomp profiles by utilizing [ebpf](https://ebpf.io). This optional feature can be enabled by setting `enableBpfRecorder` to `true`. This results in running a dedicated container, which ships a custom bpf module on every node to collect the syscalls for containers. It even supports older kernel versions which do not expose the [BPF Type Format (BTF)](https://www.kernel.org/doc/html/latest/bpf/btf.html) by default, as well as the `amd64` and `arm64` architectures. Check out [our documentation](https://github.com/kubernetes-sigs/security-profiles-operator/blob/71b3915/installation-usage.md#ebpf-based-recording) to see it in action. By the way, we now add the seccomp profile architecture of the recorder host to the recorded profile as well.
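
A sketch of turning the BPF recorder on, assuming the toggle lives on the operator's default `spod` configuration object in the `security-profiles-operator` namespace:

```shell
kubectl -n security-profiles-operator patch spod spod --type=merge \
  -p '{"spec":{"enableBpfRecorder":true}}'
```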

We also graduated the seccomp profile API from `v1alpha1` to `v1beta1`. This aligns with our overall goal to stabilize the CRD APIs over time. The only thing which has changed is that the seccomp profile type `Architectures` now points to `[]Arch` instead of `[]*Arch`.

### SELinux enhancements

Managing SELinux policies (an equivalent to using `semodule` that you would normally call on a single server) is not done by SPO itself, but by another container called selinuxd, to provide better isolation. This release switched to using selinuxd containers from a personal repository to images located under [our team's quay.io repository](https://quay.io/organization/security-profiles-operator). The selinuxd repository has moved as well to [the containers GitHub organization](https://github.com/containers/selinuxd).

Please note that selinuxd links dynamically to `libsemanage` and mounts the SELinux directories from the nodes, which means that the selinuxd container must be running the same distribution as the cluster nodes. SPO defaults to using CentOS-8 based containers, but we also build Fedora based ones. If you are using another distribution and would like us to add support for it, please file [an issue against selinuxd](https://github.com/containers/selinuxd/issues).

#### Profile Recording

This release adds support for recording of SELinux profiles. The recording itself is managed via an instance of a `ProfileRecording` Custom Resource, as seen in an [example](https://github.com/kubernetes-sigs/security-profiles-operator/blob/main/examples/profilerecording-selinux-logs.yaml) in our repository. From the user's point of view it works pretty much the same as recording of seccomp profiles.

Under the hood, to know what the workload is doing, SPO installs a special permissive policy called [selinuxrecording](https://github.com/kubernetes-sigs/security-profiles-operator/blob/main/deploy/base/profiles/selinuxrecording.cil) on startup which allows everything and logs all AVCs to `audit.log`. These AVC messages are scraped by the log enricher component and when the recorded workload exits, the policy is created.

#### `SELinuxProfile` CRD graduation

A `v1alpha2` version of the `SelinuxProfile` object has been introduced. This removes the raw Common Intermediate Language (CIL) from the object itself and instead adds a simple policy language to ease the writing and parsing experience.

Alongside, a `RawSelinuxProfile` object was also introduced. This contains a wrapped and raw representation of the policy. It was intended for folks to be able to take their existing policies into use as soon as possible. However, no validations are done here.

### AppArmor support

This version introduces the initial support for AppArmor, allowing users to load and unload AppArmor profiles into cluster nodes by using the new [AppArmorProfile](https://github.com/kubernetes-sigs/security-profiles-operator/blob/main/deploy/base/crds/apparmorprofile.yaml) CRD.

To enable AppArmor support, use the [enableAppArmor feature gate](https://github.com/kubernetes-sigs/security-profiles-operator/blob/main/examples/config.yaml#L10) switch of your SPO configuration. Then use our [apparmor example](https://github.com/kubernetes-sigs/security-profiles-operator/blob/main/examples/apparmorprofile.yaml) to deploy your first profile across your cluster.

### Metrics

The operator now exposes metrics, which are described in detail in our new [metrics documentation](https://github.com/kubernetes-sigs/security-profiles-operator/blob/71b3915/installation-usage.md#using-metrics). We decided to secure the metrics retrieval process by using [kube-rbac-proxy](https://github.com/brancz/kube-rbac-proxy), while we ship an additional `spo-metrics-client` cluster role (and binding) to retrieve the metrics from within the cluster. If you're using [OpenShift](https://www.redhat.com/en/technologies/cloud-computing/openshift), then we provide an out-of-the-box working [`ServiceMonitor`](https://github.com/kubernetes-sigs/security-profiles-operator/blob/71b3915/installation-usage.md#automatic-servicemonitor-deployment) to access the metrics.

#### Debuggability and robustness

Besides all those new features, we decided to restructure parts of the Security Profiles Operator internally to make it easier to debug and more robust. For example, we now maintain an internal [gRPC](https://grpc.io) API to communicate within the operator across different features. We also improved the performance of the log enricher, which now caches results for faster retrieval of the log data. The operator can be put into a more [verbose log mode](https://github.com/kubernetes-sigs/security-profiles-operator/blob/71b3915/installation-usage.md#set-logging-verbosity) by setting `verbosity` from `0` to `1`.
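
Following the same pattern as the other toggles, and again assuming the default `spod` configuration object, this could look like:

```shell
kubectl -n security-profiles-operator patch spod spod --type=merge \
  -p '{"spec":{"verbosity":1}}'
```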

We also print the used `libseccomp` and `libbpf` versions on startup, as well as expose CPU and memory profiling endpoints for each container via the [`enableProfiling` option](https://github.com/kubernetes-sigs/security-profiles-operator/blob/71b3915/installation-usage.md#enable-cpu-and-memory-profiling). Dedicated liveness and startup probes inside of the operator daemon will now additionally improve the life cycle of the operator.

## Conclusion

Thank you for reading this update. We're looking forward to future enhancements of the operator and would love to get your feedback about the latest release. Feel free to reach out to us via the Kubernetes Slack channel [#security-profiles-operator](https://kubernetes.slack.com/messages/security-profiles-operator) with any feedback or questions.
