@@ -57,6 +57,38 @@ If you have automation that makes it feasible, replace the node with another usi
configuration, or reinstall it using automation.
{{< /caution >}}
+ ## Cgroup v2
+
+ Cgroup v2 is the next version of the cgroup Linux API. Unlike cgroup v1, which uses a separate
+ hierarchy for each controller, cgroup v2 has a single unified hierarchy.
+
+ The new version offers several improvements over cgroup v1, including:
+
+ - a cleaner and easier-to-use API
+ - safe sub-tree delegation to containers
+ - newer features such as Pressure Stall Information
+
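As an illustration of one of the features above, Pressure Stall Information (PSI) is exposed system-wide under `/proc/pressure` on kernels that support it (and per-cgroup via files such as `cpu.pressure` under cgroup v2). A minimal check, assuming a reasonably recent kernel; the exact availability depends on how the kernel was built:

```shell
# Print system-wide CPU pressure if the kernel exposes PSI,
# otherwise report that it is unavailable.
if [ -r /proc/pressure/cpu ]; then
  cat /proc/pressure/cpu
else
  echo "PSI not available on this kernel"
fi
```

The `some` and `full` lines in the output report the share of time tasks were stalled waiting for CPU.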
+ Although the kernel supports a hybrid configuration in which some controllers are managed by
+ cgroup v1 and others by cgroup v2, Kubernetes requires that all controllers be managed by the
+ same cgroup version.
+
+ If systemd doesn't use cgroup v2 by default, you can configure the system to use it by adding
+ `systemd.unified_cgroup_hierarchy=1` to the kernel command line.
+
+ ```shell
+ sudo dnf install -y grubby && \
+   sudo grubby \
+   --update-kernel=ALL \
+   --args="systemd.unified_cgroup_hierarchy=1"
+ ```
+
+ To apply the configuration, reboot the node.
+
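After the node is back up, you can verify which cgroup version it is using by checking the filesystem type mounted at `/sys/fs/cgroup/` (an informal check, not something the page above requires):

```shell
# "cgroup2fs" means the node is on cgroup v2 (unified hierarchy);
# "tmpfs" indicates cgroup v1.
stat -fc %T /sys/fs/cgroup/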
+ There should not be any noticeable difference in the user experience when switching to cgroup v2, unless
+ users are accessing the cgroup file system directly, either on the node or from within the containers.
+
+ To use cgroup v2, the CRI runtime must support it as well.
+
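For example, with containerd as the CRI runtime, cgroup v2 is typically used together with the systemd cgroup driver. A sketch of the relevant containerd configuration fragment (assuming containerd with the `runc` runtime; the plugin path below is containerd's, not something defined by this page):

```toml
# /etc/containerd/config.toml (fragment)
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
```

Restart containerd after changing its configuration for the setting to take effect.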
## Migrating to the `systemd` driver in kubeadm managed clusters
Follow this [Migration guide](/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver/)