---
layout: blog
title: "Kubernetes v1.34: Pod Level Resources Graduated to Beta"
date: 2025-xx-xx
draft: true
slug: kubernetes-v1-34-pod-level-resources
author: Dixita Narang (Google)
---

On behalf of the Kubernetes community, I am thrilled to announce that the Pod Level Resources feature has graduated to Beta in the Kubernetes v1.34 release and is enabled by default! This significant milestone introduces a new layer of flexibility for defining and managing resource allocation for your Pods. This flexibility stems from the ability to specify CPU and memory resources for the Pod as a whole. Pod-level resources can be combined with container-level specifications to express the exact resource requirements and limits your application needs.

## Pod-level specification for resources

Until recently, resource specifications that applied to Pods were primarily defined
at the individual container level. While effective, this approach sometimes required
duplicating or meticulously calculating resource needs across multiple containers
within a single Pod. As a beta feature, Kubernetes allows you to specify the CPU,
memory and hugepages resources at the Pod level. This means you can now define
resource requests and limits for an entire Pod, enabling easier resource sharing
without requiring granular, per-container management of these resources where
it's not needed.

## Why does Pod-level specification matter?

This feature enhances resource management in Kubernetes by offering *flexible resource management* at both the Pod and container levels.

* It provides a consolidated approach to resource declaration, reducing the need for
  meticulous, per-container management, especially for Pods with multiple
  containers.
* Pod-level resources enable containers within a pod to share unused resources
  amongst themselves, promoting efficient utilization within the pod. For example,
  this prevents sidecar containers from becoming performance bottlenecks. Previously,
  a sidecar (e.g., a logging agent or service mesh proxy) hitting its individual CPU
  limit could be throttled and slow down the entire Pod, even if the main
  application container had plenty of spare CPU. With pod-level resources, the
  sidecar and the main container can share the Pod's resource budget, ensuring smooth
  operation during traffic spikes: either the whole Pod is throttled, or all
  containers keep working.
* When both pod-level and container-level resources are specified, pod-level
  requests and limits take precedence. This gives you, and cluster administrators,
  a powerful way to enforce overall resource boundaries for your Pods.

  For scheduling, if a pod-level request is explicitly defined, the scheduler uses
  that specific value to find a suitable node, instead of the aggregated requests of
  the individual containers. At runtime, the pod-level limit acts as a hard ceiling
  for the combined resource usage of all containers. Crucially, this pod-level limit
  is the absolute enforcer; even if the sum of the individual container limits is
  higher, the total resource consumption can never exceed the pod-level limit.
* Pod-level resources are **prioritized** in influencing the Quality of Service (QoS) class of the Pod.
* For Pods running on Linux nodes, the Out-Of-Memory (OOM) score adjustment
  calculation considers both pod-level and container-level resource requests.
* Pod-level resources are **designed to be compatible with existing Kubernetes functionalities**, ensuring a smooth integration into your workflows.
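
To illustrate the QoS point above: because pod-level resources take priority in determining the Pod's QoS class, a Pod whose pod-level requests equal its pod-level limits should be classified as Guaranteed, even when its containers declare no resources of their own. Here is a hedged sketch (the Pod name is hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: guaranteed-qos-demo  # hypothetical name
spec:
  resources:
    requests:
      cpu: "1"
      memory: "200Mi"
    limits:
      cpu: "1"        # requests == limits at the Pod level,
      memory: "200Mi" # so the Pod should land in the Guaranteed QoS class
  containers:
  - name: app
    image: nginx      # no container-level resources needed
```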

## How to specify resources for an entire Pod

Using the `PodLevelResources` [feature
gate](/docs/reference/command-line-tools-reference/feature-gates/) requires
Kubernetes v1.34 or newer for all cluster components, including the control plane
and every node. This feature gate is in beta and enabled by default in v1.34.
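
Because the gate is enabled by default in v1.34, no action is normally needed. If it has been turned off in your cluster, it can be re-enabled via the usual feature-gate mechanisms; for example, on nodes, through the kubelet configuration file (a sketch using the standard `featureGates` field; control plane components take the equivalent `--feature-gates=PodLevelResources=true` flag):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  PodLevelResources: true
```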

### Example manifest

You can specify CPU, memory and hugepages resources directly in the Pod spec manifest at the `resources` field for the entire Pod.

Here’s an example demonstrating a Pod with both CPU and memory requests and limits
defined at the Pod level:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-resources-demo
  namespace: pod-resources-example
spec:
  # The 'resources' field at the Pod specification level defines the overall
  # resource budget for all containers within this Pod combined.
  resources: # Pod-level resources
    # 'limits' specifies the maximum amount of resources the Pod is allowed to use.
    # The sum of the limits of all containers in the Pod cannot exceed these values.
    limits:
      cpu: "1" # The entire Pod cannot use more than 1 CPU core.
      memory: "200Mi" # The entire Pod cannot use more than 200 MiB of memory.
    # 'requests' specifies the minimum amount of resources guaranteed to the Pod.
    # This value is used by the Kubernetes scheduler to find a node with enough capacity.
    requests:
      cpu: "1" # The Pod is guaranteed 1 CPU core when scheduled.
      memory: "100Mi" # The Pod is guaranteed 100 MiB of memory when scheduled.
  containers:
  - name: main-app-container
    image: nginx
    ...
    # This container has no resource requests or limits specified.
  - name: auxiliary-container
    image: fedora
    command: ["sleep", "inf"]
    ...
    # This container has no resource requests or limits specified.
```

In this example, the `pod-resources-demo` Pod as a whole requests 1 CPU and 100 MiB of memory, and is limited to 1 CPU and 200 MiB of memory. The containers within will operate under these overall Pod-level constraints, as explained in the next section.
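
If you want to check what the API server admitted, the pod-level values live in the `resources` field of the Pod spec; one way to inspect them is with a JSONPath query (assuming a cluster with the feature enabled):

```shell
kubectl get pod pod-resources-demo -n pod-resources-example \
  -o jsonpath='{.spec.resources}'
```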

### Interaction with container-level resource requests or limits

When both pod-level and container-level resources are specified, **pod-level requests and limits take precedence**. This means the node allocates resources based on the pod-level specifications.

Consider a Pod with two containers where pod-level CPU and memory requests and
limits are defined, and only one container has its own explicit resource
definitions:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-resources-demo
  namespace: pod-resources-example
spec:
  resources:
    limits:
      cpu: "1"
      memory: "200Mi"
    requests:
      cpu: "1"
      memory: "100Mi"
  containers:
  - name: main-app-container
    image: nginx
    resources:
      requests:
        cpu: "0.5"
        memory: "50Mi"
  - name: auxiliary-container
    image: fedora
    command: ["sleep", "inf"]
    # This container has no resource requests or limits specified.
```

* **Pod-level limits**: The pod-level limits (`cpu: "1"`, `memory: "200Mi"`) establish an absolute boundary for the entire Pod. The combined resource usage of all its containers is enforced at this ceiling and cannot be surpassed.

* **Resource sharing and bursting**: Containers can dynamically borrow any unused capacity, allowing them to burst as needed, so long as the Pod's aggregate usage stays within the overall limit.

* **Pod-level requests**: The pod-level requests (`cpu: "1"`, `memory: "100Mi"`) serve as the foundational resource guarantee for the entire Pod. This value informs the scheduler's placement decision and represents the minimum resources the Pod can rely on during node-level contention.

* **Container-level requests**: Container-level requests create a priority system within
  the Pod's guaranteed budget. Because `main-app-container` has an explicit request
  (`cpu: "0.5"`, `memory: "50Mi"`), it is given precedence for its share of resources
  under resource pressure over `auxiliary-container`, which has no
  such explicit claim.
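
The interaction described above can be sketched in a single manifest. In this hedged example (names are hypothetical), each container carries a 1-CPU limit, so the container-level limits sum to 2 CPUs, yet the pod-level limit still caps the Pod's combined usage at 1 CPU:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-limit-cap-demo  # hypothetical name
spec:
  resources:
    limits:
      cpu: "1"              # absolute ceiling for the whole Pod
  containers:
  - name: main-app
    image: nginx
    resources:
      limits:
        cpu: "1"
  - name: sidecar
    image: fedora
    command: ["sleep", "inf"]
    resources:
      limits:
        cpu: "1"            # limits sum to 2 CPUs across containers, but
                            # combined usage is still enforced at 1 CPU
```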

## Limitations

* First of all, [in-place
  resize](/docs/concepts/workloads/pods/#pod-update-and-replacement) of pod-level
  resources is **not supported** in Kubernetes v1.34 (or earlier). Attempting to
  modify the _pod-level_ resource limits or requests on a running Pod results in an
  error: the resize is rejected. The v1.34 implementation of pod-level resources
  focuses on allowing the initial declaration of an overall resource envelope that
  applies to the **entire Pod**. That is distinct from in-place pod resize, which
  (despite what the name might suggest) allows you to make dynamic adjustments to
  _container_ resource requests and limits within a *running* Pod, potentially
  without a container restart. In-place resizing is also not yet a
  stable feature; it graduated to Beta in the v1.33 release.

* Only CPU, memory, and hugepages resources can be specified at the pod level.

* Pod-level resources are not supported for Windows pods. If the Pod specification
  explicitly targets Windows (e.g., by setting `spec.os.name: "windows"`), the API
  server will reject the Pod during the validation step. If the Pod is not explicitly
  marked for Windows but is scheduled to a Windows node (e.g., via a `nodeSelector`),
  the kubelet on that Windows node will reject the Pod during its admission process.

* The Topology Manager, Memory Manager and CPU Manager do not
  align pods and containers based on pod-level resources, as these resource managers
  don't currently support pod-level resources.
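
As noted above, hugepages are one of the three resources supported at the pod level. Here is a hedged fragment (assuming, as at the container level, that hugepages requests must equal their limits, and that the node has pre-allocated 2Mi hugepages):

```yaml
# Pod spec fragment with pod-level hugepages
spec:
  resources:
    requests:
      memory: "200Mi"
      hugepages-2Mi: "128Mi"
    limits:
      memory: "200Mi"
      hugepages-2Mi: "128Mi"  # assumption: requests must equal limits,
                              # mirroring the container-level hugepages rule
```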

## Getting started and providing feedback

Ready to explore the Pod Level Resources feature? You'll need a Kubernetes cluster running version 1.34 or later. Remember to enable the `PodLevelResources` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) across your control plane and all nodes.

As this feature moves through Beta, your feedback is invaluable. Please report any issues or share your experiences via the standard Kubernetes communication channels:

* Slack: [#sig-node](https://kubernetes.slack.com/messages/sig-node)
* [Mailing list](https://groups.google.com/forum/#!forum/kubernetes-sig-node)
* [Open community issues/PRs](https://github.com/kubernetes/community/labels/sig%2Fnode)