---
layout: blog
title: "Kubernetes v1.34: DRA Consumable Capacity"
draft: true # will be changed to date: YYYY-MM-DD before publication
slug: kubernetes-v1-34-dra-consumable-capacity # optional
author: >
  Sunyanan Choochotkaew (IBM),
  Lionel Jouin (Ericsson Software Technology),
  John Belamaric (Google)
---

Dynamic Resource Allocation (DRA) is a Kubernetes API for managing scarce resources across Pods and containers.
It enables flexible resource requests, going beyond simply allocating *N* number of devices to support more granular usage scenarios.
With DRA, users can request specific types of devices based on their attributes, define custom configurations tailored to their workloads, and even share the same resource among multiple containers or Pods.

In this blog, we focus on the device sharing feature and dive into a new capability introduced in Kubernetes 1.34: `DRAConsumableCapacity`, which extends DRA to support finer-grained device sharing.

## Background: device sharing via ResourceClaims

From the beginning, DRA introduced the ability for multiple Pods to share a device by referencing the same ResourceClaim.
This design decouples resource allocation from specific hardware, allowing for more dynamic and reusable provisioning of devices.

In Kubernetes 1.33, the new support for _partitionable devices_ allowed resource drivers to advertise slices of a device that are available, rather than exposing the entire device as an all-or-nothing resource.
This enabled Kubernetes to model shareable hardware more accurately.

But there was still a missing piece: DRA didn't yet support scenarios
where the device driver manages fine-grained, dynamic portions of a device resource — like network bandwidth — based on user demand,
or where those resources are shared independently of ResourceClaims, which are restricted by their spec and namespace.

That's where _consumable capacity_ for DRA comes in.

## Benefits of DRA consumable capacity support

Here's a taste of what you get in a cluster with the `DRAConsumableCapacity`
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/) enabled.

### Device sharing across multiple ResourceClaims or DeviceRequests

Resource drivers can now support sharing the same device — or even a slice of a device — across multiple ResourceClaims or across multiple DeviceRequests.

This means that Pods from different namespaces can simultaneously share the same device,
if permitted and supported by the specific DRA driver.

### Device resource allocation

Kubernetes extends the allocation algorithm in the scheduler to support allocating a portion of a device's resources, as defined in the `capacity` field.
The scheduler ensures that the total allocated capacity across all consumers never exceeds the device's total capacity, even when shared across multiple ResourceClaims or DeviceRequests.
This is very similar to the way the scheduler allows Pods and containers to share allocatable resources on Nodes;
in this case, it allows them to share allocatable (consumable) resources on Devices.

This feature expands support for scenarios where the device driver is able to manage resources **within** a device and on a per-process basis — for example,
allocating a specific amount of memory (e.g., 8 GiB) from a virtual GPU,
or setting bandwidth limits on virtual network interfaces allocated to specific Pods. This aims to provide safe and efficient resource sharing.
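The invariant the scheduler maintains can be pictured with a small, self-contained sketch. This is an illustrative model only; the `deviceCapacity` type and plain integer units are assumptions, not the scheduler's actual code:

```go
package main

import "fmt"

// deviceCapacity tracks total and already-consumed capacity for one
// shareable device, in arbitrary integer units (e.g. MiB).
// Illustrative model, not the scheduler's real data structure.
type deviceCapacity struct {
	total    int64
	consumed int64
}

// tryAllocate reserves `amount` units if doing so keeps total
// consumption within the device's capacity, and reports whether it fit.
func (d *deviceCapacity) tryAllocate(amount int64) bool {
	if d.consumed+amount > d.total {
		return false
	}
	d.consumed += amount
	return true
}

func main() {
	gpu := &deviceCapacity{total: 40 << 10} // 40 GiB expressed in MiB
	fmt.Println(gpu.tryAllocate(10 << 10))  // first claim: 10 GiB fits
	fmt.Println(gpu.tryAllocate(30 << 10))  // second claim: exactly fills the device
	fmt.Println(gpu.tryAllocate(1))         // third claim: rejected, capacity exhausted
}
```

The point is simply that shares from different ResourceClaims all draw from one shared budget per device, just as Pods draw from a Node's allocatable resources.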

### DistinctAttribute constraint

This feature also introduces a new constraint: `DistinctAttribute`, the complement of the existing `MatchAttribute` constraint.

The primary goal of `DistinctAttribute` is to prevent the same underlying device from being allocated multiple times within a single ResourceClaim, which could happen since we are allocating shares (or subsets) of devices.
This constraint ensures that each allocation refers to a distinct resource, even if they belong to the same device class.

It is useful for use cases such as allocating network devices connected to different subnets to expand coverage or provide redundancy across failure domains.
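A minimal sketch of the property this constraint enforces (illustrative only; `distinctAttribute` and the attribute-map device model are assumptions, not scheduler code):

```go
package main

import "fmt"

// distinctAttribute reports whether every candidate device carries a
// different value for the named attribute, which is the property that
// the DistinctAttribute constraint requires across the allocations in
// one ResourceClaim. Devices are modeled as simple attribute maps.
func distinctAttribute(devices []map[string]string, attr string) bool {
	seen := map[string]bool{}
	for _, d := range devices {
		v := d[attr]
		if seen[v] {
			return false
		}
		seen[v] = true
	}
	return true
}

func main() {
	nics := []map[string]string{
		{"subnet": "10.0.1.0/24"},
		{"subnet": "10.0.2.0/24"},
	}
	fmt.Println(distinctAttribute(nics, "subnet")) // true: each NIC is on its own subnet
}
```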

## How to use consumable capacity?

`DRAConsumableCapacity` is introduced as an alpha feature in Kubernetes 1.34. The [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) `DRAConsumableCapacity` must be enabled in the kubelet, kube-apiserver, kube-scheduler, and kube-controller-manager:

```bash
--feature-gates=...,DRAConsumableCapacity=true
```

### As a DRA driver developer

As a DRA driver developer writing in Go, you can make a device within a ResourceSlice allocatable to multiple ResourceClaims (or `devices.requests`) by setting `AllowMultipleAllocations` to `true`:

```go
Device{
	...
	AllowMultipleAllocations: ptr.To(true),
	...
}
```

Additionally, you can define a policy restricting how each device's capacity may be consumed by each `DeviceRequest`, by setting the `RequestPolicy` field in `DeviceCapacity`.
The example below shows how to define a policy that requires a GPU with 40 GiB of memory to allocate at least 5 GiB per request, with each allocation in multiples of 5 GiB.

```go
DeviceCapacity{
	Value: resource.MustParse("40Gi"),
	RequestPolicy: &CapacityRequestPolicy{
		Default: ptr.To(resource.MustParse("5Gi")),
		ValidRange: &CapacityRequestPolicyRange{
			Min:  ptr.To(resource.MustParse("5Gi")),
			Step: ptr.To(resource.MustParse("5Gi")),
		},
	},
}
```

This will be published to the ResourceSlice, as partially shown below:

```yaml
apiVersion: resource.k8s.io/v1
kind: ResourceSlice
...
spec:
  devices:
  - name: gpu0
    allowMultipleAllocations: true
    capacity:
      memory:
        value: 40Gi
        requestPolicy:
          default: 5Gi
          validRange:
            min: 5Gi
            step: 5Gi
```
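Assuming requests are rounded up to the nearest value that satisfies the policy's `validRange` (an assumption for illustration; `normalizeRequest` is a hypothetical helper, not part of any Kubernetes API), the effect of such a policy can be sketched as:

```go
package main

import "fmt"

// normalizeRequest rounds a requested amount up to the policy minimum,
// then up to the next step boundary above the minimum, and rejects any
// result that would exceed the device's total capacity.
// Illustrative only: the real API works on resource.Quantity values,
// and min/step are assumed to be positive here.
func normalizeRequest(requested, min, step, total int64) (int64, bool) {
	amount := requested
	if amount < min {
		amount = min
	}
	if over := (amount - min) % step; over != 0 {
		amount += step - over
	}
	if amount > total {
		return 0, false
	}
	return amount, true
}

func main() {
	// Policy from the example above: 40 GiB total, min 5 GiB, step 5 GiB.
	const gib = int64(1) << 30
	got, ok := normalizeRequest(7*gib, 5*gib, 5*gib, 40*gib)
	fmt.Println(got/gib, ok) // a 7 GiB request is served as 10 GiB
}
```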

An allocated device with a specified portion of consumed capacity will have a `ShareID` field set in the allocation status:

```go
claim.Status.Allocation.Devices.Results[i].ShareID
```

This `ShareID` allows the driver to distinguish between different allocations that refer to the **same device or same statically-partitioned slice** but come from **different `ResourceClaim` requests**.
It acts as a unique identifier for each shared slice, enabling the driver to manage and enforce resource limits independently across multiple consumers.
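As a rough sketch of how a driver might use this (the `shareKey` type is a hypothetical bookkeeping structure, not part of the DRA API), per-consumer state can be keyed by device name plus `ShareID`:

```go
package main

import "fmt"

// shareKey identifies one allocation of a shared device. Pairing the
// device name with the allocation's ShareID keeps state for different
// consumers of the same device separate. Hypothetical driver-side
// bookkeeping for illustration; not a real Kubernetes type.
type shareKey struct {
	device  string
	shareID string
}

func main() {
	limits := map[shareKey]string{}
	// Two claims land on the same device; the ShareID tells them apart.
	limits[shareKey{device: "gpu0", shareID: "share-a"}] = "10Gi"
	limits[shareKey{device: "gpu0", shareID: "share-b"}] = "5Gi"
	fmt.Println(len(limits)) // 2: one entry per consumer of gpu0
}
```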

### As a consumer

As a consumer (or user), you can request a portion of a device's resources with a ResourceClaim like this:

```yaml
apiVersion: resource.k8s.io/v1
kind: ResourceClaim
...
spec:
  devices:
    requests: # for devices
    - name: req0
      exactly:
        deviceClassName: resource.example.com
        capacity:
          requests: # for resources which must be provided by those devices
            memory: 10Gi
```

This configuration ensures that the requested device can provide at least 10 GiB of `memory`.

Note that **any** `resource.example.com` device with at least 10 GiB of memory can be allocated.
If a device that does not support multiple allocations is chosen, the allocation consumes the entire device.
To filter only devices that support multiple allocations, you can define a selector like this:

```yaml
selectors:
- cel:
    expression: |-
      device.allowMultipleAllocations == true
```

## Integration with DRA device status

In device sharing, general device information is provided through the ResourceSlice.
However, some details are set dynamically after allocation.
These can be conveyed using the [`.status.devices`](/docs/concepts/scheduling-eviction/dynamic-resource-allocation/#resourceclaim-device-status) field of a ResourceClaim.
That field is only published in clusters where the `DRAResourceClaimDeviceStatus`
feature gate is enabled.

If you do have _device status_ support available, a driver can expose additional device-specific information beyond the `ShareID`.
One particularly useful use case is for virtual networks, where a driver can include the assigned IP address(es) in the status.
This is valuable for both network service operations and troubleshooting.

You can find more information by watching our recording at: [KubeCon Japan 2025 - Reimagining Cloud Native Networks: The Critical Role of DRA](https://sched.co/1x71v).

## What can you do next?

* **Check out the [CNI DRA Driver project](https://github.com/kubernetes-sigs/cni-dra-driver)** for an example of DRA integration in Kubernetes networking. Try integrating with network resources like `macvlan`, `ipvlan`, or smart NICs.
* Start enabling the `DRAConsumableCapacity` feature gate and experimenting with virtualized or partitionable devices. Specify your workloads with *consumable capacity* (for example: fractional bandwidth or memory).
* Let us know your feedback:
  * ✅ What worked well?
  * ⚠️ What didn't?

If you encounter issues to fix or opportunities to enhance the feature,
please [file a new issue](https://github.com/kubernetes/enhancements/issues)
and reference [KEP-5075](https://github.com/kubernetes/enhancements/issues/5075) there,
or reach out via [Slack (#wg-device-management)](https://kubernetes.slack.com/archives/C0409NGC1TK).

## Conclusion

Consumable capacity support enhances the device sharing capability of DRA by allowing effective device sharing across namespaces, across claims, and tailored to each Pod's actual needs.
It also empowers drivers to enforce capacity limits, improves scheduling accuracy, and unlocks new use cases like bandwidth-aware networking and multi-tenant device sharing.

Try it out, experiment with consumable resources, and help shape the future of dynamic resource allocation in Kubernetes!

## Further Reading

* [DRA in the Kubernetes documentation](/docs/concepts/scheduling-eviction/dynamic-resource-allocation/)
* [KEP for DRA Partitionable Devices](https://github.com/kubernetes/enhancements/tree/master/keps/sig-scheduling/4815-dra-partitionable-devices)
* [KEP for DRA Device Status](https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/4817-resource-claim-device-status)
* [KEP for DRA Consumable Capacity](https://github.com/kubernetes/enhancements/tree/master/keps/sig-scheduling/5075-dra-consumable-capacity)
* [Kubernetes 1.34 Release Notes](https://www.kubernetes.dev/resources/release/#kubernetes-v134)
