diff --git a/keps/sig-node/5283-dra-resourceslice-status-device-health/README.md b/keps/sig-node/5283-dra-resourceslice-status-device-health/README.md new file mode 100644 index 00000000000..e139400843d --- /dev/null +++ b/keps/sig-node/5283-dra-resourceslice-status-device-health/README.md @@ -0,0 +1,1050 @@ + +# KEP-5283: DRA: ResourceSlice Status for Device Health Tracking + + + + + + +- [Release Signoff Checklist](#release-signoff-checklist) +- [Summary](#summary) +- [Motivation](#motivation) + - [Goals](#goals) + - [Non-Goals](#non-goals) +- [Proposal](#proposal) + - [User Stories](#user-stories) + - [Visibility](#visibility) + - [No Side-Effects By Default](#no-side-effects-by-default) + - [Enabling Automated Remediation](#enabling-automated-remediation) + - [Notes/Constraints/Caveats (Optional)](#notesconstraintscaveats-optional) + - [Risks and Mitigations](#risks-and-mitigations) +- [Design Details](#design-details) + - [Test Plan](#test-plan) + - [Prerequisite testing updates](#prerequisite-testing-updates) + - [Unit tests](#unit-tests) + - [Integration tests](#integration-tests) + - [e2e tests](#e2e-tests) + - [Graduation Criteria](#graduation-criteria) + - [Upgrade / Downgrade Strategy](#upgrade--downgrade-strategy) + - [Version Skew Strategy](#version-skew-strategy) +- [Production Readiness Review Questionnaire](#production-readiness-review-questionnaire) + - [Feature Enablement and Rollback](#feature-enablement-and-rollback) + - [Rollout, Upgrade and Rollback Planning](#rollout-upgrade-and-rollback-planning) + - [Monitoring Requirements](#monitoring-requirements) + - [Dependencies](#dependencies) + - [Scalability](#scalability) + - [Troubleshooting](#troubleshooting) +- [Implementation History](#implementation-history) +- [Drawbacks](#drawbacks) +- [Alternatives](#alternatives) + - [Device Conditions](#device-conditions) + - [Standardized Attributes](#standardized-attributes) + - [Standardized Events](#standardized-events) + - [Vendor-Provided Metrics](#vendor-provided-metrics) +- [Infrastructure Needed (Optional)](#infrastructure-needed-optional) + + +## Release Signoff Checklist + + + +Items marked with (R) are required *prior to targeting to a milestone / release*. 
+ +- [ ] (R) Enhancement issue in release milestone, which links to KEP dir in [kubernetes/enhancements] (not the initial KEP PR) +- [ ] (R) KEP approvers have approved the KEP status as `implementable` +- [ ] (R) Design details are appropriately documented +- [ ] (R) Test plan is in place, giving consideration to SIG Architecture and SIG Testing input (including test refactors) + - [ ] e2e Tests for all Beta API Operations (endpoints) + - [ ] (R) Ensure GA e2e tests meet requirements for [Conformance Tests](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/conformance-tests.md) + - [ ] (R) Minimum Two Week Window for GA e2e tests to prove flake free +- [ ] (R) Graduation criteria is in place + - [ ] (R) [all GA Endpoints](https://github.com/kubernetes/community/pull/1806) must be hit by [Conformance Tests](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/conformance-tests.md) +- [ ] (R) Production readiness review completed +- [ ] (R) Production readiness review approved +- [ ] "Implementation History" section is up-to-date for milestone +- [ ] User-facing documentation has been created in [kubernetes/website], for publication to [kubernetes.io] +- [ ] Supporting documentation—e.g., additional design documents, links to mailing list discussions/SIG meetings, relevant PRs/issues, release notes + + + +[kubernetes.io]: https://kubernetes.io/ +[kubernetes/enhancements]: https://git.k8s.io/enhancements +[kubernetes/kubernetes]: https://git.k8s.io/kubernetes +[kubernetes/website]: https://git.k8s.io/website + +## Summary + + + +With Dynamic Resource Allocation (DRA), devices that can be allocated to Pods +are advertised in ResourceSlices. Device entries in a ResourceSlice describe +properties like identifiers, capacity, and firmware or kernel driver versions. + +Sometimes devices fail and an automated agent or cluster administrator must take +some action to restore the device to a healthy state. This KEP defines a +standard way to determine whether or not a device is considered healthy. + +## Motivation + + + +Since the DRA APIs currently do not expose a standard way to classify device +health, cluster administrators are tasked with finding other means of doing that +for each kind of device that is defined in a ResourceSlice in order to identify +and restore unhealthy devices back to their full functionality. + +When unhealthy devices are no longer able to support new or existing workloads, +DRA drivers and cluster administrators already have the option to +[taint](https://kep.k8s.io/5055) the devices. A standard representation of +device health in the ResourceSlice API is needed to express the state of the +devices in cases where the side effects (`NoSchedule`/`NoExecute`) of taints are +not desired. + +### Goals + + + +- Expose well-formed data in the ResourceSlice API representing the real-time + health of the devices a ResourceSlice describes. + +### Non-Goals + + + +- Define what constitutes a "healthy" or "unhealthy" device. That distinction is + made by each DRA driver. +- Automatically take action based on the health status of a device. This KEP + only defines how that status is represented in the API. + +## Proposal + + + +### User Stories + + + +#### Visibility + +As a cluster administrator, I want to determine the overall health of the +DRA-exposed devices available throughout my cluster. + +#### No Side-Effects By Default + +As a cluster administrator, I want health status to be purely informational. 
I
+do not want health status by itself to trigger any actions, such as preventing
+new workloads from being allocated unhealthy devices or evicting running
+workloads when their allocated devices become unhealthy.
+
+Automated application of device taints, for example, should either be configured
+by cluster administrators themselves or be driven by other factors that a DRA
+driver can observe more directly on the devices.
+
+#### Enabling Automated Remediation
+
+As a cluster administrator, I want to determine how to remediate unhealthy
+devices when different failure modes require different methods of remediation.
+
+### Notes/Constraints/Caveats (Optional)
+
+### Risks and Mitigations
+
+## Design Details
+
+A new `status` field will be added to the ResourceSlice API, along with fields
+holding health information for each device listed in the ResourceSlice:
+
+```go
+type ResourceSlice struct {
+	...
+
+	// Contains the status observed by the driver.
+	// +optional
+	Status ResourceSliceStatus `json:"status,omitempty"`
+}
+
+// ResourceSliceStatus contains the observed status of a ResourceSlice.
+type ResourceSliceStatus struct {
+	// Devices contains the status of each device.
+	// +optional
+	Devices []DeviceStatus `json:"devices"`
+}
+
+// DeviceStatus represents the status of a [Device].
+type DeviceStatus struct {
+	// Name maps this status to the device listed in .spec.devices.
+	//
+	// +required
+	Name string `json:"name"`
+
+	// Health contains the health of a device as observed by the driver.
+	//
+	// +optional
+	Health *DeviceHealthStatus `json:"health,omitempty"`
+}
+
+// DeviceHealthStatus represents the health of a device as observed by the driver.
+type DeviceHealthStatus struct {
+	// TODO
+	// This could contain fields representing the overall "Healthy" or "Unhealthy"
+	// status and other context around the reason for that status.
+}
+```
+
+A dedicated place for device health information is flexible enough to
+accommodate new fields related to health, like describing particular failure
+modes or remediation methods.
+
+### Test Plan
+
+[ ] I/we understand the owners of the involved components may require updates to
+existing tests to make this code solid enough prior to committing the changes necessary
+to implement this enhancement.
+
+##### Prerequisite testing updates
+
+##### Unit tests
+
+- ``: `` - ``
+
+##### Integration tests
+
+- [test name](https://github.com/kubernetes/kubernetes/blob/2334b8469e1983c525c0c6382125710093a25883/test/integration/...): [integration master](https://testgrid.k8s.io/sig-release-master-blocking#integration-master?include-filter-by-regex=MyCoolFeature), [triage search](https://storage.googleapis.com/k8s-triage/index.html?test=MyCoolFeature)
+
+##### e2e tests
+
+- [test name](https://github.com/kubernetes/kubernetes/blob/2334b8469e1983c525c0c6382125710093a25883/test/e2e/...): [SIG ...](https://testgrid.k8s.io/sig-...?include-filter-by-regex=MyCoolFeature), [triage search](https://storage.googleapis.com/k8s-triage/index.html?test=MyCoolFeature)
+
+### Graduation Criteria
+
+### Upgrade / Downgrade Strategy
+
+### Version Skew Strategy
+
+## Production Readiness Review Questionnaire
+
+### Feature Enablement and Rollback
+
+###### How can this feature be enabled / disabled in a live cluster?
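+
+As a hedged sketch only (the checklist below still needs to be filled in for
+the alpha PRR): this KEP's `kep.yaml` lists a `DRAResourceSliceStatusDeviceHealth`
+feature gate with kube-apiserver as the depending component, so enabling or
+disabling the feature on a live cluster would presumably come down to toggling
+that gate, e.g.:
+
+```
+# Hypothetical invocation; the gate name comes from this KEP's kep.yaml, and as
+# an alpha gate it would be expected to default to off.
+kube-apiserver --feature-gates=DRAResourceSliceStatusDeviceHealth=true ...
+# Rolling back: restart kube-apiserver with the gate set back to false.
+kube-apiserver --feature-gates=DRAResourceSliceStatusDeviceHealth=false ...
+```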
+ + + +- [ ] Feature gate (also fill in values in `kep.yaml`) + - Feature gate name: + - Components depending on the feature gate: +- [ ] Other + - Describe the mechanism: + - Will enabling / disabling the feature require downtime of the control + plane? + - Will enabling / disabling the feature require downtime or reprovisioning + of a node? + +###### Does enabling the feature change any default behavior? + + + +###### Can the feature be disabled once it has been enabled (i.e. can we roll back the enablement)? + + + +###### What happens if we reenable the feature if it was previously rolled back? + +###### Are there any tests for feature enablement/disablement? + + + +### Rollout, Upgrade and Rollback Planning + + + +###### How can a rollout or rollback fail? Can it impact already running workloads? + + + +###### What specific metrics should inform a rollback? + + + +###### Were upgrade and rollback tested? Was the upgrade->downgrade->upgrade path tested? + + + +###### Is the rollout accompanied by any deprecations and/or removals of features, APIs, fields of API types, flags, etc.? + + + +### Monitoring Requirements + + + +###### How can an operator determine if the feature is in use by workloads? + + + +###### How can someone using this feature know that it is working for their instance? + + + +- [ ] Events + - Event Reason: +- [ ] API .status + - Condition name: + - Other field: +- [ ] Other (treat as last resort) + - Details: + +###### What are the reasonable SLOs (Service Level Objectives) for the enhancement? + + + +###### What are the SLIs (Service Level Indicators) an operator can use to determine the health of the service? + + + +- [ ] Metrics + - Metric name: + - [Optional] Aggregation method: + - Components exposing the metric: +- [ ] Other (treat as last resort) + - Details: + +###### Are there any missing metrics that would be useful to have to improve observability of this feature? + + + +### Dependencies + + + +###### Does this feature depend on any specific services running in the cluster? + + + +### Scalability + + + +###### Will enabling / using this feature result in any new API calls? + + + +###### Will enabling / using this feature result in introducing new API types? + + + +###### Will enabling / using this feature result in any new calls to the cloud provider? + + + +###### Will enabling / using this feature result in increasing size or count of the existing API objects? + + + +###### Will enabling / using this feature result in increasing time taken by any operations covered by existing SLIs/SLOs? + + + +###### Will enabling / using this feature result in non-negligible increase of resource usage (CPU, RAM, disk, IO, ...) in any components? + + + +###### Can enabling / using this feature result in resource exhaustion of some node resources (PIDs, sockets, inodes, etc.)? + + + +### Troubleshooting + + + +###### How does this feature react if the API server and/or etcd is unavailable? + +###### What are other known failure modes? + + + +###### What steps should be taken if SLOs are not being met to determine the problem? + +## Implementation History + + + +## Drawbacks + + + +## Alternatives + + + +### Device Conditions + +Instead of inventing new API structures, [`metav1.Condition`]s could represent +the health status of each device. The `DeviceStatus` type could then look like +the following: + +```go +// DeviceStatus represents the status of a [Device]. +type DeviceStatus struct { + // Name maps this status to the device listed in .spec.devices. 
+	//
+	// +required
+	Name string `json:"name"`
+
+	// Represents the observations of a device's current state.
+	// Known .status.devices[].conditions.type are: "Available"
+	// +patchMergeKey=type
+	// +patchStrategy=merge
+	// +listType=map
+	// +listMapKey=type
+	Conditions []metav1.Condition `json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type"`
+}
+
+// These are valid conditions of a device.
+const (
+	// Available means the device is available, as determined by the driver.
+	DeviceAvailable = "Available"
+)
+```
+
+The `Available` condition would be the recommended way to declare the overall
+health of a device. Other condition types like `Degraded` could also be added.
+
+While users are likely already familiar with the concept of these conditions,
+having multiple lists of conditions in one ResourceSlice is not a pattern that
+exists elsewhere in Kubernetes. Existing tools and libraries accustomed to
+looking at `status.conditions` may not work when there are separate condition
+lists in each of `status.devices[].conditions`.
+
+The fields of each individual condition may also not satisfy the use cases
+targeted by this KEP. The single `reason` and `message` fields may not be able
+to capture all of the extra context behind a particular health state of a
+device. The `observedGeneration` field may also not be useful since factors
+outside the ResourceSlice may play into the health of a device more than
+parameters in the ResourceSlice's own `spec`, so a condition's
+`observedGeneration` matching the ResourceSlice's `metadata.generation` may not
+reliably indicate that the device is healthy at that exact moment.
+
+### Standardized Attributes
+
+A set of standard attributes could be defined to represent device health.
+
+e.g.
+```yaml
+kind: ResourceSlice
+apiVersion: resource.k8s.io/v1
+...
+spec:
+  driver: cards.dra.example.com
+  devices:
+  - name: card-1
+    attributes:
+      manufacturer: # a vendor-specific attribute, automatically qualified as cards.dra.example.com/manufacturer
+        string: Cards-are-us Inc.
+      resource.kubernetes.io/health: # a standard attribute expressing overall health status
+        string: Unhealthy
+```
+
+This approach requires no new API structure, and DRA drivers already assume
+ownership over device attributes. Users can also utilize attributes in
+ResourceClaims to decide whether or not unhealthy devices are acceptable for a
+request if a driver doesn't express its own opinion by also tainting the device.
+
+However, the lack of a dedicated place in the API for this information makes it
+less discoverable. Users will be less likely to look for health status in a set
+of arbitrary `attributes` than in a list of [`metav1.Condition`]s or a field
+labeled `health`. Other related information, like the particular failure mode or
+remediation strategy of an unhealthy device, also becomes mixed in with all of
+the other attributes of a device, making their relationship less clear even if
+the attributes have similar names.
+
+### Standardized Events
+
+DRA drivers could be expected to generate Events to indicate health status for
+devices. One such Event might look like the following:
+
+```
+LAST SEEN   TYPE      REASON            OBJECT                 MESSAGE
+2m4s        Warning   DeviceUnhealthy   resourceslice/node-1   Device 'card-1' is Unhealthy.
+```
+
+Like standard attributes, this approach requires no new API structure, but is
+similarly less discoverable and doesn't enable bundling related information in a
+structured way.
Health information from Events cannot be used to make scheduling
+decisions in ResourceClaims. One benefit of using Events over other approaches
+is that existing alerting systems may already be configured to surface when
+devices become unhealthy based on the Event's `type`.
+
+Events are also ephemeral, so it isn't possible to determine if a device is
+healthy at the present moment since a `DeviceUnhealthy` Event may have occurred
+a while ago and since expired. The Event API is described as "informative,
+best-effort, supplemental data", which by itself is likely insufficient for most
+use cases for tracking device health.
+
+### Vendor-Provided Metrics
+
+Device vendors could be expected to provide device telemetry support and
+documentation. Cluster administrators could then wire those metrics into
+existing alerting mechanisms.
+
+This is a more flexible option for determining the health of a device than the
+more rigid APIs defined by the other options. Vendor-specific device metrics can
+represent a wider variety of dimensions and degrees of health than would be
+possible with Kubernetes APIs. Cluster administrators have more freedom to
+interpret metrics in a way that works best for their own environments, instead
+of needing to infer the actual device condition from a driver-reported "Healthy"
+or "Unhealthy" status.
+
+The main cost of that flexibility is the lack of standardization: cluster
+administrators have to track down from each vendor how to determine whether a
+given device is in a healthy state, as opposed to inspecting a well-defined area
+of a vendor-agnostic API like ResourceSlice. This lack of standardization also
+makes integrations like generic controllers that automatically taint unhealthy
+devices less straightforward to implement.
+
+## Infrastructure Needed (Optional)
+
+[`metav1.Condition`]: https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#Condition
diff --git a/keps/sig-node/5283-dra-resourceslice-status-device-health/kep.yaml b/keps/sig-node/5283-dra-resourceslice-status-device-health/kep.yaml
new file mode 100644
index 00000000000..51c24b512b5
--- /dev/null
+++ b/keps/sig-node/5283-dra-resourceslice-status-device-health/kep.yaml
@@ -0,0 +1,39 @@
+title: "DRA: ResourceSlice Status for Device Health Tracking"
+kep-number: 5283
+authors:
+  - "@nojnhuh"
+owning-sig: sig-node
+participating-sigs:
+  - sig-scheduling
+status: provisional
+creation-date: 2025-08-06
+reviewers:
+  - "@johnbelamaric"
+approvers:
+  - TBD
+
+see-also:
+  - "/keps/sig-scheduling/5055-dra-device-taints-and-tolerations/kep.yaml"
+
+# The target maturity stage in the current dev cycle for this KEP.
+stage: alpha
+
+# The most recent milestone for which work toward delivery of this KEP has been
+# done. This can be the current (upcoming) milestone, if it is being actively
+# worked on.
+latest-milestone: "v1.35"
+
+# The milestone at which this feature was, or is targeted to be, at each stage.
+milestone:
+  alpha: "v1.35"
+
+# The following PRR answers are required at alpha release
+# List the feature gate name and the components for which it must be enabled
+feature-gates:
+  - name: DRAResourceSliceStatusDeviceHealth
+    components:
+      - kube-apiserver
+disable-supported: true
+
+# The following PRR answers are required at beta release
+metrics: