From fd731edd36ebb568ea749bb64d97c9b3359bf266 Mon Sep 17 00:00:00 2001 From: Brandon Salmon Date: Wed, 1 Oct 2025 23:09:05 +0000 Subject: [PATCH 1/5] KEP-5598: Opportunistic scheduling cache --- .../5598-scheduling-cache/README.md | 886 ++++++++++++++++++ .../5598-scheduling-cache/kep.yaml | 43 + 2 files changed, 929 insertions(+) create mode 100644 keps/sig-scheduling/5598-scheduling-cache/README.md create mode 100644 keps/sig-scheduling/5598-scheduling-cache/kep.yaml diff --git a/keps/sig-scheduling/5598-scheduling-cache/README.md b/keps/sig-scheduling/5598-scheduling-cache/README.md new file mode 100644 index 00000000000..817a48a75fd --- /dev/null +++ b/keps/sig-scheduling/5598-scheduling-cache/README.md @@ -0,0 +1,886 @@ + +# KEP-5598: Opportunistic scheduling cache + + + + + + +- [Release Signoff Checklist](#release-signoff-checklist) +- [Summary](#summary) +- [Motivation](#motivation) + - [Goals](#goals) + - [Non-Goals](#non-goals) +- [Proposal](#proposal) + - [User Stories (Optional)](#user-stories-optional) + - [Story 1](#story-1) + - [Story 2](#story-2) + - [Notes/Constraints/Caveats (Optional)](#notesconstraintscaveats-optional) + - [Risks and Mitigations](#risks-and-mitigations) +- [Design Details](#design-details) + - [Test Plan](#test-plan) + - [Prerequisite testing updates](#prerequisite-testing-updates) + - [Unit tests](#unit-tests) + - [Integration tests](#integration-tests) + - [e2e tests](#e2e-tests) + - [Graduation Criteria](#graduation-criteria) + - [Upgrade / Downgrade Strategy](#upgrade--downgrade-strategy) + - [Version Skew Strategy](#version-skew-strategy) +- [Production Readiness Review Questionnaire](#production-readiness-review-questionnaire) + - [Feature Enablement and Rollback](#feature-enablement-and-rollback) + - [Rollout, Upgrade and Rollback Planning](#rollout-upgrade-and-rollback-planning) + - [Monitoring Requirements](#monitoring-requirements) + - [Dependencies](#dependencies) + - [Scalability](#scalability) + - [Troubleshooting](#troubleshooting) +- [Implementation History](#implementation-history) +- [Drawbacks](#drawbacks) +- [Alternatives](#alternatives) +- [Infrastructure Needed (Optional)](#infrastructure-needed-optional) + + +## Release Signoff Checklist + + + +Items marked with (R) are required *prior to targeting to a milestone / release*. 
+ +- [ ] (R) Enhancement issue in release milestone, which links to KEP dir in [kubernetes/enhancements] (not the initial KEP PR) +- [ ] (R) KEP approvers have approved the KEP status as `implementable` +- [ ] (R) Design details are appropriately documented +- [ ] (R) Test plan is in place, giving consideration to SIG Architecture and SIG Testing input (including test refactors) + - [ ] e2e Tests for all Beta API Operations (endpoints) + - [ ] (R) Ensure GA e2e tests meet requirements for [Conformance Tests](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/conformance-tests.md) + - [ ] (R) Minimum Two Week Window for GA e2e tests to prove flake free +- [ ] (R) Graduation criteria is in place + - [ ] (R) [all GA Endpoints](https://github.com/kubernetes/community/pull/1806) must be hit by [Conformance Tests](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/conformance-tests.md) within one minor version of promotion to GA +- [ ] (R) Production readiness review completed +- [ ] (R) Production readiness review approved +- [ ] "Implementation History" section is up-to-date for milestone +- [ ] User-facing documentation has been created in [kubernetes/website], for publication to [kubernetes.io] +- [ ] Supporting documentation—e.g., additional design documents, links to mailing list discussions/SIG meetings, relevant PRs/issues, release notes + + + +[kubernetes.io]: https://kubernetes.io/ +[kubernetes/enhancements]: https://git.k8s.io/enhancements +[kubernetes/kubernetes]: https://git.k8s.io/kubernetes +[kubernetes/website]: https://git.k8s.io/website + +## Summary + +This KEP proposes a cache which reuses the node feasibility and scoring from +earlier pods to reduce the cost of scheduling follow on pods that have the same constraints. +To simplify the first version, we only attempt this optimization for pods that have a single pod +of this type on a node, and we exclude pods that use complex constraints like +PodTopologySpread and PodAffinity / AntiAffinity. We also have tight +eviction rules on the cache (O(seconds)) to ensure it doesn't get stale. To tell if +two pods are the "same" we introduce a signature API to plugins. + +## Motivation + +Today our scheduling algorithm is p x n where p is the number of pods to schedule +and n the number of nodes in the cluster. As the size of clusters and jobs continue +to increase, this leads to low performance when scheduling or rescheduling +large jobs. This increases customer cost and slows down customer jobs, both +unpleasant impacts. Optimizations like this one have the potential to dramaticly +reduce the cost of scheduling in these scenarios. + +We are also working on gang scheduling, which will give us a way to consider multiple +pods at the same time. While full gang scheduling will likely deprecate the cache, +the signatures and reuse work should be portable to the new cycles that consider multiple pods. + +Another change is the shift towards 1-pod-per-node in batch and ML environments. Many +of these environments (among others) only attempt to run a single customer pod on each node, along +with a complement of daemon set pods. This simplifies our scheduling needs significantly. + +### Goals + + * Improve the performance of scheduling large jobs on large clusters where the constraints are simple. + * Begin building infrastructure to support gang scheduling and other "multi-pod" scheduling requests. 
+ * Ensure that the infrastructure we build is maintainable as we update, add and remove plugins. + * Never impact feasibility. + * Provide identical results to our current scheduler for 1-pod-per-node environments. + * Provide an experience with limited changes in multi-pod-per-node environments. + +### Non-Goals + + * We are not attempting to apply this optimization to all pods. Some constraints +do not lend themselves to caching. Instead, we focus on customers who have +large jobs, who need high performance, and do not generally use these +more complex constraints. While we expect the cache to always be on, many pods will not be +able to take advantage of it. + * We are not adding gang scheduling of any kind in this KEP. This is purely a performance +improvement, although we hope the work on this KEP will help us with gang scheduling as we build it. + +## Proposal + +See https://github.com/bwsalmon/kubernetes/pull/1 for a draft version of the code. Feedback is extremely welcome. + +We introduce two things: + * The pod cache. + * A pod scheduling signature + +### Pod cache + +The end result of the scheduler cycles is a list of node names that will fit +the current pod, ordered by their score from most to least attractive. This result +is what we cache; the current pod will use the first node on the list, and we cache the remaining +results, indexed by the pod's "scheduling signature" (which we will describe later). + +When a pod with the same signature comes later, we find the entry in our cache and pull the first +node off the list. We then go down the nominated node path. Just as we would with a pod +with a NominatedNodeName in it, we only re-evaluate the feasibility of this node before using it. + +Since we assume 1-pod-per-node, we know that the node used by the current pod +cannot be used by subsequent pods (of any signature). +Thus we remove the host from all signatures in the cache. The cache is built +in a way that makes it easy to remove entries by either pod signature or host +so we can efficiently invalide entries. If we are not in a 1-pod-per-node +we could get "sub-optimal" results if the node just used is the best node for +some other pod, but this should be the only issue. + +All stored results are timestamped, and we remove entries when they are more +than a few seconds old to ensure we do not get stale data. Since we are targeting +large numbers of pods of the same type, this should be sufficient to get the benefit +we are looking for. + +### Pod scheduling signature + +The pod scheduling signature is used to determine if two pods are "the same" +from a scheduling perspective. In specific, what this means is that any pod +with the given signature will get the same scores / feasibility results from +any arbitrary set of nodes. This is necessary for the cache to work, since we +need to be able to reuse the previous work. + +Note that some pods will not have a signature, because the scoring uses not +just pod and node attributes, but other pods in the system, global data +about pod placement, etc. These nodes get a nil signature, and we fallback +to the slow path. + +We exclude pods with the following constraints: + * Pod affinity rules (affinity or anti-affinity) + * Topology spread rules (including inherited rules from the system default) This constraint we should attempt to lift in the future. + +To construct a signature, we add a new function for each plugin to implement. +This function takes a pod and generates a signature for that plugin as a string. 
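As a rough illustration, the per-plugin hook could look something like the sketch below; the interface name, method name, and package are placeholders, not the final API:

```go
package framework

import v1 "k8s.io/api/core/v1"

// SignaturePlugin is a sketch of the optional per-plugin hook described
// above. The interface name, method name, and package are illustrative.
type SignaturePlugin interface {
	// Signature returns this plugin's contribution to the pod's scheduling
	// signature. Returning ok=false marks the pod unsignable for this
	// plugin, which disables caching for that pod entirely.
	Signature(pod *v1.Pod) (sig string, ok bool)
}
```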
+The signature is likely a set of attributes of the pod, or something derived from them. +To construct a full signature we take the signatures of all the plugins and aggeregate them into +a single string. If any plugin cannot generate a signature for a given pod (because it depends on information other +than the pod and node), then we generate a "nil" signature and don't attemp to cache the pod. + +Initially we won't require plugins to implement the new function, but we will turn off signatures for +all pods if a plugin is enabled that does not implement it. In subsequent releases we will +make implementation of the function a requirement, but of course plugins are also able to +say pods are unsignable. + +### Notes/Constraints/Caveats (Optional) + +### Risks and Mitigations + + + +## Design Details + +### Pod signature v1 + +The follow section outlines the attributes we are currently proposing to use as the signature for each of the +plugins in the scheduler. We need the plugin owners to validate that these signatures are correct, or help +us find the correct signature. + +Note that the signature does not need to be stable across versions, or even invocations of the scheduler. +It only needs to be comparable between pods on a given running scheduler instance. + + * DynamicResources: For now we mark a pod unsignable if it has dynamic resource claims. We should improve this in the future, since most + resource claims will allow for a signature. + * ImageLocality: We use the canonicalized image names from the Volumes as the signature. + * InterPodAffinity: If either the PodAffinity or PodAntiAffinity fields are set, the pod is marked unsignable, otherwise an empty signature. + * NodeAffinity: We use the NodeAffinity and NodeSelector fields as the signature. + * NodeName: We use the NodeName field as the signature. + * NodePorts: We use the results from util.GetHostPorts(pod) as the signature. + * NodeResourcesBalancedAllocation: We use the output of calculatePodResourceRequestList as the signature. + * NodeResourcesFit: We use the output of the computePodResourceRequest function as the signature. + * NodeUnschedulable: We use the Tolerations field as the signature. + * NodeVolumeLimits: We use all Volume information except from Volumes of type ConfigMap or Secret. + * PodTopologySpread: If the PodTopologySpead field is set, or it is not set but a default set of rules are applied, we mark the pod unsignable, otherwise it returns an empty signature. + * TaintToleration: We use the Tolerations field as the signature. + * VolumeBinding: Same as NodeVolumeLimits. + * VolumeRestrictions: Same as NodeVolumeLimits. + * VolumeZone: Same as NodeVolumeLimits. + +### Test Plan + + + +[ ] I/we understand the owners of the involved components may require updates to +existing tests to make this code solid enough prior to committing the changes necessary +to implement this enhancement. 
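One family of tests worth calling out here: signature-equivalence tests, which check that whenever two pods produce the same signature, filtering and scoring them against the same set of nodes produces identical results. A rough sketch of that property as a unit test follows; `buildPodFixtures`, `buildNodeFixtures`, `signatureOf`, and `filterAndScore` are hypothetical helpers standing in for the real test harness:

```go
package cache_test

import (
	"reflect"
	"testing"
)

// TestEqualSignaturesImplyEqualResults sketches the signature-equivalence
// property. The fixture and harness helpers used here are hypothetical.
func TestEqualSignaturesImplyEqualResults(t *testing.T) {
	pods := buildPodFixtures()   // pods with a variety of constraints
	nodes := buildNodeFixtures() // nodes with a variety of capacities and labels
	for i, a := range pods {
		for _, b := range pods[i+1:] {
			sigA, okA := signatureOf(a)
			sigB, okB := signatureOf(b)
			if !okA || !okB || sigA != sigB {
				continue // only comparable when both pods are signable with equal signatures
			}
			if resA, resB := filterAndScore(a, nodes), filterAndScore(b, nodes); !reflect.DeepEqual(resA, resB) {
				t.Errorf("pods with equal signature %q got different filter/score results", sigA)
			}
		}
	}
}
```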
+ +##### Prerequisite testing updates + + + +##### Unit tests + + + + + +- ``: `` - `` + +##### Integration tests + + + + + +- [test name](https://github.com/kubernetes/kubernetes/blob/2334b8469e1983c525c0c6382125710093a25883/test/integration/...): [integration master](https://testgrid.k8s.io/sig-release-master-blocking#integration-master?include-filter-by-regex=MyCoolFeature), [triage search](https://storage.googleapis.com/k8s-triage/index.html?test=MyCoolFeature) + +##### e2e tests + + + +- [test name](https://github.com/kubernetes/kubernetes/blob/2334b8469e1983c525c0c6382125710093a25883/test/e2e/...): [SIG ...](https://testgrid.k8s.io/sig-...?include-filter-by-regex=MyCoolFeature), [triage search](https://storage.googleapis.com/k8s-triage/index.html?test=MyCoolFeature) + +### Graduation Criteria + + + +### Upgrade / Downgrade Strategy + + + +### Version Skew Strategy + + + +## Production Readiness Review Questionnaire + + + +### Feature Enablement and Rollback + + + +###### How can this feature be enabled / disabled in a live cluster? + + + +- [ ] Feature gate (also fill in values in `kep.yaml`) + - Feature gate name: + - Components depending on the feature gate: +- [ ] Other + - Describe the mechanism: + - Will enabling / disabling the feature require downtime of the control + plane? + - Will enabling / disabling the feature require downtime or reprovisioning + of a node? + +###### Does enabling the feature change any default behavior? + + + +###### Can the feature be disabled once it has been enabled (i.e. can we roll back the enablement)? + + + +###### What happens if we reenable the feature if it was previously rolled back? + +###### Are there any tests for feature enablement/disablement? + + + +### Rollout, Upgrade and Rollback Planning + + + +###### How can a rollout or rollback fail? Can it impact already running workloads? + + + +###### What specific metrics should inform a rollback? + + + +###### Were upgrade and rollback tested? Was the upgrade->downgrade->upgrade path tested? + + + +###### Is the rollout accompanied by any deprecations and/or removals of features, APIs, fields of API types, flags, etc.? + + + +### Monitoring Requirements + + + +###### How can an operator determine if the feature is in use by workloads? + + + +###### How can someone using this feature know that it is working for their instance? + + + +- [ ] Events + - Event Reason: +- [ ] API .status + - Condition name: + - Other field: +- [ ] Other (treat as last resort) + - Details: + +###### What are the reasonable SLOs (Service Level Objectives) for the enhancement? + + + +###### What are the SLIs (Service Level Indicators) an operator can use to determine the health of the service? + + + +- [ ] Metrics + - Metric name: + - [Optional] Aggregation method: + - Components exposing the metric: +- [ ] Other (treat as last resort) + - Details: + +###### Are there any missing metrics that would be useful to have to improve observability of this feature? + + + +### Dependencies + + + +###### Does this feature depend on any specific services running in the cluster? + + + +### Scalability + + + +###### Will enabling / using this feature result in any new API calls? + + + +###### Will enabling / using this feature result in introducing new API types? + + + +###### Will enabling / using this feature result in any new calls to the cloud provider? + + + +###### Will enabling / using this feature result in increasing size or count of the existing API objects? 
+ + + +###### Will enabling / using this feature result in increasing time taken by any operations covered by existing SLIs/SLOs? + + + +###### Will enabling / using this feature result in non-negligible increase of resource usage (CPU, RAM, disk, IO, ...) in any components? + + + +###### Can enabling / using this feature result in resource exhaustion of some node resources (PIDs, sockets, inodes, etc.)? + + + +### Troubleshooting + + + +###### How does this feature react if the API server and/or etcd is unavailable? + +###### What are other known failure modes? + + + +###### What steps should be taken if SLOs are not being met to determine the problem? + +## Implementation History + + + +## Drawbacks + + + +## Alternatives + + + +## Infrastructure Needed (Optional) + + diff --git a/keps/sig-scheduling/5598-scheduling-cache/kep.yaml b/keps/sig-scheduling/5598-scheduling-cache/kep.yaml new file mode 100644 index 00000000000..46cea616b75 --- /dev/null +++ b/keps/sig-scheduling/5598-scheduling-cache/kep.yaml @@ -0,0 +1,43 @@ +title: Scheduling cache +kep-number: 5598 +authors: + - "@bsalmon@google.com" +owning-sig: sig-scheduling +participating-sigs: +status: provisional +creation-date: 2025-10-01 +reviewers: + - TBD + - "@alice.doe" +approvers: + - TBD + - "@oscar.doe" + +# The target maturity stage in the current dev cycle for this KEP. +# If the purpose of this KEP is to deprecate a user-visible feature +# and a Deprecated feature gates are added, they should be deprecated|disabled|removed. +stage: alpha|beta|stable + +# The most recent milestone for which work toward delivery of this KEP has been +# done. This can be the current (upcoming) milestone, if it is being actively +# worked on. +latest-milestone: "v1.35" + +# The milestone at which this feature was, or is targeted to be, at each stage. +milestone: + alpha: "v1.35" + beta: "v1.36" + stable: "v1.37" + +# The following PRR answers are required at alpha release +# List the feature gate name and the components for which it must be enabled +feature-gates: + - name: MyFeature + components: + - kube-apiserver + - kube-controller-manager +disable-supported: true + +# The following PRR answers are required at beta release +metrics: + - my_feature_metric From 68a3857c5f7c4eefbf03f469e1b1bae2a1e985c9 Mon Sep 17 00:00:00 2001 From: Brandon Salmon Date: Fri, 3 Oct 2025 19:46:45 +0000 Subject: [PATCH 2/5] Add section on eCache. --- .../5598-scheduling-cache/README.md | 59 +++++++++++++++++++ 1 file changed, 59 insertions(+) diff --git a/keps/sig-scheduling/5598-scheduling-cache/README.md b/keps/sig-scheduling/5598-scheduling-cache/README.md index 817a48a75fd..1c282bffb80 100644 --- a/keps/sig-scheduling/5598-scheduling-cache/README.md +++ b/keps/sig-scheduling/5598-scheduling-cache/README.md @@ -266,6 +266,65 @@ all pods if a plugin is enabled that does not implement it. In subsequent releas make implementation of the function a requirement, but of course plugins are also able to say pods are unsignable. +### Comparison with Equivalence Cache (circa 2018) + +This KEP is addressing a very similar problem to the Equivalence Cache (eCache), an approach +suggested in 2018 and then retracted because it became extremely complex. While this KEP addresses a similar problem +it does so in a very different way, which we believe avoids the issues experienced by the eCache + +The two issues experienced by eCache were: + + * eCache performance was still O(num nodes). + * eCache was complex + * eCache was tightly coupled with plugins. 
+ + We'll address each in turn, but at a high level the differences stem from our scope reduction in this cache, where + we focus on simple constraints in a 1-pod-per-node world, and are comfortable extending our "race" period slightly. + + #### eCache performance was still O(num nodes) + + The eCache was caching a fundamentally different result than this cache. In the case of the eCache they were caching + the results of a predicate p, (which is sounds like was one of a number of ops for a given plugin) for a specific pod and node. + This meant the number of cache lookups per pod was O(num nodes * num predicates) where num predicates was O(num plugins). Because + the cache was so fine-grained, the cache lookups were, in many cases, more expensive than the actual computation. This also meant + that while the cache could improve performance, it fundamentally did not remove the O(num nodes) nature of the per pod computation. + + In the case of this cache, we are looking up the entire host filtering and scoring for a single pod, so the number of cache lookups + per pod is 1. We are caching the entire filtering / scoring result, so the map lookup is guaranteed to be faster even + than just iterating over the plugins themselves, let alone the computation needed to filter / score. As the number of nodes go up, + the fact that the cache lookup is O(1) per pod will make it an increasingly perfromant alternative to the full computation. + + We can cache this more granular data because we only cache for simple plugins, and in fact avoid the complex plugins entirely. + Thus we do not need to be concerned about cross pod dependencies, meaning we do not need to try to keep detailed information + up-to-date. Because we assume 1-pod-per-node and some amount of "staleness" we simply need to invalidate whole hosts, rather + than requiring upkeep of complex predicate results required to keep the eCache functional. + + #### eCache was complex + + Because the eCache cached predicates, the logic for computing these results went into the cache as well. This meant that significant + amount of the plugin functionality were replicated in the cache layer. This added significant complexity to the cache, and also made + keeping the cache results themselves up to date was complex, involving multiple pods, etc. Because the eCache only improved performance for complex queries, they needed to include this complexity to provide value. + + In contrast, the signature used in this cache is just a subset of the pod object, without complex logic. It is static and as the pod object changes slowly, it will change slowly as well. In addition, we explicitly avoid all the complex predicates in this cache because they are rarely used. Thus we do not have the same complexity needed in the cache. + + ### The eCache was tightly coupled with plugins + + Beacuse a significant amount of the plugin complexity made into the eCache, it was difficult for plugin owners to keep the things in sync. + As the pod object is fairly stable, this makes keeping the signature up to date a much simpler task. The creation of the signature is + also spread across the plugins themselves, so instead of needing to keep the cache up to date, plugin owners simply have a new function + they need to manage within their plugin, which the cache simply aggregates. + + We will also need to provide tests that evaluate different pod configurations against different node configurations and ensures that + any time the signatures match the results do as well. 
This will help us catch issues in the future, in addition to providing + testing opportunities in other areas. + + If plugin changes prove to be an issue, we could codify the signature as a new "Scheduling" object that only has a subset + of the fields of the pod. Plugins that "opt-in" could only be given access to this reduced scheduling object, and we could then use the entire scheduling object as the signature. This would make it more or less impossible for the signature and plugins to be out of sync, and would + naturally surface new dependencies as additions to the scheduling object. However, as we expect plugin changes to be relatively + modest, we don't believe the complexity of making the interface changes is worth the risk today. + +See https://github.com/kubernetes/kubernetes/pull/65714#issuecomment-410016382 as starting point on eCache. + ### Notes/Constraints/Caveats (Optional) ### Risks and Mitigations From 25189274cc83421e05bc1d889d141450ff37d21b Mon Sep 17 00:00:00 2001 From: Brandon Salmon Date: Fri, 3 Oct 2025 19:50:36 +0000 Subject: [PATCH 3/5] Wording clarifications. --- keps/sig-scheduling/5598-scheduling-cache/README.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/keps/sig-scheduling/5598-scheduling-cache/README.md b/keps/sig-scheduling/5598-scheduling-cache/README.md index 1c282bffb80..125a04814c3 100644 --- a/keps/sig-scheduling/5598-scheduling-cache/README.md +++ b/keps/sig-scheduling/5598-scheduling-cache/README.md @@ -310,11 +310,11 @@ The two issues experienced by eCache were: ### The eCache was tightly coupled with plugins Beacuse a significant amount of the plugin complexity made into the eCache, it was difficult for plugin owners to keep the things in sync. - As the pod object is fairly stable, this makes keeping the signature up to date a much simpler task. The creation of the signature is - also spread across the plugins themselves, so instead of needing to keep the cache up to date, plugin owners simply have a new function - they need to manage within their plugin, which the cache simply aggregates. + Since in this cache the signature is just parts of the pod object, and the pod object is fairly stable, this makes keeping the signature up + to date a much simpler task. The creation of the signature is also spread across the plugins themselves, so instead of needing to keep the + cache up to date, plugin owners simply have a new function they need to manage within their plugin, which the cache simply aggregates. - We will also need to provide tests that evaluate different pod configurations against different node configurations and ensures that + We will also provide tests that evaluate different pod configurations against different node configurations and ensures that any time the signatures match the results do as well. This will help us catch issues in the future, in addition to providing testing opportunities in other areas. From 47bc765fbf3060974860538c6d4b0be83fc6b16f Mon Sep 17 00:00:00 2001 From: Brandon Salmon Date: Fri, 3 Oct 2025 19:52:44 +0000 Subject: [PATCH 4/5] More cleanup. 
--- keps/sig-scheduling/5598-scheduling-cache/README.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/keps/sig-scheduling/5598-scheduling-cache/README.md b/keps/sig-scheduling/5598-scheduling-cache/README.md index 125a04814c3..5d9e17b62bf 100644 --- a/keps/sig-scheduling/5598-scheduling-cache/README.md +++ b/keps/sig-scheduling/5598-scheduling-cache/README.md @@ -303,11 +303,11 @@ The two issues experienced by eCache were: Because the eCache cached predicates, the logic for computing these results went into the cache as well. This meant that significant amount of the plugin functionality were replicated in the cache layer. This added significant complexity to the cache, and also made - keeping the cache results themselves up to date was complex, involving multiple pods, etc. Because the eCache only improved performance for complex queries, they needed to include this complexity to provide value. + keeping the cache results themselves up to date complex, involving multiple pods, etc. Because the eCache only improved performance for complex queries, they needed to include this complexity to provide value. In contrast, the signature used in this cache is just a subset of the pod object, without complex logic. It is static and as the pod object changes slowly, it will change slowly as well. In addition, we explicitly avoid all the complex predicates in this cache because they are rarely used. Thus we do not have the same complexity needed in the cache. - ### The eCache was tightly coupled with plugins + #### eCache was tightly coupled with plugins Beacuse a significant amount of the plugin complexity made into the eCache, it was difficult for plugin owners to keep the things in sync. Since in this cache the signature is just parts of the pod object, and the pod object is fairly stable, this makes keeping the signature up From b8330020ee3b2e517ad79a821e4bb4abe1abfd01 Mon Sep 17 00:00:00 2001 From: Brandon Salmon Date: Mon, 6 Oct 2025 13:48:51 +0000 Subject: [PATCH 5/5] Cleanup and add comparison to ecache. --- .../5598-scheduling-cache/README.md | 50 +++++++++++-------- 1 file changed, 30 insertions(+), 20 deletions(-) diff --git a/keps/sig-scheduling/5598-scheduling-cache/README.md b/keps/sig-scheduling/5598-scheduling-cache/README.md index 5d9e17b62bf..317c84cc992 100644 --- a/keps/sig-scheduling/5598-scheduling-cache/README.md +++ b/keps/sig-scheduling/5598-scheduling-cache/README.md @@ -171,16 +171,17 @@ two pods are the "same" we introduce a signature API to plugins. ## Motivation -Today our scheduling algorithm is p x n where p is the number of pods to schedule -and n the number of nodes in the cluster. As the size of clusters and jobs continue +Today our scheduling algorithm is O(num pods x num nodes) where num pods is the number of pods to schedule +and num nodes the number of nodes in the cluster. As the size of clusters and jobs continue to increase, this leads to low performance when scheduling or rescheduling large jobs. This increases customer cost and slows down customer jobs, both unpleasant impacts. Optimizations like this one have the potential to dramaticly reduce the cost of scheduling in these scenarios. -We are also working on gang scheduling, which will give us a way to consider multiple -pods at the same time. While full gang scheduling will likely deprecate the cache, -the signatures and reuse work should be portable to the new cycles that consider multiple pods. 
+We are also working on multi-pod scheduling, which will give us a way to consider multiple +pods at the same time. While full multi-pod scheduling will likely deprecate the cache, +the signatures work and structuring of the cache dependencies should be portable to the new cycles +that consider multiple pods. Another change is the shift towards 1-pod-per-node in batch and ML environments. Many of these environments (among others) only attempt to run a single customer pod on each node, along @@ -272,7 +273,7 @@ This KEP is addressing a very similar problem to the Equivalence Cache (eCache), suggested in 2018 and then retracted because it became extremely complex. While this KEP addresses a similar problem it does so in a very different way, which we believe avoids the issues experienced by the eCache -The two issues experienced by eCache were: +The issues experienced by eCache were: * eCache performance was still O(num nodes). * eCache was complex @@ -302,19 +303,20 @@ The two issues experienced by eCache were: #### eCache was complex Because the eCache cached predicates, the logic for computing these results went into the cache as well. This meant that significant - amount of the plugin functionality were replicated in the cache layer. This added significant complexity to the cache, and also made - keeping the cache results themselves up to date complex, involving multiple pods, etc. Because the eCache only improved performance for complex queries, they needed to include this complexity to provide value. + amount of the plugin functionality was replicated in the cache layer. This added significant complexity to the cache, and also made + keeping the cache results themselves up to date complex, involving multiple pods, etc. Because the eCache only improved performance + for complex queries, it needed to include this complexity to provide value. - In contrast, the signature used in this cache is just a subset of the pod object, without complex logic. It is static and as the pod object changes slowly, it will change slowly as well. In addition, we explicitly avoid all the complex predicates in this cache because they are rarely used. Thus we do not have the same complexity needed in the cache. + In contrast, the signature used in this cache is just a subset of the pod object, without complex logic. It is static and as the pod object changes slowly, it will change slowly as well. In addition, we explicitly avoid all the complex plugins in this cache because they are rarely used. Thus we do not have the same complexity needed in the cache. #### eCache was tightly coupled with plugins - Beacuse a significant amount of the plugin complexity made into the eCache, it was difficult for plugin owners to keep the things in sync. + Because a significant amount of the plugin complexity made into the eCache, it was difficult for plugin owners to keep the things in sync. Since in this cache the signature is just parts of the pod object, and the pod object is fairly stable, this makes keeping the signature up to date a much simpler task. The creation of the signature is also spread across the plugins themselves, so instead of needing to keep the - cache up to date, plugin owners simply have a new function they need to manage within their plugin, which the cache simply aggregates. + cache up to date, plugin owners simply have a new function they need to manage within their plugin, which the cache only aggregates. 
- We will also provide tests that evaluate different pod configurations against different node configurations and ensures that + We will also provide tests that evaluate different pod configurations against different node configurations and ensure that any time the signatures match the results do as well. This will help us catch issues in the future, in addition to providing testing opportunities in other areas. @@ -329,17 +331,25 @@ See https://github.com/kubernetes/kubernetes/pull/65714#issuecomment-410016382 a ### Risks and Mitigations - +#### We are narrowing the feature set where the cache will work + +Because we are explicitly limiting the functionality that this cache will support, we run the risk of designing something +that will not work for enough customers for it to be useful. + +To mitigate this risk we are actively engaging with customers and doing analysis of data available on K8s users to ensure we are still +capturing a large enough number of user use cases. We also will address this by expanding over time; we expect to have a few interested +parties up front, but will then evaluate expansions that could onboard more. ## Design Details
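To make the cache shape described in the Proposal concrete, here is a minimal, self-contained Go sketch of the data structures implied by the design above: one ranked list of remaining nodes per signature, a per-node reverse index so a node can be dropped from every signature once a pod lands on it, and a short TTL. The names and structure are illustrative only, not the implementation in the draft PR:

```go
package schedcache

import (
	"sync"
	"time"
)

// entry holds the remaining feasible nodes for one pod signature, ordered
// best-first by score, plus the time the result was computed so that stale
// entries can be dropped after a few seconds.
type entry struct {
	nodes    []string
	computed time.Time
}

// Cache indexes scheduling results by pod signature and keeps a reverse
// index from node name to the signatures that still list it, so a node can
// be invalidated everywhere in one pass when a pod is placed on it.
type Cache struct {
	mu      sync.Mutex
	ttl     time.Duration
	entries map[string]*entry              // signature -> ranked remaining nodes
	byNode  map[string]map[string]struct{} // node name -> signatures listing it
}

func New(ttl time.Duration) *Cache {
	return &Cache{
		ttl:     ttl,
		entries: make(map[string]*entry),
		byNode:  make(map[string]map[string]struct{}),
	}
}

// Put stores the ranked feasible nodes left over after the current pod took
// the top choice.
func (c *Cache) Put(sig string, remaining []string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.entries[sig] = &entry{nodes: remaining, computed: time.Now()}
	for _, n := range remaining {
		if c.byNode[n] == nil {
			c.byNode[n] = make(map[string]struct{})
		}
		c.byNode[n][sig] = struct{}{}
	}
}

// Pop returns the best remaining node for a signature if the entry is still
// fresh; the caller re-checks feasibility of that node, just as it would for
// a pod with a nominated node.
func (c *Cache) Pop(sig string) (string, bool) {
	c.mu.Lock()
	defer c.mu.Unlock()
	e, ok := c.entries[sig]
	if !ok || time.Since(e.computed) > c.ttl || len(e.nodes) == 0 {
		delete(c.entries, sig)
		return "", false
	}
	best := e.nodes[0]
	e.nodes = e.nodes[1:]
	delete(c.byNode[best], sig)
	return best, true
}

// InvalidateNode removes a node from every cached list; under the
// 1-pod-per-node assumption this runs whenever a pod is placed on the node.
func (c *Cache) InvalidateNode(node string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	for sig := range c.byNode[node] {
		if e := c.entries[sig]; e != nil {
			kept := e.nodes[:0]
			for _, n := range e.nodes {
				if n != node {
					kept = append(kept, n)
				}
			}
			e.nodes = kept
		}
	}
	delete(c.byNode, node)
}
```

Because the whole ranked result is stored under a single key, a cache hit is a single map lookup per pod, which is what removes the per-pod O(num nodes) work for pods that hit the cache.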