
Commit 780ee9f

reword a little bit on the section of detail implementation
1 parent c575369 commit 780ee9f

File tree

1 file changed, +9 -7 lines changed

  • keps/sig-scheduling/1923-try-nominated-node-first


keps/sig-scheduling/1923-try-nominated-node-first/README.md

Lines changed: 9 additions & 7 deletions
@@ -96,15 +96,17 @@ nominated node.
 1. In filtering phase, which is currently implemented in the method of `findNodesThatFitPod`, check the nominated node
    first if the incoming pod has the `pod.Status.NominatedNodeName` defined and the feature gate is enabled.
 
-2. In case the nominated node doesn't suit for the incoming pod anymore, return `err` got from `findNodesThatPassFilters`,
-   the `err` will be padded with more information to tell that scheduler is evaluating the feasibility of `NominatedNode`
-   and failed on that node.
+2. In case the nominated node doesn't suit for the incoming pod anymore, get `err` from `findNodesThatPassFilters` where
+   `NominatedNode` is firstly evaluated, the `err` will be padded with more information to tell that scheduler is evaluating
+   the feasibility of `NominatedNode` and failed on that node.
 
-   If no error is returned and cannot pass all the filtering, this is possibly caused by the resource that claims to be
-   removed but has not been fully released yet, scheduler will continue to evaluate the rest of nodes to check if there
-   is any node already available for the coming pod.
+   If no error is returned but `NominatedNode` cannot pass all the filtering, this is possibly caused by the resource that
+   claims to be removed but has not been fully released yet.
 
-   If scheduler still cannot find any node for the pod, scheduling will retry until matching either of the following cases,
+   For both of above cases, scheduler will continue to evaluate the rest of nodes to check if there is any node already
+   available for the coming pod.
+
+   Scheduler will retry until matching either of the following cases,
    - `NominatedNode` eventually released all the resource and the preemptor pod can be scheduled on that node.
    - Another node in the cluster released enough release and pod get scheduled on that node instead.
    [Discuss] Should scheduler clear the `NominatedNode` in this case?
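The flow the reworded paragraphs describe can be illustrated with a short, self-contained Go sketch. This is only an illustration under simplified assumptions: `Pod`, `Node`, `filterFn` and `findNodesThatFit` below are hypothetical stand-ins, not the actual kube-scheduler types or the real `findNodesThatFitPod`/`findNodesThatPassFilters` signatures. It shows the nominated node being evaluated first, an evaluation error being wrapped with extra context, and the fallback to the remaining nodes when the nominated node does not (yet) fit.

```go
// Minimal sketch of "try the nominated node first" during filtering.
// All types and helpers here are simplified stand-ins for illustration,
// not the real kube-scheduler implementation.
package main

import (
	"errors"
	"fmt"
)

type Node struct {
	Name         string
	FreeCPUMilli int64
}

type Pod struct {
	Name              string
	NominatedNodeName string
	RequestCPUMilli   int64
}

// filterFn reports whether the pod fits the node; a non-nil error means the
// evaluation itself failed (e.g. a filter plugin returned an internal error).
type filterFn func(pod *Pod, node *Node) (bool, error)

// findNodesThatFit evaluates the nominated node first (when the feature is
// enabled and a nomination is recorded) and falls back to the remaining nodes
// if the nominated node is no longer feasible.
func findNodesThatFit(pod *Pod, nodes []*Node, fits filterFn, featureEnabled bool) ([]*Node, error) {
	if featureEnabled && pod.NominatedNodeName != "" {
		for _, n := range nodes {
			if n.Name != pod.NominatedNodeName {
				continue
			}
			ok, err := fits(pod, n)
			if err != nil {
				// Pad the error so callers can tell the failure happened while
				// evaluating the feasibility of the nominated node.
				return nil, fmt.Errorf("evaluating nominated node %q for pod %q: %w",
					n.Name, pod.Name, err)
			}
			if ok {
				return []*Node{n}, nil
			}
			// The nominated node no longer fits (e.g. preempted pods have not
			// fully released their resources yet); evaluate the rest of the nodes.
			break
		}
	}

	var feasible []*Node
	for _, n := range nodes {
		ok, err := fits(pod, n)
		if err != nil {
			return nil, err
		}
		if ok {
			feasible = append(feasible, n)
		}
	}
	if len(feasible) == 0 {
		// The caller is expected to retry until the nominated node (or any
		// other node) frees enough resources.
		return nil, errors.New("no feasible node found; scheduling will be retried")
	}
	return feasible, nil
}

func main() {
	cpuFits := func(pod *Pod, node *Node) (bool, error) {
		return node.FreeCPUMilli >= pod.RequestCPUMilli, nil
	}
	nodes := []*Node{
		{Name: "node-a", FreeCPUMilli: 500},  // nominated, resources not yet released
		{Name: "node-b", FreeCPUMilli: 2000}, // another node that already has room
	}
	pod := &Pod{Name: "preemptor", NominatedNodeName: "node-a", RequestCPUMilli: 1000}

	feasible, err := findNodesThatFit(pod, nodes, cpuFits, true)
	if err != nil {
		fmt.Println("retry later:", err)
		return
	}
	for _, n := range feasible {
		fmt.Println("feasible node:", n.Name)
	}
}
```

In this sketch an infeasible nominated node simply falls through to the full node list, and an empty result is surfaced as an error so the caller can retry, mirroring the two retry outcomes listed above: the nominated node eventually frees its resources, or another node becomes available first.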
