
Commit e94f80f

Authored by Qais Yousef, committed by Peter Zijlstra
sched/rt: cpupri_find: Trigger a full search as fallback
If we failed to find a fitting CPU in cpupri_find(), we only fall back to the level we found a hit at. But Steve suggested falling back to a second full scan instead, as this could be a better effort.

https://lore.kernel.org/lkml/[email protected]/

We trigger the second search unconditionally, since the argument for triggering a full search is that the recorded fallback level might have become empty by then. That means storing any data about what happened would be meaningless and stale.

I had a humble try at timing it and it seemed okay for the small 6-CPU system I was running on:

https://lore.kernel.org/lkml/[email protected]/

On a large system this second full scan could be expensive. But there are no users outside capacity awareness for this fitness function at the moment. Heterogeneous systems tend to be small, with 8 cores in total.

Suggested-by: Steven Rostedt <[email protected]>
Signed-off-by: Qais Yousef <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Steven Rostedt (VMware) <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
1 parent 26c7295 commit e94f80f

File tree

1 file changed: +6 −23 lines


kernel/sched/cpupri.c

Lines changed: 6 additions & 23 deletions
```diff
@@ -122,8 +122,7 @@ int cpupri_find_fitness(struct cpupri *cp, struct task_struct *p,
 		  bool (*fitness_fn)(struct task_struct *p, int cpu))
 {
 	int task_pri = convert_prio(p->prio);
-	int best_unfit_idx = -1;
-	int idx = 0, cpu;
+	int idx, cpu;
 
 	BUG_ON(task_pri >= CPUPRI_NR_PRIORITIES);
 
@@ -145,31 +144,15 @@ int cpupri_find_fitness(struct cpupri *cp, struct task_struct *p,
 		 * If no CPU at the current priority can fit the task
 		 * continue looking
 		 */
-		if (cpumask_empty(lowest_mask)) {
-			/*
-			 * Store our fallback priority in case we
-			 * didn't find a fitting CPU
-			 */
-			if (best_unfit_idx == -1)
-				best_unfit_idx = idx;
-
+		if (cpumask_empty(lowest_mask))
 			continue;
-		}
 
 		return 1;
 	}
 
 	/*
-	 * If we failed to find a fitting lowest_mask, make sure we fall back
-	 * to the last known unfitting lowest_mask.
-	 *
-	 * Note that the map of the recorded idx might have changed since then,
-	 * so we must ensure to do the full dance to make sure that level still
-	 * holds a valid lowest_mask.
-	 *
-	 * As per above, the map could have been concurrently emptied while we
-	 * were busy searching for a fitting lowest_mask at the other priority
-	 * levels.
+	 * If we failed to find a fitting lowest_mask, kick off a new search
+	 * but without taking into account any fitness criteria this time.
 	 *
 	 * This rule favours honouring priority over fitting the task in the
 	 * correct CPU (Capacity Awareness being the only user now).
@@ -184,8 +167,8 @@ int cpupri_find_fitness(struct cpupri *cp, struct task_struct *p,
 	 * must do proper RT planning to avoid overloading the system if they
 	 * really care.
 	 */
-	if (best_unfit_idx != -1)
-		return __cpupri_find(cp, p, lowest_mask, best_unfit_idx);
+	if (fitness_fn)
+		return cpupri_find(cp, p, lowest_mask);
 
 	return 0;
 }
```
