
Commit fcf6631

Authored by vingu-linaro (Vincent Guittot), committed by Peter Zijlstra
sched/pelt: Ensure that *_sum is always synced with *_avg
Rounding in the PELT calculation that happens when entities are attached to or detached from a cfs_rq can result in situations where util/runnable_avg is not null but util/runnable_sum is. This is normally not possible, so we need to ensure that util/runnable_sum stays synced with util/runnable_avg.

detach_entity_load_avg() is the last place where we don't sync util/runnable_sum with util/runnable_avg when moving some sched_entities.

Signed-off-by: Vincent Guittot <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
1 parent f268c37 commit fcf6631

File tree: 1 file changed (+8, -2 lines)


kernel/sched/fair.c

Lines changed: 8 additions & 2 deletions
@@ -3765,11 +3765,17 @@ static void attach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *s
 	 */
 static void detach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se)
 {
+	/*
+	 * cfs_rq->avg.period_contrib can be used for both cfs_rq and se.
+	 * See ___update_load_avg() for details.
+	 */
+	u32 divider = get_pelt_divider(&cfs_rq->avg);
+
 	dequeue_load_avg(cfs_rq, se);
 	sub_positive(&cfs_rq->avg.util_avg, se->avg.util_avg);
-	sub_positive(&cfs_rq->avg.util_sum, se->avg.util_sum);
+	cfs_rq->avg.util_sum = cfs_rq->avg.util_avg * divider;
 	sub_positive(&cfs_rq->avg.runnable_avg, se->avg.runnable_avg);
-	sub_positive(&cfs_rq->avg.runnable_sum, se->avg.runnable_sum);
+	cfs_rq->avg.runnable_sum = cfs_rq->avg.runnable_avg * divider;
 
 	add_tg_cfs_propagate(cfs_rq, -se->avg.load_sum);
