Commit ceb6ba4

vingu-linaro authored and Peter Zijlstra committed
sched/fair: Sync load_sum with load_avg after dequeue
commit 9e077b5 ("sched/pelt: Check that *_avg are null when *_sum are") reported some inconsistencies between *_avg and *_sum. commit 1c35b07 ("sched/fair: Ensure _sum and _avg values stay consistent") fixed some of them, but one remains when dequeuing load.

Sync the cfs_rq's load_sum with its load_avg after dequeuing the load of a sched_entity.

Fixes: 9e077b5 ("sched/pelt: Check that *_avg are null when *_sum are")
Reported-by: Sachin Sant <[email protected]>
Signed-off-by: Vincent Guittot <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Odin Ugedal <[email protected]>
Tested-by: Sachin Sant <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
1 parent a22a5cb commit ceb6ba4

File tree

1 file changed: +2 −1 lines changed


kernel/sched/fair.c

Lines changed: 2 additions & 1 deletion

```diff
@@ -3037,8 +3037,9 @@ enqueue_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se)
 static inline void
 dequeue_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se)
 {
+	u32 divider = get_pelt_divider(&se->avg);
 	sub_positive(&cfs_rq->avg.load_avg, se->avg.load_avg);
-	sub_positive(&cfs_rq->avg.load_sum, se_weight(se) * se->avg.load_sum);
+	cfs_rq->avg.load_sum = cfs_rq->avg.load_avg * divider;
 }
 #else
 static inline void
```
