
Commit 26cf522

wangyun2137 authored and Peter Zijlstra committed
sched: Avoid scale real weight down to zero
During our testing, we found a case where shares no longer work correctly. The cgroup topology is:

  /sys/fs/cgroup/cpu/A        (shares=102400)
  /sys/fs/cgroup/cpu/A/B      (shares=2)
  /sys/fs/cgroup/cpu/A/B/C    (shares=1024)

  /sys/fs/cgroup/cpu/D        (shares=1024)
  /sys/fs/cgroup/cpu/D/E      (shares=1024)
  /sys/fs/cgroup/cpu/D/E/F    (shares=1024)

The same benchmark runs in groups C and F, no other tasks are running, and the benchmark is capable of consuming all the CPUs. We would expect group C to win more CPU resources, since it can enjoy all the shares of group A, but it is F that wins much more.

The reason is that group B has shares of 2. Since A->cfs_rq.load.weight == B->se.load.weight == B->shares/nr_cpus, A->cfs_rq.load.weight becomes very small. In calc_group_shares() we calculate shares as:

  load = max(scale_load_down(cfs_rq->load.weight), cfs_rq->avg.load_avg);
  shares = (tg_shares * load) / tg_weight;

Because 'cfs_rq->load.weight' is so small, the load becomes 0 after the scale-down. Although 'tg_shares' is 102400, the shares of the se that stands for group A on the root cfs_rq become 2, while the se of D on the root cfs_rq is far bigger than 2, so it wins the battle.

Thus, when scale_load_down() scales a real weight down to 0, it no longer tells the real story: the caller gets wrong information and the calculation goes wrong. This patch adds a check in scale_load_down() so that the real weight stays >= MIN_SHARES after scaling; with it applied, group C wins as expected.

Suggested-by: Peter Zijlstra <[email protected]>
Signed-off-by: Michael Wang <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Vincent Guittot <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
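A minimal user-space sketch of that arithmetic, assuming SCHED_FIXEDPOINT_SHIFT == 10 and a hypothetical 96-CPU machine (both are illustration-only assumptions, and max() is open-coded as a ternary). It shows the old scale_load_down() collapsing B's per-CPU weight to 0 while the patched version keeps it at 2:

#include <stdio.h>

#define SCHED_FIXEDPOINT_SHIFT 10	/* assumed; matches the 64-bit kernel value */

static unsigned long scale_load(unsigned long w)
{
	return w << SCHED_FIXEDPOINT_SHIFT;
}

/* Pre-patch behaviour: a small non-zero weight can scale down to 0. */
static unsigned long scale_load_down_old(unsigned long w)
{
	return w >> SCHED_FIXEDPOINT_SHIFT;
}

/* Post-patch behaviour: a non-zero weight is clamped to at least 2. */
static unsigned long scale_load_down_new(unsigned long w)
{
	unsigned long s;

	if (!w)
		return 0;
	s = w >> SCHED_FIXEDPOINT_SHIFT;
	return s > 2UL ? s : 2UL;
}

int main(void)
{
	unsigned long nr_cpus = 96;	/* hypothetical machine size */
	/* B's group se weight on one CPU: shares=2 scaled up, split across CPUs. */
	unsigned long w = scale_load(2) / nr_cpus;	/* 2048 / 96 = 21 */

	printf("weight %lu -> old %lu, new %lu\n",
	       w, scale_load_down_old(w), scale_load_down_new(w));
	/* Prints: weight 21 -> old 0, new 2. With the old macro, group A's
	 * load vanishes in calc_group_shares(); with the new one it stays
	 * visible, so group C wins the CPU time as expected. */
	return 0;
}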
1 parent 1066d1b commit 26cf522

File tree

1 file changed

+7 -1 lines changed


kernel/sched/sched.h

Lines changed: 7 additions & 1 deletion
@@ -118,7 +118,13 @@ extern long calc_load_fold_active(struct rq *this_rq, long adjust);
 #ifdef CONFIG_64BIT
 # define NICE_0_LOAD_SHIFT	(SCHED_FIXEDPOINT_SHIFT + SCHED_FIXEDPOINT_SHIFT)
 # define scale_load(w)		((w) << SCHED_FIXEDPOINT_SHIFT)
-# define scale_load_down(w)	((w) >> SCHED_FIXEDPOINT_SHIFT)
+# define scale_load_down(w) \
+({ \
+	unsigned long __w = (w); \
+	if (__w) \
+		__w = max(2UL, __w >> SCHED_FIXEDPOINT_SHIFT); \
+	__w; \
+})
 #else
 # define NICE_0_LOAD_SHIFT	(SCHED_FIXEDPOINT_SHIFT)
 # define scale_load(w)		(w)
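Two details of the new macro are worth noting. The if (__w) guard leaves a genuinely zero weight at zero rather than clamping it up, so an empty cfs_rq does not acquire phantom load; only non-zero weights get the floor. The 2UL floor keeps the scaled weight >= MIN_SHARES, as the changelog states. The 32-bit branch below is untouched because there scale_load() and scale_load_down() are already no-ops, so no precision can be lost.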
