
Commit 6a09c24

deggeman authored and gregkh committed
Revert "sched/core: Reduce cost of sched_move_task when config autogroup"
commit 76f970c upstream.

This reverts commit eff6c8c.

Hazem reported a 30% drop in UnixBench spawn test with commit eff6c8c
("sched/core: Reduce cost of sched_move_task when config autogroup") on
an m6g.xlarge AWS EC2 instance with 4 vCPUs and 16 GiB RAM (aarch64)
(single level MC sched domain):

  https://lkml.kernel.org/r/[email protected]

There is an early bail from sched_move_task() if p->sched_task_group is
equal to p's 'cpu cgroup' (sched_get_task_group()). E.g. both are
pointing to taskgroup '/user.slice/user-1000.slice/session-1.scope'
(Ubuntu '22.04.5 LTS').

So in:

  do_exit()
    sched_autogroup_exit_task()
      sched_move_task()
        if sched_get_task_group(p) == p->sched_task_group
          return

        /* p is enqueued */
        dequeue_task()                \
        sched_change_group()          |
          task_change_group_fair()    |
            detach_task_cfs_rq()      |             (1)
            set_task_rq()             |
            attach_task_cfs_rq()      |
        enqueue_task()                /

(1) isn't called for p anymore.

Turns out that the regression is related to sgs->group_util in
group_is_overloaded() and group_has_capacity(). If (1) isn't called for
all the 'spawn' tasks then sgs->group_util is ~900 and
sgs->group_capacity = 1024 (single CPU sched domain) and this leads to
group_is_overloaded() returning true (2) and group_has_capacity() false
(3) much more often compared to the case when (1) is called.

I.e. there are many more cases of 'group_is_overloaded' and
'group_fully_busy' in WF_FORK wakeup sched_balance_find_dst_cpu(),
which then much more often returns a CPU != smp_processor_id() (5).

This isn't good for these extremely short running tasks (FORK + EXIT)
and also involves calling sched_balance_find_dst_group_cpu()
unnecessarily (single CPU sched domain).

Instead, if (1) is called for 'p->flags & PF_EXITING' then the path
(4),(6) is taken much more often.

  select_task_rq_fair(..., wake_flags = WF_FORK)

    cpu = smp_processor_id()

    new_cpu = sched_balance_find_dst_cpu(..., cpu, ...)

      group = sched_balance_find_dst_group(..., cpu)

        do {

          update_sg_wakeup_stats()

            sgs->group_type = group_classify()

              if group_is_overloaded()              (2)
                return group_overloaded

              if !group_has_capacity()              (3)
                return group_fully_busy

              return group_has_spare                (4)

        } while group

        if local_sgs.group_type > idlest_sgs.group_type
          return idlest                             (5)

        case group_has_spare:

          if local_sgs.idle_cpus >= idlest_sgs.idle_cpus
            return NULL                             (6)

UnixBench tests './Run -c 4 spawn' on:

  (a) VM AWS instance (m7gd.16xlarge) with v6.13 ('maxcpus=4 nr_cpus=4')
      and Ubuntu 22.04.5 LTS (aarch64).
      Shell & test run in '/user.slice/user-1000.slice/session-1.scope'.

        w/o patch   w/ patch
          21005       27120

  (b) i7-13700K with tip/sched/core ('nosmt maxcpus=8 nr_cpus=8') and
      Ubuntu 22.04.5 LTS (x86_64).
      Shell & test run in '/A'.

        w/o patch   w/ patch
          67675       88806

CONFIG_SCHED_AUTOGROUP=y & /proc/sys/kernel/sched_autogroup_enabled
equal 0 or 1.

Reported-by: Hazem Mohamed Abuelfotoh <[email protected]>
Signed-off-by: Dietmar Eggemann <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Reviewed-by: Vincent Guittot <[email protected]>
Tested-by: Hagar Hemdan <[email protected]>
Cc: Linus Torvalds <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Greg Kroah-Hartman <[email protected]>
1 parent 8c90d43 commit 6a09c24
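
The regression mechanism described in the changelog is ultimately arithmetic: with the dequeue/attach cycle (1) skipped, sgs->group_util stays at ~900 against a group_capacity of 1024, and the checks at (2) and (3) flip. Below is a minimal standalone sketch of the two predicates. It is a paraphrase of the logic in kernel/sched/fair.c, not the kernel's code: the struct is reduced to the relevant fields, and the imbalance_pct value of 117 is an assumption (it is a per-sched-domain tunable and varies by kernel version).

  #include <stdbool.h>
  #include <stdio.h>

  /*
   * Reduced sketch of the wakeup stats the changelog refers to. Field
   * names follow struct sg_lb_stats in kernel/sched/fair.c, but this
   * is a paraphrase, not the kernel's definition.
   */
  struct sg_stats {
          unsigned long group_util;     /* summed PELT utilization */
          unsigned long group_capacity; /* 1024 for a single-CPU group */
          unsigned int  sum_nr_running; /* runnable tasks in the group */
          unsigned int  group_weight;   /* CPUs in the group */
  };

  /* Check (2): utilization scaled by imbalance_pct exceeds capacity. */
  static bool group_is_overloaded(unsigned int imbalance_pct,
                                  const struct sg_stats *sgs)
  {
          if (sgs->sum_nr_running <= sgs->group_weight)
                  return false;

          return sgs->group_capacity * 100 < sgs->group_util * imbalance_pct;
  }

  /* Check (3): the group still has spare capacity for more tasks. */
  static bool group_has_capacity(unsigned int imbalance_pct,
                                 const struct sg_stats *sgs)
  {
          if (sgs->sum_nr_running < sgs->group_weight)
                  return true;

          return sgs->group_util * imbalance_pct < sgs->group_capacity * 100;
  }

  int main(void)
  {
          /* Numbers from the changelog: stale util ~900, one CPU. */
          const struct sg_stats sgs = {
                  .group_util = 900, .group_capacity = 1024,
                  .sum_nr_running = 2, .group_weight = 1,
          };
          const unsigned int imbalance_pct = 117; /* assumed MC default */

          /* Prints: overloaded=1 has_capacity=0 */
          printf("overloaded=%d has_capacity=%d\n",
                 group_is_overloaded(imbalance_pct, &sgs),
                 group_has_capacity(imbalance_pct, &sgs));
          return 0;
  }

Since 900 * 117 = 105300 exceeds 1024 * 100 = 102400, the single-CPU group classifies as group_overloaded rather than group_has_spare, steering the WF_FORK wakeup away from smp_processor_id() at (5) instead of returning NULL at (6).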

File tree

1 file changed: +3 -18 lines changed


kernel/sched/core.c

Lines changed: 3 additions & 18 deletions

--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -9010,7 +9010,7 @@ void sched_release_group(struct task_group *tg)
 	spin_unlock_irqrestore(&task_group_lock, flags);
 }
 
-static struct task_group *sched_get_task_group(struct task_struct *tsk)
+static void sched_change_group(struct task_struct *tsk)
 {
 	struct task_group *tg;
 
@@ -9022,13 +9022,7 @@ static struct task_group *sched_get_task_group(struct task_struct *tsk)
 	tg = container_of(task_css_check(tsk, cpu_cgrp_id, true),
 			  struct task_group, css);
 	tg = autogroup_task_group(tsk, tg);
-
-	return tg;
-}
-
-static void sched_change_group(struct task_struct *tsk, struct task_group *group)
-{
-	tsk->sched_task_group = group;
+	tsk->sched_task_group = tg;
 
 #ifdef CONFIG_FAIR_GROUP_SCHED
 	if (tsk->sched_class->task_change_group)
@@ -9049,20 +9043,11 @@ void sched_move_task(struct task_struct *tsk, bool for_autogroup)
 {
 	int queued, running, queue_flags =
 		DEQUEUE_SAVE | DEQUEUE_MOVE | DEQUEUE_NOCLOCK;
-	struct task_group *group;
 	struct rq *rq;
 
 	CLASS(task_rq_lock, rq_guard)(tsk);
 	rq = rq_guard.rq;
 
-	/*
-	 * Esp. with SCHED_AUTOGROUP enabled it is possible to get superfluous
-	 * group changes.
-	 */
-	group = sched_get_task_group(tsk);
-	if (group == tsk->sched_task_group)
-		return;
-
 	update_rq_clock(rq);
 
 	running = task_current_donor(rq, tsk);
@@ -9073,7 +9058,7 @@ void sched_move_task(struct task_struct *tsk, bool for_autogroup)
 	if (running)
 		put_prev_task(rq, tsk);
 
-	sched_change_group(tsk, group);
+	sched_change_group(tsk);
 	if (!for_autogroup)
 		scx_cgroup_move_task(tsk);
 
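
For readability, here is the helper reassembled from the hunks above as it stands after the revert: sched_get_task_group() is folded back into sched_change_group(), which now looks up the task group itself. Only the lines visible in the diff context are shown; the elided remainder of the #ifdef block is untouched by this commit.

  static void sched_change_group(struct task_struct *tsk)
  {
          struct task_group *tg;

          /* Resolve tsk's cpu cgroup and honour autogroup, as before. */
          tg = container_of(task_css_check(tsk, cpu_cgrp_id, true),
                            struct task_group, css);
          tg = autogroup_task_group(tsk, tg);
          tsk->sched_task_group = tg;

  #ifdef CONFIG_FAIR_GROUP_SCHED
          if (tsk->sched_class->task_change_group)
                  tsk->sched_class->task_change_group(tsk);
          /* ... (remainder outside the diff context) ... */
  #endif
  }

With the early bail gone, every sched_move_task() call again goes through the full dequeue_task() / sched_change_group() / enqueue_task() cycle, so detach_task_cfs_rq() / set_task_rq() / attach_task_cfs_rq() run for exiting tasks and sgs->group_util no longer goes stale.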
