Commit 22165f6

Yicong Yang authored and Peter Zijlstra committed
sched/fair: Use candidate prev/recent_used CPU if scanning failed for cluster wakeup
Chen Yu reports a hackbench regression for cluster wakeup when the number of hackbench threads equals the number of CPUs [1]. Analysis shows this happens because we wake up tasks on the target CPU more often, even when prev_cpu is a good wakeup candidate, which decreases CPU utilization.

Generally, if the task's prev_cpu is idle, we wake the task up on it without scanning. On cluster machines we try to wake the task up within the same cluster as the target for better cache affinity, so if prev_cpu is idle but does not share a cluster with the target, we still try to find an idle CPU within the target's cluster. This improves performance at low loads on cluster machines. But in the issue above, when prev_cpu is idle but outside the target's cluster, we scan for an idle CPU in that cluster; since the system is busy, the scan is likely to fail and we fall back to the target, even though prev_cpu is idle. This causes the regression.

This patch solves the problem in two steps:

o Record prev_cpu/recent_used_cpu if they are good wakeup candidates but do not share a cluster with the target.
o On scanning failure, use prev_cpu/recent_used_cpu if they were recorded as idle.

[1] https://lore.kernel.org/all/ZGzDLuVaHR1PAYDt@chenyu5-mobl1/

Closes: https://lore.kernel.org/all/ZGsLy83wPIpamy6x@chenyu5-mobl1/
Reported-by: Chen Yu <[email protected]>
Signed-off-by: Yicong Yang <[email protected]>
Tested-and-reviewed-by: Chen Yu <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Vincent Guittot <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
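The two-step logic above can be sketched in isolation. The following is a minimal user-space C sketch, not the kernel code: `NR_CPUS`, `cpu_idle`, `cluster_of`, `cpus_share_cluster()` and `scan_cluster_for_idle()` are hypothetical stand-ins for the kernel's topology and idle-state queries (`cpus_share_resources()`, `select_idle_cpu()`, and so on).

```c
#include <assert.h>
#include <stdbool.h>

#define NR_CPUS 8

/* Hypothetical stand-ins for the kernel's per-CPU state. */
static bool cpu_idle[NR_CPUS];
static int cluster_of[NR_CPUS];   /* cluster id per CPU */

static bool cpus_share_cluster(int a, int b)
{
	return cluster_of[a] == cluster_of[b];
}

/* Pretend scan of the target's cluster: an idle CPU, or -1 on failure. */
static int scan_cluster_for_idle(int target)
{
	for (int cpu = 0; cpu < NR_CPUS; cpu++)
		if (cpu_idle[cpu] && cpus_share_cluster(cpu, target))
			return cpu;
	return -1;
}

/* Sketch of the patched selection logic. */
static int select_wakeup_cpu(int prev, int target)
{
	int prev_aff = -1;

	if (cpu_idle[prev]) {
		if (cpus_share_cluster(prev, target))
			return prev;  /* idle and in-cluster: use it directly */
		prev_aff = prev;      /* step 1: record idle out-of-cluster prev */
	}

	int i = scan_cluster_for_idle(target);
	if (i >= 0)
		return i;             /* cluster scan succeeded */

	if (prev_aff >= 0)
		return prev_aff;      /* step 2: fall back to recorded candidate */

	return target;                /* last resort, as before the patch */
}
```

Before the patch, the equivalent of `select_wakeup_cpu()` jumped straight from a failed cluster scan to `return target`, discarding the idle `prev` that was only rejected for being in the wrong cluster.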
1 parent 8881e16 commit 22165f6

File tree

1 file changed: +16 −1 lines


kernel/sched/fair.c

Lines changed: 16 additions & 1 deletion
@@ -7392,7 +7392,7 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
 	bool has_idle_core = false;
 	struct sched_domain *sd;
 	unsigned long task_util, util_min, util_max;
-	int i, recent_used_cpu;
+	int i, recent_used_cpu, prev_aff = -1;
 
 	/*
 	 * On asymmetric system, update task utilization because we will check
@@ -7424,6 +7424,8 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
 		if (!static_branch_unlikely(&sched_cluster_active) ||
 		    cpus_share_resources(prev, target))
 			return prev;
+
+		prev_aff = prev;
 	}
 
 	/*
@@ -7456,6 +7458,8 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
 		    cpus_share_resources(recent_used_cpu, target))
 			return recent_used_cpu;
 
+	} else {
+		recent_used_cpu = -1;
 	}
 
 	/*
@@ -7496,6 +7500,17 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
 	if ((unsigned)i < nr_cpumask_bits)
 		return i;
 
+	/*
+	 * For cluster machines which have lower sharing cache like L2 or
+	 * LLC Tag, we tend to find an idle CPU in the target's cluster
+	 * first. But prev_cpu or recent_used_cpu may also be a good candidate,
+	 * use them if possible when no idle CPU found in select_idle_cpu().
+	 */
+	if ((unsigned int)prev_aff < nr_cpumask_bits)
+		return prev_aff;
+	if ((unsigned int)recent_used_cpu < nr_cpumask_bits)
+		return recent_used_cpu;
+
 	return target;
 }