Commit 1bbcfe6

sched_ext: Relocate check_hotplug_seq() call in scx_ops_enable()
check_hotplug_seq() is used to detect CPU hotplug events which occurred while the BPF scheduler is being loaded, so that initialization can be retried if CPU hotplug events take place before the CPU hotplug callbacks are online. As such, the best place to call it is in the same cpus_read_lock() section that enables the CPU hotplug ops. Currently, it is called in the next cpus_read_lock() block in scx_ops_enable(). The side effect of this placement is a small window in which hotplug sequence detection can trigger unnecessarily, which isn't critical.

Move the check_hotplug_seq() invocation to the same cpus_read_lock() block as the hotplug operation enablement to close the window and get the invocation out of the way for planned locking updates.

Signed-off-by: Tejun Heo <[email protected]>
Cc: David Vernet <[email protected]>
1 parent 6f34d8d commit 1bbcfe6

1 file changed: +1, -2 lines


kernel/sched/ext.c

Lines changed: 1 addition & 2 deletions
@@ -5050,6 +5050,7 @@ static int scx_ops_enable(struct sched_ext_ops *ops, struct bpf_link *link)
 		if (((void (**)(void))ops)[i])
 			static_branch_enable_cpuslocked(&scx_has_op[i]);
 
+	check_hotplug_seq(ops);
 	cpus_read_unlock();
 
 	ret = validate_ops(ops);
@@ -5098,8 +5099,6 @@ static int scx_ops_enable(struct sched_ext_ops *ops, struct bpf_link *link)
 	cpus_read_lock();
 	scx_cgroup_lock();
 
-	check_hotplug_seq(ops);
-
 	for (i = SCX_OPI_NORMAL_BEGIN; i < SCX_OPI_NORMAL_END; i++)
 		if (((void (**)(void))ops)[i])
 			static_branch_enable_cpuslocked(&scx_has_op[i]);
