
Commit d583d36

hnaz authored and Peter Zijlstra committed
psi: Fix psi state corruption when schedule() races with cgroup move
4117ceb ("psi: Optimize task switch inside shared cgroups") introduced a race condition that corrupts internal psi state. This manifests as kernel warnings, sometimes followed by bogusly high IO pressure:

  psi: task underflow! cpu=1 t=2 tasks=[0 0 0 0] clear=c set=0

(schedule() decreasing RUNNING and ONCPU, both of which are 0)

  psi: inconsistent task state! task=2412744:systemd cpu=17 psi_flags=e clear=3 set=0

(cgroup_move_task() clearing MEMSTALL and IOWAIT, but task is MEMSTALL | RUNNING | ONCPU)

What the offending commit does is batch the two psi callbacks in schedule() to reduce the number of cgroup tree updates. When prev is deactivated and removed from the runqueue, nothing is done in psi at first; when the task switch completes, TSK_RUNNING and TSK_IOWAIT are updated along with TSK_ONCPU.

However, the deactivation and the task switch inside schedule() aren't atomic: pick_next_task() may drop the rq lock for load balancing. When this happens, cgroup_move_task() can run after the task has been physically dequeued, but the psi updates are still pending. Since it looks at the task's scheduler state, it doesn't move everything to the new cgroup that the task switch that follows is about to clear from it. cgroup_move_task() will leak the TSK_RUNNING count in the old cgroup, and psi_sched_switch() will underflow it in the new cgroup. A similar thing can happen for iowait. TSK_IOWAIT is usually set when a p->in_iowait task is dequeued, but again this update is deferred to the switch. cgroup_move_task() can see an unqueued p->in_iowait task and move a non-existent TSK_IOWAIT.

This results in the inconsistent task state warning, as well as a counter underflow that will result in permanent IO ghost pressure being reported.

Fix this bug by making cgroup_move_task() use task->psi_flags instead of looking at the potentially mismatching scheduler state.

[ We used the scheduler state historically in order to not rely on task->psi_flags for anything but debugging. But that ship has sailed anyway, and this is simpler and more robust.

  We previously already batched TSK_ONCPU clearing with the TSK_RUNNING update inside the deactivation call from schedule(). But that ordering was safe and didn't result in TSK_ONCPU corruption: unlike most places in the scheduler, cgroup_move_task() only checked task_current() and handled TSK_ONCPU if the task was still queued. ]

Fixes: 4117ceb ("psi: Optimize task switch inside shared cgroups")
Signed-off-by: Johannes Weiner <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
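To make the bookkeeping failure concrete, here is a minimal userspace C model of the race, not kernel code. The tasks[][] counters, the task_change() helper, and the cgroup indices are hypothetical stand-ins for the psi internals; the TSK_* bit positions are chosen to mirror the t=2 / clear=c values quoted in the warning above. It is a sketch of the mechanism under those assumptions, not the real implementation.

/* model-psi-race.c: hypothetical userspace model of the psi race;
 * names and data structures below are illustrative, not the kernel's. */
#include <stdio.h>

#define TSK_IOWAIT	(1 << 0)
#define TSK_MEMSTALL	(1 << 1)
#define TSK_RUNNING	(1 << 2)
#define TSK_ONCPU	(1 << 3)

static int tasks[2][4];	/* per-cgroup counters, one per state bit */

static void task_change(int cg, unsigned int clear, unsigned int set)
{
	for (int t = 0; t < 4; t++) {
		if (clear & (1u << t)) {
			if (tasks[cg][t] == 0)
				printf("psi: task underflow! cgroup=%d t=%d clear=%x\n",
				       cg, t, clear);
			else
				tasks[cg][t]--;
		}
		if (set & (1u << t))
			tasks[cg][t]++;
	}
}

int main(void)
{
	unsigned int psi_flags = TSK_RUNNING | TSK_ONCPU;

	tasks[0][2] = tasks[0][3] = 1;	/* accounted in old cgroup 0 */

	/* schedule(): deactivate_task() sets p->on_rq = 0 but defers the
	 * psi updates to the switch; pick_next_task() then drops the rq
	 * lock for load balancing. */
	int on_rq = 0;

	/* cgroup_move_task() races in. Buggy version: derives the flags
	 * from scheduler state, sees a dequeued task, and moves nothing:
	 * the old cgroup leaks its counts, the new cgroup gets none. */
	unsigned int moved = on_rq ? TSK_RUNNING : 0;
	task_change(0, moved, 0);	/* old cgroup */
	task_change(1, 0, moved);	/* new cgroup */

	/* schedule() resumes: psi_sched_switch() applies the deferred
	 * RUNNING/ONCPU clear in the NEW cgroup -> clear=c underflows. */
	task_change(1, psi_flags, 0);

	return 0;
}

Running this prints underflow warnings for bits 2 and 3 (RUNNING and ONCPU, i.e. clear=c), while tasks[0][2] and tasks[0][3] stay leaked at 1: the same signature as the warning quoted above.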
1 parent: 19987fd


kernel/sched/psi.c

Lines changed: 26 additions & 10 deletions

@@ -972,7 +972,7 @@ void psi_cgroup_free(struct cgroup *cgroup)
  */
 void cgroup_move_task(struct task_struct *task, struct css_set *to)
 {
-	unsigned int task_flags = 0;
+	unsigned int task_flags;
 	struct rq_flags rf;
 	struct rq *rq;
 
@@ -987,15 +987,31 @@ void cgroup_move_task(struct task_struct *task, struct css_set *to)
 
 	rq = task_rq_lock(task, &rf);
 
-	if (task_on_rq_queued(task)) {
-		task_flags = TSK_RUNNING;
-		if (task_current(rq, task))
-			task_flags |= TSK_ONCPU;
-	} else if (task->in_iowait)
-		task_flags = TSK_IOWAIT;
-
-	if (task->in_memstall)
-		task_flags |= TSK_MEMSTALL;
+	/*
+	 * We may race with schedule() dropping the rq lock between
+	 * deactivating prev and switching to next. Because the psi
+	 * updates from the deactivation are deferred to the switch
+	 * callback to save cgroup tree updates, the task's scheduling
+	 * state here is not coherent with its psi state:
+	 *
+	 * schedule()                   cgroup_move_task()
+	 *   rq_lock()
+	 *   deactivate_task()
+	 *     p->on_rq = 0
+	 *     psi_dequeue() // defers TSK_RUNNING & TSK_IOWAIT updates
+	 *   pick_next_task()
+	 *     rq_unlock()
+	 *                                rq_lock()
+	 *                                psi_task_change() // old cgroup
+	 *                                task->cgroups = to
+	 *                                psi_task_change() // new cgroup
+	 *                                rq_unlock()
+	 *     rq_lock()
+	 *   psi_sched_switch() // does deferred updates in new cgroup
+	 *
+	 * Don't rely on the scheduling state. Use psi_flags instead.
+	 */
+	task_flags = task->psi_flags;
 
 	if (task_flags)
 		psi_task_change(task, task_flags, 0);
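The second warning quoted in the commit message comes from psi's consistency check on task->psi_flags. Below is a rough userspace sketch of a check of that shape, reproducing the quoted psi_flags=e clear=3 case; it is a simplified model under assumed semantics, not the kernel's actual psi_task_change(), and the flags_change() name is hypothetical.

/* model-psi-check.c: hypothetical model of the consistency check
 * behind "psi: inconsistent task state!"; simplified, not kernel code. */
#include <stdio.h>

static unsigned int psi_flags;	/* stand-in for task->psi_flags */

static void flags_change(unsigned int clear, unsigned int set)
{
	/* Clearing a bit that isn't set, or setting one that already
	 * is, means the caller's view of the task state is stale. */
	if ((psi_flags & set) || (psi_flags & clear) != clear)
		printf("psi: inconsistent task state! psi_flags=%x clear=%x set=%x\n",
		       psi_flags, clear, set);

	psi_flags &= ~clear;
	psi_flags |= set;
}

int main(void)
{
	psi_flags = 0xe;	/* MEMSTALL | RUNNING | ONCPU */

	/* Buggy cgroup_move_task() on an unqueued p->in_iowait task:
	 * tries to clear IOWAIT | MEMSTALL (0x3), but IOWAIT isn't set. */
	flags_change(0x3, 0);

	return 0;
}

After the fix, cgroup_move_task() clears exactly task->psi_flags in the old cgroup and sets the same mask in the new one, so a check of this shape holds by construction.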
