
Commit 87ca4f9

Waiman-Long authored and Ingo Molnar committed
sched/core: Fix use-after-free bug in dup_user_cpus_ptr()
Since commit 07ec77a ("sched: Allow task CPU affinity to be restricted on asymmetric systems"), the setting and clearing of user_cpus_ptr are done under pi_lock for the arm64 architecture. However, dup_user_cpus_ptr() accesses user_cpus_ptr without any lock protection. Since sched_setaffinity() can be invoked from another process, the process being modified may be undergoing fork() at the same time. When racing with the clearing of user_cpus_ptr in __set_cpus_allowed_ptr_locked(), this can lead to a use-after-free and possibly a double-free in the arm64 kernel.

Commit 8f9ea86 ("sched: Always preserve the user requested cpumask") fixed this problem, as user_cpus_ptr, once set, would never be cleared in a task's lifetime. However, the bug was re-introduced in commit 851a723 ("sched: Always clear user_cpus_ptr in do_set_cpus_allowed()"), which allows the clearing of user_cpus_ptr in do_set_cpus_allowed(). This time, it affects all arches.

Fix this bug by always clearing the user_cpus_ptr of the newly cloned/forked task before the copying process starts, and by checking the user_cpus_ptr state of the source task under pi_lock.

Note to stable: this patch won't be applicable to stable releases. Just copy the new dup_user_cpus_ptr() function over.

Fixes: 07ec77a ("sched: Allow task CPU affinity to be restricted on asymmetric systems")
Fixes: 851a723 ("sched: Always clear user_cpus_ptr in do_set_cpus_allowed()")
Reported-by: David Wang 王标 <[email protected]>
Signed-off-by: Waiman Long <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Reviewed-by: Peter Zijlstra <[email protected]>
Cc: [email protected]
Link: https://lore.kernel.org/r/[email protected]
1 parent 7fb3ff2 commit 87ca4f9

File tree

1 file changed: +29 −5 lines


kernel/sched/core.c

Lines changed: 29 additions & 5 deletions
```diff
@@ -2612,19 +2612,43 @@ void do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask)
 int dup_user_cpus_ptr(struct task_struct *dst, struct task_struct *src,
 		      int node)
 {
+	cpumask_t *user_mask;
 	unsigned long flags;
 
-	if (!src->user_cpus_ptr)
+	/*
+	 * Always clear dst->user_cpus_ptr first as their user_cpus_ptr's
+	 * may differ by now due to racing.
+	 */
+	dst->user_cpus_ptr = NULL;
+
+	/*
+	 * This check is racy and losing the race is a valid situation.
+	 * It is not worth the extra overhead of taking the pi_lock on
+	 * every fork/clone.
+	 */
+	if (data_race(!src->user_cpus_ptr))
 		return 0;
 
-	dst->user_cpus_ptr = kmalloc_node(cpumask_size(), GFP_KERNEL, node);
-	if (!dst->user_cpus_ptr)
+	user_mask = kmalloc_node(cpumask_size(), GFP_KERNEL, node);
+	if (!user_mask)
 		return -ENOMEM;
 
-	/* Use pi_lock to protect content of user_cpus_ptr */
+	/*
+	 * Use pi_lock to protect content of user_cpus_ptr
+	 *
+	 * Though unlikely, user_cpus_ptr can be reset to NULL by a concurrent
+	 * do_set_cpus_allowed().
+	 */
 	raw_spin_lock_irqsave(&src->pi_lock, flags);
-	cpumask_copy(dst->user_cpus_ptr, src->user_cpus_ptr);
+	if (src->user_cpus_ptr) {
+		swap(dst->user_cpus_ptr, user_mask);
+		cpumask_copy(dst->user_cpus_ptr, src->user_cpus_ptr);
+	}
 	raw_spin_unlock_irqrestore(&src->pi_lock, flags);
+
+	if (unlikely(user_mask))
+		kfree(user_mask);
+
 	return 0;
 }
```
26302654
