
Commit c3123c4

KAGA-KOKO (Thomas Gleixner) authored and Peter Zijlstra committed
locking/rtmutex: Dont dereference waiter lockless
The new rt_mutex_spin_on_owner() loop checks whether the spinning waiter is still the top waiter on the lock by utilizing rt_mutex_top_waiter(), which is broken because that function contains a sanity check which dereferences the top waiter pointer to check whether the waiter belongs to the lock. That's wrong in the lockless spinwait case:

 CPU 0                                           CPU 1
 rt_mutex_lock(lock)                             rt_mutex_lock(lock);
   queue(waiter0)
   waiter0 == rt_mutex_top_waiter(lock)
   rt_mutex_spin_on_owner(lock, waiter0) {         queue(waiter1)
                                                   waiter1 == rt_mutex_top_waiter(lock)
                                                   ...
     top_waiter = rt_mutex_top_waiter(lock)
       leftmost = rb_first_cached(&lock->waiters);
                                                   -> signal
                                                   dequeue(waiter1)
                                                   destroy(waiter1)
       w = rb_entry(leftmost, ....)
       BUG_ON(w->lock != lock)    <- UAF

The BUG_ON() is correct for the case where the caller holds lock->wait_lock, which guarantees that the leftmost waiter entry cannot vanish. For the lockless spinwait case it's broken.

Create a new helper function which avoids the pointer dereference and just compares the leftmost entry pointer with current's waiter pointer to validate that current is still eligible for spinning.

Fixes: 992caf7 ("locking/rtmutex: Add adaptive spinwait mechanism")
Reported-by: Sebastian Siewior <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
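For context, the sanity check the changelog refers to sits in rt_mutex_top_waiter(). A minimal sketch of the v5.15-era helper, reconstructed from the hunk context further down (only the signature and the rb_first_cached() line appear in the diff; the rest of the body is an assumption):

    static inline struct rt_mutex_waiter *rt_mutex_top_waiter(struct rt_mutex_base *lock)
    {
    	struct rb_node *leftmost = rb_first_cached(&lock->waiters);
    	struct rt_mutex_waiter *w = NULL;

    	if (leftmost) {
    		w = rb_entry(leftmost, struct rt_mutex_waiter, tree_entry);
    		/*
    		 * w->lock dereferences the waiter: a use-after-free when
    		 * the waiter was dequeued and destroyed concurrently, as
    		 * in the race diagram above.
    		 */
    		BUG_ON(w->lock != lock);
    	}
    	return w;
    }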
1 parent 99409b9 commit c3123c4

File tree: 2 files changed (+16, −2 lines)

kernel/locking/rtmutex.c

Lines changed: 3 additions & 2 deletions
@@ -1329,8 +1329,9 @@ static bool rtmutex_spin_on_owner(struct rt_mutex_base *lock,
 	 *    for CONFIG_PREEMPT_RCU=y)
 	 *  - the VCPU on which owner runs is preempted
 	 */
-	if (!owner->on_cpu || waiter != rt_mutex_top_waiter(lock) ||
-	    need_resched() || vcpu_is_preempted(task_cpu(owner))) {
+	if (!owner->on_cpu || need_resched() ||
+	    !rt_mutex_waiter_is_top_waiter(lock, waiter) ||
+	    vcpu_is_preempted(task_cpu(owner))) {
 		res = false;
 		break;
 	}
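For readability, here is the whole spin loop with the fix applied. This is a sketch: the if-condition comes from the hunk above, while the surrounding RCU read-side section, the owner re-check and cpu_relax() are assumptions based on the v5.15 code, not shown in the diff:

    	rcu_read_lock();
    	for (;;) {
    		/* If the owner changed, stop spinning and retry the trylock. */
    		if (owner != rt_mutex_owner(lock))
    			break;
    		/* Order the owner->on_cpu load after the owner check above. */
    		barrier();
    		/*
    		 * Stop spinning when:
    		 *  - the lock owner has been scheduled out
    		 *  - current is no longer the top waiter
    		 *  - current is requested to reschedule (redundant
    		 *    for CONFIG_PREEMPT_RCU=y)
    		 *  - the VCPU on which owner runs is preempted
    		 */
    		if (!owner->on_cpu || need_resched() ||
    		    !rt_mutex_waiter_is_top_waiter(lock, waiter) ||
    		    vcpu_is_preempted(task_cpu(owner))) {
    			res = false;
    			break;
    		}
    		cpu_relax();
    	}
    	rcu_read_unlock();

The RCU read-side critical section is what keeps the owner task_struct valid while the loop dereferences owner->on_cpu; the new helper extends the same lockless discipline to the waiter check.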

kernel/locking/rtmutex_common.h

Lines changed: 13 additions & 0 deletions
@@ -95,6 +95,19 @@ static inline int rt_mutex_has_waiters(struct rt_mutex_base *lock)
 	return !RB_EMPTY_ROOT(&lock->waiters.rb_root);
 }
 
+/*
+ * Lockless speculative check whether @waiter is still the top waiter on
+ * @lock. This is solely comparing pointers and not dereferencing the
+ * leftmost entry which might be about to vanish.
+ */
+static inline bool rt_mutex_waiter_is_top_waiter(struct rt_mutex_base *lock,
+						 struct rt_mutex_waiter *waiter)
+{
+	struct rb_node *leftmost = rb_first_cached(&lock->waiters);
+
+	return rb_entry(leftmost, struct rt_mutex_waiter, tree_entry) == waiter;
+}
+
 static inline struct rt_mutex_waiter *rt_mutex_top_waiter(struct rt_mutex_base *lock)
 {
 	struct rb_node *leftmost = rb_first_cached(&lock->waiters);
