
Commit a055fcc

Peter Zijlstra authored and KAGA-KOKO committed
locking/rtmutex: Return success on deadlock for ww_mutex waiters
ww_mutexes can legitimately cause a deadlock situation in the lock graph
which is resolved afterwards by the wait/wound mechanics. The rtmutex chain
walk can detect such a deadlock and returns EDEADLK which in turn skips the
wait/wound mechanism and returns EDEADLK to the caller. That's wrong
because both lock chains might get EDEADLK or the wrong waiter would back
out.

Detect that situation and return 'success' in case that the waiter which
initiated the chain walk is a ww_mutex with context. This allows the
wait/wound mechanics to resolve the situation according to the rules.

[ tglx: Split it apart and added changelog ]

Reported-by: Sebastian Siewior <[email protected]>
Fixes: add4613 ("locking/rtmutex: Extend the rtmutex core to support ww_mutex")
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
1 parent 6467822 commit a055fcc


kernel/locking/rtmutex.c

Lines changed: 14 additions & 1 deletion
@@ -742,8 +742,21 @@ static int __sched rt_mutex_adjust_prio_chain(struct task_struct *task,
 	 * walk, we detected a deadlock.
 	 */
 	if (lock == orig_lock || rt_mutex_owner(lock) == top_task) {
-		raw_spin_unlock(&lock->wait_lock);
 		ret = -EDEADLK;
+
+		/*
+		 * When the deadlock is due to ww_mutex; also see above. Don't
+		 * report the deadlock and instead let the ww_mutex wound/die
+		 * logic pick which of the contending threads gets -EDEADLK.
+		 *
+		 * NOTE: assumes the cycle only contains a single ww_class; any
+		 * other configuration and we fail to report; also, see
+		 * lockdep.
+		 */
+		if (IS_ENABLED(CONFIG_PREEMPT_RT) && orig_waiter->ww_ctx)
+			ret = 0;
+
+		raw_spin_unlock(&lock->wait_lock);
 		goto out_unlock_pi;
 	}
 
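
For context, the wound/die resolution that the new comment defers to is driven from the caller side. Below is a minimal sketch of the documented ww_mutex acquisition pattern (include/linux/ww_mutex.h); it is not code from this commit, and the demo_ww_class, lock_pair, a and b names are illustrative only. The thread that receives -EDEADLK backs off completely, sleeps on the contended lock via ww_mutex_lock_slow(), and retries with the same context, whose age eventually guarantees it wins. On PREEMPT_RT these ww_mutexes sit on top of rtmutex, which is why the chain walk above must hand a ww_mutex cycle back to this logic instead of reporting it.

#include <linux/ww_mutex.h>

static DEFINE_WW_CLASS(demo_ww_class);

/*
 * Acquire two ww_mutexes of the same class. The caller unlocks both
 * and calls ww_acquire_fini(ctx) once the transaction is finished.
 */
static int lock_pair(struct ww_mutex *a, struct ww_mutex *b,
		     struct ww_acquire_ctx *ctx)
{
	int ret;

	ww_acquire_init(ctx, &demo_ww_class);
retry:
	ret = ww_mutex_lock(a, ctx);
	if (ret)		/* -EDEADLK: we lost the wound/die decision */
		goto slow_a;

	ret = ww_mutex_lock(b, ctx);
	if (ret)
		goto slow_b;

	ww_acquire_done(ctx);	/* acquire phase complete */
	return 0;

slow_b:
	ww_mutex_unlock(a);	/* back off fully before waiting */
	ww_mutex_lock_slow(b, ctx);
	ww_mutex_unlock(b);	/* simplification: the documented pattern
				 * keeps the contended lock held across
				 * the retry */
	goto retry;

slow_a:
	ww_mutex_lock_slow(a, ctx);
	ww_mutex_unlock(a);
	goto retry;
}

The retry loop reuses the same ww_acquire_ctx, so the context keeps its stamp across backoffs; that aging is what makes the wound/die rules converge, and it is exactly the mechanism the chain walk would have short-circuited by returning -EDEADLK itself.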
