Commit 79a34e3

Authored by ubizjak (Uros Bizjak), committed by Ingo Molnar
locking/qspinlock: Use atomic_try_cmpxchg_relaxed() in xchg_tail()
Use atomic_try_cmpxchg_relaxed(*ptr, &old, new) instead of
atomic_cmpxchg_relaxed(*ptr, old, new) == old in xchg_tail().

x86 CMPXCHG instruction returns success in ZF flag, so this change
saves a compare after CMPXCHG.

No functional change intended.

Since this code requires NR_CPUS >= 16k, I have tested it by
unconditionally setting _Q_PENDING_BITS to 1 in
<asm-generic/qspinlock_types.h>.

Signed-off-by: Uros Bizjak <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Reviewed-by: Waiman Long <[email protected]>
Cc: Linus Torvalds <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Parent: 21689e4

File tree: 1 file changed (+5, -8 lines)

kernel/locking/qspinlock.c

Lines changed: 5 additions & 8 deletions
@@ -220,21 +220,18 @@ static __always_inline void clear_pending_set_locked(struct qspinlock *lock)
  */
 static __always_inline u32 xchg_tail(struct qspinlock *lock, u32 tail)
 {
-	u32 old, new, val = atomic_read(&lock->val);
+	u32 old, new;
 
-	for (;;) {
-		new = (val & _Q_LOCKED_PENDING_MASK) | tail;
+	old = atomic_read(&lock->val);
+	do {
+		new = (old & _Q_LOCKED_PENDING_MASK) | tail;
 		/*
 		 * We can use relaxed semantics since the caller ensures that
 		 * the MCS node is properly initialized before updating the
 		 * tail.
 		 */
-		old = atomic_cmpxchg_relaxed(&lock->val, val, new);
-		if (old == val)
-			break;
+	} while (!atomic_try_cmpxchg_relaxed(&lock->val, &old, new));
 
-		val = old;
-	}
 	return old;
 }
 #endif /* _Q_PENDING_BITS == 8 */
