Commit 9bea8aa

Kernel Patches Daemon authored and committed
arm64: barrier: Add smp_cond_load_relaxed_timewait()
Add smp_cond_load_relaxed_timewait(), a timed variant of smp_cond_load_relaxed(). This uses __cmpwait_relaxed() to do the actual waiting, with the event stream guaranteeing that we wake up from WFE periodically and do not block forever in case there are no stores to the cacheline.

For cases when the event stream is unavailable, fall back to spin-waiting.

Cc: Will Deacon <[email protected]>
Cc: [email protected]
Suggested-by: Catalin Marinas <[email protected]>
Signed-off-by: Ankur Arora <[email protected]>
Reviewed-by: Catalin Marinas <[email protected]>
1 parent 1beb1a6 commit 9bea8aa

File tree

1 file changed

+22
-0
lines changed


arch/arm64/include/asm/barrier.h

Lines changed: 22 additions & 0 deletions
@@ -219,6 +219,28 @@ do { \
 	(typeof(*ptr))VAL; \
 })
 
+extern bool arch_timer_evtstrm_available(void);
+
+#define smp_cond_load_relaxed_timewait(ptr, cond_expr, time_check_expr) \
+({ \
+	typeof(ptr) __PTR = (ptr); \
+	__unqual_scalar_typeof(*ptr) VAL; \
+	bool __wfe = arch_timer_evtstrm_available(); \
+	\
+	for (;;) { \
+		VAL = READ_ONCE(*__PTR); \
+		if (cond_expr) \
+			break; \
+		if (time_check_expr) \
+			break; \
+		if (likely(__wfe)) \
+			__cmpwait_relaxed(__PTR, VAL); \
+		else \
+			cpu_relax(); \
+	} \
+	(typeof(*ptr)) VAL; \
+})
+
 #include <asm-generic/barrier.h>
 
 #endif /* __ASSEMBLY__ */
