Commit d646285

alpha: Override READ_ONCE() with barriered implementation
Rather than relying on the core code to use smp_read_barrier_depends() as part of the READ_ONCE() definition, instead override __READ_ONCE() in the Alpha code so that it generates the required mb() and then implement smp_load_acquire() using the new macro to avoid redundant back-to-back barriers from the generic implementation.

Acked-by: Peter Zijlstra (Intel) <[email protected]>
Acked-by: Paul E. McKenney <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
1 parent b78b331 commit d646285
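
For context, here is a minimal userspace sketch (not part of the commit; all names are illustrative) of the address-dependency pattern the barrier protects. On Alpha, without the mb() that READ_ONCE() now emits, the dependent load of *q on the reader side can observe a stale value even though q itself was read correctly:

  /* Sketch only: models the kernel's smp_store_release()/READ_ONCE()
   * pairing with GCC/Clang __atomic builtins. */
  #include <stdio.h>

  static int a;
  static int *p;

  static void publisher(void)             /* CPU 0 */
  {
          a = 1;
          /* like smp_store_release(&p, &a): order the store to a before p */
          __atomic_store_n(&p, &a, __ATOMIC_RELEASE);
  }

  static int consumer(void)               /* CPU 1 */
  {
          /* like the barriered READ_ONCE(p): load p, then barrier */
          int *q = __atomic_load_n(&p, __ATOMIC_ACQUIRE);
          return q ? *q : -1;             /* dependent load, now ordered */
  }

  int main(void)
  {
          publisher();
          printf("%d\n", consumer());     /* prints 1 */
          return 0;
  }

Acquire is stronger than the bare address dependency the kernel relies on, but it models the mb() that this commit folds into Alpha's READ_ONCE().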

File tree

2 files changed: +40 -54 lines changed

arch/alpha/include/asm/barrier.h

Lines changed: 5 additions & 54 deletions
@@ -2,64 +2,15 @@
 #ifndef __BARRIER_H
 #define __BARRIER_H
 
-#include <asm/compiler.h>
-
 #define mb()	__asm__ __volatile__("mb": : :"memory")
 #define rmb()	__asm__ __volatile__("mb": : :"memory")
 #define wmb()	__asm__ __volatile__("wmb": : :"memory")
 
-/**
- * read_barrier_depends - Flush all pending reads that subsequents reads
- * depend on.
- *
- * No data-dependent reads from memory-like regions are ever reordered
- * over this barrier.  All reads preceding this primitive are guaranteed
- * to access memory (but not necessarily other CPUs' caches) before any
- * reads following this primitive that depend on the data return by
- * any of the preceding reads.  This primitive is much lighter weight than
- * rmb() on most CPUs, and is never heavier weight than is
- * rmb().
- *
- * These ordering constraints are respected by both the local CPU
- * and the compiler.
- *
- * Ordering is not guaranteed by anything other than these primitives,
- * not even by data dependencies.  See the documentation for
- * memory_barrier() for examples and URLs to more information.
- *
- * For example, the following code would force ordering (the initial
- * value of "a" is zero, "b" is one, and "p" is "&a"):
- *
- * <programlisting>
- *	CPU 0				CPU 1
- *
- *	b = 2;
- *	memory_barrier();
- *	p = &b;				q = p;
- *					read_barrier_depends();
- *					d = *q;
- * </programlisting>
- *
- * because the read of "*q" depends on the read of "p" and these
- * two reads are separated by a read_barrier_depends().  However,
- * the following code, with the same initial values for "a" and "b":
- *
- * <programlisting>
- *	CPU 0				CPU 1
- *
- *	a = 2;
- *	memory_barrier();
- *	b = 3;				y = b;
- *					read_barrier_depends();
- *					x = a;
- * </programlisting>
- *
- * does not enforce ordering, since there is no data dependency between
- * the read of "a" and the read of "b".  Therefore, on some CPUs, such
- * as Alpha, "y" could be set to 3 and "x" to 0.  Use rmb()
- * in cases like this where there are no data dependencies.
- */
-#define read_barrier_depends() __asm__ __volatile__("mb": : :"memory")
+#define __smp_load_acquire(p)						\
+({									\
+	compiletime_assert_atomic_type(*p);				\
+	__READ_ONCE(*p);						\
+})
 
 #ifdef CONFIG_SMP
 #define __ASM_SMP_MB	"\tmb\n"
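
The __smp_load_acquire() override exists to avoid a double barrier. At the time of this commit, the generic version in include/asm-generic/barrier.h looked roughly like the sketch below (an approximation, not part of this diff): it reads the location with READ_ONCE() and then issues __smp_mb().

  /* Approximate generic definition (include/asm-generic/barrier.h): */
  #define __smp_load_acquire(p)						\
  ({									\
  	__unqual_scalar_typeof(*p) ___p1 = READ_ONCE(*p);		\
  	compiletime_assert_atomic_type(*p);				\
  	__smp_mb();							\
  	(typeof(*p))___p1;						\
  })

With Alpha's barriered __READ_ONCE() (added below in rwonce.h), that expansion would execute mb() twice back to back; the Alpha override keeps only the mb() inside __READ_ONCE(), which is already enough for acquire semantics here.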

arch/alpha/include/asm/rwonce.h

Lines changed: 35 additions & 0 deletions
@@ -0,0 +1,35 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2019 Google LLC.
+ */
+#ifndef __ASM_RWONCE_H
+#define __ASM_RWONCE_H
+
+#ifdef CONFIG_SMP
+
+#include <asm/barrier.h>
+
+/*
+ * Alpha is apparently daft enough to reorder address-dependent loads
+ * on some CPU implementations. Knock some common sense into it with
+ * a memory barrier in READ_ONCE().
+ *
+ * For the curious, more information about this unusual reordering is
+ * available in chapter 15 of the "perfbook":
+ *
+ *  https://kernel.org/pub/linux/kernel/people/paulmck/perfbook/perfbook.html
+ *
+ */
+#define __READ_ONCE(x)							\
+({									\
+	__unqual_scalar_typeof(x) __x =					\
+		(*(volatile typeof(__x) *)(&(x)));			\
+	mb();								\
+	(typeof(x))__x;							\
+})
+
+#endif /* CONFIG_SMP */
+
+#include <asm-generic/rwonce.h>
+
+#endif /* __ASM_RWONCE_H */
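
The override is picked up through the usual asm-generic pattern: include/asm-generic/rwonce.h only supplies a default __READ_ONCE() when the architecture has not defined one, then builds READ_ONCE() on top of it. Roughly (again an approximation of the generic header, not part of this diff):

  /* include/asm-generic/rwonce.h, simplified: */
  #ifndef __READ_ONCE
  #define __READ_ONCE(x)	(*(const volatile __unqual_scalar_typeof(x) *)&(x))
  #endif

  #define READ_ONCE(x)							\
  ({									\
  	compiletime_assert_rwonce_type(x);				\
  	__READ_ONCE(x);							\
  })

Since arch/alpha/include/asm/rwonce.h defines __READ_ONCE() before including the generic header, the barriered version wins on SMP builds.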
