
Commit 703b321

melver authored and Ingo Molnar committed
kcsan: Introduce ASSERT_EXCLUSIVE_BITS(var, mask)
This introduces ASSERT_EXCLUSIVE_BITS(var, mask). ASSERT_EXCLUSIVE_BITS(var, mask) will cause KCSAN to assume that the following access is safe w.r.t. data races (however, please see the docbook comment for disclaimer here). For more context on why this was considered necessary, please see: http://lkml.kernel.org/r/[email protected] In particular, before this patch, data races between reads (that use @Mask bits of an access that should not be modified concurrently) and writes (that change ~@Mask bits not used by the readers) would have been annotated with "data_race()" (or "READ_ONCE()"). However, doing so would then hide real problems: we would no longer be able to detect harmful races between reads to @Mask bits and writes to @Mask bits. Therefore, by using ASSERT_EXCLUSIVE_BITS(var, mask), we accomplish: 1. Avoid proliferation of specific macros at the call sites: by including a single mask in the argument list, we can use the same macro in a wide variety of call sites, regardless of how and which bits in a field each call site actually accesses. 2. The existing code does not need to be modified (although READ_ONCE() may still be advisable if we cannot prove that the data race is always safe). 3. We catch bugs where the exclusive bits are modified concurrently. 4. We document properties of the current code. Acked-by: John Hubbard <[email protected]> Signed-off-by: Marco Elver <[email protected]> Signed-off-by: Paul E. McKenney <[email protected]> Signed-off-by: Ingo Molnar <[email protected]> Cc: David Hildenbrand <[email protected]> Cc: Jan Kara <[email protected]> Cc: Qian Cai <[email protected]>
1 parent 81af89e commit 703b321

File tree

2 files changed: +77 -7 lines changed

include/linux/kcsan-checks.h

Lines changed: 63 additions & 6 deletions

@@ -152,9 +152,9 @@ static inline void kcsan_check_access(const volatile void *ptr, size_t size,
 #endif
 
 /**
- * ASSERT_EXCLUSIVE_WRITER - assert no other threads are writing @var
+ * ASSERT_EXCLUSIVE_WRITER - assert no concurrent writes to @var
  *
- * Assert that there are no other threads writing @var; other readers are
+ * Assert that there are no concurrent writes to @var; other readers are
  * allowed. This assertion can be used to specify properties of concurrent code,
  * where violation cannot be detected as a normal data race.
  *
@@ -171,11 +171,11 @@ static inline void kcsan_check_access(const volatile void *ptr, size_t size,
 	__kcsan_check_access(&(var), sizeof(var), KCSAN_ACCESS_ASSERT)
 
 /**
- * ASSERT_EXCLUSIVE_ACCESS - assert no other threads are accessing @var
+ * ASSERT_EXCLUSIVE_ACCESS - assert no concurrent accesses to @var
  *
- * Assert that no other thread is accessing @var (no readers nor writers). This
- * assertion can be used to specify properties of concurrent code, where
- * violation cannot be detected as a normal data race.
+ * Assert that there are no concurrent accesses to @var (no readers nor
+ * writers). This assertion can be used to specify properties of concurrent
+ * code, where violation cannot be detected as a normal data race.
  *
  * For example, in a reference-counting algorithm where exclusive access is
  * expected after the refcount reaches 0. We can check that this property
@@ -191,4 +191,61 @@ static inline void kcsan_check_access(const volatile void *ptr, size_t size,
 #define ASSERT_EXCLUSIVE_ACCESS(var) \
 	__kcsan_check_access(&(var), sizeof(var), KCSAN_ACCESS_WRITE | KCSAN_ACCESS_ASSERT)
 
+/**
+ * ASSERT_EXCLUSIVE_BITS - assert no concurrent writes to subset of bits in @var
+ *
+ * Bit-granular variant of ASSERT_EXCLUSIVE_WRITER(var).
+ *
+ * Assert that there are no concurrent writes to a subset of bits in @var;
+ * concurrent readers are permitted. This assertion captures more detailed
+ * bit-level properties, compared to the other (word granularity) assertions.
+ * Only the bits set in @mask are checked for concurrent modifications, while
+ * ignoring the remaining bits, i.e. concurrent writes (or reads) to ~@mask bits
+ * are ignored.
+ *
+ * Use this for variables, where some bits must not be modified concurrently,
+ * yet other bits are expected to be modified concurrently.
+ *
+ * For example, variables where, after initialization, some bits are read-only,
+ * but other bits may still be modified concurrently. A reader may wish to
+ * assert that this is true as follows:
+ *
+ *	ASSERT_EXCLUSIVE_BITS(flags, READ_ONLY_MASK);
+ *	foo = (READ_ONCE(flags) & READ_ONLY_MASK) >> READ_ONLY_SHIFT;
+ *
+ * Note: The access that immediately follows ASSERT_EXCLUSIVE_BITS() is
+ * assumed to access the masked bits only, and KCSAN optimistically assumes it
+ * is therefore safe, even in the presence of data races, and marking it with
+ * READ_ONCE() is optional from KCSAN's point-of-view. We caution, however,
+ * that it may still be advisable to do so, since we cannot reason about all
+ * compiler optimizations when it comes to bit manipulations (on the reader
+ * and writer side). If you are sure nothing can go wrong, we can write the
+ * above simply as:
+ *
+ *	ASSERT_EXCLUSIVE_BITS(flags, READ_ONLY_MASK);
+ *	foo = (flags & READ_ONLY_MASK) >> READ_ONLY_SHIFT;
+ *
+ * Another example, where this may be used, is when certain bits of @var may
+ * only be modified when holding the appropriate lock, but other bits may still
+ * be modified concurrently. Writers, where other bits may change concurrently,
+ * could use the assertion as follows:
+ *
+ *	spin_lock(&foo_lock);
+ *	ASSERT_EXCLUSIVE_BITS(flags, FOO_MASK);
+ *	old_flags = READ_ONCE(flags);
+ *	new_flags = (old_flags & ~FOO_MASK) | (new_foo << FOO_SHIFT);
+ *	if (cmpxchg(&flags, old_flags, new_flags) != old_flags) { ... }
+ *	spin_unlock(&foo_lock);
+ *
+ * @var variable to assert on
+ * @mask only check for modifications to bits set in @mask
+ */
+#define ASSERT_EXCLUSIVE_BITS(var, mask)                                       \
+	do {                                                                   \
+		kcsan_set_access_mask(mask);                                   \
+		__kcsan_check_access(&(var), sizeof(var), KCSAN_ACCESS_ASSERT);\
+		kcsan_set_access_mask(0);                                      \
+		kcsan_atomic_next(1);                                          \
+	} while (0)
+
 #endif /* _LINUX_KCSAN_CHECKS_H */

kernel/kcsan/debugfs.c

Lines changed: 14 additions & 1 deletion

@@ -100,25 +100,38 @@ static noinline void microbenchmark(unsigned long iters)
  * debugfs file from multiple tasks to generate real conflicts and show reports.
  */
 static long test_dummy;
+static long test_flags;
 static noinline void test_thread(unsigned long iters)
 {
+	const long CHANGE_BITS = 0xff00ff00ff00ff00L;
 	const struct kcsan_ctx ctx_save = current->kcsan_ctx;
 	cycles_t cycles;
 
 	/* We may have been called from an atomic region; reset context. */
 	memset(&current->kcsan_ctx, 0, sizeof(current->kcsan_ctx));
 
 	pr_info("KCSAN: %s begin | iters: %lu\n", __func__, iters);
+	pr_info("test_dummy@%px, test_flags@%px\n", &test_dummy, &test_flags);
 
 	cycles = get_cycles();
 	while (iters--) {
+		/* These all should generate reports. */
 		__kcsan_check_read(&test_dummy, sizeof(test_dummy));
-		__kcsan_check_write(&test_dummy, sizeof(test_dummy));
 		ASSERT_EXCLUSIVE_WRITER(test_dummy);
 		ASSERT_EXCLUSIVE_ACCESS(test_dummy);
 
+		ASSERT_EXCLUSIVE_BITS(test_flags, ~CHANGE_BITS); /* no report */
+		__kcsan_check_read(&test_flags, sizeof(test_flags)); /* no report */
+
+		ASSERT_EXCLUSIVE_BITS(test_flags, CHANGE_BITS); /* report */
+		__kcsan_check_read(&test_flags, sizeof(test_flags)); /* no report */
+
 		/* not actually instrumented */
 		WRITE_ONCE(test_dummy, iters); /* to observe value-change */
+		__kcsan_check_write(&test_dummy, sizeof(test_dummy));
+
+		test_flags ^= CHANGE_BITS; /* generate value-change */
+		__kcsan_check_write(&test_flags, sizeof(test_flags));
 	}
 	cycles = get_cycles() - cycles;
