@@ -32,8 +32,8 @@ over a rather long period of time, but improvements are always welcome!
3232 for lockless updates. This does result in the mildly
3333 counter-intuitive situation where rcu_read_lock() and
3434 	rcu_read_unlock() are used to protect updates; however, this
35- approach provides the same potential simplifications that garbage
36- collectors do.
35+ approach can provide the same simplifications to certain types
36+ of lockless algorithms that garbage collectors do.
3737
38381. Does the update code have proper mutual exclusion?
3939
@@ -49,12 +49,12 @@ over a rather long period of time, but improvements are always welcome!
4949 them -- even x86 allows later loads to be reordered to precede
5050 earlier stores), and be prepared to explain why this added
5151 complexity is worthwhile. If you choose #c, be prepared to
52- explain how this single task does not become a major bottleneck on
53- big multiprocessor machines (for example, if the task is updating
54- information relating to itself that other tasks can read, there
55- by definition can be no bottleneck). Note that the definition
56- of "large" has changed significantly: Eight CPUs was "large"
57- in the year 2000, but a hundred CPUs was unremarkable in 2017.
52+ explain how this single task does not become a major bottleneck
53+ on large systems (for example, if the task is updating information
54+ relating to itself that other tasks can read, there by definition
55+ can be no bottleneck). Note that the definition of "large" has
56+ changed significantly: Eight CPUs was "large" in the year 2000,
57+ but a hundred CPUs was unremarkable in 2017.
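
	For example, here is a minimal, hedged sketch of option (a), in
	which a single update-side lock serializes all updates and
	rcu_dereference_protected() documents that lock to lockdep.  The
	struct foo, foo_lock, and global_foo names are hypothetical, and
	the usual RCU, spinlock, and slab headers are assumed::

		struct foo {
			int a;
		};

		static DEFINE_SPINLOCK(foo_lock);	/* Serializes all updates. */
		static struct foo __rcu *global_foo;

		void foo_replace(struct foo *newp)
		{
			struct foo *oldp;

			spin_lock(&foo_lock);
			oldp = rcu_dereference_protected(global_foo,
							 lockdep_is_held(&foo_lock));
			rcu_assign_pointer(global_foo, newp);
			spin_unlock(&foo_lock);
			synchronize_rcu();	/* Wait for pre-existing readers. */
			kfree(oldp);
		}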
5858
59592. Do the RCU read-side critical sections make proper use of
6060 rcu_read_lock() and friends? These primitives are needed
@@ -97,33 +97,38 @@ over a rather long period of time, but improvements are always welcome!
9797
9898 b. Proceed as in (a) above, but also maintain per-element
9999 locks (that are acquired by both readers and writers)
100- that guard per-element state. Of course, fields that
101- the readers refrain from accessing can be guarded by
102- some other lock acquired only by updaters, if desired.
100+ that guard per-element state. Fields that the readers
101+ refrain from accessing can be guarded by some other lock
102+ acquired only by updaters, if desired.
103103
104- This works quite well, also .
104+ This also works quite well.
105105
106106 c. Make updates appear atomic to readers. For example,
107107 pointer updates to properly aligned fields will
108108 appear atomic, as will individual atomic primitives.
109109 	Sequences of operations performed under a lock will *not*
110110 appear to be atomic to RCU readers, nor will sequences
111- of multiple atomic primitives.
111+ of multiple atomic primitives. One alternative is to
112+ move multiple individual fields to a separate structure,
113+ thus solving the multiple-field problem by imposing an
114+ additional level of indirection.
112115
113116 This can work, but is starting to get a bit tricky.
114117
115- d. Carefully order the updates and the reads so that
116- readers see valid data at all phases of the update.
117- This is often more difficult than it sounds, especially
118- given modern CPUs' tendency to reorder memory references.
119- One must usually liberally sprinkle memory barriers
120- (smp_wmb(), smp_rmb(), smp_mb()) through the code,
121- making it difficult to understand and to test.
122-
123- It is usually better to group the changing data into
124- a separate structure, so that the change may be made
125- to appear atomic by updating a pointer to reference
126- a new structure containing updated values.
118+ d. Carefully order the updates and the reads so that readers
119+ see valid data at all phases of the update. This is often
120+ more difficult than it sounds, especially given modern
121+ CPUs' tendency to reorder memory references. One must
122+ usually liberally sprinkle memory-ordering operations
123+ through the code, making it difficult to understand and
124+ to test. Where it works, it is better to use things
125+ like smp_store_release() and smp_load_acquire(), but in
126+ some cases the smp_mb() full memory barrier is required.
127+
128+ As noted earlier, it is usually better to group the
129+ changing data into a separate structure, so that the
130+ change may be made to appear atomic by updating a pointer
131+ to reference a new structure containing updated values.
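
	For example, here is one hedged way of applying the
	separate-structure approach from (d): the fields that must change
	together live in their own structure, and each update publishes a
	completely new copy.  The foo_state, foo_state_p, and foo_mutex
	names are hypothetical::

		struct foo_state {
			struct rcu_head rh;	/* For kfree_rcu(). */
			int a;
			int b;
		};

		static struct foo_state __rcu *foo_state_p;
		static DEFINE_MUTEX(foo_mutex);	/* Serializes updates. */

		/* Readers see either the old or the new pair, never a mixture. */
		void foo_read(int *a, int *b)
		{
			struct foo_state *p;

			rcu_read_lock();
			p = rcu_dereference(foo_state_p);	/* Assumed non-NULL after init. */
			*a = p->a;
			*b = p->b;
			rcu_read_unlock();
		}

		int foo_update(int a, int b)
		{
			struct foo_state *newp, *oldp;

			newp = kmalloc(sizeof(*newp), GFP_KERNEL);
			if (!newp)
				return -ENOMEM;
			newp->a = a;
			newp->b = b;
			mutex_lock(&foo_mutex);
			oldp = rcu_dereference_protected(foo_state_p,
							 lockdep_is_held(&foo_mutex));
			rcu_assign_pointer(foo_state_p, newp);
			mutex_unlock(&foo_mutex);
			if (oldp)
				kfree_rcu(oldp, rh);
			return 0;
		}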
127132
1281334. Weakly ordered CPUs pose special challenges. Almost all CPUs
129134 are weakly ordered -- even x86 CPUs allow later loads to be
@@ -188,35 +193,39 @@ over a rather long period of time, but improvements are always welcome!
188193 when publicizing a pointer to a structure that can
189194 be traversed by an RCU read-side critical section.
190195
191- 5. If call_rcu() or call_srcu() is used, the callback function will
192- be called from softirq context. In particular, it cannot block.
193- If you need the callback to block, run that code in a workqueue
194- handler scheduled from the callback. The queue_rcu_work()
195- function does this for you in the case of call_rcu().
196+ 5. If any of call_rcu(), call_srcu(), call_rcu_tasks(),
197+ call_rcu_tasks_rude(), or call_rcu_tasks_trace() is used,
198+ the callback function may be invoked from softirq context,
199+ and in any case with bottom halves disabled. In particular,
200+ this callback function cannot block. If you need the callback
201+ to block, run that code in a workqueue handler scheduled from
202+ the callback. The queue_rcu_work() function does this for you
203+ in the case of call_rcu().
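
	For example, the following hedged sketch defers blocking cleanup
	to process context by way of queue_rcu_work().  The struct foo and
	its fields are hypothetical, and foo_remove() is assumed to have
	already made the structure unreachable to new readers::

		struct foo {
			struct rcu_work rwork;
			/* ... other fields ... */
		};

		static void foo_reclaim_workfn(struct work_struct *work)
		{
			struct foo *fp = container_of(to_rcu_work(work), struct foo, rwork);

			/* Process context: blocking cleanup is permitted here. */
			/* ... blocking cleanup of *fp ... */
			kfree(fp);
		}

		void foo_remove(struct foo *fp)
		{
			/* ... unlink fp so that no new readers can find it ... */
			INIT_RCU_WORK(&fp->rwork, foo_reclaim_workfn);
			queue_rcu_work(system_wq, &fp->rwork);	/* Runs after a grace period. */
		}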
196204
1972056. Since synchronize_rcu() can block, it cannot be called
198206 from any sort of irq context. The same rule applies
199- for synchronize_srcu(), synchronize_rcu_expedited(), and
200- synchronize_srcu_expedited().
207+ for synchronize_srcu(), synchronize_rcu_expedited(),
208+ synchronize_srcu_expedited(), synchronize_rcu_tasks(),
209+ synchronize_rcu_tasks_rude(), and synchronize_rcu_tasks_trace().
201210
202211 The expedited forms of these primitives have the same semantics
203- as the non-expedited forms, but expediting is both expensive and
204- (with the exception of synchronize_srcu_expedited()) unfriendly
205- to real-time workloads. Use of the expedited primitives should
206- be restricted to rare configuration-change operations that would
207- not normally be undertaken while a real-time workload is running.
208- However, real-time workloads can use rcupdate.rcu_normal kernel
209- boot parameter to completely disable expedited grace periods,
210- though this might have performance implications.
212+ as the non-expedited forms, but expediting is more CPU intensive.
213+ Use of the expedited primitives should be restricted to rare
214+ configuration-change operations that would not normally be
215+ undertaken while a real-time workload is running. Note that
216+ IPI-sensitive real-time workloads can use the rcupdate.rcu_normal
217+ kernel boot parameter to completely disable expedited grace
218+ periods, though this might have performance implications.
211219
212220 In particular, if you find yourself invoking one of the expedited
213221 primitives repeatedly in a loop, please do everyone a favor:
214222 Restructure your code so that it batches the updates, allowing
215223 a single non-expedited primitive to cover the entire batch.
216224 This will very likely be faster than the loop containing the
217225 expedited primitive, and will be much much easier on the rest
218- of the system, especially to real-time workloads running on
219- the rest of the system.
226+ of the system, especially to real-time workloads running on the
227+ 	rest of the system. Alternatively, use asynchronous
228+ primitives such as call_rcu().
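
	For example, here is a hedged sketch of the batching approach
	described above: remove every element under the update-side lock,
	wait for a single grace period, and only then free the entire
	batch.  The foo_head list, foo_lock, and the extra free_list field
	used for private bookkeeping are all hypothetical::

		struct foo {
			struct list_head list;		/* RCU-protected linkage. */
			struct list_head free_list;	/* Private linkage for the batch. */
			/* ... */
		};

		static LIST_HEAD(foo_head);
		static DEFINE_SPINLOCK(foo_lock);

		void foo_clear_all(void)
		{
			LIST_HEAD(tofree);
			struct foo *fp, *tmp;

			spin_lock(&foo_lock);
			list_for_each_entry_safe(fp, tmp, &foo_head, list) {
				list_del_rcu(&fp->list);
				list_add(&fp->free_list, &tofree);
			}
			spin_unlock(&foo_lock);

			synchronize_rcu();	/* One grace period covers the whole batch. */
			list_for_each_entry_safe(fp, tmp, &tofree, free_list)
				kfree(fp);
		}

	Alternatively, each element could simply be handed to kfree_rcu(),
	avoiding the grace-period wait in this function entirely.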
220229
2212307. As of v4.20, a given kernel implements only one RCU flavor, which
222231 is RCU-sched for PREEMPTION=n and RCU-preempt for PREEMPTION=y.
@@ -239,7 +248,8 @@ over a rather long period of time, but improvements are always welcome!
239248 the corresponding readers must use rcu_read_lock_trace() and
240249 rcu_read_unlock_trace(). If an updater uses call_rcu_tasks_rude()
241250 or synchronize_rcu_tasks_rude(), then the corresponding readers
242- must use anything that disables interrupts.
251+ must use anything that disables preemption, for example,
252+ preempt_disable() and preempt_enable().
243253
244254 Mixing things up will result in confusion and broken kernels, and
245255 has even resulted in an exploitable security issue. Therefore,
@@ -253,15 +263,16 @@ over a rather long period of time, but improvements are always welcome!
253263 that this usage is safe is that readers can use anything that
254264 disables BH when updaters use call_rcu() or synchronize_rcu().
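
	For example, the RCU Tasks Trace rule above might be followed as
	in this hedged sketch, where the updater frees memory via
	call_rcu_tasks_trace() and the readers therefore use the matching
	rcu_read_lock_trace() and rcu_read_unlock_trace() markers.  The
	foo names are hypothetical::

		struct foo {
			struct rcu_head rh;
			int a;
		};

		static struct foo __rcu *foo_ptr;

		/* Updater uses the RCU Tasks Trace flavor... */
		static void foo_free_cb(struct rcu_head *rhp)
		{
			kfree(container_of(rhp, struct foo, rh));
		}

		void foo_retire(struct foo *oldp)
		{
			call_rcu_tasks_trace(&oldp->rh, foo_free_cb);
		}

		/* ...so readers must use the matching read-side markers. */
		int foo_read_a(void)
		{
			struct foo *fp;
			int ret = -1;

			rcu_read_lock_trace();
			fp = rcu_dereference_check(foo_ptr, rcu_read_lock_trace_held());
			if (fp)
				ret = fp->a;
			rcu_read_unlock_trace();
			return ret;
		}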
255265
256- 8. Although synchronize_rcu() is slower than is call_rcu(), it
257- usually results in simpler code. So, unless update performance is
258- critically important, the updaters cannot block, or the latency of
259- synchronize_rcu() is visible from userspace, synchronize_rcu()
260- should be used in preference to call_rcu(). Furthermore,
261- kfree_rcu() usually results in even simpler code than does
262- synchronize_rcu() without synchronize_rcu()'s multi-millisecond
263- latency. So please take advantage of kfree_rcu()'s "fire and
264- forget" memory-freeing capabilities where it applies.
266+ 8. Although synchronize_rcu() is slower than is call_rcu(),
267+ it usually results in simpler code. So, unless update
268+ performance is critically important, the updaters cannot block,
269+ or the latency of synchronize_rcu() is visible from userspace,
270+ synchronize_rcu() should be used in preference to call_rcu().
271+ Furthermore, kfree_rcu() and kvfree_rcu() usually result
272+ in even simpler code than does synchronize_rcu() without
273+ synchronize_rcu()'s multi-millisecond latency. So please take
274+ advantage of kfree_rcu()'s and kvfree_rcu()'s "fire and forget"
275+ 	memory-freeing capabilities where they apply.
265276
266277 An especially important property of the synchronize_rcu()
267278 primitive is that it automatically self-limits: if grace periods
@@ -271,8 +282,8 @@ over a rather long period of time, but improvements are always welcome!
271282 cases where grace periods are delayed, as failing to do so can
272283 result in excessive realtime latencies or even OOM conditions.
273284
274- Ways of gaining this self-limiting property when using call_rcu()
275- include:
285+ Ways of gaining this self-limiting property when using call_rcu(),
286+ kfree_rcu(), or kvfree_rcu() include:
276287
277288 a. Keeping a count of the number of data-structure elements
278289 used by the RCU-protected data structure, including
@@ -304,18 +315,21 @@ over a rather long period of time, but improvements are always welcome!
304315 here is that superuser already has lots of ways to crash
305316 the machine.
306317
307- d. Periodically invoke synchronize_rcu(), permitting a limited
308- number of updates per grace period. Better yet, periodically
309- invoke rcu_barrier() to wait for all outstanding callbacks.
318+ d. Periodically invoke rcu_barrier(), permitting a limited
319+ number of updates per grace period.
310320
311- The same cautions apply to call_srcu() and kfree_rcu().
321+ The same cautions apply to call_srcu(), call_rcu_tasks(),
322+ call_rcu_tasks_rude(), and call_rcu_tasks_trace(). This is
323+ why there is an srcu_barrier(), rcu_barrier_tasks(),
324+ 	rcu_barrier_tasks_rude(), and rcu_barrier_tasks_trace(),
325+ respectively.
312326
313- Note that although these primitives do take action to avoid memory
314- exhaustion when any given CPU has too many callbacks, a determined
315- user could still exhaust memory. This is especially the case
316- if a system with a large number of CPUs has been configured to
317- offload all of its RCU callbacks onto a single CPU, or if the
318- system has relatively little free memory.
327+ Note that although these primitives do take action to avoid
328+ memory exhaustion when any given CPU has too many callbacks,
329+ a determined user or administrator can still exhaust memory.
330+ This is especially the case if a system with a large number of
331+ CPUs has been configured to offload all of its RCU callbacks onto
332+ a single CPU, or if the system has relatively little free memory.
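
	As a hedged sketch of option (d) above, an updater might throttle
	itself by waiting for all outstanding callbacks after every so
	many updates.  The foo names and the once-per-1000 threshold are
	hypothetical, and foo_retire() is assumed to be called from a
	context that is permitted to block in rcu_barrier()::

		struct foo {
			struct rcu_head rh;
			/* ... */
		};

		static void foo_free_cb(struct rcu_head *rhp)
		{
			kfree(container_of(rhp, struct foo, rh));
		}

		#define FOO_UPDATES_PER_BARRIER 1000	/* Hypothetical tuning knob. */
		static atomic_t foo_update_count;

		void foo_retire(struct foo *fp)
		{
			call_rcu(&fp->rh, foo_free_cb);
			if (atomic_inc_return(&foo_update_count) % FOO_UPDATES_PER_BARRIER == 0)
				rcu_barrier();	/* Bound the number of outstanding callbacks. */
		}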
319333
3203349. All RCU list-traversal primitives, which include
321335 rcu_dereference(), list_for_each_entry_rcu(), and
@@ -344,14 +358,14 @@ over a rather long period of time, but improvements are always welcome!
344358 	and you don't hold the appropriate update-side lock, you *must*
345359 use the "_rcu()" variants of the list macros. Failing to do so
346360 will break Alpha, cause aggressive compilers to generate bad code,
347- and confuse people trying to read your code.
361+ and confuse people trying to understand your code.
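
	For example, in this hedged sketch the reader holds no update-side
	lock and so uses list_for_each_entry_rcu(), while the updater,
	which holds foo_lock, must still use the _rcu() modification
	primitives.  All names are hypothetical::

		struct foo {
			struct list_head list;
			int key;
		};

		static LIST_HEAD(foo_head);
		static DEFINE_SPINLOCK(foo_lock);

		/* Reader: no update-side lock, so _rcu() traversal is required. */
		bool foo_key_present(int key)
		{
			struct foo *fp;
			bool ret = false;

			rcu_read_lock();
			list_for_each_entry_rcu(fp, &foo_head, list) {
				if (fp->key == key) {
					ret = true;
					break;
				}
			}
			rcu_read_unlock();
			return ret;
		}

		/* Updater: holds foo_lock, but must still use list_add_rcu(). */
		void foo_add(struct foo *newfp)
		{
			spin_lock(&foo_lock);
			list_add_rcu(&newfp->list, &foo_head);
			spin_unlock(&foo_lock);
		}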
348362
34936311. Any lock acquired by an RCU callback must be acquired elsewhere
350- with softirq disabled, e.g., via spin_lock_irqsave(),
351- spin_lock_bh(), etc. Failing to disable softirq on a given
352- acquisition of that lock will result in deadlock as soon as
353- the RCU softirq handler happens to run your RCU callback while
354- interrupting that acquisition's critical section.
364+ with softirq disabled, e.g., via spin_lock_bh(). Failing to
365+ disable softirq on a given acquisition of that lock will result
366+ in deadlock as soon as the RCU softirq handler happens to run
367+ your RCU callback while interrupting that acquisition's critical
368+ section.
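
	For example, this hedged sketch shows a hypothetical foo_lock that
	is acquired both from an RCU callback (where BH is already
	disabled, so plain spin_lock() suffices) and from process context
	(where spin_lock_bh() is required to avoid the deadlock described
	above)::

		struct foo {
			struct rcu_head rh;
			/* ... */
		};

		static DEFINE_SPINLOCK(foo_lock);
		static unsigned long foo_freed_count;	/* Guarded by foo_lock. */

		/* RCU callback: runs with BH disabled, so spin_lock() is sufficient. */
		static void foo_free_cb(struct rcu_head *rhp)
		{
			struct foo *fp = container_of(rhp, struct foo, rh);

			spin_lock(&foo_lock);
			foo_freed_count++;
			spin_unlock(&foo_lock);
			kfree(fp);
		}

		/* Process context: must disable BH while holding foo_lock. */
		unsigned long foo_get_freed_count(void)
		{
			unsigned long ret;

			spin_lock_bh(&foo_lock);
			ret = foo_freed_count;
			spin_unlock_bh(&foo_lock);
			return ret;
		}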
355369
35637012. RCU callbacks can be and are executed in parallel. In many cases,
357371 	the callback code simply wraps kfree(), so that this
@@ -372,7 +386,17 @@ over a rather long period of time, but improvements are always welcome!
372386 for some real-time workloads, this is the whole point of using
373387 the rcu_nocbs= kernel boot parameter.
374388
375- 13. Unlike other forms of RCU, it *is* permissible to block in an
389+ In addition, do not assume that callbacks queued in a given order
390+ will be invoked in that order, even if they all are queued on the
391+ same CPU. Furthermore, do not assume that same-CPU callbacks will
392+ be invoked serially. For example, in recent kernels, CPUs can be
393+ switched between offloaded and de-offloaded callback invocation,
394+ and while a given CPU is undergoing such a switch, its callbacks
395+ might be concurrently invoked by that CPU's softirq handler and
396+ that CPU's rcuo kthread. At such times, that CPU's callbacks
397+ might be executed both concurrently and out of order.
398+
399+ 13. Unlike most flavors of RCU, it *is* permissible to block in an
376400 SRCU read-side critical section (demarked by srcu_read_lock()
377401 and srcu_read_unlock()), hence the "SRCU": "sleepable RCU".
378402 Please note that if you don't need to sleep in read-side critical
@@ -412,6 +436,12 @@ over a rather long period of time, but improvements are always welcome!
412436 never sends IPIs to other CPUs, so it is easier on
413437 real-time workloads than is synchronize_rcu_expedited().
414438
439+ It is also permissible to sleep in RCU Tasks Trace read-side
440+ 	critical sections, which are delimited by rcu_read_lock_trace() and
441+ rcu_read_unlock_trace(). However, this is a specialized flavor
442+ of RCU, and you should not use it without first checking with
443+ its current users. In most cases, you should instead use SRCU.
444+
415445 Note that rcu_assign_pointer() relates to SRCU just as it does to
416446 other forms of RCU, but instead of rcu_dereference() you should
417447 use srcu_dereference() in order to avoid lockdep splats.
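
	For example, here is a hedged SRCU sketch in which the reader is
	permitted to block and in which srcu_dereference() names the
	associated srcu_struct.  The foo names are hypothetical, and
	update-side mutual exclusion is provided by foo_mutex::

		struct foo {
			int a;
		};

		DEFINE_SRCU(foo_srcu);
		static struct foo __rcu *foo_ptr;
		static DEFINE_MUTEX(foo_mutex);	/* Serializes updates. */

		int foo_read(void)
		{
			struct foo *fp;
			int idx, ret = -ENOENT;

			idx = srcu_read_lock(&foo_srcu);
			fp = srcu_dereference(foo_ptr, &foo_srcu);
			if (fp) {
				/* Unlike vanilla RCU readers, blocking is permitted here. */
				ret = fp->a;
			}
			srcu_read_unlock(&foo_srcu, idx);
			return ret;
		}

		void foo_replace(struct foo *newp)
		{
			struct foo *oldp;

			mutex_lock(&foo_mutex);
			oldp = rcu_dereference_protected(foo_ptr,
							 lockdep_is_held(&foo_mutex));
			rcu_assign_pointer(foo_ptr, newp);
			mutex_unlock(&foo_mutex);
			synchronize_srcu(&foo_srcu);	/* Wait only for SRCU readers. */
			kfree(oldp);
		}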
@@ -442,50 +472,62 @@ over a rather long period of time, but improvements are always welcome!
442472 find problems as follows:
443473
444474 CONFIG_PROVE_LOCKING:
445- check that accesses to RCU-protected data
446- structures are carried out under the proper RCU
447- read-side critical section, while holding the right
448- combination of locks, or whatever other conditions
449- are appropriate.
475+ check that accesses to RCU-protected data structures
476+ are carried out under the proper RCU read-side critical
477+ section, while holding the right combination of locks,
478+ or whatever other conditions are appropriate.
450479
451480 CONFIG_DEBUG_OBJECTS_RCU_HEAD:
452- check that you don't pass the
453- 		same object to call_rcu() (or friends) before an RCU
454- grace period has elapsed since the last time that you
455- passed that same object to call_rcu() (or friends).
481+ check that you don't pass the same object to call_rcu()
482+ 		(or friends) before an RCU grace period has elapsed
483+ since the last time that you passed that same object to
484+ call_rcu() (or friends).
456485
457486 __rcu sparse checks:
458- tag the pointer to the RCU-protected data
459- structure with __rcu, and sparse will warn you if you
460- access that pointer without the services of one of the
461- variants of rcu_dereference().
487+ tag the pointer to the RCU-protected data structure
488+ with __rcu, and sparse will warn you if you access that
489+ pointer without the services of one of the variants
490+ of rcu_dereference().
462491
463492 These debugging aids can help you find problems that are
464493 otherwise extremely difficult to spot.
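
	For example, the following hedged fragment tags a hypothetical
	RCU-protected pointer with __rcu for sparse and uses
	rcu_dereference_check() so that lockdep (under
	CONFIG_PROVE_LOCKING) can verify that each access is made either
	within an RCU read-side critical section or with foo_lock held::

		struct foo;

		static struct foo __rcu *foo_ptr;	/* __rcu enables sparse checking. */
		static DEFINE_SPINLOCK(foo_lock);	/* Hypothetical update-side lock. */

		/*
		 * Legal either under rcu_read_lock() or while holding foo_lock;
		 * lockdep complains about anything else.
		 */
		static struct foo *foo_access(void)
		{
			return rcu_dereference_check(foo_ptr,
						     lockdep_is_held(&foo_lock));
		}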
465494
466- 17. If you register a callback using call_rcu() or call_srcu(), and
467- pass in a function defined within a loadable module, then it in
468- necessary to wait for all pending callbacks to be invoked after
469- the last invocation and before unloading that module. Note that
470- 		it is absolutely *not* sufficient to wait for a grace period!
471- 		The current (say) synchronize_rcu() implementation is *not*
472- guaranteed to wait for callbacks registered on other CPUs.
473- Or even on the current CPU if that CPU recently went offline
474- and came back online.
495+ 17. If you pass a callback function defined within a module to one of
496+ call_rcu(), call_srcu(), call_rcu_tasks(), call_rcu_tasks_rude(),
497+ or call_rcu_tasks_trace(), then it is necessary to wait for all
498+ pending callbacks to be invoked before unloading that module.
499+ 	Note that it is absolutely *not* sufficient to wait for a grace
500+ 	period! For example, the synchronize_rcu() implementation is *not*
501+ guaranteed to wait for callbacks registered on other CPUs via
502+ call_rcu(). Or even on the current CPU if that CPU recently
503+ went offline and came back online.
475504
476505 You instead need to use one of the barrier functions:
477506
478507 - call_rcu() -> rcu_barrier()
479508 - call_srcu() -> srcu_barrier()
509+ - call_rcu_tasks() -> rcu_barrier_tasks()
510+ - call_rcu_tasks_rude() -> rcu_barrier_tasks_rude()
511+ - call_rcu_tasks_trace() -> rcu_barrier_tasks_trace()
480512
481513 	However, these barrier functions are absolutely *not* guaranteed
482- to wait for a grace period. In fact, if there are no call_rcu()
483- callbacks waiting anywhere in the system, rcu_barrier() is within
484- its rights to return immediately.
485-
486- So if you need to wait for both an RCU grace period and for
487- all pre-existing call_rcu() callbacks, you will need to execute
488- both rcu_barrier() and synchronize_rcu(), if necessary, using
489- something like workqueues to execute them concurrently.
514+ to wait for a grace period. For example, if there are no
515+ call_rcu() callbacks queued anywhere in the system, rcu_barrier()
516+ can and will return immediately.
517+
518+ So if you need to wait for both a grace period and for all
519+ pre-existing callbacks, you will need to invoke both functions,
520+ with the pair depending on the flavor of RCU:
521+
522+ - Either synchronize_rcu() or synchronize_rcu_expedited(),
523+ together with rcu_barrier()
524+ - Either synchronize_srcu() or synchronize_srcu_expedited(),
525+ 		together with srcu_barrier()
526+ - synchronize_rcu_tasks() and rcu_barrier_tasks()
527+ 	- synchronize_rcu_tasks_rude() and rcu_barrier_tasks_rude()
528+ 	- synchronize_rcu_tasks_trace() and rcu_barrier_tasks_trace()
529+
530+ If necessary, you can use something like workqueues to execute
531+ the requisite pair of functions concurrently.
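
	For example, a hedged sketch of a module-exit function that uses
	vanilla call_rcu() might look as follows, where foo_stop_updates()
	is a hypothetical function that prevents the module from posting
	any further callbacks::

		static void __exit foo_exit(void)
		{
			foo_stop_updates();	/* No new call_rcu() callbacks after this. */
			rcu_barrier();		/* Wait for already-queued callbacks. */
			synchronize_rcu();	/* Also wait for a full grace period, if needed. */
		}
		module_exit(foo_exit);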
490532
491533 See rcubarrier.rst for more information.