
Commit 355debb

Author: Neeraj Upadhyay (committed)

Merge branches 'context_tracking.15.08.24a', 'csd.lock.15.08.24a', 'nocb.09.09.24a', 'rcutorture.14.08.24a', 'rcustall.09.09.24a', 'srcu.12.08.24a', 'rcu.tasks.14.08.24a', 'rcu_scaling_tests.15.08.24a', 'fixes.12.08.24a' and 'misc.11.08.24a' into next.09.09.24a

8 parents: 4040b11, 7562eed, 1c5144a, 1ecd9d6, e53cef0, 8f35fef, 0aac9da, fb579e6

35 files changed (+830 lines, −543 lines)

Documentation/RCU/Design/Requirements/Requirements.rst

Lines changed: 1 addition & 2 deletions
@@ -2649,8 +2649,7 @@ those that are idle from RCU's perspective) and then Tasks Rude RCU can
 be removed from the kernel.
 
 The tasks-rude-RCU API is also reader-marking-free and thus quite compact,
-consisting of call_rcu_tasks_rude(), synchronize_rcu_tasks_rude(),
-and rcu_barrier_tasks_rude().
+consisting solely of synchronize_rcu_tasks_rude().
 
 Tasks Trace RCU
 ~~~~~~~~~~~~~~~

Documentation/RCU/checklist.rst

Lines changed: 28 additions & 33 deletions
@@ -194,14 +194,13 @@ over a rather long period of time, but improvements are always welcome!
 	when publicizing a pointer to a structure that can
 	be traversed by an RCU read-side critical section.
 
-5.	If any of call_rcu(), call_srcu(), call_rcu_tasks(),
-	call_rcu_tasks_rude(), or call_rcu_tasks_trace() is used,
-	the callback function may be invoked from softirq context,
-	and in any case with bottom halves disabled.  In particular,
-	this callback function cannot block.  If you need the callback
-	to block, run that code in a workqueue handler scheduled from
-	the callback.  The queue_rcu_work() function does this for you
-	in the case of call_rcu().
+5.	If any of call_rcu(), call_srcu(), call_rcu_tasks(), or
+	call_rcu_tasks_trace() is used, the callback function may be
+	invoked from softirq context, and in any case with bottom halves
+	disabled.  In particular, this callback function cannot block.
+	If you need the callback to block, run that code in a workqueue
+	handler scheduled from the callback.  The queue_rcu_work()
+	function does this for you in the case of call_rcu().
 
 6.	Since synchronize_rcu() can block, it cannot be called
 	from any sort of irq context.  The same rule applies
@@ -254,10 +253,10 @@ over a rather long period of time, but improvements are always welcome!
 		corresponding readers must use rcu_read_lock_trace()
 		and rcu_read_unlock_trace().
 
-	c.	If an updater uses call_rcu_tasks_rude() or
-		synchronize_rcu_tasks_rude(), then the corresponding
-		readers must use anything that disables preemption,
-		for example, preempt_disable() and preempt_enable().
+	c.	If an updater uses synchronize_rcu_tasks_rude(),
+		then the corresponding readers must use anything that
+		disables preemption, for example, preempt_disable()
+		and preempt_enable().
 
 	Mixing things up will result in confusion and broken kernels, and
 	has even resulted in an exploitable security issue.  Therefore,
@@ -326,11 +325,9 @@ over a rather long period of time, but improvements are always welcome!
 	d.	Periodically invoke rcu_barrier(), permitting a limited
 		number of updates per grace period.
 
-	The same cautions apply to call_srcu(), call_rcu_tasks(),
-	call_rcu_tasks_rude(), and call_rcu_tasks_trace().  This is
-	why there is an srcu_barrier(), rcu_barrier_tasks(),
-	rcu_barrier_tasks_rude(), and rcu_barrier_tasks_rude(),
-	respectively.
+	The same cautions apply to call_srcu(), call_rcu_tasks(), and
+	call_rcu_tasks_trace().  This is why there is an srcu_barrier(),
+	rcu_barrier_tasks(), and rcu_barrier_tasks_trace(), respectively.
 
 	Note that although these primitives do take action to avoid
 	memory exhaustion when any given CPU has too many callbacks,
@@ -383,17 +380,17 @@ over a rather long period of time, but improvements are always welcome!
 	must use whatever locking or other synchronization is required
 	to safely access and/or modify that data structure.
 
-	Do not assume that RCU callbacks will be executed on
-	the same CPU that executed the corresponding call_rcu(),
-	call_srcu(), call_rcu_tasks(), call_rcu_tasks_rude(), or
-	call_rcu_tasks_trace().  For example, if a given CPU goes offline
-	while having an RCU callback pending, then that RCU callback
-	will execute on some surviving CPU.  (If this was not the case,
-	a self-spawning RCU callback would prevent the victim CPU from
-	ever going offline.)  Furthermore, CPUs designated by rcu_nocbs=
-	might well *always* have their RCU callbacks executed on some
-	other CPUs, in fact, for some real-time workloads, this is the
-	whole point of using the rcu_nocbs= kernel boot parameter.
+	Do not assume that RCU callbacks will be executed on the same
+	CPU that executed the corresponding call_rcu(), call_srcu(),
+	call_rcu_tasks(), or call_rcu_tasks_trace().  For example, if
+	a given CPU goes offline while having an RCU callback pending,
+	then that RCU callback will execute on some surviving CPU.
+	(If this was not the case, a self-spawning RCU callback would
+	prevent the victim CPU from ever going offline.)  Furthermore,
+	CPUs designated by rcu_nocbs= might well *always* have their
+	RCU callbacks executed on some other CPUs, in fact, for some
+	real-time workloads, this is the whole point of using the
+	rcu_nocbs= kernel boot parameter.
 
 	In addition, do not assume that callbacks queued in a given order
 	will be invoked in that order, even if they all are queued on the
@@ -507,9 +504,9 @@ over a rather long period of time, but improvements are always welcome!
 	These debugging aids can help you find problems that are
 	otherwise extremely difficult to spot.
 
-17.	If you pass a callback function defined within a module to one of
-	call_rcu(), call_srcu(), call_rcu_tasks(), call_rcu_tasks_rude(),
-	or call_rcu_tasks_trace(), then it is necessary to wait for all
+17.	If you pass a callback function defined within a module
+	to one of call_rcu(), call_srcu(), call_rcu_tasks(), or
+	call_rcu_tasks_trace(), then it is necessary to wait for all
 	pending callbacks to be invoked before unloading that module.
 	Note that it is absolutely *not* sufficient to wait for a grace
 	period!  For example, synchronize_rcu() implementation is *not*
@@ -522,7 +519,6 @@ over a rather long period of time, but improvements are always welcome!
 	- call_rcu() -> rcu_barrier()
 	- call_srcu() -> srcu_barrier()
 	- call_rcu_tasks() -> rcu_barrier_tasks()
-	- call_rcu_tasks_rude() -> rcu_barrier_tasks_rude()
 	- call_rcu_tasks_trace() -> rcu_barrier_tasks_trace()
 
 	However, these barrier functions are absolutely *not* guaranteed
@@ -539,7 +535,6 @@ over a rather long period of time, but improvements are always welcome!
 	- Either synchronize_srcu() or synchronize_srcu_expedited(),
 	  together with and srcu_barrier()
 	- synchronize_rcu_tasks() and rcu_barrier_tasks()
-	- synchronize_tasks_rude() and rcu_barrier_tasks_rude()
 	- synchronize_tasks_trace() and rcu_barrier_tasks_trace()
 
 	If necessary, you can use something like workqueues to execute

Documentation/RCU/whatisRCU.rst

Lines changed: 1 addition & 1 deletion
@@ -1103,7 +1103,7 @@ RCU-Tasks-Rude::
 
 	Critical sections	Grace period		Barrier
 
-	N/A			call_rcu_tasks_rude	rcu_barrier_tasks_rude
+	N/A			N/A
 				synchronize_rcu_tasks_rude
 
 

Documentation/admin-guide/kernel-parameters.txt

Lines changed: 11 additions & 9 deletions
@@ -4937,6 +4937,10 @@
 			Set maximum number of finished RCU callbacks to
 			process in one batch.
 
+	rcutree.csd_lock_suppress_rcu_stall= [KNL]
+			Do only a one-line RCU CPU stall warning when
+			there is an ongoing too-long CSD-lock wait.
+
 	rcutree.do_rcu_barrier= [KNL]
 			Request a call to rcu_barrier().  This is
 			throttled so that userspace tests can safely
@@ -5384,7 +5388,13 @@
 			Time to wait (s) after boot before inducing stall.
 
 	rcutorture.stall_cpu_irqsoff= [KNL]
-			Disable interrupts while stalling if set.
+			Disable interrupts while stalling if set, but only
+			on the first stall in the set.
+
+	rcutorture.stall_cpu_repeat= [KNL]
+			Number of times to repeat the stall sequence,
+			so that rcutorture.stall_cpu_repeat=3 will result
+			in four stall sequences.
 
 	rcutorture.stall_gp_kthread= [KNL]
 			Duration (s) of forced sleep within RCU
@@ -5572,14 +5582,6 @@
 			of zero will disable batching.  Batching is
 			always disabled for synchronize_rcu_tasks().
 
-	rcupdate.rcu_tasks_rude_lazy_ms= [KNL]
-			Set timeout in milliseconds RCU Tasks
-			Rude asynchronous callback batching for
-			call_rcu_tasks_rude().  A negative value
-			will take the default.  A value of zero will
-			disable batching.  Batching is always disabled
-			for synchronize_rcu_tasks_rude().
-
 	rcupdate.rcu_tasks_trace_lazy_ms= [KNL]
 			Set timeout in milliseconds RCU Tasks
 			Trace asynchronous callback batching for
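For illustration only, a hypothetical boot command-line fragment combining the two parameters this commit adds (the values shown are made up, not recommendations):

```
rcutree.csd_lock_suppress_rcu_stall=1 rcutorture.stall_cpu_repeat=3
```

Per the documentation above, that `stall_cpu_repeat=3` setting would produce four stall sequences in total: the initial one plus three repeats.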

include/linux/rcu_segcblist.h

Lines changed: 1 addition & 5 deletions
@@ -185,11 +185,7 @@ struct rcu_cblist {
  * ----------------------------------------------------------------------------
  */
 #define SEGCBLIST_ENABLED	BIT(0)
-#define SEGCBLIST_RCU_CORE	BIT(1)
-#define SEGCBLIST_LOCKING	BIT(2)
-#define SEGCBLIST_KTHREAD_CB	BIT(3)
-#define SEGCBLIST_KTHREAD_GP	BIT(4)
-#define SEGCBLIST_OFFLOADED	BIT(5)
+#define SEGCBLIST_OFFLOADED	BIT(1)
 
 struct rcu_segcblist {
 	struct rcu_head *head;

include/linux/rculist.h

Lines changed: 7 additions & 2 deletions
@@ -191,7 +191,10 @@ static inline void hlist_del_init_rcu(struct hlist_node *n)
  * @old : the element to be replaced
  * @new : the new element to insert
  *
- * The @old entry will be replaced with the @new entry atomically.
+ * The @old entry will be replaced with the @new entry atomically from
+ * the perspective of concurrent readers.  It is the caller's responsibility
+ * to synchronize with concurrent updaters, if any.
+ *
  * Note: @old should not be empty.
  */
 static inline void list_replace_rcu(struct list_head *old,
@@ -519,7 +522,9 @@ static inline void hlist_del_rcu(struct hlist_node *n)
  * @old : the element to be replaced
  * @new : the new element to insert
  *
- * The @old entry will be replaced with the @new entry atomically.
+ * The @old entry will be replaced with the @new entry atomically from
+ * the perspective of concurrent readers.  It is the caller's responsibility
+ * to synchronize with concurrent updaters, if any.
 */
 static inline void hlist_replace_rcu(struct hlist_node *old,
 				     struct hlist_node *new)

include/linux/rcupdate.h

Lines changed: 13 additions & 2 deletions
@@ -34,10 +34,12 @@
 #define ULONG_CMP_GE(a, b)	(ULONG_MAX / 2 >= (a) - (b))
 #define ULONG_CMP_LT(a, b)	(ULONG_MAX / 2 < (a) - (b))
 
+#define RCU_SEQ_CTR_SHIFT	2
+#define RCU_SEQ_STATE_MASK	((1 << RCU_SEQ_CTR_SHIFT) - 1)
+
 /* Exported common interfaces */
 void call_rcu(struct rcu_head *head, rcu_callback_t func);
 void rcu_barrier_tasks(void);
-void rcu_barrier_tasks_rude(void);
 void synchronize_rcu(void);
 
 struct rcu_gp_oldstate;
@@ -144,11 +146,18 @@ void rcu_init_nohz(void);
 int rcu_nocb_cpu_offload(int cpu);
 int rcu_nocb_cpu_deoffload(int cpu);
 void rcu_nocb_flush_deferred_wakeup(void);
+
+#define RCU_NOCB_LOCKDEP_WARN(c, s)	RCU_LOCKDEP_WARN(c, s)
+
 #else /* #ifdef CONFIG_RCU_NOCB_CPU */
+
 static inline void rcu_init_nohz(void) { }
 static inline int rcu_nocb_cpu_offload(int cpu) { return -EINVAL; }
 static inline int rcu_nocb_cpu_deoffload(int cpu) { return 0; }
 static inline void rcu_nocb_flush_deferred_wakeup(void) { }
+
+#define RCU_NOCB_LOCKDEP_WARN(c, s)
+
 #endif /* #else #ifdef CONFIG_RCU_NOCB_CPU */
 
 /*
@@ -165,6 +174,7 @@ static inline void rcu_nocb_flush_deferred_wakeup(void) { }
 } while (0)
 void call_rcu_tasks(struct rcu_head *head, rcu_callback_t func);
 void synchronize_rcu_tasks(void);
+void rcu_tasks_torture_stats_print(char *tt, char *tf);
 # else
 # define rcu_tasks_classic_qs(t, preempt) do { } while (0)
 # define call_rcu_tasks call_rcu
@@ -191,6 +201,7 @@ void rcu_tasks_trace_qs_blkd(struct task_struct *t);
 		rcu_tasks_trace_qs_blkd(t);				\
 	}								\
 } while (0)
+void rcu_tasks_trace_torture_stats_print(char *tt, char *tf);
 # else
 # define rcu_tasks_trace_qs(t) do { } while (0)
 # endif
@@ -202,8 +213,8 @@ do { \
 } while (0)
 
 # ifdef CONFIG_TASKS_RUDE_RCU
-void call_rcu_tasks_rude(struct rcu_head *head, rcu_callback_t func);
 void synchronize_rcu_tasks_rude(void);
+void rcu_tasks_rude_torture_stats_print(char *tt, char *tf);
 # endif
 
 #define rcu_note_voluntary_context_switch(t) rcu_tasks_qs(t, false)

include/linux/smp.h

Lines changed: 6 additions & 0 deletions
@@ -294,4 +294,10 @@ int smpcfd_prepare_cpu(unsigned int cpu);
 int smpcfd_dead_cpu(unsigned int cpu);
 int smpcfd_dying_cpu(unsigned int cpu);
 
+#ifdef CONFIG_CSD_LOCK_WAIT_DEBUG
+bool csd_lock_is_stuck(void);
+#else
+static inline bool csd_lock_is_stuck(void) { return false; }
+#endif
+
 #endif /* __LINUX_SMP_H */

include/linux/srcutree.h

Lines changed: 14 additions & 1 deletion
@@ -129,10 +129,23 @@ struct srcu_struct {
 #define SRCU_STATE_SCAN1	1
 #define SRCU_STATE_SCAN2	2
 
+/*
+ * Values for initializing gp sequence fields. Higher values allow wrap arounds to
+ * occur earlier.
+ * The second value with state is useful in the case of static initialization of
+ * srcu_usage where srcu_gp_seq_needed is expected to have some state value in its
+ * lower bits (or else it will appear to be already initialized within
+ * the call check_init_srcu_struct()).
+ */
+#define SRCU_GP_SEQ_INITIAL_VAL ((0UL - 100UL) << RCU_SEQ_CTR_SHIFT)
+#define SRCU_GP_SEQ_INITIAL_VAL_WITH_STATE (SRCU_GP_SEQ_INITIAL_VAL - 1)
+
 #define __SRCU_USAGE_INIT(name)						\
 {									\
 	.lock = __SPIN_LOCK_UNLOCKED(name.lock),			\
-	.srcu_gp_seq_needed = -1UL,					\
+	.srcu_gp_seq = SRCU_GP_SEQ_INITIAL_VAL,				\
+	.srcu_gp_seq_needed = SRCU_GP_SEQ_INITIAL_VAL_WITH_STATE,	\
+	.srcu_gp_seq_needed_exp = SRCU_GP_SEQ_INITIAL_VAL,		\
 	.work = __DELAYED_WORK_INITIALIZER(name.work, NULL, 0),		\
 }

kernel/rcu/rcu.h

Lines changed: 5 additions & 3 deletions
@@ -54,9 +54,6 @@
  * grace-period sequence number.
  */
 
-#define RCU_SEQ_CTR_SHIFT	2
-#define RCU_SEQ_STATE_MASK	((1 << RCU_SEQ_CTR_SHIFT) - 1)
-
 /* Low-order bit definition for polled grace-period APIs. */
 #define RCU_GET_STATE_COMPLETED	0x1
 
@@ -255,6 +252,11 @@ static inline void debug_rcu_head_callback(struct rcu_head *rhp)
 		kmem_dump_obj(rhp);
 }
 
+static inline bool rcu_barrier_cb_is_done(struct rcu_head *rhp)
+{
+	return rhp->next == rhp;
+}
+
 extern int rcu_cpu_stall_suppress_at_boot;
 
 static inline bool rcu_stall_is_suppressed_at_boot(void)
