
Commit 838e6dd

Sebastian Andrzej Siewior authored; Petr Pavlu committed
module: Begin to move from RCU-sched to RCU.
The RCU usage in the module code was introduced in commit d72b375 ("Remove stop_machine during module load v2"), which claimed not to be RCU but something similar. It was improved further in commit e91defa ("module: don't use stop_machine on module load"). The result became a mix of RCU and RCU-sched, which was eventually fixed in commit 0be964b ("module: Sanitize RCU usage and locking"). Later, RCU and RCU-sched were merged in commit cb2f553 ("modules: Replace synchronize_sched() and call_rcu_sched()"), so the two were aligned.

Looking at it today, there are still leftovers: preempt_disable() is used instead of rcu_read_lock_sched(), and the RCU / RCU-sched merge is incomplete because rcu_dereference_sched() is still used for module::kallsyms.

The RCU lists modules and unloaded_tainted_modules are always accessed under RCU protection or with module_mutex held. Iterating the modules list is always safe because a module cannot disappear under a reader: once a module is removed (free_module()), removing it from the list is followed by synchronize_rcu(), which waits until every RCU reader has left its read-side section. Iterating over the list within an RCU read section is therefore enough; there is no need to disable preemption.

module::kallsyms is first assigned in add_kallsyms(), before the module is added to the list; at that point it points to init data. The pointer is updated later, and before the init code is removed there is also a synchronize_rcu() in do_free_init(). An RCU read lock is therefore enough for protection, and rcu_dereference() can be used safely.

Convert the module code and its users step by step.

Update comments and convert print_modules() to use RCU.

Signed-off-by: Sebastian Andrzej Siewior <[email protected]>
Acked-by: Peter Zijlstra (Intel) <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Petr Pavlu <[email protected]>
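The reader/updater pairing the commit message describes can be sketched as follows (kernel-style C using the in-tree APIs, not a standalone program; the surrounding function bodies are illustrative):

```c
/* Reader: an RCU read section is enough to walk the modules list. */
rcu_read_lock();
list_for_each_entry_rcu(mod, &modules, list) {
	if (mod->state == MODULE_STATE_UNFORMED)
		continue;
	/* mod cannot be freed while we are inside the read section */
}
rcu_read_unlock();

/*
 * Updater (the free_module() path): unlink the module first, then wait
 * for all pre-existing readers before its memory may be reused.
 */
mutex_lock(&module_mutex);
list_del_rcu(&mod->list);
mutex_unlock(&module_mutex);
synchronize_rcu();	/* every reader that could still see mod has left */
/* now it is safe to free mod */
```

This is why a plain RCU read section suffices and disabling preemption is unnecessary: synchronize_rcu() already waits for rcu_read_lock() readers, not only for preemption-disabled regions.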
1 parent aa0fdcc commit 838e6dd

File tree

2 files changed: +8 −9 lines


kernel/module/main.c

Lines changed: 4 additions & 5 deletions
@@ -67,7 +67,7 @@
 
 /*
  * Mutex protects:
- * 1) List of modules (also safely readable with preempt_disable),
+ * 1) List of modules (also safely readable within RCU read section),
  * 2) module_use links,
  * 3) mod_tree.addr_min/mod_tree.addr_max.
  * (delete and add uses RCU list operations).
@@ -1348,7 +1348,7 @@ static void free_module(struct module *mod)
 	mod_tree_remove(mod);
 	/* Remove this module from bug list, this uses list_del_rcu */
 	module_bug_cleanup(mod);
-	/* Wait for RCU-sched synchronizing before releasing mod->list and buglist. */
+	/* Wait for RCU synchronizing before releasing mod->list and buglist. */
 	synchronize_rcu();
 	if (try_add_tainted_module(mod))
 		pr_err("%s: adding tainted module to the unloaded tainted modules list failed.\n",
@@ -3049,7 +3049,7 @@ static noinline int do_init_module(struct module *mod)
 #endif
 	/*
 	 * We want to free module_init, but be aware that kallsyms may be
-	 * walking this with preempt disabled. In all the failure paths, we
+	 * walking this within an RCU read section. In all the failure paths, we
 	 * call synchronize_rcu(), but we don't want to slow down the success
 	 * path. execmem_free() cannot be called in an interrupt, so do the
 	 * work and call synchronize_rcu() in a work queue.
@@ -3836,15 +3836,14 @@ void print_modules(void)
 
 	printk(KERN_DEFAULT "Modules linked in:");
 	/* Most callers should already have preempt disabled, but make sure */
-	preempt_disable();
+	guard(rcu)();
 	list_for_each_entry_rcu(mod, &modules, list) {
 		if (mod->state == MODULE_STATE_UNFORMED)
			continue;
 		pr_cont(" %s%s", mod->name, module_flags(mod, buf, true));
 	}
 
 	print_unloaded_tainted_modules();
-	preempt_enable();
 	if (last_unloaded_module.name[0])
 		pr_cont(" [last unloaded: %s%s]", last_unloaded_module.name,
 			last_unloaded_module.taints);
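The print_modules() hunk replaces an explicit preempt_disable()/preempt_enable() pair with guard(rcu)(). guard() is the kernel's scope-based cleanup helper (include/linux/cleanup.h): the rcu guard takes rcu_read_lock() at the declaration and issues rcu_read_unlock() automatically when the enclosing scope is left, on every exit path. A rough equivalence (kernel C sketch, not compilable standalone; walk_modules_list() is a hypothetical stand-in for the loop body):

```c
void print_modules_guard_sketch(void)
{
	guard(rcu)();		/* rcu_read_lock() happens here ... */
	walk_modules_list();
	/* ... rcu_read_unlock() runs automatically at scope exit */
}

/* is roughly equivalent to the explicit form: */
void print_modules_explicit_sketch(void)
{
	rcu_read_lock();
	walk_modules_list();
	rcu_read_unlock();	/* must be repeated on every return path */
}
```

The guard form is why the hunk deletes one more line than it adds: the trailing preempt_enable() has no replacement, because the unlock is implicit.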

kernel/module/tree_lookup.c

Lines changed: 4 additions & 4 deletions
@@ -12,11 +12,11 @@
 
 /*
  * Use a latched RB-tree for __module_address(); this allows us to use
- * RCU-sched lookups of the address from any context.
+ * RCU lookups of the address from any context.
 *
- * This is conditional on PERF_EVENTS || TRACING because those can really hit
- * __module_address() hard by doing a lot of stack unwinding; potentially from
- * NMI context.
+ * This is conditional on PERF_EVENTS || TRACING || CFI_CLANG because those can
+ * really hit __module_address() hard by doing a lot of stack unwinding;
+ * potentially from NMI context.
 */
 
 static __always_inline unsigned long __mod_tree_val(struct latch_tree_node *n)
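For context on "lookups of the address from any context": a latched RB-tree keeps two copies of the tree behind a sequence counter, so a reader that races with a writer simply retries and never blocks, which makes lookups safe even from NMI context under plain RCU protection. A hedged sketch of the read side (kernel C; latch_tree_find() is the real API from include/linux/rbtree_latch.h, while the mod_tree/mod_tree_ops details here are illustrative of kernel/module/tree_lookup.c rather than copied from it):

```c
struct module *module_address_sketch(unsigned long addr)
{
	struct latch_tree_node *ltn;

	/* Caller must be inside an RCU read section (NMI is fine too). */
	ltn = latch_tree_find((void *)addr, &mod_tree.root, &mod_tree_ops);
	/*
	 * latch_tree_find() rereads under the latch seqcount until it gets
	 * a consistent copy, so it never sleeps and never stalls a writer.
	 */
	return ltn ? container_of(ltn, struct mod_tree_node, node)->mod : NULL;
}
```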
