
Commit 7936403

Sebastian Andrzej Siewior authored and Daniel Borkmann committed
bpf: Make sure bpf_disable_instrumentation() is safe vs preemption.
The initial implementation of migrate_disable() for mainline was a wrapper around preempt_disable(). RT kernels substituted this with a real migrate disable implementation.

Later on mainline gained true migrate disable support, but neither documentation nor affected code were updated.

Remove stale comments claiming that migrate_disable() is PREEMPT_RT only.

Don't use __this_cpu_inc() in the !PREEMPT_RT path because preemption is not disabled and the RMW operation can be preempted.

Fixes: 74d862b ("sched: Make migrate_disable/enable() independent of RT")
Signed-off-by: Sebastian Andrzej Siewior <[email protected]>
Signed-off-by: Daniel Borkmann <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]
1 parent 6a631c0 commit 7936403
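
To make the hazard in the commit message concrete, here is an illustrative sketch (not kernel code; the three-step decomposition is an assumption about what __this_cpu_inc() may lower to on a given architecture, and unsafe_inc() is a name invented for this example):

/* A stand-in for the per-CPU counter. __this_cpu_inc() may compile to
 * the three separate steps below, while this_cpu_inc() guarantees the
 * RMW cannot be torn by preemption (e.g. a single incq %gs:... on x86,
 * or an irq-protected RMW in the generic fallback).
 */
unsigned long bpf_prog_active;

void unsafe_inc(void)				/* models __this_cpu_inc() */
{
	unsigned long tmp = bpf_prog_active;	/* 1: load   */
	/* Preemption point: migrate_disable() alone leaves preemption
	 * enabled, so another task can run on this CPU here, increment
	 * the counter, and then have its update overwritten by the
	 * stale store in step 3.
	 */
	tmp += 1;				/* 2: modify */
	bpf_prog_active = tmp;			/* 3: store  */
}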

File tree: 2 files changed, +2 -17 lines

include/linux/bpf.h

Lines changed: 2 additions & 14 deletions
@@ -1353,28 +1353,16 @@ extern struct mutex bpf_stats_enabled_mutex;
  * kprobes, tracepoints) to prevent deadlocks on map operations as any of
  * these events can happen inside a region which holds a map bucket lock
  * and can deadlock on it.
- *
- * Use the preemption safe inc/dec variants on RT because migrate disable
- * is preemptible on RT and preemption in the middle of the RMW operation
- * might lead to inconsistent state. Use the raw variants for non RT
- * kernels as migrate_disable() maps to preempt_disable() so the slightly
- * more expensive save operation can be avoided.
  */
 static inline void bpf_disable_instrumentation(void)
 {
 	migrate_disable();
-	if (IS_ENABLED(CONFIG_PREEMPT_RT))
-		this_cpu_inc(bpf_prog_active);
-	else
-		__this_cpu_inc(bpf_prog_active);
+	this_cpu_inc(bpf_prog_active);
 }
 
 static inline void bpf_enable_instrumentation(void)
 {
-	if (IS_ENABLED(CONFIG_PREEMPT_RT))
-		this_cpu_dec(bpf_prog_active);
-	else
-		__this_cpu_dec(bpf_prog_active);
+	this_cpu_dec(bpf_prog_active);
 	migrate_enable();
 }
 
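The unconditional this_cpu_inc()/this_cpu_dec() above is safe even though migrate_disable() leaves preemption enabled. A hedged sketch of why, modeled on the generic per-CPU fallback in include/linux/percpu-defs.h (simplified; the *_sketch name is invented for this example, and architectures such as x86 instead emit a single gs-prefixed instruction that needs no extra protection):

/* Simplified shape of the generic this_cpu_add() path: the RMW runs
 * with interrupts disabled, so it cannot interleave with another
 * update of the same counter on this CPU.
 */
#define this_cpu_inc_sketch(pcp)			\
do {							\
	unsigned long __flags;				\
	raw_local_irq_save(__flags);			\
	raw_cpu_add(pcp, 1);				\
	raw_local_irq_restore(__flags);			\
} while (0)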

include/linux/filter.h

Lines changed: 0 additions & 3 deletions
@@ -640,9 +640,6 @@ static __always_inline u32 bpf_prog_run(const struct bpf_prog *prog, const void
  * This uses migrate_disable/enable() explicitly to document that the
  * invocation of a BPF program does not require reentrancy protection
  * against a BPF program which is invoked from a preempting task.
- *
- * For non RT enabled kernels migrate_disable/enable() maps to
- * preempt_disable/enable(), i.e. it disables also preemption.
  */
 static inline u32 bpf_prog_run_pin_on_cpu(const struct bpf_prog *prog,
 					  const void *ctx)
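
For context on the comment above: the body of bpf_prog_run_pin_on_cpu() is not part of this hunk, but it pins the task to the current CPU across the program invocation. A sketch consistent with the comment, rather than a verbatim copy of the kernel source:

static inline u32 bpf_prog_run_pin_on_cpu(const struct bpf_prog *prog,
					  const void *ctx)
{
	u32 ret;

	migrate_disable();	/* stay on this CPU for the whole run */
	ret = bpf_prog_run(prog, ctx);
	migrate_enable();
	return ret;
}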
