
Commit f0c53fd

kkdwivedi authored and Alexei Starovoitov committed
bpf: Add function to find program from stack trace
In preparation of figuring out the closest program that led to the current point in the kernel, implement a function that scans through the stack trace and finds out the closest BPF program when walking down the stack trace.

Special care needs to be taken to skip over kernel and BPF subprog frames. We basically scan until we find a BPF main prog frame. The assumption is that if a program calls into us transitively, we'll hit it along the way. If not, we end up returning NULL.

Contextually the function will be used in places where we know the program may have called into us.

Due to reliance on arch_bpf_stack_walk(), this function only works on x86 with CONFIG_UNWINDER_ORC, arm64, and s390. Remove the warning from arch_bpf_stack_walk as well since we call it outside bpf_throw() context.

Acked-by: Eduard Zingerman <[email protected]>
Reviewed-by: Emil Tsalapatis <[email protected]>
Signed-off-by: Kumar Kartikeya Dwivedi <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Alexei Starovoitov <[email protected]>
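
For illustration only (not part of this commit): a minimal sketch of how a caller in the kernel might use the new bpf_prog_find_from_stack() helper to attribute an event to the BPF program that transitively called into it. The report_event() wrapper and the log format are assumptions made for this example.

/* Hypothetical caller, sketched against the API added by this patch. */
static void report_event(void)
{
	struct bpf_prog *prog;

	/* Walk the current stack; returns the closest BPF main prog
	 * frame's program, or NULL if no such frame is found.
	 */
	prog = bpf_prog_find_from_stack();
	if (!prog)
		return;

	pr_info("event raised by BPF prog %s (id %u)\n",
		prog->aux->name, prog->aux->id);
}

Since the walk can come up empty (no BPF main prog frame on the stack), any caller has to handle the NULL return.
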
1 parent d090326 commit f0c53fd

File tree

3 files changed: +34 -1 lines changed


arch/x86/net/bpf_jit_comp.c

Lines changed: 0 additions & 1 deletion
@@ -3845,7 +3845,6 @@ void arch_bpf_stack_walk(bool (*consume_fn)(void *cookie, u64 ip, u64 sp, u64 bp
 	}
 	return;
 #endif
-	WARN(1, "verification of programs using bpf_throw should have failed\n");
 }
 
 void bpf_arch_poke_desc_update(struct bpf_jit_poke_descriptor *poke,

include/linux/bpf.h

Lines changed: 1 addition & 0 deletions
@@ -3663,5 +3663,6 @@ static inline bool bpf_is_subprog(const struct bpf_prog *prog)
 
 int bpf_prog_get_file_line(struct bpf_prog *prog, unsigned long ip, const char **filep,
 			   const char **linep, int *nump);
+struct bpf_prog *bpf_prog_find_from_stack(void);
 
 #endif /* _LINUX_BPF_H */

kernel/bpf/core.c

Lines changed: 33 additions & 0 deletions
@@ -3262,4 +3262,37 @@ int bpf_prog_get_file_line(struct bpf_prog *prog, unsigned long ip, const char *
 	return 0;
 }
 
+struct walk_stack_ctx {
+	struct bpf_prog *prog;
+};
+
+static bool find_from_stack_cb(void *cookie, u64 ip, u64 sp, u64 bp)
+{
+	struct walk_stack_ctx *ctxp = cookie;
+	struct bpf_prog *prog;
+
+	/*
+	 * The RCU read lock is held to safely traverse the latch tree, but we
+	 * don't need its protection when accessing the prog, since it has an
+	 * active stack frame on the current stack trace, and won't disappear.
+	 */
+	rcu_read_lock();
+	prog = bpf_prog_ksym_find(ip);
+	rcu_read_unlock();
+	if (!prog)
+		return true;
+	if (bpf_is_subprog(prog))
+		return true;
+	ctxp->prog = prog;
+	return false;
+}
+
+struct bpf_prog *bpf_prog_find_from_stack(void)
+{
+	struct walk_stack_ctx ctx = {};
+
+	arch_bpf_stack_walk(find_from_stack_cb, &ctx);
+	return ctx.prog;
+}
+
 #endif
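
As a side note, the new code leans on the consume_fn contract of arch_bpf_stack_walk(): the callback is invoked for each frame while walking down the stack, and returning true keeps the walk going while returning false stops it. A minimal sketch of a different callback under that same contract, purely for illustration (dump_frame_cb is a made-up name, not part of this commit):

/* Print every frame on the current stack; always returns true so the
 * walk continues to the end of the stack.
 */
static bool dump_frame_cb(void *cookie, u64 ip, u64 sp, u64 bp)
{
	pr_info("frame: ip=%pS sp=0x%llx bp=0x%llx\n", (void *)ip, sp, bp);
	return true;
}

/* Usage: arch_bpf_stack_walk(dump_frame_cb, NULL); */
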
