
Conversation

kernel-patches-daemon-bpf[bot]

Pull request for series with
subject: BPF indirect jumps
version: 6
url: https://patchwork.kernel.org/project/netdevbpf/list/?series=1013317

aspsk added 17 commits October 19, 2025 13:22
In [1] Eduard mentioned that on push_stack failure the verifier code
should return -ENOMEM instead of -EFAULT. After checking the other
call sites I've found that the code randomly returns either -ENOMEM
or -EFAULT. This patch unifies the return values for the push_stack
(and the similar push_async_cb) functions such that error codes are
always assigned properly.

  [1] https://lore.kernel.org/bpf/[email protected]
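
For illustration, the unified call-site pattern looks roughly like the
following sketch (variable names are made up; push_stack() returns a
state pointer or NULL on allocation failure):

    new_state = push_stack(env, target_insn_idx, cur_insn_idx, false);
    if (!new_state)
            return -ENOMEM;  /* allocation failure, not a verifier bug (-EFAULT) */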

Signed-off-by: Anton Protopopov <[email protected]>
Acked-by: Eduard Zingerman <[email protected]>
Introduce a new subprog_start field in bpf_prog_aux. This field may
be used by JIT compilers wanting to know the real absolute xlated
offset of the function being jitted. The func_info[func_id] may have
served this purpose, but func_info may be NULL, so JIT compilers
can't rely on it.
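
A hedged sketch of the intended use on the JIT side (the surrounding
variables are illustrative):

    /* absolute xlated index of instruction i of the subprog being jitted */
    u32 xlated_idx = bpf_prog->aux->subprog_start + i;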

Signed-off-by: Anton Protopopov <[email protected]>
Acked-by: Eduard Zingerman <[email protected]>
The kernel/bpf/arraymap.c file defines the array_map_get_next_key()
function, which finds the next key for array maps. It doesn't use any
map fields besides the generic max_entries field. Generalize it and
export it as bpf_array_get_next_key() so that it can be re-used by
other array-like maps.
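
The generalized helper could look roughly as follows (a sketch based on
the current array map next-key logic; only max_entries is consulted):

    int bpf_array_get_next_key(struct bpf_map *map, void *key, void *next_key)
    {
            u32 index = key ? *(u32 *)key : U32_MAX;
            u32 *next = next_key;

            if (index >= map->max_entries) {
                    *next = 0;
                    return 0;
            }
            if (index == map->max_entries - 1)
                    return -ENOENT;
            *next = index + 1;
            return 0;
    }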

Signed-off-by: Anton Protopopov <[email protected]>
Acked-by: Eduard Zingerman <[email protected]>
During the bpf(BPF_PROG_LOAD) syscall, user-supplied BPF programs are
translated by the verifier into "xlated" BPF programs. During this
process the original instruction offsets might be adjusted and/or
individual instructions might be replaced by new sets of instructions,
or deleted.

Add a new BPF map type aimed at keeping track of how, for a given
program, the original instructions were relocated during
verification. Besides keeping track of the original -> xlated
mapping, make the x86 JIT build the xlated -> jitted mapping for every
instruction listed in an instruction array. This is required for every
future application of instruction arrays: static keys, indirect jumps
and indirect calls.

A map of the BPF_MAP_TYPE_INSN_ARRAY type must be created with u32
keys and values of size 8. The values have different semantics for
userspace and for the BPF side. For userspace a value consists of two
u32 values – the xlated and jitted offsets. On the BPF side the value
is a real pointer to a jitted instruction.

On map creation/initialization, before loading the program, each
element of the map should be initialized to point to an instruction
offset within the program. Before the program is loaded, the map must
be frozen. After program verification, the xlated and jitted offsets
can be read via the bpf(2) syscall.
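
A minimal userspace sketch of this workflow, using libbpf's low-level
wrappers (the jitted_off field name, which field carries the original
offset at init time, and the nr_slots/tracked_insn[] names are
assumptions here; error handling omitted):

    struct bpf_insn_array_value {
            __u32 xlated_off;       /* original insn offset on init, xlated offset after load */
            __u32 jitted_off;       /* filled in by the JIT */
    };

    int map_fd = bpf_map_create(BPF_MAP_TYPE_INSN_ARRAY, "jt", sizeof(__u32),
                                sizeof(struct bpf_insn_array_value), nr_slots, NULL);

    for (__u32 i = 0; i < nr_slots; i++) {
            struct bpf_insn_array_value val = { .xlated_off = tracked_insn[i] };

            bpf_map_update_elem(map_fd, &i, &val, 0);
    }
    bpf_map_freeze(map_fd);                 /* must be frozen before BPF_PROG_LOAD */

    /* ... load the program that references map_fd ... */

    struct bpf_insn_array_value out;
    __u32 key = 0;

    bpf_map_lookup_elem(map_fd, &key, &out); /* out.xlated_off / out.jitted_off */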

If a tracked instruction is removed by the verifier, then the xlated
offset is set to (u32)-1, which is too large to be a valid BPF program
offset.

Such a map can, obviously, track one and only one BPF program. If
verification was unsuccessful, the same map can be re-used to verify
the program with a different log level. However, if the program was
loaded successfully, the map, being frozen in any case, can't be
reused by other programs even after the program is released.

Example. Consider the following original and xlated programs:

    Original prog:                      Xlated prog:

     0:  r1 = 0x0                        0: r1 = 0
     1:  *(u32 *)(r10 - 0x4) = r1        1: *(u32 *)(r10 -4) = r1
     2:  r2 = r10                        2: r2 = r10
     3:  r2 += -0x4                      3: r2 += -4
     4:  r1 = 0x0 ll                     4: r1 = map[id:88]
     6:  call 0x1                        6: r1 += 272
                                         7: r0 = *(u32 *)(r2 +0)
                                         8: if r0 >= 0x1 goto pc+3
                                         9: r0 <<= 3
                                        10: r0 += r1
                                        11: goto pc+1
                                        12: r0 = 0
     7:  r6 = r0                        13: r6 = r0
     8:  if r6 == 0x0 goto +0x2         14: if r6 == 0x0 goto pc+4
     9:  call 0x76                      15: r0 = 0xffffffff8d2079c0
                                        17: r0 = *(u64 *)(r0 +0)
    10:  *(u64 *)(r6 + 0x0) = r0        18: *(u64 *)(r6 +0) = r0
    11:  r0 = 0x0                       19: r0 = 0x0
    12:  exit                           20: exit

An instruction array map containing, e.g., instructions [0,4,7,12]
will be translated by the verifier to [0,4,13,20]. A map containing
offset 5 (the middle of a 16-byte instruction) or offsets greater than
12 (outside the program boundaries) would be rejected.

The functionality provided by this patch will be extended in subsequent
patches to implement BPF Static Keys, indirect jumps, and indirect calls.

Signed-off-by: Anton Protopopov <[email protected]>
Add the following selftests for the new insn_array map:

  * incorrect instruction indexes are rejected
  * two programs can't use the same map
  * BPF progs can't operate on the map
  * no changes to code => map is the same
  * expected changes when instructions are added
  * expected changes when instructions are deleted
  * expected changes when multiple functions are present

Signed-off-by: Anton Protopopov <[email protected]>
When bpf_jit_harden is enabled, all constants in the BPF code are
blinded to prevent JIT spraying attacks. This happens during the JIT
phase. Adjust all the related instruction arrays accordingly.

Signed-off-by: Anton Protopopov <[email protected]>
Reviewed-by: Eduard Zingerman <[email protected]>
Add a specific test for instruction arrays with blinding enabled.

Signed-off-by: Anton Protopopov <[email protected]>
Acked-by: Eduard Zingerman <[email protected]>
Currently the emit_indirect_jump() function only accepts one of the
RAX, RCX, ..., RBP registers as the destination. Make it accept
R8, R9, ..., R15 as well, and make callers pass BPF registers, not
native registers. This is required to enable indirect jump support
in eBPF.

Signed-off-by: Anton Protopopov <[email protected]>
Acked-by: Eduard Zingerman <[email protected]>
The bpf_insn_successors() function returns the successors of a BPF
instruction. So far, an instruction could have 0, 1 or 2 successors.
Prepare the verifier code for the introduction of instructions with
more than 2 successors (namely, indirect jumps).

To do this, introduce a new struct, struct bpf_iarray, containing an
array of BPF instruction indexes, and make bpf_insn_successors()
return a pointer of that type. The storage is allocated in env->succ,
which holds an array of size 2 shared by all instructions.
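
For reference, a sketch of the new container and of the common
two-successor case (field names follow their use later in the series;
the fragment inside bpf_insn_successors() is illustrative only):

    struct bpf_iarray {
            int cnt;        /* number of valid entries in items[] */
            u32 items[];    /* successor instruction indexes */
    };

    /* illustrative: a conditional jump has two successors, stored in env->succ */
    succ = env->succ;
    succ->items[0] = t + 1;                 /* fall-through */
    succ->items[1] = t + insn->off + 1;     /* branch target */
    succ->cnt = 2;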

Signed-off-by: Anton Protopopov <[email protected]>
Acked-by: Eduard Zingerman <[email protected]>
Add support for a new instruction

    BPF_JMP|BPF_X|BPF_JA, SRC=0, DST=Rx, off=0, imm=0

which performs an indirect jump to the location stored in Rx.  The
register Rx must have type PTR_TO_INSN. This new type ensures that the
Rx register contains a value (or a range of values) loaded from a
correct jump table – a map of type instruction array.

For example, for a C switch LLVM will generate the following code:

    0:   r3 = r1                    # "switch (r3)"
    1:   if r3 > 0x13 goto +0x666   # check r3 boundaries
    2:   r3 <<= 0x3                 # adjust to an index in array of addresses
    3:   r1 = 0xbeef ll             # r1 is PTR_TO_MAP_VALUE, r1->map_ptr=M
    5:   r1 += r3                   # r1 inherits boundaries from r3
    6:   r1 = *(u64 *)(r1 + 0x0)    # r1 now has type PTR_TO_INSN
    7:   gotox r1                   # jit will generate proper code

Here the gotox instruction corresponds to one particular map. It is
possible, however, to have a gotox instruction whose target can be
loaded from different maps, e.g.

    0:   r1 &= 0x1
    1:   r2 <<= 0x3
    2:   r3 = 0x0 ll                # load from map M_1
    4:   r3 += r2
    5:   if r1 == 0x0 goto +0x4
    6:   r1 <<= 0x3
    7:   r3 = 0x0 ll                # load from map M_2
    9:   r3 += r1
    A:   r1 = *(u64 *)(r3 + 0x0)
    B:   gotox r1                   # jump to target loaded from M_1 or M_2

During the check_cfg stage the verifier collects all the maps which
point inside the subprog being verified. When building the CFG,
the high 16 bits of insn_state are used, so this patch
(theoretically) supports jump tables of up to 2^16 slots.

At a later stage, in check_indirect_jump(), it is verified that
the register Rx was loaded from a particular instruction array.

Signed-off-by: Anton Protopopov <[email protected]>
Add support for the indirect jump instruction.

Example output from bpftool:

   0: (79) r3 = *(u64 *)(r1 +0)
   1: (25) if r3 > 0x4 goto pc+666
   2: (67) r3 <<= 3
   3: (18) r1 = 0xffffbeefspameggs
   5: (0f) r1 += r3
   6: (79) r1 = *(u64 *)(r1 +0)
   7: (0d) gotox r1

Signed-off-by: Anton Protopopov <[email protected]>
The linux-notes.rst document states that the indirect jump instruction
"is not currently supported by the verifier". Remove this part as it is
now outdated.

Signed-off-by: Anton Protopopov <[email protected]>
Commit 6c91870 ("libbpf: Refactor bpf_object__reloc_code") added
bpf_object__append_subprog_code() with incorrect indentation.
Use tabs instead. (This also makes a subsequent commit easier to read.)

Signed-off-by: Anton Protopopov <[email protected]>
Acked-by: Andrii Nakryiko <[email protected]>
For the v4 instruction set LLVM is allowed to generate indirect jumps
for switch statements and for 'goto *rX' assembly. Every such jump is
accompanied by the necessary metadata, e.g. (`llvm-objdump -Sr ...`):

       0:       r2 = 0x0 ll
                0000000000000030:  R_BPF_64_64  BPF.JT.0.0

Here BPF.JT.0.0 is a symbol residing in the .jumptables section:

    Symbol table:
       4: 0000000000000000   240 OBJECT  GLOBAL DEFAULT     4 BPF.JT.0.0

The -bpf-min-jump-table-entries LLVM option may be used to control the
minimal size of a switch which will be converted to an indirect jump.
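
For example, LLVM may lower a C switch like the following (a made-up
function, compiled with -mcpu=v4 and a sufficiently low
-bpf-min-jump-table-entries) into a load from such a BPF.JT.* table
followed by a gotox:

    int dispatch(unsigned int op)
    {
            switch (op) {
            case 0: return 10;
            case 1: return 11;
            case 2: return 12;
            case 3: return 13;
            case 4: return 14;
            case 5: return 15;
            default: return 0;
            }
    }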

Signed-off-by: Anton Protopopov <[email protected]>
Teach bpftool to recognize the instruction array map type.

Signed-off-by: Anton Protopopov <[email protected]>
Acked-by: Quentin Monnet <[email protected]>
Add a set of tests to validate core gotox functionality
without the need to rely on compilers.
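
For reference, such a test can emit the new instruction directly using
the encoding introduced earlier in the series (a fragment only; the
insn_array map setup and the address load are omitted):

    /* BPF_JMP|BPF_X|BPF_JA with dst_reg holding a PTR_TO_INSN value */
    BPF_RAW_INSN(BPF_JMP | BPF_JA | BPF_X, BPF_REG_1, 0, 0, 0),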

Signed-off-by: Anton Protopopov <[email protected]>
Add C-level selftests for indirect jumps to validate LLVM and libbpf
functionality. The tests are intentionally disabled: they can be run
locally by developers, but will not make the CI red.
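
A hedged sketch of what such a test body might look like, assuming the
BPF backend accepts GNU C computed gotos in this mode (labels, values
and the idx variable are made up):

    static const void *targets[] = { &&ret_one, &&ret_two };

    goto *targets[idx & 1];         /* LLVM may lower this to a gotox */
    ret_one:
            return 1;
    ret_two:
            return 2;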

Signed-off-by: Anton Protopopov <[email protected]>
@kernel-patches-daemon-bpf
Author

Upstream branch: 7361c86
series: https://patchwork.kernel.org/project/netdevbpf/list/?series=1013317
version: 6

@kernel-patches-review-bot

commit a468fcf91d43099e388a12fff8fa9aaee26e6843
Author: Anton Protopopov <[email protected]>

bpf, x86: allow indirect jumps to r8...r15

This commit refactors emit_indirect_jump() to accept BPF register
constants instead of native x86 register numbers, enabling support for
extended registers (R8-R15) in future indirect jump implementations.

> diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
> index c8e628410..7443465ce 100644
> --- a/arch/x86/net/bpf_jit_comp.c
> +++ b/arch/x86/net/bpf_jit_comp.c
> @@ -660,24 +660,38 @@ int bpf_arch_text_poke(void *ip, enum bpf_text_poke_type t,
>
>  #define EMIT_LFENCE()	EMIT3(0x0F, 0xAE, 0xE8)
>
> -static void emit_indirect_jump(u8 **pprog, int reg, u8 *ip)
> +static void __emit_indirect_jump(u8 **pprog, int reg, bool ereg)
> +{
> +	u8 *prog = *pprog;
> +
> +	if (ereg)
> +		EMIT1(0x41);
> +
> +	EMIT2(0xFF, 0xE0 + reg);
> +
> +	*pprog = prog;
> +}
> +
> +static void emit_indirect_jump(u8 **pprog, int bpf_reg, u8 *ip)
>  {
>  	u8 *prog = *pprog;
> +	int reg = reg2hex[bpf_reg];
> +	bool ereg = is_ereg(bpf_reg);
>
>  	if (cpu_feature_enabled(X86_FEATURE_INDIRECT_THUNK_ITS)) {
>  		OPTIMIZER_HIDE_VAR(reg);
>  		emit_jump(&prog, its_static_thunk(reg), ip);
                                                ^^^

Can its_static_thunk() select the wrong thunk for extended registers?
The function indexes __x86_indirect_its_thunk_array[] with reg directly,
but reg holds only the lower 3 bits from reg2hex[bpf_reg]. For extended
registers like BPF_REG_5 (which maps to R8), reg would be 0, causing
its_static_thunk() to return the RAX thunk instead of the R8 thunk.

The other code paths (RETPOLINE, RETPOLINE_LFENCE, and fallback) all use
reg + 8*ereg or __emit_indirect_jump(&prog, reg, ereg) which correctly
handle extended registers. Should the ITS path also use reg + 8*ereg?

For example, with BPF_REG_5:
  reg2hex[5] = 0, is_ereg(5) = true
  Expected thunk array index: 8 (for R8)
  Current ITS path: its_static_thunk(0) -> array[0] (RAX thunk)
  RETPOLINE path: array[0 + 8*1] = array[8] (R8 thunk, correct)

>  	} else if (cpu_feature_enabled(X86_FEATURE_RETPOLINE_LFENCE)) {
>  		EMIT_LFENCE();
> -		EMIT2(0xFF, 0xE0 + reg);
> +		__emit_indirect_jump(&prog, reg, ereg);
>  	} else if (cpu_feature_enabled(X86_FEATURE_RETPOLINE)) {
>  		OPTIMIZER_HIDE_VAR(reg);
>  		if (cpu_feature_enabled(X86_FEATURE_CALL_DEPTH))
> -			emit_jump(&prog, &__x86_indirect_jump_thunk_array[reg], ip);
> +			emit_jump(&prog, &__x86_indirect_jump_thunk_array[reg + 8*ereg], ip);
>  		else
> -			emit_jump(&prog, &__x86_indirect_thunk_array[reg], ip);
> +			emit_jump(&prog, &__x86_indirect_thunk_array[reg + 8*ereg], ip);
>  	} else {
> -		EMIT2(0xFF, 0xE0 + reg);	/* jmp *%\reg */
> +		__emit_indirect_jump(&prog, reg, ereg);
>  		if (IS_ENABLED(CONFIG_MITIGATION_RETPOLINE) || IS_ENABLED(CONFIG_MITIGATION_SLS))
>  			EMIT1(0xCC);		/* int3 */
>  	}
> @@ -797,7 +811,7 @@ static void emit_bpf_tail_call_indirect(struct bpf_prog *bpf_prog,
>  	 * rdi == ctx (1st arg)
>  	 * rcx == prog->bpf_func + X86_TAIL_CALL_OFFSET
>  	 */
> -	emit_indirect_jump(&prog, 1 /* rcx */, ip + (prog - start));
> +	emit_indirect_jump(&prog, BPF_REG_4 /* R4 -> rcx */, ip + (prog - start));
>
>  	/* out: */
>  	ctx->tail_call_indirect_label = prog - start;
> @@ -3551,7 +3565,7 @@ static int emit_bpf_dispatcher(u8 **pprog, int a, int b, s64 *progs, u8 *image,
>  		if (err)
>  			return err;
>
> -		emit_indirect_jump(&prog, 2 /* rdx */, image + (prog - buf));
> +		emit_indirect_jump(&prog, BPF_REG_3 /* R3 -> rdx */, image + (prog - buf));
>
>  		*pprog = prog;
>  		return 0;



AI reviewed your patch. Please fix the bug or email reply why it's not a bug.
See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md

In-Reply-To-Subject: bpf, x86: allow indirect jumps to r8...r15
CI run summary: https://github.com/kernel-patches/bpf/actions/runs/18635636744

@kernel-patches-review-bot

commit 2fff8febd97938dd95ef0044faa4ddae02706451
Author: Anton Protopopov <[email protected]>

bpf, x86: add support for indirect jumps

This patch adds support for a new BPF instruction that performs indirect
jumps to locations stored in registers with type PTR_TO_INSN. The verifier
tracks jump tables loaded from BPF_MAP_TYPE_INSN_ARRAY maps.

> diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
> index 7443465ce..b6e646039 100644
> --- a/arch/x86/net/bpf_jit_comp.c
> +++ b/arch/x86/net/bpf_jit_comp.c
> @@ -2629,6 +2629,9 @@ st:			if (is_imm8(insn->off))
>
>  			break;
>
> +		case BPF_JMP | BPF_JA | BPF_X:
> +			emit_indirect_jump(&prog, insn->dst_reg, image + addrs[i - 1]);
> +			break;
>  		case BPF_JMP | BPF_JA:
>  		case BPF_JMP32 | BPF_JA:
>  			if (BPF_CLASS(insn->code) == BPF_JMP) {

[ ... ]

> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index ae017c032..d2df21fde 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -6003,6 +6003,18 @@ static int check_map_kptr_access(struct bpf_verifier_env *env, u32 regno,
>  	return 0;
>  }
>
> +/*
> + * Return the size of the memory region accessible from a pointer to map value.
> + * For INSN_ARRAY maps whole bpf_insn_array->ips array is accessible.
> + */
> +static u32 map_mem_size(const struct bpf_map *map)
> +{
> +	if (map->map_type == BPF_MAP_TYPE_INSN_ARRAY)
> +		return map->max_entries * sizeof(long);
> +
> +	return map->value_size;
> +}
> +

[ ... ]

> @@ -17823,6 +17851,196 @@ static struct bpf_iarray *iarray_realloc(struct bpf_iarray *old, size_t n_elem)
>  	return new;
>  }
>
> +static int copy_insn_array(struct bpf_map *map, u32 start, u32 end, u32 *items)
> +{
> +	struct bpf_insn_array_value *value;
> +	u32 i;
> +
> +	for (i = start; i <= end; i++) {
> +		value = map->ops->map_lookup_elem(map, &i);
> +		if (!value)
> +			return -EINVAL;
> +		items[i - start] = value->xlated_off;
> +	}
> +	return 0;
> +}
> +
> +static int cmp_ptr_to_u32(const void *a, const void *b)
> +{
> +	return *(u32 *)a - *(u32 *)b;
> +}
> +
> +static int sort_insn_array_uniq(u32 *items, int cnt)
> +{
> +	int unique = 1;
> +	int i;
> +
> +	sort(items, cnt, sizeof(items[0]), cmp_ptr_to_u32, NULL);
> +
> +	for (i = 1; i < cnt; i++)
> +		if (items[i] != items[unique - 1])
> +			items[unique++] = items[i];
> +
> +	return unique;
> +}
> +
> +/*
> + * sort_unique({map[start], ..., map[end]}) into off
> + */
> +static int copy_insn_array_uniq(struct bpf_map *map, u32 start, u32 end, u32 *off)
> +{
> +	u32 n = end - start + 1;
> +	int err;
> +
> +	err = copy_insn_array(map, start, end, off);
> +	if (err)
> +		return err;
> +
> +	return sort_insn_array_uniq(off, n);
> +}
> +
> +/*
> + * Copy all unique offsets from the map
> + */
> +static struct bpf_iarray *jt_from_map(struct bpf_map *map)
> +{
> +	struct bpf_iarray *jt;
> +	int n;
> +
> +	jt = iarray_realloc(NULL, map->max_entries);
> +	if (!jt)
> +		return ERR_PTR(-ENOMEM);
> +
> +	n = copy_insn_array_uniq(map, 0, map->max_entries - 1, jt->items);
> +	if (n < 0) {
> +		kvfree(jt);
> +		return ERR_PTR(n);
> +	}
> +
> +	return jt;
> +}
> +
> +/*
> + * Find and collect all maps which fit in the subprog. Return the result as one
> + * combined jump table in jt->items (allocated with kvcalloc)
> + */
> +static struct bpf_iarray *jt_from_subprog(struct bpf_verifier_env *env,
> +					  int subprog_start, int subprog_end)
> +{
> +	struct bpf_iarray *jt = NULL;
> +	struct bpf_map *map;
> +	struct bpf_iarray *jt_cur;
> +	int i;
> +
> +	for (i = 0; i < env->insn_array_map_cnt; i++) {
> +		/*
> +		 * TODO (when needed): collect only jump tables, not static keys
> +		 * or maps for indirect calls
> +		 */
> +		map = env->insn_array_maps[i];
> +
> +		jt_cur = jt_from_map(map);
> +		if (IS_ERR(jt_cur)) {
> +			kvfree(jt);
> +			return jt_cur;
> +		}
> +
> +		/*
> +		 * This is enough to check one element. The full table is
> +		 * checked to fit inside the subprog later in create_jt()
> +		 */
> +		if (jt_cur->items[0] >= subprog_start && jt_cur->items[0] < subprog_end) {
> +			u32 old_cnt = jt ? jt->cnt : 0;
> +			jt = iarray_realloc(jt, old_cnt + jt_cur->cnt);
> +			if (!jt) {
> +				kvfree(jt_cur);
> +				return ERR_PTR(-ENOMEM);
> +			}
> +			memcpy(jt->items + old_cnt, jt_cur->items, jt_cur->cnt << 2);
> +		}
> +
> +		kvfree(jt_cur);
> +	}
> +
> +	if (!jt) {
> +		verbose(env, "no jump tables found for subprog starting at %u\n", subprog_start);
> +		return ERR_PTR(-EINVAL);
> +	}
> +
> +	jt->cnt = sort_insn_array_uniq(jt->items, jt->cnt);
> +	return jt;
> +}
> +
> +static struct bpf_iarray *
> +create_jt(int t, struct bpf_verifier_env *env, int fd)
> +{
> +	static struct bpf_subprog_info *subprog;
> +	int subprog_start, subprog_end;
> +	struct bpf_iarray *jt;
> +	int i;
> +
> +	subprog = bpf_find_containing_subprog(env, t);
> +	subprog_start = subprog->start;
> +	subprog_end = (subprog + 1)->start;
> +	jt = jt_from_subprog(env, subprog_start, subprog_end);
> +	if (IS_ERR(jt))
> +		return jt;
> +
> +	/* Check that the every element of the jump table fits within the given subprogram */
> +	for (i = 0; i < jt->cnt; i++) {
> +		if (jt->items[i] < subprog_start || jt->items[i] >= subprog_end) {
> +			verbose(env, "jump table for insn %d points outside of the subprog [%u,%u]",
> +					t, subprog_start, subprog_end);
> +			return ERR_PTR(-EINVAL);
                        ^^^^^^^^^^^^^^^^^^^^^^^^^

Can this leak the jt buffer? The jt_from_subprog() call above successfully
allocated the jump table, but if this validation loop finds an invalid
jump target, the function returns an error without freeing jt. The caller
visit_gotox_insn() won't store the pointer in env->insn_aux_data[t].jt,
so the buffer can never be freed later.

> +		}
> +	}
> +
> +	return jt;
> +}
> +
> +/* "conditional jump with N edges" */
> +static int visit_gotox_insn(int t, struct bpf_verifier_env *env, int fd)
> +{
> +	int *insn_stack = env->cfg.insn_stack;
> +	int *insn_state = env->cfg.insn_state;
> +	bool keep_exploring = false;
> +	struct bpf_iarray *jt;
> +	int i, w;
> +
> +	jt = env->insn_aux_data[t].jt;
> +	if (!jt) {
> +		jt = create_jt(t, env, fd);
> +		if (IS_ERR(jt))
> +			return PTR_ERR(jt);
> +
> +		env->insn_aux_data[t].jt = jt;
> +	}

[ ... ]

> @@ -19854,6 +20075,99 @@ static int process_bpf_exit_full(struct bpf_verifier_env *env,
>  	return PROCESS_BPF_EXIT;
>  }
>
> +static int indirect_jump_min_max_index(struct bpf_verifier_env *env,
> +				       int regno,
> +				       struct bpf_map *map,
> +				       u32 *pmin_index, u32 *pmax_index)
> +{
> +	struct bpf_reg_state *reg = reg_state(env, regno);
> +	u64 min_index, max_index;
> +	const u32 size = 8;
> +
> +	if (check_add_overflow(reg->umin_value, reg->off, &min_index) ||
> +		(min_index > (u64) U32_MAX * size)) {
> +		verbose(env, "the sum of R%u umin_value %llu and off %u is too big\n",
> +			     regno, reg->umin_value, reg->off);
> +		return -ERANGE;
> +	}
> +	if (check_add_overflow(reg->umax_value, reg->off, &max_index) ||
> +		(max_index > (u64) U32_MAX * size)) {
> +		verbose(env, "the sum of R%u umax_value %llu and off %u is too big\n",
> +			     regno, reg->umax_value, reg->off);
> +		return -ERANGE;
> +	}
> +
> +	min_index /= size;
> +	max_index /= size;
> +
> +	if (max_index >= map->max_entries) {
> +		verbose(env, "R%u points to outside of jump table: [%llu,%llu] max_entries %u\n",
> +			     regno, min_index, max_index, map->max_entries);
> +		return -EINVAL;
> +	}
> +
> +	*pmin_index = min_index;
> +	*pmax_index = max_index;
> +	return 0;
> +}

[ ... ]



AI reviewed your patch. Please fix the bug or email reply why it's not a bug.
See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md

In-Reply-To-Subject: bpf, x86: add support for indirect jumps
CI run summary: https://github.com/kernel-patches/bpf/actions/runs/18635636744

@kernel-patches-daemon-bpf
Author

Forwarding comment 3419938218 via email
In-Reply-To: [email protected]
Patch: https://patchwork.kernel.org/project/netdevbpf/patch/[email protected]/

@kernel-patches-daemon-bpf
Author

Forwarding comment 3419938448 via email
In-Reply-To: [email protected]
Patch: https://patchwork.kernel.org/project/netdevbpf/patch/[email protected]/
