
Commit d43ad9d

mannkafai authored and Alexei Starovoitov committed
bpf: Skip bounds adjustment for conditional jumps on same scalar register
When conditional jumps are performed on the same scalar register (e.g., r0 <= r0, r0 > r0, r0 < r0), the BPF verifier incorrectly attempts to adjust the register's min/max bounds. This leads to invalid range bounds and triggers a BUG warning.

The problematic BPF program:

  0: call bpf_get_prandom_u32
  1: w8 = 0x80000000
  2: r0 &= r8
  3: if r0 > r0 goto <exit>

Instruction 3 triggers the kernel warning:

  3: if r0 > r0 goto <exit>
  true_reg1: range bounds violation u64=[0x1, 0x0] s64=[0x1, 0x0] u32=[0x1, 0x0] s32=[0x1, 0x0] var_off=(0x0, 0x0)
  true_reg2: const tnum out of sync with range bounds u64=[0x0, 0xffffffffffffffff] s64=[0x8000000000000000, 0x7fffffffffffffff] var_off=(0x0, 0x0)

Comparing a register with itself should not change its bounds, and for most comparison operations the result of comparing a register with itself is known in advance (e.g., r0 == r0 is always true, r0 < r0 is always false).

Fix this by:
1. Enhancing is_scalar_branch_taken() to properly compute the branch direction for same-register comparisons across all BPF jump operations.
2. Adding an early return in reg_set_min_max() to skip bounds adjustment when the branch direction is unknown (e.g., BPF_JSET) on the same register.

The fix ensures that unnecessary bounds adjustments are skipped, preventing the verifier bug while maintaining correct branch direction analysis.

Reported-by: Kaiyan Mei <[email protected]>
Reported-by: Yinhao Hu <[email protected]>
Closes: https://lore.kernel.org/all/[email protected]/
Signed-off-by: KaFai Wan <[email protected]>
Acked-by: Eduard Zingerman <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Alexei Starovoitov <[email protected]>
1 parent 5dae745 commit d43ad9d

File tree

1 file changed: +31 −0 lines changed


kernel/bpf/verifier.c

Lines changed: 31 additions & 0 deletions
@@ -15993,6 +15993,30 @@ static int is_scalar_branch_taken(struct bpf_reg_state *reg1, struct bpf_reg_sta
 	s64 smin2 = is_jmp32 ? (s64)reg2->s32_min_value : reg2->smin_value;
 	s64 smax2 = is_jmp32 ? (s64)reg2->s32_max_value : reg2->smax_value;
 
+	if (reg1 == reg2) {
+		switch (opcode) {
+		case BPF_JGE:
+		case BPF_JLE:
+		case BPF_JSGE:
+		case BPF_JSLE:
+		case BPF_JEQ:
+			return 1;
+		case BPF_JGT:
+		case BPF_JLT:
+		case BPF_JSGT:
+		case BPF_JSLT:
+		case BPF_JNE:
+			return 0;
+		case BPF_JSET:
+			if (tnum_is_const(t1))
+				return t1.value != 0;
+			else
+				return (smin1 <= 0 && smax1 >= 0) ? -1 : 1;
+		default:
+			return -1;
+		}
+	}
+
 	switch (opcode) {
 	case BPF_JEQ:
 		/* constants, umin/umax and smin/smax checks would be

@@ -16439,6 +16463,13 @@ static int reg_set_min_max(struct bpf_verifier_env *env,
 	if (false_reg1->type != SCALAR_VALUE || false_reg2->type != SCALAR_VALUE)
 		return 0;
 
+	/* We compute branch direction for same SCALAR_VALUE registers in
+	 * is_scalar_branch_taken(). For unknown branch directions (e.g., BPF_JSET)
+	 * on the same registers, we don't need to adjust the min/max values.
+	 */
+	if (false_reg1 == false_reg2)
+		return 0;
+
 	/* fallthrough (FALSE) branch */
 	regs_refine_cond_op(false_reg1, false_reg2, rev_opcode(opcode), is_jmp32);
 	reg_bounds_sync(false_reg1);
