
Commit 9dbccf3

cupermir authored and Kernel Patches Daemon committed
bpf: verifier improvement in 32bit shift sign extension pattern
This patch improves the verifier to correctly compute bounds for the sign extension compiler pattern composed of a left shift by 32 bits followed by an arithmetic (sign) right shift by 32 bits. The pattern handling in the verifier was limited to positive value bounds and would reset bound computation for negative values. The new code allows both positive and negative values for sign extension without compromising bound computation, so the verifier still passes. This change is required by GCC, which generates such a pattern; it was detected in the context of systemd, as described in the following GCC bugzilla: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=119731

Three new tests were added in verifier_subreg.c.

Signed-off-by: Cupertino Miranda <[email protected]>
Signed-off-by: Andrew Pinski <[email protected]>
Acked-by: Eduard Zingerman <[email protected]>
Cc: David Faust <[email protected]>
Cc: Jose Marchesi <[email protected]>
Cc: Elena Zannoni <[email protected]>
1 parent 037d2fc commit 9dbccf3

File tree

1 file changed: +7, -11 lines

kernel/bpf/verifier.c

Lines changed: 7 additions & 11 deletions
@@ -15300,21 +15300,17 @@ static void __scalar64_min_max_lsh(struct bpf_reg_state *dst_reg,
 				   u64 umin_val, u64 umax_val)
 {
 	/* Special case <<32 because it is a common compiler pattern to sign
-	 * extend subreg by doing <<32 s>>32. In this case if 32bit bounds are
-	 * positive we know this shift will also be positive so we can track
-	 * bounds correctly. Otherwise we lose all sign bit information except
-	 * what we can pick up from var_off. Perhaps we can generalize this
-	 * later to shifts of any length.
+	 * extend subreg by doing <<32 s>>32. smin/smax assignments are correct
+	 * because s32 bounds don't flip sign when shifting to the left by
+	 * 32bits.
 	 */
-	if (umin_val == 32 && umax_val == 32 && dst_reg->s32_max_value >= 0)
+	if (umin_val == 32 && umax_val == 32) {
 		dst_reg->smax_value = (s64)dst_reg->s32_max_value << 32;
-	else
-		dst_reg->smax_value = S64_MAX;
-
-	if (umin_val == 32 && umax_val == 32 && dst_reg->s32_min_value >= 0)
 		dst_reg->smin_value = (s64)dst_reg->s32_min_value << 32;
-	else
+	} else {
+		dst_reg->smax_value = S64_MAX;
 		dst_reg->smin_value = S64_MIN;
+	}
 
 	/* If we might shift our top bit out, then we know nothing */
 	if (dst_reg->umax_value > 1ULL << (63 - umax_val)) {
