
Commit 1a4b1a6

cupermir authored and Kernel Patches Daemon committed
bpf: verifier improvement in 32bit shift sign extension pattern
This patch improves the verifier to correctly compute bounds for the sign-extension compiler pattern composed of a left shift by 32 bits followed by an arithmetic (sign) right shift by 32 bits. The pattern handling in the verifier was limited to positive value bounds and would reset bound computation for negative values. The new code allows both positive and negative values for sign extension without compromising bound computation, letting the verifier pass such programs. This change is required by GCC, which generates this pattern; it was detected in the context of systemd, as described in the following GCC bugzilla: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=119731

Three new tests were added in verifier_subreg.c.

Signed-off-by: Cupertino Miranda <[email protected]>
Signed-off-by: Andrew Pinski <[email protected]>
Acked-by: Eduard Zingerman <[email protected]>
Cc: David Faust <[email protected]>
Cc: Jose Marchesi <[email protected]>
Cc: Elena Zannoni <[email protected]>
1 parent 3bd2c43 commit 1a4b1a6
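For context, a minimal illustration (not part of the patch) of the pattern in question: instead of a dedicated sign-extension instruction, GCC may widen a signed 32-bit subregister to 64 bits by shifting left by 32 and then arithmetically shifting right by 32. The helper name below is hypothetical; only the <<32 s>>32 idiom itself comes from the commit message. Note that the sign-preserving conversion and right shift rely on the usual two's-complement, arithmetic-shift behaviour of real compilers and of the BPF machine, not on guarantees from ISO C.

#include <stdint.h>

/* Hypothetical helper, for illustration only: the <<32 s>>32 idiom GCC can
 * emit when sign-extending an s32 held in the lower half of a register.
 */
static int64_t sext32_via_shifts(uint64_t reg)
{
	/* reg holds a 32-bit value in its lower half (upper half is zero) */
	reg <<= 32;			/* move the s32 value to the upper half */
	return (int64_t)reg >> 32;	/* arithmetic shift back, copying the sign bit */
}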

File tree

1 file changed: +7 -11 lines changed


kernel/bpf/verifier.c

Lines changed: 7 additions & 11 deletions
@@ -15301,21 +15301,17 @@ static void __scalar64_min_max_lsh(struct bpf_reg_state *dst_reg,
 				   u64 umin_val, u64 umax_val)
 {
 	/* Special case <<32 because it is a common compiler pattern to sign
-	 * extend subreg by doing <<32 s>>32. In this case if 32bit bounds are
-	 * positive we know this shift will also be positive so we can track
-	 * bounds correctly. Otherwise we lose all sign bit information except
-	 * what we can pick up from var_off. Perhaps we can generalize this
-	 * later to shifts of any length.
+	 * extend subreg by doing <<32 s>>32. smin/smax assignments are correct
+	 * because s32 bounds don't flip sign when shifting to the left by
+	 * 32bits.
 	 */
-	if (umin_val == 32 && umax_val == 32 && dst_reg->s32_max_value >= 0)
+	if (umin_val == 32 && umax_val == 32) {
 		dst_reg->smax_value = (s64)dst_reg->s32_max_value << 32;
-	else
-		dst_reg->smax_value = S64_MAX;
-
-	if (umin_val == 32 && umax_val == 32 && dst_reg->s32_min_value >= 0)
 		dst_reg->smin_value = (s64)dst_reg->s32_min_value << 32;
-	else
+	} else {
+		dst_reg->smax_value = S64_MAX;
 		dst_reg->smin_value = S64_MIN;
+	}
 
 	/* If we might shift our top bit out, then we know nothing */
 	if (dst_reg->umax_value > 1ULL << (63 - umax_val)) {
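To see why the new smin/smax assignments stay valid for negative 32-bit bounds, here is a minimal standalone sketch (not verifier code; the example bounds are made up). Shifting both s32 bounds into the upper 32 bits of an s64 preserves their sign and their ordering, so the verifier can keep tracking bounds instead of collapsing them to S64_MIN/S64_MAX:

#include <assert.h>
#include <stdint.h>

int main(void)
{
	/* Example 32-bit bounds, one of them negative. */
	int32_t s32_min = -3, s32_max = 7;

	/* Equivalent of the kernel's (s64)x << 32, written as a multiply to
	 * keep this user-space sketch free of the undefined behaviour that
	 * ISO C assigns to left-shifting negative values. */
	int64_t smin = (int64_t)s32_min * (INT64_C(1) << 32);
	int64_t smax = (int64_t)s32_max * (INT64_C(1) << 32);

	assert(smin < 0 && smax > 0);	/* the signs survive the shift        */
	assert(smin <= smax);		/* so does the ordering of the bounds */
	return 0;
}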
