
Commit a8b242d

Alexei Starovoitov authored and Andrii Nakryiko committed
bpf: Introduce "volatile compare" macros
Compilers optimize conditional operators at will, but often bpf programmers
want to force compilers to keep the same operator in asm as it's written
in C. Introduce bpf_cmp_likely/unlikely(var1, conditional_op, var2) macros
that can be used as:

-		if (seen >= 1000)
+		if (bpf_cmp_unlikely(seen, >=, 1000))

The macros take advantage of BPF assembly that is C like. The macros check
the sign of variable 'seen' and emit either a signed or an unsigned compare.

For example:

	int a;
	bpf_cmp_unlikely(a, >, 0) will be translated to 'if rX s> 0 goto' in BPF assembly.

	unsigned int a;
	bpf_cmp_unlikely(a, >, 0) will be translated to 'if rX > 0 goto' in BPF assembly.

C type conversions coupled with a comparison operator are tricky:

	int i = -1;
	unsigned int j = 1;
	if (i < j) // this is false.

	long i = -1;
	unsigned int j = 1;
	if (i < j) // this is true.

Make sure the BPF program is compiled with -Wsign-compare; then the macros
will catch the mistake.

The macros check only the LHS (left hand side) to figure out the sign of the
compare. 'if 0 < rX goto' is not allowed in the assembly, so the users have
to use a variable on the LHS anyway.

The patch updates a few tests to demonstrate the use of the macros.

The macros also make it possible to use BPF_JSET in C code, since LLVM
doesn't generate it at present. For example:

	if (i & j) compiles into r0 &= r1; if r0 == 0 goto,

while

	if (bpf_cmp_unlikely(i, &, j)) compiles into if r0 & r1 goto.

Note that the macros have to be careful with the RHS assembly predicate,
since for:

	u64 __rhs = 1ull << 42;
	asm goto("if r0 < %[rhs] goto +1" :: [rhs] "ri" (__rhs));

LLVM will silently truncate the 64-bit constant into an s32 imm.

Note that in

	[lhs] "r"((short)LHS)

the type cast is a workaround for an LLVM issue. When LHS is exactly 32-bit,
LLVM emits redundant <<=32, >>=32 to zero the upper 32 bits. When LHS is a
64-, 16-, or 8-bit variable there are no shifts. When LHS is 32-bit the (u64)
cast doesn't help. Hence use the (short) cast. It does _not_ truncate the
variable before it's assigned to a register.

Traditional likely()/unlikely() macros that use __builtin_expect(!!(x), 1 or 0)
have no effect on these macros, hence the macros implement the logic manually.
bpf_cmp_unlikely() preserves the compare operator as-is, while bpf_cmp_likely()
flips the compare.

Consider two cases:

A.
	for() {
		if (foo >= 10) {
			bar += foo;
		}
		other code;
	}

B.
	for() {
		if (foo >= 10)
			break;
		other code;
	}

It's ok to use either the bpf_cmp_likely or the bpf_cmp_unlikely macro in both
cases, but consider that 'break' is effectively 'goto out_of_the_loop'. Hence
it's better to use bpf_cmp_unlikely in case B, while 'bar += foo' is better
kept as the 'fallthrough' == likely code path in case A.

When it's written as:

A.
	for() {
		if (bpf_cmp_likely(foo, >=, 10)) {
			bar += foo;
		}
		other code;
	}

B.
	for() {
		if (bpf_cmp_unlikely(foo, >=, 10))
			break;
		other code;
	}

the assembly will look like:

A.
	for() {
		if r1 < 10 goto L1;
			bar += foo;
	L1:
		other code;
	}

B.
	for() {
		if r1 >= 10 goto L2;
		other code;
	}
	L2:

The choice of bpf_cmp_likely vs bpf_cmp_unlikely changes the basic block
layout, hence it will greatly influence the verification process. The number
of processed instructions will differ, since the verifier walks the
fallthrough branch first.

Signed-off-by: Alexei Starovoitov <[email protected]>
Signed-off-by: Andrii Nakryiko <[email protected]>
Acked-by: Daniel Borkmann <[email protected]>
Acked-by: Jiri Olsa <[email protected]>
Acked-by: Kumar Kartikeya Dwivedi <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]
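For orientation, here is a minimal, hypothetical BPF program sketching the
usage described above; the program type, section name, field accesses and
return values are illustrative only and are not part of this patch:

	/* Hypothetical usage sketch; names and semantics are made up. */
	#include <vmlinux.h>
	#include <bpf/bpf_helpers.h>
	#include "bpf_experimental.h"

	char _license[] SEC("license") = "GPL";

	SEC("socket")
	int cmp_demo(struct __sk_buff *skb)
	{
		unsigned int len = skb->len;	/* unsigned LHS */
		int mark = skb->mark;		/* signed LHS */

		/* unsigned compare: emitted as 'if rX > 1000 goto ...' */
		if (bpf_cmp_unlikely(len, >, 1000))
			return 0;

		/* signed compare: emitted as 'if rX s< 0 goto ...' */
		if (bpf_cmp_unlikely(mark, <, 0))
			return 0;

		/* BPF_JSET: emitted as 'if rX & 0x4 goto ...' instead of '&=' plus '== 0' */
		if (bpf_cmp_unlikely(mark, &, 0x4))
			return 0;

		return skb->len;
	}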
1 parent 495d2d8 commit a8b242d

File tree

3 files changed: +80 additions, -12 deletions

tools/testing/selftests/bpf/bpf_experimental.h
tools/testing/selftests/bpf/progs/exceptions.c
tools/testing/selftests/bpf/progs/iters_task_vma.c

tools/testing/selftests/bpf/bpf_experimental.h

Lines changed: 69 additions & 0 deletions
@@ -254,6 +254,75 @@ extern void bpf_throw(u64 cookie) __ksym;
 	} \
 })
 
+#define __cmp_cannot_be_signed(x) \
+	__builtin_strcmp(#x, "==") == 0 || __builtin_strcmp(#x, "!=") == 0 || \
+	__builtin_strcmp(#x, "&") == 0
+
+#define __is_signed_type(type) (((type)(-1)) < (type)1)
+
+#define __bpf_cmp(LHS, OP, SIGN, PRED, RHS, DEFAULT) \
+	({ \
+		__label__ l_true; \
+		bool ret = DEFAULT; \
+		asm volatile goto("if %[lhs] " SIGN #OP " %[rhs] goto %l[l_true]" \
+				  :: [lhs] "r"((short)LHS), [rhs] PRED (RHS) :: l_true); \
+		ret = !DEFAULT; \
+l_true: \
+		ret; \
+	})
+
+/* C type conversions coupled with comparison operator are tricky.
+ * Make sure BPF program is compiled with -Wsign-compare then
+ * __lhs OP __rhs below will catch the mistake.
+ * Be aware that we check only __lhs to figure out the sign of compare.
+ */
+#define _bpf_cmp(LHS, OP, RHS, NOFLIP) \
+	({ \
+		typeof(LHS) __lhs = (LHS); \
+		typeof(RHS) __rhs = (RHS); \
+		bool ret; \
+		_Static_assert(sizeof(&(LHS)), "1st argument must be an lvalue expression"); \
+		(void)(__lhs OP __rhs); \
+		if (__cmp_cannot_be_signed(OP) || !__is_signed_type(typeof(__lhs))) { \
+			if (sizeof(__rhs) == 8) \
+				ret = __bpf_cmp(__lhs, OP, "", "r", __rhs, NOFLIP); \
+			else \
+				ret = __bpf_cmp(__lhs, OP, "", "i", __rhs, NOFLIP); \
+		} else { \
+			if (sizeof(__rhs) == 8) \
+				ret = __bpf_cmp(__lhs, OP, "s", "r", __rhs, NOFLIP); \
+			else \
+				ret = __bpf_cmp(__lhs, OP, "s", "i", __rhs, NOFLIP); \
+		} \
+		ret; \
+	})
+
+#ifndef bpf_cmp_unlikely
+#define bpf_cmp_unlikely(LHS, OP, RHS) _bpf_cmp(LHS, OP, RHS, true)
+#endif
+
+#ifndef bpf_cmp_likely
+#define bpf_cmp_likely(LHS, OP, RHS) \
+	({ \
+		bool ret; \
+		if (__builtin_strcmp(#OP, "==") == 0) \
+			ret = _bpf_cmp(LHS, !=, RHS, false); \
+		else if (__builtin_strcmp(#OP, "!=") == 0) \
+			ret = _bpf_cmp(LHS, ==, RHS, false); \
+		else if (__builtin_strcmp(#OP, "<=") == 0) \
+			ret = _bpf_cmp(LHS, >, RHS, false); \
+		else if (__builtin_strcmp(#OP, "<") == 0) \
+			ret = _bpf_cmp(LHS, >=, RHS, false); \
+		else if (__builtin_strcmp(#OP, ">") == 0) \
+			ret = _bpf_cmp(LHS, <=, RHS, false); \
+		else if (__builtin_strcmp(#OP, ">=") == 0) \
+			ret = _bpf_cmp(LHS, <, RHS, false); \
+		else \
+			(void) "bug"; \
+		ret; \
+	})
+#endif
+
 /* Description
  *	Assert that a conditional expression is true.
  * Returns
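As a rough, hand-written illustration of the macros above (not literal
preprocessor output), a call like bpf_cmp_unlikely(seen, >=, 1000) on an
'unsigned int seen' reduces to roughly the following; the wrapper function is
made up for exposition and the usual selftest includes (vmlinux.h,
bpf_helpers.h) are assumed:

	/* Rough sketch of bpf_cmp_unlikely(seen, >=, 1000) for unsigned 'seen'. */
	static __always_inline bool cmp_unlikely_sketch(unsigned int seen)
	{
		unsigned int __lhs = seen;	/* typeof(LHS) */
		int __rhs = 1000;		/* typeof(RHS): 4 bytes, so "i" constraint */
		bool ret = true;		/* DEFAULT == NOFLIP == true for the "unlikely" flavor */

		(void)(__lhs >= __rhs);		/* lets -Wsign-compare catch mixed signedness */

		/* unsigned LHS, so no "s" prefix: plain 'if rX >= 1000 goto' */
		asm volatile goto("if %[lhs] >= %[rhs] goto %l[l_true]"
				  :: [lhs] "r"((short)__lhs), [rhs] "i" (__rhs)
				  :: l_true);
		ret = false;			/* fallthrough: the condition was false */
	l_true:
		return ret;			/* the real macro uses a local __label__ in a ({ }) expression */
	}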

tools/testing/selftests/bpf/progs/exceptions.c

Lines changed: 10 additions & 10 deletions
@@ -210,39 +210,39 @@ __noinline int assert_zero_gfunc(u64 c)
 {
 	volatile u64 cookie = c;
 
-	bpf_assert_eq(cookie, 0);
+	bpf_assert(bpf_cmp_unlikely(cookie, ==, 0));
 	return 0;
 }
 
 __noinline int assert_neg_gfunc(s64 c)
 {
 	volatile s64 cookie = c;
 
-	bpf_assert_lt(cookie, 0);
+	bpf_assert(bpf_cmp_unlikely(cookie, <, 0));
 	return 0;
 }
 
 __noinline int assert_pos_gfunc(s64 c)
 {
 	volatile s64 cookie = c;
 
-	bpf_assert_gt(cookie, 0);
+	bpf_assert(bpf_cmp_unlikely(cookie, >, 0));
 	return 0;
 }
 
 __noinline int assert_negeq_gfunc(s64 c)
 {
 	volatile s64 cookie = c;
 
-	bpf_assert_le(cookie, -1);
+	bpf_assert(bpf_cmp_unlikely(cookie, <=, -1));
 	return 0;
 }
 
 __noinline int assert_poseq_gfunc(s64 c)
 {
 	volatile s64 cookie = c;
 
-	bpf_assert_ge(cookie, 1);
+	bpf_assert(bpf_cmp_unlikely(cookie, >=, 1));
 	return 0;
 }
 
@@ -258,39 +258,39 @@ __noinline int assert_zero_gfunc_with(u64 c)
 {
 	volatile u64 cookie = c;
 
-	bpf_assert_eq_with(cookie, 0, cookie + 100);
+	bpf_assert_with(bpf_cmp_unlikely(cookie, ==, 0), cookie + 100);
 	return 0;
 }
 
 __noinline int assert_neg_gfunc_with(s64 c)
 {
 	volatile s64 cookie = c;
 
-	bpf_assert_lt_with(cookie, 0, cookie + 100);
+	bpf_assert_with(bpf_cmp_unlikely(cookie, <, 0), cookie + 100);
 	return 0;
 }
 
 __noinline int assert_pos_gfunc_with(s64 c)
 {
 	volatile s64 cookie = c;
 
-	bpf_assert_gt_with(cookie, 0, cookie + 100);
+	bpf_assert_with(bpf_cmp_unlikely(cookie, >, 0), cookie + 100);
 	return 0;
 }
 
 __noinline int assert_negeq_gfunc_with(s64 c)
 {
 	volatile s64 cookie = c;
 
-	bpf_assert_le_with(cookie, -1, cookie + 100);
+	bpf_assert_with(bpf_cmp_unlikely(cookie, <=, -1), cookie + 100);
 	return 0;
 }
 
 __noinline int assert_poseq_gfunc_with(s64 c)
 {
 	volatile s64 cookie = c;
 
-	bpf_assert_ge_with(cookie, 1, cookie + 100);
+	bpf_assert_with(bpf_cmp_unlikely(cookie, >=, 1), cookie + 100);
 	return 0;
 }
 

tools/testing/selftests/bpf/progs/iters_task_vma.c

Lines changed: 1 addition & 2 deletions
@@ -28,9 +28,8 @@ int iter_task_vma_for_each(const void *ctx)
 		return 0;
 
 	bpf_for_each(task_vma, vma, task, 0) {
-		if (seen >= 1000)
+		if (bpf_cmp_unlikely(seen, >=, 1000))
 			break;
-		barrier_var(seen);
 
 		vm_ranges[seen].vm_start = vma->vm_start;
 		vm_ranges[seen].vm_end = vma->vm_end;
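The change above is an instance of case B from the commit message: 'break'
leaves the loop, so the unlikely form keeps it off the fallthrough path. For
contrast, here is a hypothetical sketch of case A, where the work stays on the
likely path; the program type, section, names and bounds are made up:

	/* Hypothetical "case A" sketch: accumulation stays on the fallthrough path. */
	#include <vmlinux.h>
	#include <bpf/bpf_helpers.h>
	#include "bpf_experimental.h"

	char _license[] SEC("license") = "GPL";

	SEC("raw_tp/sys_enter")
	int sum_big_values(void *ctx)
	{
		int i, foo, bar = 0;

		for (i = 0; i < 100; i++) {
			foo = bpf_get_prandom_u32();
			/* compiles to 'if rX s< 10 goto L1; bar += foo; L1: ...' */
			if (bpf_cmp_likely(foo, >=, 10))
				bar += foo;
		}
		return bar;
	}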
