
Commit 9efa9e4

jrfastab authored and borkmann committed
bpf, selftests: Add tests to sock_ops for loading sk
Add tests to directly accesse sock_ops sk field. Then use it to ensure a bad pointer access will fault if something goes wrong. We do three tests: The first test ensures when we read sock_ops sk pointer into the same register that we don't fault as described earlier. Here r9 is chosen as the temp register. The xlated code is, 36: (7b) *(u64 *)(r1 +32) = r9 37: (61) r9 = *(u32 *)(r1 +28) 38: (15) if r9 == 0x0 goto pc+3 39: (79) r9 = *(u64 *)(r1 +32) 40: (79) r1 = *(u64 *)(r1 +0) 41: (05) goto pc+1 42: (79) r9 = *(u64 *)(r1 +32) The second test ensures the temp register selection does not collide with in-use register r9. Shown here r8 is chosen because r9 is the sock_ops pointer. The xlated code is as follows, 46: (7b) *(u64 *)(r9 +32) = r8 47: (61) r8 = *(u32 *)(r9 +28) 48: (15) if r8 == 0x0 goto pc+3 49: (79) r8 = *(u64 *)(r9 +32) 50: (79) r9 = *(u64 *)(r9 +0) 51: (05) goto pc+1 52: (79) r8 = *(u64 *)(r9 +32) And finally, ensure we didn't break the base case where dst_reg does not equal the source register, 56: (61) r2 = *(u32 *)(r1 +28) 57: (15) if r2 == 0x0 goto pc+1 58: (79) r2 = *(u64 *)(r1 +0) Notice it takes us an extra four instructions when src reg is the same as dst reg. One to save the reg, two to restore depending on the branch taken and a goto to jump over the second restore. Signed-off-by: John Fastabend <[email protected]> Signed-off-by: Daniel Borkmann <[email protected]> Acked-by: Song Liu <[email protected]> Acked-by: Martin KaFai Lau <[email protected]> Link: https://lore.kernel.org/bpf/159718355325.4728.4163036953345999636.stgit@john-Precision-5820-Tower
1 parent 8e0c151 commit 9efa9e4

File tree

1 file changed (+21, -0 lines)


tools/testing/selftests/bpf/progs/test_tcpbpf_kern.c

Lines changed: 21 additions & 0 deletions
@@ -82,6 +82,27 @@ int bpf_testcb(struct bpf_sock_ops *skops)
 		     :: [skops] "r"(skops)
 		     : "r9", "r8");
 
+	asm volatile (
+		     "r1 = %[skops];\n"
+		     "r1 = *(u64 *)(r1 +184);\n"
+		     "if r1 == 0 goto +1;\n"
+		     "r1 = *(u32 *)(r1 +4);\n"
+		     :: [skops] "r"(skops):"r1");
+
+	asm volatile (
+		     "r9 = %[skops];\n"
+		     "r9 = *(u64 *)(r9 +184);\n"
+		     "if r9 == 0 goto +1;\n"
+		     "r9 = *(u32 *)(r9 +4);\n"
+		     :: [skops] "r"(skops):"r9");
+
+	asm volatile (
+		     "r1 = %[skops];\n"
+		     "r2 = *(u64 *)(r1 +184);\n"
+		     "if r2 == 0 goto +1;\n"
+		     "r2 = *(u32 *)(r2 +4);\n"
+		     :: [skops] "r"(skops):"r1", "r2");
+
 	op = (int) skops->op;
 
 	update_event_map(op);
