Commit 488a854

Author: Alexei Starovoitov
Merge branch 'bpf-introduce-helper-for-populating-bpf_cpumask'
Emil Tsalapatis says:

====================
bpf: introduce helper for populating bpf_cpumask

Some BPF programs, such as scx schedulers, have their own internal CPU
mask types, which they must transform into struct bpf_cpumask instances
before passing them to scheduling-related kfuncs. There is currently no
way to efficiently populate the bitfield of a bpf_cpumask from BPF
memory, and programs must use multiple bpf_cpumask_[set, clear] calls to
do so. Introduce a kfunc helper to populate the bitfield of a
bpf_cpumask from valid BPF memory with a single call.

Changelog:
----------

v6->v7 (v6: https://lore.kernel.org/bpf/[email protected]/)

Addressed feedback by Hou Tao:
* Removed the RUN_TESTS invocation that caused tests to run twice
* Added an is_test_task guard to the new selftests
* Removed an extraneous __success attribute from existing selftests

v5->v6 (v5: https://lore.kernel.org/bpf/[email protected]/)

Addressed feedback by Hou Tao:
* Removed the __success attributes from the cpumask selftests
* Fixed a stale patch description that used the old function name

v4->v5 (v4: https://lore.kernel.org/bpf/[email protected]/)

Addressed feedback by Hou Tao:
* Re-added the tests in tools/selftests/bpf/prog_tests/cpumask.c; it
  turns out the selftest entries were not duplicates.
* Removed stray whitespace in a selftest.
* Added a patch for the selftest missing from prog_tests/cpumask.c.
* Explicitly annotated all cpumask selftests with __success.

The last patch could very well be its own cleanup patch, but I rolled it
into this series because it came up in the discussion. If the last patch
in the series has any issues, I'd be fine with applying the first 3
patches and dealing with it separately.

v3->v4 (v3: https://lore.kernel.org/bpf/[email protected]/)

* Removed the new tests from tools/selftests/bpf/prog_tests/cpumask.c
  because they were being run twice.

Addressed feedback by Alexei Starovoitov:
* Added the missing return value to a function's kdoc
* Added an additional patch fixing some missing kdoc fields in
  kernel/bpf/cpumask.c

Addressed feedback by Tejun Heo:
* Renamed the kfunc to bpf_cpumask_populate to avoid confusion with
  bitmap_fill()

v2->v3 (v2: https://lore.kernel.org/bpf/[email protected]/)

Addressed feedback by Alexei Starovoitov:
* Added back the patch descriptions dropped from v1->v2
* Elided the alignment check for archs with efficient unaligned accesses

v1->v2 (v1: https://lore.kernel.org/bpf/[email protected]/)

Addressed feedback by Hou Tao:
* Added a check that the input buffer is aligned to sizeof(long)
* Adjusted the input buffer size check to use bitmap_size()
* Added a selftest checking the bit pattern of the bpf_cpumask
* Moved all selftests into existing files

Signed-off-by: Emil Tsalapatis (Meta) <[email protected]>
====================

Link: https://patch.msgid.link/[email protected]
Signed-off-by: Alexei Starovoitov <[email protected]>
2 parents: 103b9ab + c06707f
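
To make the cover letter's motivation concrete, here is a minimal, hypothetical sketch of the before/after for a BPF program that keeps its own CPU mask as a plain bitmap. Only the four cpumask kfunc signatures come from the kernel; the hook point, mask size, and layout are illustrative assumptions:

#include <vmlinux.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

/* kfunc declarations, as in the selftests' cpumask_common.h */
struct bpf_cpumask *bpf_cpumask_create(void) __ksym __weak;
void bpf_cpumask_release(struct bpf_cpumask *cpumask) __ksym __weak;
void bpf_cpumask_set_cpu(u32 cpu, struct bpf_cpumask *cpumask) __ksym __weak;
int bpf_cpumask_populate(struct cpumask *cpumask, void *src, size_t src__sz) __ksym __weak;

#define MY_NR_CPUS 64			/* hypothetical internal limit */

u64 my_mask[MY_NR_CPUS / 64];		/* program-internal CPU mask */

char _license[] SEC("license") = "GPL";

SEC("tp_btf/task_newtask")
int BPF_PROG(convert_mask, struct task_struct *task, u64 clone_flags)
{
	struct bpf_cpumask *dst;
	int i;

	dst = bpf_cpumask_create();
	if (!dst)
		return 0;

	/* Before this series: one kfunc call per set bit. */
	for (i = 0; i < MY_NR_CPUS; i++) {
		if (my_mask[i / 64] & (1ULL << (i & 63)))
			bpf_cpumask_set_cpu(i, dst);
	}

	/* With this series: copy the whole bit pattern in one call.
	 * Returns -EACCES if sizeof(my_mask) < bitmap_size(nr_cpu_ids).
	 */
	if (bpf_cpumask_populate((struct cpumask *)dst, my_mask, sizeof(my_mask)))
		bpf_printk("bpf_cpumask_populate failed");

	bpf_cpumask_release(dst);
	return 0;
}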

5 files changed: 215 additions, 2 deletions

kernel/bpf/cpumask.c
53 additions, 0 deletions

@@ -45,6 +45,10 @@ __bpf_kfunc_start_defs();
  *
  * bpf_cpumask_create() allocates memory using the BPF memory allocator, and
  * will not block. It may return NULL if no memory is available.
+ *
+ * Return:
+ * * A pointer to a new struct bpf_cpumask instance on success.
+ * * NULL if the BPF memory allocator is out of memory.
  */
 __bpf_kfunc struct bpf_cpumask *bpf_cpumask_create(void)
 {
@@ -71,6 +75,10 @@ __bpf_kfunc struct bpf_cpumask *bpf_cpumask_create(void)
  * Acquires a reference to a BPF cpumask. The cpumask returned by this function
  * must either be embedded in a map as a kptr, or freed with
  * bpf_cpumask_release().
+ *
+ * Return:
+ * * The struct bpf_cpumask pointer passed to the function.
+ *
  */
 __bpf_kfunc struct bpf_cpumask *bpf_cpumask_acquire(struct bpf_cpumask *cpumask)
 {
@@ -106,6 +114,9 @@ CFI_NOSEAL(bpf_cpumask_release_dtor);
  *
  * Find the index of the first nonzero bit of the cpumask. A struct bpf_cpumask
  * pointer may be safely passed to this function.
+ *
+ * Return:
+ * * The index of the first nonzero bit in the struct cpumask.
  */
 __bpf_kfunc u32 bpf_cpumask_first(const struct cpumask *cpumask)
 {
@@ -119,6 +130,9 @@ __bpf_kfunc u32 bpf_cpumask_first(const struct cpumask *cpumask)
  *
  * Find the index of the first unset bit of the cpumask. A struct bpf_cpumask
  * pointer may be safely passed to this function.
+ *
+ * Return:
+ * * The index of the first zero bit in the struct cpumask.
  */
 __bpf_kfunc u32 bpf_cpumask_first_zero(const struct cpumask *cpumask)
 {
@@ -133,6 +147,9 @@ __bpf_kfunc u32 bpf_cpumask_first_zero(const struct cpumask *cpumask)
  *
  * Find the index of the first nonzero bit of the AND of two cpumasks.
  * struct bpf_cpumask pointers may be safely passed to @src1 and @src2.
+ *
+ * Return:
+ * * The index of the first bit that is nonzero in both cpumask instances.
  */
 __bpf_kfunc u32 bpf_cpumask_first_and(const struct cpumask *src1,
 				      const struct cpumask *src2)
@@ -414,12 +431,47 @@ __bpf_kfunc u32 bpf_cpumask_any_and_distribute(const struct cpumask *src1,
  * @cpumask: The cpumask being queried.
  *
  * Count the number of set bits in the given cpumask.
+ *
+ * Return:
+ * * The number of bits set in the mask.
  */
 __bpf_kfunc u32 bpf_cpumask_weight(const struct cpumask *cpumask)
 {
 	return cpumask_weight(cpumask);
 }
 
+/**
+ * bpf_cpumask_populate() - Populate the CPU mask from the contents of
+ * a BPF memory region.
+ *
+ * @cpumask: The cpumask being populated.
+ * @src: The BPF memory holding the bit pattern.
+ * @src__sz: Length of the BPF memory region in bytes.
+ *
+ * Return:
+ * * 0 if the struct cpumask * instance was populated successfully.
+ * * -EACCES if the memory region is too small to populate the cpumask.
+ * * -EINVAL if the memory region is not aligned to the size of a long
+ *   and the architecture does not support efficient unaligned accesses.
+ */
+__bpf_kfunc int bpf_cpumask_populate(struct cpumask *cpumask, void *src, size_t src__sz)
+{
+	unsigned long source = (unsigned long)src;
+
+	/* The memory region must be large enough to populate the entire CPU mask. */
+	if (src__sz < bitmap_size(nr_cpu_ids))
+		return -EACCES;
+
+	/* If avoiding unaligned accesses, the input region must be aligned to the nearest long. */
+	if (!IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS) &&
+	    !IS_ALIGNED(source, sizeof(long)))
+		return -EINVAL;
+
+	bitmap_copy(cpumask_bits(cpumask), src, nr_cpu_ids);
+
+	return 0;
+}
+
 __bpf_kfunc_end_defs();
 
 BTF_KFUNCS_START(cpumask_kfunc_btf_ids)
@@ -448,6 +500,7 @@ BTF_ID_FLAGS(func, bpf_cpumask_copy, KF_RCU)
 BTF_ID_FLAGS(func, bpf_cpumask_any_distribute, KF_RCU)
 BTF_ID_FLAGS(func, bpf_cpumask_any_and_distribute, KF_RCU)
 BTF_ID_FLAGS(func, bpf_cpumask_weight, KF_RCU)
+BTF_ID_FLAGS(func, bpf_cpumask_populate, KF_RCU)
 BTF_KFUNCS_END(cpumask_kfunc_btf_ids)
 
 static const struct btf_kfunc_id_set cpumask_kfunc_set = {
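
For intuition about the -EACCES check above: bitmap_size() returns the size of a bitmap in bytes, rounded up to a whole number of longs, so the minimum src__sz depends on the running kernel's nr_cpu_ids. The following standalone user-space sketch re-derives the same arithmetic (the macros are re-implemented here for illustration, and the sample nr_cpu_ids values are hypothetical):

#include <limits.h>
#include <stdio.h>

/* Illustrative re-implementation of the kernel's rounding helpers. */
#define BITS_PER_LONG		(sizeof(long) * CHAR_BIT)
#define BITS_TO_LONGS(nbits)	(((nbits) + BITS_PER_LONG - 1) / BITS_PER_LONG)
#define bitmap_size(nbits)	(BITS_TO_LONGS(nbits) * sizeof(long))

int main(void)
{
	/* Hypothetical nr_cpu_ids values; the real one is fixed at boot. */
	unsigned int ids[] = { 1, 8, 64, 65, 128 };

	for (unsigned int i = 0; i < sizeof(ids) / sizeof(ids[0]); i++)
		printf("nr_cpu_ids=%3u -> src__sz must be >= %zu bytes\n",
		       ids[i], bitmap_size(ids[i]));

	return 0;
}

On a 64-bit kernel this prints 8 bytes for up to 64 CPUs and 16 bytes for 65-128 CPUs, which is why the selftest below can reject a 1-byte source buffer with -EACCES.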

tools/testing/selftests/bpf/prog_tests/cpumask.c
4 additions, 1 deletion

@@ -25,6 +25,10 @@ static const char * const cpumask_success_testcases[] = {
 	"test_global_mask_nested_deep_rcu",
 	"test_global_mask_nested_deep_array_rcu",
 	"test_cpumask_weight",
+	"test_refcount_null_tracking",
+	"test_populate_reject_small_mask",
+	"test_populate_reject_unaligned",
+	"test_populate",
 };
 
 static void verify_success(const char *prog_name)
@@ -78,6 +82,5 @@ void test_cpumask(void)
 		verify_success(cpumask_success_testcases[i]);
 	}
 
-	RUN_TESTS(cpumask_success);
 	RUN_TESTS(cpumask_failure);
 }

tools/testing/selftests/bpf/progs/cpumask_common.h
1 addition, 0 deletions

@@ -61,6 +61,7 @@ u32 bpf_cpumask_any_distribute(const struct cpumask *src) __ksym __weak;
 u32 bpf_cpumask_any_and_distribute(const struct cpumask *src1,
 				   const struct cpumask *src2) __ksym __weak;
 u32 bpf_cpumask_weight(const struct cpumask *cpumask) __ksym __weak;
+int bpf_cpumask_populate(struct cpumask *cpumask, void *src, size_t src__sz) __ksym __weak;
 
 void bpf_rcu_read_lock(void) __ksym __weak;
 void bpf_rcu_read_unlock(void) __ksym __weak;
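
The declaration is __weak like the rest of this header, so a program referencing bpf_cpumask_populate() can still load on kernels that predate the kfunc, provided calls are guarded well enough for the verifier to prune them. A minimal, hypothetical guard using libbpf's bpf_ksym_exists() might look like this (the hook point and 64-bit pattern are illustrative assumptions):

#include <vmlinux.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

struct bpf_cpumask *bpf_cpumask_create(void) __ksym __weak;
void bpf_cpumask_release(struct bpf_cpumask *cpumask) __ksym __weak;
int bpf_cpumask_populate(struct cpumask *cpumask, void *src, size_t src__sz) __ksym __weak;

u64 pattern = 0xff;	/* hypothetical bit pattern: CPUs 0-7 */

char _license[] SEC("license") = "GPL";

SEC("tp_btf/task_newtask")
int BPF_PROG(guarded_populate, struct task_struct *task, u64 clone_flags)
{
	struct bpf_cpumask *mask;

	/* False on kernels without the weak kfunc; the verifier then
	 * dead-code-eliminates the call below so the program still loads.
	 */
	if (!bpf_ksym_exists(bpf_cpumask_populate))
		return 0;

	mask = bpf_cpumask_create();
	if (!mask)
		return 0;

	/* May still fail at runtime, e.g. -EACCES on kernels with more
	 * than 64 possible CPUs (sizeof(pattern) is only 8 bytes).
	 */
	bpf_cpumask_populate((struct cpumask *)mask, &pattern, sizeof(pattern));

	bpf_cpumask_release(mask);
	return 0;
}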

tools/testing/selftests/bpf/progs/cpumask_failure.c
38 additions, 0 deletions

@@ -222,3 +222,41 @@ int BPF_PROG(test_invalid_nested_array, struct task_struct *task, u64 clone_flags)
 
 	return 0;
 }
+
+SEC("tp_btf/task_newtask")
+__failure __msg("type=scalar expected=fp")
+int BPF_PROG(test_populate_invalid_destination, struct task_struct *task, u64 clone_flags)
+{
+	struct bpf_cpumask *invalid = (struct bpf_cpumask *)0x123456;
+	u64 bits;
+	int ret;
+
+	ret = bpf_cpumask_populate((struct cpumask *)invalid, &bits, sizeof(bits));
+	if (!ret)
+		err = 2;
+
+	return 0;
+}
+
+SEC("tp_btf/task_newtask")
+__failure __msg("leads to invalid memory access")
+int BPF_PROG(test_populate_invalid_source, struct task_struct *task, u64 clone_flags)
+{
+	void *garbage = (void *)0x123456;
+	struct bpf_cpumask *local;
+	int ret;
+
+	local = create_cpumask();
+	if (!local) {
+		err = 1;
+		return 0;
+	}
+
+	ret = bpf_cpumask_populate((struct cpumask *)local, garbage, 8);
+	if (!ret)
+		err = 2;
+
+	bpf_cpumask_release(local);
+
+	return 0;
+}

tools/testing/selftests/bpf/progs/cpumask_success.c
119 additions, 1 deletion

@@ -749,7 +749,6 @@ int BPF_PROG(test_cpumask_weight, struct task_struct *task, u64 clone_flags)
 }
 
 SEC("tp_btf/task_newtask")
-__success
 int BPF_PROG(test_refcount_null_tracking, struct task_struct *task, u64 clone_flags)
 {
 	struct bpf_cpumask *mask1, *mask2;
@@ -770,3 +769,122 @@ int BPF_PROG(test_refcount_null_tracking, struct task_struct *task, u64 clone_flags)
 	bpf_cpumask_release(mask2);
 	return 0;
 }
+
+SEC("tp_btf/task_newtask")
+int BPF_PROG(test_populate_reject_small_mask, struct task_struct *task, u64 clone_flags)
+{
+	struct bpf_cpumask *local;
+	u8 toofewbits;
+	int ret;
+
+	if (!is_test_task())
+		return 0;
+
+	local = create_cpumask();
+	if (!local)
+		return 0;
+
+	/* The kfunc should prevent this operation */
+	ret = bpf_cpumask_populate((struct cpumask *)local, &toofewbits, sizeof(toofewbits));
+	if (ret != -EACCES)
+		err = 2;
+
+	bpf_cpumask_release(local);
+
+	return 0;
+}
+
+/* Mask is guaranteed to be large enough for bpf_cpumask_t. */
+#define CPUMASK_TEST_MASKLEN (sizeof(cpumask_t))
+
+/* Add an extra word for the test_populate_reject_unaligned test. */
+u64 bits[CPUMASK_TEST_MASKLEN / 8 + 1];
+extern bool CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS __kconfig __weak;
+
+SEC("tp_btf/task_newtask")
+int BPF_PROG(test_populate_reject_unaligned, struct task_struct *task, u64 clone_flags)
+{
+	struct bpf_cpumask *mask;
+	char *src;
+	int ret;
+
+	if (!is_test_task())
+		return 0;
+
+	/* Skip if unaligned accesses are fine for this arch. */
+	if (CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS)
+		return 0;
+
+	mask = bpf_cpumask_create();
+	if (!mask) {
+		err = 1;
+		return 0;
+	}
+
+	/* Misalign the source array by a byte. */
+	src = &((char *)bits)[1];
+
+	ret = bpf_cpumask_populate((struct cpumask *)mask, src, CPUMASK_TEST_MASKLEN);
+	if (ret != -EINVAL)
+		err = 2;
+
+	bpf_cpumask_release(mask);
+
+	return 0;
+}
+
+
+SEC("tp_btf/task_newtask")
+int BPF_PROG(test_populate, struct task_struct *task, u64 clone_flags)
+{
+	struct bpf_cpumask *mask;
+	bool bit;
+	int ret;
+	int i;
+
+	if (!is_test_task())
+		return 0;
+
+	/* Set only odd bits. */
+	__builtin_memset(bits, 0xaa, CPUMASK_TEST_MASKLEN);
+
+	mask = bpf_cpumask_create();
+	if (!mask) {
+		err = 1;
+		return 0;
+	}
+
+	/* Pass the entire bits array, the kfunc will only copy the valid bits. */
+	ret = bpf_cpumask_populate((struct cpumask *)mask, bits, CPUMASK_TEST_MASKLEN);
+	if (ret) {
+		err = 2;
+		goto out;
+	}
+
+	/*
+	 * Test is there to appease the verifier. We cannot directly
+	 * access NR_CPUS, the upper bound for nr_cpus, so we infer
+	 * it from the size of cpumask_t.
+	 */
+	if (nr_cpus < 0 || nr_cpus >= CPUMASK_TEST_MASKLEN * 8) {
+		err = 3;
+		goto out;
+	}
+
+	bpf_for(i, 0, nr_cpus) {
+		/* Odd-numbered bits should be set, even ones unset. */
+		bit = bpf_cpumask_test_cpu(i, (const struct cpumask *)mask);
+		if (bit == (i % 2 != 0))
+			continue;
+
+		err = 4;
+		break;
+	}
+
+out:
+	bpf_cpumask_release(mask);
+
+	return 0;
+}
+
+#undef CPUMASK_TEST_MASKLEN
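
A side note on the 0xaa fill in test_populate: 0xaa is 0b10101010, so repeating it byte after byte sets exactly the odd-numbered bits of the bitmap, which is what the bpf_for loop above verifies. A standalone host-side check of that property (not part of the selftest):

#include <assert.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
	unsigned char buf[16];

	/* Same fill the selftest uses. */
	memset(buf, 0xaa, sizeof(buf));

	/* Bit i of the buffer is set iff i is odd: 0xaa == 0b10101010. */
	for (unsigned int i = 0; i < sizeof(buf) * 8; i++) {
		int bit = (buf[i / 8] >> (i % 8)) & 1;
		assert(bit == (int)(i % 2));
	}

	puts("0xaa fill sets exactly the odd-numbered bits");
	return 0;
}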
