
Commit 9077fc2

Hou Tao authored, Alexei Starovoitov committed
bpf: Use kmalloc_size_roundup() to adjust size_index

Commit d52b593 ("bpf: Adjust size_index according to the value of
KMALLOC_MIN_SIZE") uses KMALLOC_MIN_SIZE to adjust size_index, but as
reported by Nathan, the adjustment is not enough, because
__kmalloc_minalign() also decides the minimal alignment of a slab object,
as shown in new_kmalloc_cache(), and its value may be greater than
KMALLOC_MIN_SIZE (e.g., 64 bytes vs 8 bytes under a riscv QEMU VM).

Instead of invoking __kmalloc_minalign() in the bpf subsystem to find the
maximal alignment, just use kmalloc_size_roundup() directly to get the
corresponding slab object size for each allocation size. If these two
sizes are unmatched, adjust size_index to select a bpf_mem_cache with
unit_size equal to the object_size of the underlying slab cache for the
allocation size.

Fixes: 822fb26 ("bpf: Add a hint to allocated objects.")
Reported-by: Nathan Chancellor <[email protected]>
Closes: https://lore.kernel.org/bpf/[email protected]/
Signed-off-by: Hou Tao <[email protected]>
Tested-by: Emil Renner Berthing <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Alexei Starovoitov <[email protected]>
1 parent d1a783d commit 9077fc2

File tree: 1 file changed (+19, -25 lines)


kernel/bpf/memalloc.c: 19 additions & 25 deletions
@@ -965,37 +965,31 @@ void notrace *bpf_mem_cache_alloc_flags(struct bpf_mem_alloc *ma, gfp_t flags)
 	return !ret ? NULL : ret + LLIST_NODE_SZ;
 }
 
-/* Most of the logic is taken from setup_kmalloc_cache_index_table() */
 static __init int bpf_mem_cache_adjust_size(void)
 {
-	unsigned int size, index;
+	unsigned int size;
 
-	/* Normally KMALLOC_MIN_SIZE is 8-bytes, but it can be
-	 * up-to 256-bytes.
+	/* Adjusting the indexes in size_index() according to the object_size
+	 * of underlying slab cache, so bpf_mem_alloc() will select a
+	 * bpf_mem_cache with unit_size equal to the object_size of
+	 * the underlying slab cache.
+	 *
+	 * The maximal value of KMALLOC_MIN_SIZE and __kmalloc_minalign() is
+	 * 256-bytes, so only do adjustment for [8-bytes, 192-bytes].
 	 */
-	size = KMALLOC_MIN_SIZE;
-	if (size <= 192)
-		index = size_index[(size - 1) / 8];
-	else
-		index = fls(size - 1) - 1;
-	for (size = 8; size < KMALLOC_MIN_SIZE && size <= 192; size += 8)
-		size_index[(size - 1) / 8] = index;
+	for (size = 192; size >= 8; size -= 8) {
+		unsigned int kmalloc_size, index;
 
-	/* The minimal alignment is 64-bytes, so disable 96-bytes cache and
-	 * use 128-bytes cache instead.
-	 */
-	if (KMALLOC_MIN_SIZE >= 64) {
-		index = size_index[(128 - 1) / 8];
-		for (size = 64 + 8; size <= 96; size += 8)
-			size_index[(size - 1) / 8] = index;
-	}
+		kmalloc_size = kmalloc_size_roundup(size);
+		if (kmalloc_size == size)
+			continue;
 
-	/* The minimal alignment is 128-bytes, so disable 192-bytes cache and
-	 * use 256-bytes cache instead.
-	 */
-	if (KMALLOC_MIN_SIZE >= 128) {
-		index = fls(256 - 1) - 1;
-		for (size = 128 + 8; size <= 192; size += 8)
+		if (kmalloc_size <= 192)
+			index = size_index[(kmalloc_size - 1) / 8];
+		else
+			index = fls(kmalloc_size - 1) - 1;
+		/* Only overwrite if necessary */
+		if (size_index[(size - 1) / 8] != index)
 			size_index[(size - 1) / 8] = index;
 	}
 
