
Commit 905889b

btrfs: send: Proactively round up to kmalloc bucket size
Instead of discovering the kmalloc bucket size _after_ allocation, round up proactively so the allocation is explicitly made for the full size, allowing the compiler to correctly reason about the resulting size of the buffer through the existing __alloc_size() hint.

Cc: Chris Mason <[email protected]>
Cc: Josef Bacik <[email protected]>
Cc: [email protected]
Acked-by: David Sterba <[email protected]>
Link: https://lore.kernel.org/lkml/[email protected]
Signed-off-by: Kees Cook <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
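For context, below is a minimal, self-contained sketch (not taken from the btrfs code or from this commit) of the allocation pattern the commit message describes: rather than calling ksize() after the allocation to learn how much usable space the kmalloc bucket really provides, the request is rounded up with kmalloc_size_roundup() before allocating, so the size passed to the allocator, the size recorded by the caller, and the size visible through the __alloc_size() hint all agree. The type and function names here (demo_buf, demo_buf_grow) are invented purely for illustration.

#include <linux/errno.h>
#include <linux/slab.h>

/* Invented example type; not part of btrfs. */
struct demo_buf {
	char *data;
	size_t len;	/* usable size we actually requested */
};

static int demo_buf_grow(struct demo_buf *b, size_t want)
{
	char *tmp;

	/* Fast path: the existing allocation already covers the request. */
	if (b->len >= want)
		return 0;

	/*
	 * Round the request up to the next kmalloc bucket size *before*
	 * allocating, instead of probing the real size with ksize()
	 * afterwards.  The size given to krealloc() (and thus seen by its
	 * __alloc_size() hint) then matches the size recorded below.
	 */
	want = kmalloc_size_roundup(want);

	tmp = krealloc(b->data, want, GFP_KERNEL);
	if (!tmp)
		return -ENOMEM;

	b->data = tmp;
	b->len = want;
	return 0;
}

In the btrfs change itself the same idea is applied to fs_path_ensure_buf(): len is rounded up once, used for the subsequent allocation, and then stored in p->buf_len in place of the old ksize() probe, as the diff below shows.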
1 parent cd536db commit 905889b

fs/btrfs/send.c

Lines changed: 6 additions & 5 deletions
@@ -438,6 +438,11 @@ static int fs_path_ensure_buf(struct fs_path *p, int len)
 	path_len = p->end - p->start;
 	old_buf_len = p->buf_len;
 
+	/*
+	 * Allocate to the next largest kmalloc bucket size, to let
+	 * the fast path happen most of the time.
+	 */
+	len = kmalloc_size_roundup(len);
 	/*
 	 * First time the inline_buf does not suffice
 	 */
@@ -451,11 +456,7 @@ static int fs_path_ensure_buf(struct fs_path *p, int len)
 	if (!tmp_buf)
 		return -ENOMEM;
 	p->buf = tmp_buf;
-	/*
-	 * The real size of the buffer is bigger, this will let the fast path
-	 * happen most of the time
-	 */
-	p->buf_len = ksize(p->buf);
+	p->buf_len = len;
 
 	if (p->reversed) {
 		tmp_buf = p->buf + old_buf_len - path_len - 1;
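Note that the intent of the removed comment is preserved: p->buf_len still ends up holding the full bucket size, so the size check at the top of fs_path_ensure_buf() keeps short-circuiting most calls. The difference is that the size is now known before the allocation rather than discovered afterwards with ksize(), which is what allows the __alloc_size() hint to describe the buffer correctly.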
