
Commit 4ef3982

Christoph Hellwig authored and cmaiolino committed
xfs: remove the kmalloc to page allocator fallback
Since commit 59bb479 ("mm, sl[aou]b: guarantee natural alignment for kmalloc(power-of-two)"), kmalloc and friends guarantee that power of two sized allocations are naturally aligned. Limit our use of kmalloc for buffers to these power of two sizes and remove the fallback to the page allocator for this case, but keep a check in addition to trusting the slab allocator to get the alignment right.

Also refactor the kmalloc path to reuse various calculations for the size and gfp flags.

Signed-off-by: Christoph Hellwig <[email protected]>
Reviewed-by: Darrick J. Wong <[email protected]>
Signed-off-by: Carlos Maiolino <[email protected]>
1 parent 50a524e commit 4ef3982
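For context, a minimal userspace C sketch (not XFS or kernel code) of the property the commit message relies on: a power-of-two sized allocation that is aligned to its own size can never straddle a page boundary. Here aligned_alloc() stands in for kmalloc's natural-alignment guarantee, and the two helpers roughly mirror the old page-span check and the new IS_ALIGNED() canary; all names are illustrative.

/*
 * Minimal userspace sketch, not XFS or kernel code: shows that a
 * power-of-two sized allocation aligned to its own size cannot span
 * two pages.  aligned_alloc() stands in for kmalloc's natural-alignment
 * guarantee.  Build with any C11 compiler.
 */
#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define SKETCH_PAGE_SIZE	4096UL

static int sketch_is_aligned(uintptr_t addr, size_t size)
{
	return (addr & (size - 1)) == 0;
}

static int sketch_spans_two_pages(uintptr_t addr, size_t size)
{
	return ((addr + size - 1) & ~(SKETCH_PAGE_SIZE - 1)) !=
	       (addr & ~(SKETCH_PAGE_SIZE - 1));
}

int main(void)
{
	size_t size = 1024;	/* power of two, smaller than a page */
	void *p = aligned_alloc(size, size);

	if (!p)
		return 1;

	/* naturally aligned implies the buffer stays within one page */
	assert(sketch_is_aligned((uintptr_t)p, size));
	assert(!sketch_spans_two_pages((uintptr_t)p, size));
	printf("%zu-byte buffer at %p fits in one page\n", size, p);
	free(p);
	return 0;
}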

File tree

1 file changed: +24 -24 lines changed

fs/xfs/xfs_buf.c

Lines changed: 24 additions & 24 deletions
@@ -243,23 +243,23 @@ xfs_buf_free(
 
 static int
 xfs_buf_alloc_kmem(
-	struct xfs_buf	*bp,
-	xfs_buf_flags_t	flags)
+	struct xfs_buf	*bp,
+	size_t		size,
+	gfp_t		gfp_mask)
 {
-	gfp_t		gfp_mask = GFP_KERNEL | __GFP_NOLOCKDEP | __GFP_NOFAIL;
-	size_t		size = BBTOB(bp->b_length);
-
-	/* Assure zeroed buffer for non-read cases. */
-	if (!(flags & XBF_READ))
-		gfp_mask |= __GFP_ZERO;
+	ASSERT(is_power_of_2(size));
+	ASSERT(size < PAGE_SIZE);
 
-	bp->b_addr = kmalloc(size, gfp_mask);
+	bp->b_addr = kmalloc(size, gfp_mask | __GFP_NOFAIL);
 	if (!bp->b_addr)
 		return -ENOMEM;
 
-	if (((unsigned long)(bp->b_addr + size - 1) & PAGE_MASK) !=
-	    ((unsigned long)bp->b_addr & PAGE_MASK)) {
-		/* b_addr spans two pages - use alloc_page instead */
+	/*
+	 * Slab guarantees that we get back naturally aligned allocations for
+	 * power of two sizes.  Keep this check as the canary in the coal mine
+	 * if anything changes in slab.
+	 */
+	if (WARN_ON_ONCE(!IS_ALIGNED((unsigned long)bp->b_addr, size))) {
 		kfree(bp->b_addr);
 		bp->b_addr = NULL;
 		return -ENOMEM;
@@ -300,18 +300,22 @@ xfs_buf_alloc_backing_mem(
 	if (xfs_buftarg_is_mem(bp->b_target))
 		return xmbuf_map_page(bp);
 
-	/*
-	 * For buffers that fit entirely within a single page, first attempt to
-	 * allocate the memory from the heap to minimise memory usage. If we
-	 * can't get heap memory for these small buffers, we fall back to using
-	 * the page allocator.
-	 */
-	if (size < PAGE_SIZE && xfs_buf_alloc_kmem(new_bp, flags) == 0)
-		return 0;
+	/* Assure zeroed buffer for non-read cases. */
+	if (!(flags & XBF_READ))
+		gfp_mask |= __GFP_ZERO;
 
 	if (flags & XBF_READ_AHEAD)
 		gfp_mask |= __GFP_NORETRY;
 
+	/*
+	 * For buffers smaller than PAGE_SIZE use a kmalloc allocation if that
+	 * is properly aligned.  The slab allocator now guarantees an aligned
+	 * allocation for all power of two sizes, which matches most of the
+	 * smaller than PAGE_SIZE buffers used by XFS.
+	 */
+	if (size < PAGE_SIZE && is_power_of_2(size))
+		return xfs_buf_alloc_kmem(bp, size, gfp_mask);
+
 	/* Make sure that we have a page list */
 	bp->b_page_count = DIV_ROUND_UP(size, PAGE_SIZE);
 	if (bp->b_page_count <= XB_PAGES) {
@@ -324,10 +328,6 @@ xfs_buf_alloc_backing_mem(
 	}
 	bp->b_flags |= _XBF_PAGES;
 
-	/* Assure zeroed buffer for non-read cases. */
-	if (!(flags & XBF_READ))
-		gfp_mask |= __GFP_ZERO;
-
 	/*
 	 * Bulk filling of pages can take multiple calls. Not filling the entire
 	 * array is not an allocation failure, so don't back off if we get at
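For reference, a small standalone C sketch (illustrative only, not kernel code, with made-up names and a fixed page size) of the allocation decision xfs_buf_alloc_backing_mem is left with after this change: only sub-page, power-of-two buffers take the kmalloc path, everything else goes to the page list.

/*
 * Illustrative sketch only, not kernel code: the backing-memory choice
 * after this commit, with hypothetical stand-in names.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

#define SKETCH_PAGE_SIZE 4096u

static bool sketch_is_power_of_2(size_t n)
{
	return n != 0 && (n & (n - 1)) == 0;
}

/* true: buffer would use the kmalloc path; false: page-list path */
static bool sketch_uses_kmalloc_path(size_t size)
{
	return size < SKETCH_PAGE_SIZE && sketch_is_power_of_2(size);
}

int main(void)
{
	size_t sizes[] = { 512, 1024, 3072, 4096, 65536 };

	for (size_t i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++)
		printf("%6zu bytes -> %s\n", sizes[i],
		       sketch_uses_kmalloc_path(sizes[i]) ?
		       "kmalloc" : "page list");
	return 0;
}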
