
Commit 4144a10

mfijalko authored and kuba-moo committed
xsk: fix batch alloc API on non-coherent systems
In cases where synchronizing DMA operations is necessary, xsk_buff_alloc_batch() returns a single buffer instead of the requested count. This puts pressure on drivers that use the batch API, as they have to check for this corner case on their side and take care of the remaining allocations themselves, which is counterproductive. Let us improve the core by looping over xp_alloc() @max times when the slow path needs to be taken.

Another issue with the current interface, as spotted and fixed by Dries, was that when a driver called xsk_buff_alloc_batch() with @max == 0, the slow path still allocated and returned a single buffer, which should not happen. By introducing the logic from the first paragraph we kill two birds with one stone and address this problem as well.

Fixes: 47e4075 ("xsk: Batched buffer allocation for the pool")
Reported-and-tested-by: Dries De Winter <[email protected]>
Co-developed-by: Dries De Winter <[email protected]>
Signed-off-by: Dries De Winter <[email protected]>
Signed-off-by: Maciej Fijalkowski <[email protected]>
Acked-by: Magnus Karlsson <[email protected]>
Acked-by: Alexei Starovoitov <[email protected]>
Link: https://patch.msgid.link/[email protected]
Signed-off-by: Jakub Kicinski <[email protected]>
1 parent 1f2e900 commit 4144a10
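[Editor's note, not part of the commit] A minimal driver-side sketch of how the batch allocator is typically consumed, using the public xsk_buff_alloc_batch() wrapper around xp_alloc_batch(). The helper name and refill flow are hypothetical; the sketch only illustrates the contract this patch fixes: a short return now means the pool ran out of buffers, even on the DMA-sync slow path, instead of the old behavior of returning at most one buffer there.

/* Hypothetical RX-ring refill helper, for illustration only. */
#include <net/xdp_sock_drv.h>

static u32 example_refill_rx_ring(struct xsk_buff_pool *pool,
                                  struct xdp_buff **bufs, u32 budget)
{
        u32 nb;

        /* Ask for up to 'budget' buffers in one call. After this fix, a
         * short return means the pool genuinely ran out of buffers, even
         * when the DMA-sync slow path is taken on non-coherent systems;
         * previously the slow path returned at most one buffer and drivers
         * had to loop on their own to fill the rest.
         */
        nb = xsk_buff_alloc_batch(pool, bufs, budget);

        /* A real driver would now post 'nb' descriptors to hardware.
         * With budget == 0 the call returns 0 on both paths after this fix.
         */
        return nb;
}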

File tree

1 file changed: +18 additions, −7 deletions

net/xdp/xsk_buff_pool.c

Lines changed: 18 additions & 7 deletions
@@ -623,20 +623,31 @@ static u32 xp_alloc_reused(struct xsk_buff_pool *pool, struct xdp_buff **xdp, u3
 	return nb_entries;
 }
 
-u32 xp_alloc_batch(struct xsk_buff_pool *pool, struct xdp_buff **xdp, u32 max)
+static u32 xp_alloc_slow(struct xsk_buff_pool *pool, struct xdp_buff **xdp,
+			 u32 max)
 {
-	u32 nb_entries1 = 0, nb_entries2;
+	int i;
 
-	if (unlikely(pool->dev && dma_dev_need_sync(pool->dev))) {
+	for (i = 0; i < max; i++) {
 		struct xdp_buff *buff;
 
-		/* Slow path */
 		buff = xp_alloc(pool);
-		if (buff)
-			*xdp = buff;
-		return !!buff;
+		if (unlikely(!buff))
+			return i;
+		*xdp = buff;
+		xdp++;
 	}
 
+	return max;
+}
+
+u32 xp_alloc_batch(struct xsk_buff_pool *pool, struct xdp_buff **xdp, u32 max)
+{
+	u32 nb_entries1 = 0, nb_entries2;
+
+	if (unlikely(pool->dev && dma_dev_need_sync(pool->dev)))
+		return xp_alloc_slow(pool, xdp, max);
+
 	if (unlikely(pool->free_list_cnt)) {
 		nb_entries1 = xp_alloc_reused(pool, xdp, max);
 		if (nb_entries1 == max)
