
Commit 99d028c

eth: bnxt: update header sizing defaults
300-400B RPC requests are fairly common. With the current default of a 256B HDS threshold, bnxt ends up splitting those, lowering PCIe bandwidth efficiency and increasing the number of memory allocations. Increase the HDS threshold so that 4 buffers fit in a 4k page; this works out to a 640B threshold on a typical kernel config.

This change improves performance by 4.5% in a microbenchmark which receives 400B RPCs and sends empty responses. Admittedly this is just a single benchmark, but a 256B threshold only fits 6 packets per head page (so just 2 more), because the shinfo size dominates the headers.

Now that we use the page pool for the header pages I was also tempted to default rx_copybreak to 0, but in synthetic testing the copybreak size doesn't seem to make much difference.

Reviewed-by: Michael Chan <[email protected]>
Link: https://patch.msgid.link/[email protected]
Signed-off-by: Jakub Kicinski <[email protected]>
1 parent bee0180 commit 99d028c
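
As a rough sanity check of the 640B figure, here is a sketch of the arithmetic, assuming NET_SKB_PAD is 64 and SKB_DATA_ALIGN(sizeof(struct skb_shared_info)) is 320 — typical x86_64 values, not spelled out in the commit itself:

	rx_size    = SZ_1K - NET_SKB_PAD - SKB_DATA_ALIGN(sizeof(struct skb_shared_info))
	           = 1024  - 64          - 320
	           = 640

	hds_thresh = max(BNXT_DEFAULT_RX_COPYBREAK, rx_size)
	           = max(256, 640)
	           = 640

Under the same assumptions each received packet consumes roughly threshold + NET_SKB_PAD + shinfo bytes of head page: 640B per packet at the old 256B threshold (6 per 4k page) versus 1024B per packet at 640B (4 per page), which matches the "just 2 more" remark in the commit message.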


drivers/net/ethernet/broadcom/bnxt/bnxt.c

Lines changed: 6 additions & 1 deletion
@@ -4609,8 +4609,13 @@ void bnxt_set_tpa_flags(struct bnxt *bp)
 
 static void bnxt_init_ring_params(struct bnxt *bp)
 {
+	unsigned int rx_size;
+
 	bp->rx_copybreak = BNXT_DEFAULT_RX_COPYBREAK;
-	bp->dev->cfg->hds_thresh = BNXT_DEFAULT_RX_COPYBREAK;
+	/* Try to fit 4 chunks into a 4k page */
+	rx_size = SZ_1K -
+		  NET_SKB_PAD - SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
+	bp->dev->cfg->hds_thresh = max(BNXT_DEFAULT_RX_COPYBREAK, rx_size);
 }
 
 /* bp->rx_ring_size, bp->tx_ring_size, dev->mtu, BNXT_FLAG_{G|L}RO flags must
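
The dev->cfg->hds_thresh value set above is the driver default for the HDS threshold ring parameter exposed over ethtool netlink. As a usage sketch only — assuming a kernel and ethtool recent enough to support the hds-thresh ring parameter, and a hypothetical device name eth0 — the default could be inspected and overridden along these lines:

	ethtool -g eth0                                      # show ring parameters (incl. HDS threshold where supported)
	ethtool -G eth0 tcp-data-split on hds-thresh 256     # force a lower threshold with header/data split enabled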
