Commit 24eb35b

jahay1 authored and anguy11 committed
idpf: refactor Tx completion routines
Add a mechanism to guard against stashing partial packets into the hash table to make the driver more robust, with more efficient decision making when cleaning.

Don't stash partial packets. This can happen when an RE (Report Event) completion is received in flow scheduling mode, or when an out of order RS (Report Status) completion is received. The first buffer with the skb is stashed, but some or all of its frags are not because the stack is out of reserve buffers. This leaves the ring in a weird state since the frags are still on the ring.

Use the field libeth_sqe::nr_frags to track the number of fragments/tx_bufs representing the packet. The clean routines check to make sure there are enough reserve buffers on the stack before stashing any part of the packet. If there are not, next_to_clean is left pointing to the first buffer of the packet that failed to be stashed. This leaves the whole packet on the ring, and the next time around, cleaning will start from this packet.

An RS completion is still expected for this packet in either case. So instead of being cleaned from the hash table, it will be cleaned from the ring directly. This should all still be fine since the DESC_UNUSED and BUFS_UNUSED will reflect the state of the ring. If we ever fall below the thresholds, the TxQ will still be stopped, giving the completion queue time to catch up. This may lead to stopping the queue more frequently, but it guarantees the Tx ring will always be in a good state.

Also, always use the idpf_tx_splitq_clean function to clean descriptors, i.e. use it from clean_buf_ring as well. This way we avoid duplicating the logic and make sure we're using the same reserve buffers guard rail. This does require a switch from the s16 next_to_clean overflow descriptor ring wrap calculation to u16 and the normal ring size check.
Signed-off-by: Joshua Hay <[email protected]>
Reviewed-by: Przemek Kitszel <[email protected]>
Signed-off-by: Alexander Lobakin <[email protected]>
Signed-off-by: Tony Nguyen <[email protected]>
1 parent 3dc95a3 commit 24eb35b

File tree

3 files changed: +122, -76 lines

drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c

Lines changed: 14 additions & 10 deletions

```diff
@@ -239,15 +239,16 @@ static void idpf_tx_singleq_map(struct idpf_tx_queue *tx_q,
 					  offsets,
 					  max_data,
 					  td_tag);
-		tx_desc++;
-		i++;
-
-		if (i == tx_q->desc_count) {
+		if (unlikely(++i == tx_q->desc_count)) {
+			tx_buf = &tx_q->tx_buf[0];
 			tx_desc = &tx_q->base_tx[0];
 			i = 0;
+		} else {
+			tx_buf++;
+			tx_desc++;
 		}
 
-		tx_q->tx_buf[i].type = LIBETH_SQE_EMPTY;
+		tx_buf->type = LIBETH_SQE_EMPTY;
 
 		dma += max_data;
 		size -= max_data;
@@ -261,21 +262,21 @@ static void idpf_tx_singleq_map(struct idpf_tx_queue *tx_q,
 
 		tx_desc->qw1 = idpf_tx_singleq_build_ctob(td_cmd, offsets,
 							  size, td_tag);
-		tx_desc++;
-		i++;
 
-		if (i == tx_q->desc_count) {
+		if (unlikely(++i == tx_q->desc_count)) {
+			tx_buf = &tx_q->tx_buf[0];
 			tx_desc = &tx_q->base_tx[0];
 			i = 0;
+		} else {
+			tx_buf++;
+			tx_desc++;
 		}
 
 		size = skb_frag_size(frag);
 		data_len -= size;
 
 		dma = skb_frag_dma_map(tx_q->dev, frag, 0, size,
 				       DMA_TO_DEVICE);
-
-		tx_buf = &tx_q->tx_buf[i];
 	}
 
 	skb_tx_timestamp(first->skb);
@@ -454,6 +455,9 @@ static bool idpf_tx_singleq_clean(struct idpf_tx_queue *tx_q, int napi_budget,
 			goto fetch_next_txq_desc;
 		}
 
+		if (unlikely(tx_buf->type != LIBETH_SQE_SKB))
+			break;
+
 		/* prevent any other reads prior to type */
 		smp_rmb();
 
```