
Commit 059217c

nealcardwell authored and kuba-moo committed
tcp: fix quick-ack counting to count actual ACKs of new data
This commit fixes quick-ack counting so that it only considers that a quick-ack has been provided if we are sending an ACK that newly acknowledges data.

The code was erroneously using the number of data segments in outgoing skbs when deciding how many quick-ack credits to remove. This logic does not make sense, and could cause poor performance in request-response workloads, like RPC traffic, where requests or responses can be multi-segment skbs.

When a TCP connection decides to send N quick-acks, that is to accelerate the cwnd growth of the congestion control module controlling the remote endpoint of the TCP connection. That quick-ack decision is purely about the incoming data and outgoing ACKs. It has nothing to do with the outgoing data or the size of outgoing data. In particular, an ACK only serves the intended purpose of allowing the remote congestion control to grow the congestion window quickly if the ACK is ACKing or SACKing new data.

The fix is simple: only count packets as serving the goal of the quick-ack mechanism if they are ACKing/SACKing new data. We can tell whether this is the case by checking inet_csk_ack_scheduled(), since we schedule an ACK exactly when we are ACKing/SACKing new data.

Fixes: fc6415b ("[TCP]: Fix quick-ack decrementing with TSO.")
Signed-off-by: Neal Cardwell <[email protected]>
Reviewed-by: Yuchung Cheng <[email protected]>
Reviewed-by: Eric Dumazet <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Jakub Kicinski <[email protected]>
1 parent c56e67f commit 059217c

File tree

2 files changed: +7 −6 lines changed


include/net/tcp.h

Lines changed: 4 additions & 2 deletions
@@ -348,12 +348,14 @@ ssize_t tcp_splice_read(struct socket *sk, loff_t *ppos,
 struct sk_buff *tcp_stream_alloc_skb(struct sock *sk, gfp_t gfp,
 				     bool force_schedule);
 
-static inline void tcp_dec_quickack_mode(struct sock *sk,
-					 const unsigned int pkts)
+static inline void tcp_dec_quickack_mode(struct sock *sk)
 {
 	struct inet_connection_sock *icsk = inet_csk(sk);
 
 	if (icsk->icsk_ack.quick) {
+		/* How many ACKs S/ACKing new data have we sent? */
+		const unsigned int pkts = inet_csk_ack_scheduled(sk) ? 1 : 0;
+
 		if (pkts >= icsk->icsk_ack.quick) {
 			icsk->icsk_ack.quick = 0;
 			/* Leaving quickack mode we deflate ATO. */

net/ipv4/tcp_output.c

Lines changed: 3 additions & 4 deletions
@@ -177,8 +177,7 @@ static void tcp_event_data_sent(struct tcp_sock *tp,
 }
 
 /* Account for an ACK we sent. */
-static inline void tcp_event_ack_sent(struct sock *sk, unsigned int pkts,
-				      u32 rcv_nxt)
+static inline void tcp_event_ack_sent(struct sock *sk, u32 rcv_nxt)
 {
 	struct tcp_sock *tp = tcp_sk(sk);
 
@@ -192,7 +191,7 @@ static inline void tcp_event_ack_sent(struct sock *sk, unsigned int pkts,
 
 	if (unlikely(rcv_nxt != tp->rcv_nxt))
 		return; /* Special ACK sent by DCTCP to reflect ECN */
-	tcp_dec_quickack_mode(sk, pkts);
+	tcp_dec_quickack_mode(sk);
 	inet_csk_clear_xmit_timer(sk, ICSK_TIME_DACK);
 }
 
@@ -1387,7 +1386,7 @@ static int __tcp_transmit_skb(struct sock *sk, struct sk_buff *skb,
 					   sk, skb);
 
 	if (likely(tcb->tcp_flags & TCPHDR_ACK))
-		tcp_event_ack_sent(sk, tcp_skb_pcount(skb), rcv_nxt);
+		tcp_event_ack_sent(sk, rcv_nxt);
 
 	if (skb->len != tcp_header_size) {
 		tcp_event_data_sent(tp, sk);
