
Commit 7f846c6

qsn authored and kuba-moo committed
tls: don't rely on tx_work during send()
With async crypto, we rely on tx_work to actually transmit records once
encryption completes. But while send() is running, both the tx_lock and
socket lock are held, so tx_work_handler cannot process the queue of
encrypted records, and simply reschedules itself. During a large send(),
this could last a long time, and use a lot of memory.

Transmit any pending encrypted records before restarting the main loop
of tls_sw_sendmsg_locked.

Fixes: a42055e ("net/tls: Add support for async encryption of records for performance")
Reported-by: Jann Horn <[email protected]>
Signed-off-by: Sabrina Dubroca <[email protected]>
Link: https://patch.msgid.link/8396631478f70454b44afb98352237d33f48d34d.1760432043.git.sd@queasysnail.net
Signed-off-by: Jakub Kicinski <[email protected]>
1 parent b8a6ff8 commit 7f846c6
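The problem described above hinges on how the transmit worker behaves when it cannot take tls_ctx->tx_lock. The fragment below is a simplified sketch of that behaviour, paraphrased from the commit message rather than copied from net/tls/tls_sw.c (it would rely on the existing includes of that file). BIT_TX_SCHEDULED, ctx->tx_bitmask, ctx->tx_work and tls_tx_records() appear in the diff further down; the surrounding plumbing (tls_get_ctx(), tls_sw_ctx_tx(), the tx_lock mutex, the 10 ms retry) is an assumption of the sketch.

/* Simplified sketch (not the upstream code) of the tx_work_handler
 * behaviour described above: it can only flush completed records if it
 * wins tls_ctx->tx_lock and the socket lock; otherwise it re-marks the
 * work as pending and reschedules itself.
 */
static void tx_work_handler(struct work_struct *work)
{
	struct tx_work *tx_work = container_of(to_delayed_work(work),
					       struct tx_work, work);
	struct sock *sk = tx_work->sk;
	struct tls_context *tls_ctx = tls_get_ctx(sk);
	struct tls_sw_context_tx *ctx = tls_sw_ctx_tx(tls_ctx);

	/* Nothing scheduled, or someone else already claimed the work. */
	if (!test_and_clear_bit(BIT_TX_SCHEDULED, &ctx->tx_bitmask))
		return;

	if (mutex_trylock(&tls_ctx->tx_lock)) {
		lock_sock(sk);
		tls_tx_records(sk, -1);		/* push completed records */
		release_sock(sk);
		mutex_unlock(&tls_ctx->tx_lock);
	} else if (!test_and_set_bit(BIT_TX_SCHEDULED, &ctx->tx_bitmask)) {
		/* send() owns tx_lock: re-mark as pending, retry later */
		schedule_delayed_work(&ctx->tx_work.work,
				      msecs_to_jiffies(10));
	}
}

While tls_sw_sendmsg_locked() holds both locks, the trylock always fails, so each firing of the worker only requeues itself and the already encrypted records keep accumulating until send() returns.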

File tree

1 file changed: +13 / -0 lines changed

net/tls/tls_sw.c

Lines changed: 13 additions & 0 deletions
@@ -1152,6 +1152,13 @@ static int tls_sw_sendmsg_locked(struct sock *sk, struct msghdr *msg,
 				} else if (ret != -EAGAIN)
 					goto send_end;
 			}
+
+			/* Transmit if any encryptions have completed */
+			if (test_and_clear_bit(BIT_TX_SCHEDULED, &ctx->tx_bitmask)) {
+				cancel_delayed_work(&ctx->tx_work.work);
+				tls_tx_records(sk, msg->msg_flags);
+			}
+
 			continue;
 rollback_iter:
 			copied -= try_to_copy;
@@ -1207,6 +1214,12 @@ static int tls_sw_sendmsg_locked(struct sock *sk, struct msghdr *msg,
 					goto send_end;
 				}
 			}
+
+			/* Transmit if any encryptions have completed */
+			if (test_and_clear_bit(BIT_TX_SCHEDULED, &ctx->tx_bitmask)) {
+				cancel_delayed_work(&ctx->tx_work.work);
+				tls_tx_records(sk, msg->msg_flags);
+			}
 		}
 
 		continue;
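Both hunks add the same three-line flush just before the send loop restarts. Spelled out as a hypothetical helper (not part of this commit; the name is invented for illustration):

/* Hypothetical helper, not part of this commit: the flush pattern the
 * two hunks above add at both restart points of the send loop.
 */
static void tls_tx_flush_completed(struct sock *sk,
				   struct tls_sw_context_tx *ctx, int flags)
{
	/* Transmit if any encryptions have completed */
	if (test_and_clear_bit(BIT_TX_SCHEDULED, &ctx->tx_bitmask)) {
		cancel_delayed_work(&ctx->tx_work.work);
		tls_tx_records(sk, flags);
	}
}

Clearing BIT_TX_SCHEDULED claims the pending work for the sendmsg path, cancel_delayed_work() drops the queued worker if it has not started yet, and tls_tx_records() pushes whatever encryptions have already completed, so memory is released as the loop runs instead of only after send() returns.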
