Commit 10f6d46

Paolo Abeni authored, davem330 committed
mptcp: fix race between MP_JOIN and close
If a MP_JOIN subflow completes the 3whs while another CPU is closing the master msk, we can hit the following race: CPU1 CPU2 close() mptcp_close subflow_syn_recv_sock mptcp_token_get_sock mptcp_finish_join inet_sk_state_load mptcp_token_destroy inet_sk_state_store(TCP_CLOSE) __mptcp_flush_join_list() mptcp_sock_graft list_add_tail sk_common_release sock_orphan() <socket free> The MP_JOIN socket will be leaked. Additionally we can hit UaF for the msk 'struct socket' referenced via the 'conn' field. This change try to address the issue introducing some synchronization between the MP_JOIN 3whs and mptcp_close via the join_list spinlock. If we detect the msk is closing the MP_JOIN socket is closed, too. Fixes: f296234 ("mptcp: Add handling of incoming MP_JOIN requests") Signed-off-by: Paolo Abeni <[email protected]> Reviewed-by: Mat Martineau <[email protected]> Signed-off-by: David S. Miller <[email protected]>
1 parent 41be81a commit 10f6d46

File tree

1 file changed: +27 −15 lines changed


net/mptcp/protocol.c

Lines changed: 27 additions & 15 deletions
@@ -1266,8 +1266,12 @@ static void mptcp_close(struct sock *sk, long timeout)
 	mptcp_token_destroy(msk->token);
 	inet_sk_state_store(sk, TCP_CLOSE);
 
-	__mptcp_flush_join_list(msk);
-
+	/* be sure to always acquire the join list lock, to sync vs
+	 * mptcp_finish_join().
+	 */
+	spin_lock_bh(&msk->join_list_lock);
+	list_splice_tail_init(&msk->join_list, &msk->conn_list);
+	spin_unlock_bh(&msk->join_list_lock);
 	list_splice_init(&msk->conn_list, &conn_list);
 
 	data_fin_tx_seq = msk->write_seq;
@@ -1623,22 +1627,30 @@ bool mptcp_finish_join(struct sock *sk)
 	if (!msk->pm.server_side)
 		return true;
 
-	/* passive connection, attach to msk socket */
+	if (!mptcp_pm_allow_new_subflow(msk))
+		return false;
+
+	/* active connections are already on conn_list, and we can't acquire
+	 * msk lock here.
+	 * use the join list lock as synchronization point and double-check
+	 * msk status to avoid racing with mptcp_close()
+	 */
+	spin_lock_bh(&msk->join_list_lock);
+	ret = inet_sk_state_load(parent) == TCP_ESTABLISHED;
+	if (ret && !WARN_ON_ONCE(!list_empty(&subflow->node)))
+		list_add_tail(&subflow->node, &msk->join_list);
+	spin_unlock_bh(&msk->join_list_lock);
+	if (!ret)
+		return false;
+
+	/* attach to msk socket only after we are sure he will deal with us
+	 * at close time
+	 */
 	parent_sock = READ_ONCE(parent->sk_socket);
 	if (parent_sock && !sk->sk_socket)
 		mptcp_sock_graft(sk, parent_sock);
-
-	ret = mptcp_pm_allow_new_subflow(msk);
-	if (ret) {
-		subflow->map_seq = msk->ack_seq;
-
-		/* active connections are already on conn_list */
-		spin_lock_bh(&msk->join_list_lock);
-		if (!WARN_ON_ONCE(!list_empty(&subflow->node)))
-			list_add_tail(&subflow->node, &msk->join_list);
-		spin_unlock_bh(&msk->join_list_lock);
-	}
-	return ret;
+	subflow->map_seq = msk->ack_seq;
+	return true;
 }
 
 bool mptcp_sk_is_subflow(const struct sock *sk)
