25 changes: 22 additions & 3 deletions subsys/net/ip/net_context.c
```diff
@@ -1573,11 +1573,18 @@ static struct net_pkt *context_alloc_pkt(struct net_context *context,
 {
 	struct net_pkt *pkt;
 
+	/* We reference it once again to make sure context is not going
+	 * to disappear while allocating a packet
+	 */
+	net_context_ref(context);
+
+	k_mutex_unlock(&context->lock);
+
 #if defined(CONFIG_NET_CONTEXT_NET_PKT_POOL)
 	if (context->tx_slab) {
 		pkt = net_pkt_alloc_from_slab(context->tx_slab(), timeout);
 		if (!pkt) {
-			return NULL;
+			goto out;
 		}
 
 		net_pkt_set_iface(pkt, net_context_get_iface(context));
@@ -1588,18 +1595,30 @@ static struct net_pkt *context_alloc_pkt(struct net_context *context,
 					net_context_get_ip_proto(context),
 					timeout)) {
 			net_pkt_unref(pkt);
-			return NULL;
+			pkt = NULL;
 		}
 
-		return pkt;
+		goto out;
 	}
 #endif
 	pkt = net_pkt_alloc_with_buffer(net_context_get_iface(context), len,
 				       net_context_get_family(context),
 				       net_context_get_ip_proto(context),
 				       timeout);
+	if (!pkt) {
+		goto out;
+	}
 
 	net_pkt_set_context(pkt, context);
+out:
+	net_context_unref(context);
+
+	k_mutex_lock(&context->lock, K_FOREVER);
+
+	if (!net_context_is_used(context) && pkt) {
+		net_pkt_unref(pkt);
+		pkt = NULL;
+	}
 
 	return pkt;
 }
```

Review thread on the added `net_context_ref()` comment:

**Contributor:** What would make it disappear?

**Contributor (author):** Is there any guarantee that net_context_unref() (or net_context_put()) is not going to be called one time too many in the meantime? Allocation might take a while depending on load. The application could close the socket (and thus the context), for instance.

**Contributor:**

> The application could close the socket (thus the context) for instance.

Right. And as I mentioned in the next comment (in the main comment thread), this touches on other proposed changes, e.g. #9565, which conversely try to remove ref/unref wrapping, about which I came to the conclusion that it is superfluous. Is that right or not? I wish we had clear rules and a shared understanding of where we need to lock/unlock, and where to ref/unref, as the case addressed by this PR is certainly not the only one.
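The pattern discussed in the thread above — take an extra reference, drop the lock around a potentially slow allocation, then re-check the context after re-locking — can be sketched outside Zephyr. Everything below (`struct context`, `alloc_pkt`, the callbacks) is a hypothetical, single-threaded stand-in for the real net_context API, not Zephyr code:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>

/* Hypothetical, single-threaded stand-in for Zephyr's net_context
 * refcounting; none of these names are the real API. */
struct context {
	int refcount;
	bool in_use;  /* mirrors what net_context_is_used() reports */
};

static void context_ref(struct context *ctx)
{
	ctx->refcount++;
}

static void context_unref(struct context *ctx)
{
	if (--ctx->refcount == 0) {
		ctx->in_use = false;  /* last ref gone: context is released */
	}
}

/* The PR's pattern: hold an extra reference while the lock is dropped
 * for a (possibly slow) allocation, then verify the context survived. */
static void *alloc_pkt(struct context *ctx, void *(*do_alloc)(struct context *))
{
	void *pkt;

	context_ref(ctx);        /* keep ctx alive across the allocation */
	/* ... k_mutex_unlock(&ctx->lock) would go here ... */

	pkt = do_alloc(ctx);     /* may block; the app may close the socket */

	context_unref(ctx);
	/* ... k_mutex_lock(&ctx->lock, K_FOREVER) would go here ... */

	if (!ctx->in_use && pkt) {
		free(pkt);       /* context closed meanwhile: discard packet */
		pkt = NULL;
	}

	return pkt;
}

/* Allocation that completes normally. */
static void *alloc_ok(struct context *ctx)
{
	(void)ctx;
	return malloc(16);
}

/* Allocation during which the application closes the socket,
 * dropping its own reference. */
static void *alloc_during_close(struct context *ctx)
{
	context_unref(ctx);
	return malloc(16);
}
```

Without the extra `context_ref()`, the application's unref inside `alloc_during_close` would drop the count to zero mid-allocation — exactly the hazard the author raises in the thread.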
5 changes: 3 additions & 2 deletions subsys/net/ip/tcp.c
```diff
@@ -1926,9 +1926,10 @@ NET_CONN_CB(tcp_established)
 		 * ack # value can/should be sent, so we just force resend.
 		 */
 resend_ack:
+		k_mutex_unlock(&context->lock);
 		send_ack(context, &conn->remote_addr, true);
-
-		return NET_DROP;
+		ret = NET_DROP;
+		goto unlock;
 	}
 
 	if (net_tcp_seq_cmp(sys_get_be32(tcp_hdr->seq),
```
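The tcp.c hunk applies the same discipline from the other direction: drop the context lock before calling into `send_ack()`, since the send path may block or take other locks, and route the early exit through the function's common `unlock` label. A minimal model of that control flow, with all names (`ctx_lock`, `tcp_established`, the counter) being stand-ins rather than Zephyr API:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical mini-model of the tcp.c change: the handler drops the
 * context lock before calling send_ack(), then exits via a label. */

#define NET_DROP 1

static bool lock_held;
static int acks_sent_while_locked;

static void ctx_lock(void)   { lock_held = true; }
static void ctx_unlock(void) { lock_held = false; }

static void send_ack(void)
{
	if (lock_held) {
		acks_sent_while_locked++;  /* would risk deadlock in real code */
	}
}

static int tcp_established(bool force_resend)
{
	int ret = 0;

	ctx_lock();

	if (force_resend) {
		/* resend_ack: unlock first, exactly as the diff does */
		ctx_unlock();
		send_ack();
		ret = NET_DROP;
		goto unlock;   /* skip the normal-path unlock below */
	}

	/* ... normal segment processing would happen here ... */

	ctx_unlock();
unlock:
	return ret;
}
```

The counter makes the invariant checkable: `send_ack()` must never observe the lock held, whichever path reaches it.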