fix: prevent broadcast deadlock from zombie gRPC connections #3809
Conversation
Codecov Report

❌ Patch coverage is

Additional details and impacted files

```
@@            Coverage Diff             @@
##             main    #3809      +/-   ##
==========================================
- Coverage   87.00%   86.98%   -0.03%
==========================================
  Files         128      128
  Lines       16404    16421      +17
  Branches       62       62
==========================================
+ Hits        14272    14283      +11
- Misses       1957     1962       +5
- Partials      175      176       +1
```
- Add explicit messenger context cancellation to ensure goroutine cleanup
- Use non-blocking channel sends for response synchronization to prevent deadlocks when zombie connections can't process responses
- Enhance cleanup logging for connection tracking and subscriptions

Fixes an issue where terminated gRPC streams remained subscribed to the broadcaster, causing the deployment broadcast system to hang indefinitely waiting for responses from dead connections. This resulted in NGINX reloads stopping completely and upstream server IP synchronization failures.
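As a rough illustration of the first bullet, here is a minimal, self-contained sketch of the context-cancellation pattern, assuming one messenger goroutine per connection; `messengerLoop` and the channel are hypothetical stand-ins, not the project's actual code:

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// messengerLoop is a hypothetical stand-in for the per-connection goroutine:
// it exits as soon as its context is cancelled instead of lingering forever.
func messengerLoop(ctx context.Context, listenCh <-chan string) {
	for {
		select {
		case <-ctx.Done():
			fmt.Println("messenger: context cancelled, cleaning up")
			return
		case msg := <-listenCh:
			fmt.Println("messenger: got", msg)
		}
	}
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	listenCh := make(chan string)

	go messengerLoop(ctx, listenCh)
	listenCh <- "deployment update"

	// Simulate the gRPC stream ending: cancel tears the goroutine down.
	cancel()
	time.Sleep(100 * time.Millisecond)
}
```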
Force-pushed from 24228f1 to 5095edf
```go
// Use non-blocking send to prevent deadlock if broadcaster is waiting for
// responses from zombie connections that can't process the response
select {
case channels.ResponseCh <- struct{}{}:
default:
}
```
First thing I want to try to understand is why there's a zombie connection.
Second thing, if this was blocking, it means that the broadcaster isn't reading at all from the channel. The language in the comment confused me a bit.
So if the connection was dropped, but we're still tracking it somehow, how would we get into a state where we've sent a request but the broadcaster isn't waiting for the response?
My guess is that a zombie connection occurs when the gRPC stream from an agent is terminated (e.g., a network issue, an agent crash, or a control plane restart) and the cleanup logic (removing the connection/subscription) for some reason does not run, while the control plane's broadcaster still has the subscriber registered, so the broadcaster continues to expect responses from this dead connection. I can see this in the logs: this particular connection is created but never cleaned up when a new one is spawned, and we even see tons of error messages coming from this particular connection long after it should be dead, showing it's still subscribed:
```
2025-07-25T17:26:25.256355078Z time=2025-07-25T17:26:25.256Z level=ERROR msg="Failed to receive message from subscribe stream" error="rpc error: code = Canceled desc = grpc: the client connection is closing" correlation_id=c8fbba24-6975-11f0-92bb-124e1d2c32aa
```
Normally, the broadcaster sends a request and waits for a response from each subscriber via its response channel. If the subscriber's goroutine is dead (e.g., due to a dropped connection), it will never read from the channel, and the broadcaster's send to that channel will block if the channel is unbuffered or full, causing a deadlock. I think the sends initially succeed until the buffer fills up (since the subscriber isn't reading); once it's full, any further sends block, causing a deadlock. The non-blocking select prevents this by skipping the send if the channel can't accept more data.
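To make the full-buffer scenario concrete, here is a minimal, self-contained sketch (not the project's code): with a buffer of two and no reader, a third plain send would block forever, while the select/default variant simply skips it:

```go
package main

import "fmt"

func main() {
	// Hypothetical response channel with a small buffer; the reader
	// (standing in for a dead goroutine) never receives.
	responseCh := make(chan struct{}, 2)

	for i := 1; i <= 3; i++ {
		select {
		case responseCh <- struct{}{}:
			fmt.Println("send", i, "succeeded")
		default:
			// Without this default, the third send would block forever
			// because nothing drains the full buffer.
			fmt.Println("send", i, "skipped: channel full")
		}
	}
}
```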
I think the term "send" might be a bit conflated here, because we have
broadcaster -send-> subscriber
subscriber -respond-> broadcaster
and your select statement is on the response.
I do think there could definitely be a deadlock on these channels, which would explain why config isn't sent, and why restarting the agent fixes things, because that should close the previous connection and start a new one.
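For clarity on the two directions being discussed, the subscriber state presumably looks something like the following shape (names and types are hypothetical; only `ResponseCh` appears in the diff). The PR's select guards the second, subscriber-to-broadcaster direction:

```go
// Hypothetical channel pair per subscriber; the real struct may differ.
type Message struct{ Payload string }

type SubscriberChannels struct {
	ListenCh   chan Message  // broadcaster -send-> subscriber
	ResponseCh chan struct{} // subscriber -respond-> broadcaster
}
```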
I just want to make sure that, if that's the case, we unblock in the right place.
Although, restarting the control plane should also fix it, but apparently it doesn't...
Quick comment (sorry if this throws things off more than it helps!), but for the problem mentioned in the other issue, restarting the control plane fixed our problems every time!
Referring to this - it seems I didn't bring that crucial info up there, sorry!
Good to know, thanks!
Proposed changes
Problem: The agent deployment broadcast system could deadlock when zombie gRPC connections (terminated agent streams) remained subscribed. When the broadcaster waited for responses from all subscribers, it would hang indefinitely if any connection was unable to process responses, causing NGINX reloads and upstream server IP sync to stop.
Solution: Update the response channel signalling to use a non-blocking select statement, preventing deadlocks by allowing the broadcaster to continue even if a subscriber (e.g., a zombie connection) cannot receive the response, and ensure messenger goroutines are properly canceled so that resources are cleaned up and connections don't linger.
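For intuition on where the hang occurs, here is a minimal, self-contained sketch of a broadcaster waiting on each subscriber's response channel. All names are hypothetical, and the timeout exists only so this demo terminates; it is not a claim about the project's actual code:

```go
package main

import (
	"fmt"
	"time"
)

type subscriber struct {
	name       string
	responseCh chan struct{}
}

func main() {
	live := subscriber{"live-agent", make(chan struct{}, 1)}
	zombie := subscriber{"zombie-agent", make(chan struct{}, 1)}

	// The live agent acknowledges; the zombie never will.
	live.responseCh <- struct{}{}

	for _, sub := range []subscriber{live, zombie} {
		select {
		case <-sub.responseCh:
			fmt.Println(sub.name, "responded")
		case <-time.After(time.Second):
			// Without some escape hatch, waiting on the zombie would
			// park the whole broadcast forever.
			fmt.Println(sub.name, "never responded; broadcast would hang")
		}
	}
}
```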
Closes #3626
Checklist
Before creating a PR, run through this checklist and mark each as complete.
Release notes
If this PR introduces a change that affects users and needs to be mentioned in the release notes,
please add a brief note that summarizes the change.