
Conversation

@ciarams87
Contributor

Proposed changes

Problem: The agent deployment broadcast system could deadlock when zombie gRPC connections (terminated agent streams) remained subscribed. When the broadcaster waited for responses from all subscribers, it would hang indefinitely if any connection was unable to process responses, causing NGINX reloads and upstream server IP sync to stop.

Solution: Update the response channel signalling to use a non-blocking select statement, preventing deadlocks by letting the broadcast proceed even if a subscriber (e.g., a zombie connection) cannot process the response, and ensure messenger goroutines are properly canceled so their resources are cleaned up and connections do not linger.
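
As a rough illustration of the cleanup half of this change (the non-blocking select itself appears in the reviewed diff further down), here is a minimal Go sketch; the messenger type and function names are hypothetical stand-ins, not the actual code in internal/controller/nginx/agent/command.go:

package main

import (
	"context"
	"fmt"
	"time"
)

// messenger is a hypothetical stand-in for whatever owns the per-connection goroutine.
type messenger struct{ id string }

// Run blocks until the context is cancelled, then exits cleanly.
func (m *messenger) Run(ctx context.Context) {
	<-ctx.Done()
	fmt.Printf("messenger %s stopped: %v\n", m.id, ctx.Err())
}

// runSubscription shows the pattern: the messenger goroutine is bound to a
// context that is always cancelled when the subscription handler returns, so
// a terminated gRPC stream cannot leave a lingering goroutine (and therefore
// a zombie subscription) behind.
func runSubscription(parent context.Context, m *messenger) {
	ctx, cancel := context.WithCancel(parent)
	defer cancel() // guarantees the messenger exits with the subscription

	go m.Run(ctx)

	<-ctx.Done() // stand-in for the real receive/apply loop
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 100*time.Millisecond)
	defer cancel()
	runSubscription(ctx, &messenger{id: "agent-1"})
	time.Sleep(50 * time.Millisecond) // give the messenger time to log its exit
}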

Closes #3626

Checklist

Before creating a PR, run through this checklist and mark each as complete.

  • I have read the CONTRIBUTING doc
  • I have added tests that prove my fix is effective or that my feature works
  • I have checked that all unit tests pass after adding my changes
  • I have updated necessary documentation
  • I have rebased my branch onto main
  • I will ensure my PR is targeting the main branch and pulling from my branch from my own fork

Release notes

If this PR introduces a change that affects users and needs to be mentioned in the release notes,
please add a brief note that summarizes the change.

Fixed a possible deadlock in config broadcasts by ensuring the control plane no longer waits for responses from inactive or zombie agent connections.

github-actions bot added the bug (Something isn't working) label on Aug 28, 2025

codecov bot commented Aug 28, 2025

Codecov Report

❌ Patch coverage is 68.18182% with 7 lines in your changes missing coverage. Please review.
✅ Project coverage is 86.98%. Comparing base (c3d544b) to head (5095edf).
⚠️ Report is 8 commits behind head on main.

Files with missing lines | Patch % | Lines
internal/controller/nginx/agent/command.go | 68.18% | 7 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #3809      +/-   ##
==========================================
- Coverage   87.00%   86.98%   -0.03%     
==========================================
  Files         128      128              
  Lines       16404    16421      +17     
  Branches       62       62              
==========================================
+ Hits        14272    14283      +11     
- Misses       1957     1962       +5     
- Partials      175      176       +1     


- Add explicit messenger context cancellation to ensure goroutine cleanup
- Use non-blocking channel sends for response synchronization to prevent
  deadlocks when zombie connections can't process responses
- Enhance cleanup logging for connection tracking and subscriptions

Fixes issue where terminated gRPC streams remained subscribed to the
broadcaster, causing the deployment broadcast system to hang indefinitely
waiting for responses from dead connections. This resulted in NGINX
reloads stopping completely and upstream server IP synchronization
failures.
@ciarams87 force-pushed the fix/ensure-conn-cleanup branch from 24228f1 to 5095edf on August 28, 2025 14:16
Comment on lines +204 to +209
// Use non-blocking send to prevent deadlock if broadcaster is waiting for
// responses from zombie connections that can't process the response
select {
case channels.ResponseCh <- struct{}{}:
default:
}
Collaborator

First thing I want to try to understand is why there's a zombie connection.

Second thing, if this was blocking, it means that the broadcaster isn't reading at all from the channel. The language in the comment confused me a bit.

So if the connection was dropped, but we're still tracking it somehow, how would we get into a state where we've sent a request but the broadcaster isn't waiting for the response?

Contributor Author

My guess is that a zombie connection occurs when the gRPC stream from an agent is terminated (e.g., network issue, agent crash, or control plane restart) but the cleanup logic (removing the connection/subscription) for some reason does not run, so the control plane’s broadcaster still has the subscriber registered and continues to expect responses from the dead connection. I can see this in the logs: this particular connection is created but never cleaned up when a new one is spawned, and we see repeated error messages from this connection long after it should be dead, showing it's still subscribed:

2025-07-25T17:26:25.256355078Z time=2025-07-25T17:26:25.256Z level=ERROR msg="Failed to receive message from subscribe stream" error="rpc error: code = Canceled desc = grpc: the client connection is closing" correlation_id=c8fbba24-6975-11f0-92bb-124e1d2c32aa

Normally, the broadcaster sends a request and waits for a response from each subscriber via their response channel. If the subscriber’s goroutine is dead (e.g., due to a dropped connection), it will never read from the channel, and the broadcaster’s send to that channel will block if the channel is unbuffered or full. I think the sends initially succeed until the buffer fills up (since the subscriber isn’t reading); once it's full, any further send blocks and we deadlock. The non-blocking select prevents this by skipping the send if the channel can’t accept more data.
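
As a toy reproduction of that mechanism (none of this is NGF code; the channel name and buffer size are made up): the first sends into an unread, buffered channel succeed, the next plain send would block forever, and the select/default form skips it instead.

package main

import "fmt"

func main() {
	// A response channel whose reader (the dead subscriber side) is gone.
	responseCh := make(chan struct{}, 2)

	for i := 1; i <= 4; i++ {
		select {
		case responseCh <- struct{}{}:
			fmt.Printf("send %d: accepted, buffer still has room\n", i)
		default:
			// Without the default case, the send above would block forever
			// once the buffer is full; with it, we drop the signal and move on.
			fmt.Printf("send %d: skipped, buffer full and nobody is reading\n", i)
		}
	}
}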

Collaborator
@sjberman Aug 28, 2025

I think the term "send" might be a bit conflated here, because we have

broadcaster -send-> subscriber
subscriber -respond-> broadcaster

and your select statement is on the response.

I do think there could definitely be a deadlock on these channels, which would explain why config isn't sent and why restarting the agent fixes things, since that should close the previous connection and start a new one.
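
For what it's worth, a tiny sketch of those two directions (hypothetical names, not the actual NGF types), with the guard sitting only on the respond arrow:

package main

import "fmt"

// Hypothetical per-subscriber channel pair.
type subscriberChannels struct {
	updatesCh  chan string   // broadcaster -send->    subscriber
	responseCh chan struct{} // subscriber  -respond-> broadcaster (guarded)
}

// respond is what the guarded side looks like: the subscriber acknowledges an
// update, but never blocks if the response channel is not being drained.
func respond(ch subscriberChannels) {
	select {
	case ch.responseCh <- struct{}{}:
		fmt.Println("response delivered")
	default:
		fmt.Println("response dropped: nothing is draining responseCh")
	}
}

func main() {
	ch := subscriberChannels{
		updatesCh:  make(chan string, 1),
		responseCh: make(chan struct{}, 1),
	}
	respond(ch) // delivered (buffer has room)
	respond(ch) // dropped (buffer full, no reader)
}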

Collaborator

I just want to make sure if that's the case, we unblock in the right place.

Collaborator

That said, restarting the control plane should also fix it, but apparently it doesn't...

@stutommi Aug 28, 2025

Quick comment (sorry if this throws off more than it helps!), but for the problem mentioned in the other issue, restarting the control plane fixed our problems every time!

Referring to this - seems like I didn't bring this crucial info there, sorry!

Collaborator

Good to know, thanks!

@ciarams87 ciarams87 closed this Aug 30, 2025
@github-project-automation github-project-automation bot moved this from 🆕 New to ✅ Done in NGINX Gateway Fabric Aug 30, 2025
@ciarams87 ciarams87 deleted the fix/ensure-conn-cleanup branch October 13, 2025 08:23

Labels

bug (Something isn't working), release-notes

Projects

Status: Done

Development

Successfully merging this pull request may close these issues.

Data plane does not sync upstream servers IPs

3 participants