Conversation
👋 mchain0, thanks for creating this pull request! To help reviewers, please consider creating future PRs as drafts first. This allows you to self-review and make any final changes before notifying the team. Once you're ready, you can mark it as "Ready for review" to request feedback. Thanks!

✅ API Diff Results - No breaking changes
```go
subs.Go(func() {
	for {
		select {
		case update := <-poller.Updates():
```
Updates are exposed by the poller here. The main goal of the updatesFanIn channel seems to be just to collect all the updates together. Changing its buffer from 0 to 1 does not really change anything on the poller side - the pollers will continue writing updates to their poller.Updates() channels.
It is a valid point that this may not be entirely sufficient.
Let's analyze: each poller goroutine has its own channel (buffered at 100) and writes to it independently. The fan-in goroutine reads from poller.Updates() and then tries to write to updatesFanIn; if that channel is unbuffered and the consumer is not receiving, the write blocks, and while blocked the goroutine can't loop back to Updates(). The problem, as I see it, is that we then don't drain the poller pool uniformly; with a buffer we avoid this. The buffer could be 100 to match the pollers.
But I agree it's not a great improvement (although it does the job). On line 67 of the code there is a comment, // TODO (dru) do we need a worker pool here? <- maybe that's a better direction for the consumer?
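For illustration, here is a minimal sketch of what that worker-pool direction could look like. This is not code from this PR: handleUpdate, consumeWithWorkerPool, and the worker count are assumed, illustrative names.

```go
package main

import (
	"fmt"
	"sync"
)

// handleUpdate stands in for whatever per-update work the consumer does
// (e.g. exporting); it is an assumption, not an identifier from this repo.
func handleUpdate(u interface{}) {
	fmt.Println("handled:", u)
}

// consumeWithWorkerPool drains a shared fan-in channel with several workers,
// so one slow update does not keep the fan-in senders blocked.
func consumeWithWorkerPool(updatesFanIn <-chan interface{}, numWorkers int) {
	var wg sync.WaitGroup
	for i := 0; i < numWorkers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for update := range updatesFanIn {
				handleUpdate(update)
			}
		}()
	}
	wg.Wait()
}

func main() {
	updatesFanIn := make(chan interface{}, 1)
	go func() {
		for i := 0; i < 5; i++ {
			updatesFanIn <- i
		}
		close(updatesFanIn)
	}()
	consumeWithWorkerPool(updatesFanIn, 3)
}
```

With several workers receiving from updatesFanIn, the fan-in goroutines are far less likely to sit blocked on their sends, which would address the uneven draining across the poller pool without relying on the buffer size.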
Currently, I don't see how a buffer of 1 (or any other number) improves the setup over 0. It would be useful to have a test where it actually matters, or some profiling information that better justifies changing these files.
Is this module still used? If so, it needs proper code owners.
```diff
  // Listen for updates
- updatesFanIn := make(chan interface{})
+ updatesFanIn := make(chan interface{}, 1)
```
Unless this relieves a deadlock, I don't think it will have the effect that you intend. Blocking sends can be a feature in a system like this, because they become a natural scheduling point to swap over to executing the worker on the receiving end. The instant backpressure we had before this change leads to lower latency. Adding or increasing the buffer trades increased latency for what is effectively batched execution - do we really want or need that? And if so, why only 1 instead of 10, or some factor of the number of senders, etc.?
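To make the "why only 1" question concrete, here is a small self-contained sketch (not code from this PR) showing that a buffer only moves the point at which senders block; once the buffer is full, a send behaves exactly like a send on an unbuffered channel:

```go
package main

import "fmt"

func main() {
	ch := make(chan interface{}, 1)

	// First send fits in the buffer and completes without a receiver.
	select {
	case ch <- "first":
		fmt.Println("first send buffered")
	default:
		fmt.Println("first send would block")
	}

	// Second send finds the buffer full; without a ready receiver it would
	// block, exactly as every send does on an unbuffered channel.
	select {
	case ch <- "second":
		fmt.Println("second send buffered")
	default:
		fmt.Println("second send would block")
	}
}
```

So a buffer of 1 only absorbs a single pending update per burst; 10, or a multiple of the number of pollers, would just move that threshold rather than change the behaviour.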
how do we identify this?
This PR is stale because it has been open 30 days with no activity. |
This PR has been automatically closed because it has been stale for > 30 days. |
CRE-1323
Multiple pollers may send updates, but with a buffer of 1, at least one sender won't block waiting for the consumer.
With an unbuffered channel, a send blocks until a receiver is actively waiting. But the consumer may not be at the receive select yet - it may still be in the exporter loop.
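A minimal self-contained illustration of that scenario, with assumed names (time.Sleep stands in for the exporter loop; the channel mirrors updatesFanIn from the diff above):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Change the buffer to 0 to see the send block for the full "export".
	updatesFanIn := make(chan interface{}, 1)
	done := make(chan struct{})

	// Poller side: forwards one update into the fan-in channel.
	go func() {
		start := time.Now()
		updatesFanIn <- "update-1" // returns immediately with buffer 1
		fmt.Println("send took", time.Since(start))
		close(done)
	}()

	// Consumer side: busy "exporting" before it gets back to the receive.
	time.Sleep(200 * time.Millisecond)
	fmt.Println("received:", <-updatesFanIn)
	<-done
}
```

With a buffer of 1 the sender returns immediately and can loop back to poller.Updates(); with a buffer of 0 the same send is parked for as long as the consumer stays in the exporter loop.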