Remove batcher and related config in favor of sending queue #42767
Conversation
Remove batcher and introduce sending_queue::batch as the default
carsonip
left a comment
Did a first pass. Thanks for taking on this big task!
func createDefaultConfig() component.Config {
	// TODO(lahsivjar): This is deviating from the original defaults:
++. 2 concerns here:
- block_on_overflow: I incline to have it as true so that the default es exporter behavior is not too breaking, although both true and false come with different problems. You may easily convince me the other way around.
- wait_for_result: I believe the default is true and that is desired. But it is not our previous default and this behavior change should be clearly stated in the changelog.
I believe the default is true and that is desired
wait_for_result defaults to false, not true - this is close to what we currently have.
block_on_overflow: I incline to have it as true so that the default es exporter behavior is not too breaking although both true and false come with different problems. You may easily convince me the other way around.
I am in favor of true for block_on_overflow - I believe it is a better default to start with but it will deviate from the default queue configurations.
wait_for_result defaults to false, not true
Is that the desired end state of the defaults though? Or is it an intermediate state to approximate the previous async behavior? This will have a significant impact on latency and throughput of small serial requests
Is that the desired end state of the defaults though?
Yes, I think wait_for_result is a specialized config and we should not turn it on by default.
This will have a significant impact on latency and throughput of small serial requests
Agreed, wait_for_result set to true will have a significant impact on latency and throughput. That is why I have kept it to false and that is also the reason why I think false is a better default for this configuration option.
@vigneshshanmugam What do you think about the block_on_overflow config? I am slightly inclined towards keeping block_on_overflow set to true as the recommended ES exporter default.
I have updated the code to set block_on_overflow to true as the default, based on the discussion so far. Happy to discuss it further if reviewers feel this is not a good default.
++ to keeping it as true as it's a better default.
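To summarize where this thread landed, here is a minimal sketch of the agreed defaults. The struct below is a hypothetical stand-in for the exporterhelper queue configuration, not the actual type; only the two debated keys (plus enabled) are shown.

package sketch

// sendingQueueDefaults is a simplified stand-in for the exporter's
// sending_queue configuration; the real code uses the exporterhelper types.
type sendingQueueDefaults struct {
	Enabled         bool `mapstructure:"enabled"`
	BlockOnOverflow bool `mapstructure:"block_on_overflow"`
	WaitForResult   bool `mapstructure:"wait_for_result"`
}

// defaultSendingQueue captures the outcome of the discussion above.
func defaultSendingQueue() sendingQueueDefaults {
	return sendingQueueDefaults{
		Enabled: true,
		// block_on_overflow stays true so a full queue applies back pressure
		// instead of rejecting data, keeping the default closer to the
		// previous batcher behaviour.
		BlockOnOverflow: true,
		// wait_for_result stays false: enabling it would make small serial
		// requests wait for bulk indexing to complete, hurting latency and
		// throughput.
		WaitForResult: false,
	}
}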
carsonip
left a comment
the changes look good. Would love @axw 's eyes as well.
Co-authored-by: Carson Ip <[email protected]>
carsonip
left a comment
thanks for covering the max flush bytes case.
nit: there's an outdated comment on async session if you grep for "async"
// The behavior of Flush depends on whether the bulk indexer is
// synchronous or asynchronous. Calling Flush on an asynchronous bulk
// indexer session is effectively a no-op; flushing will be done in
// the background. Calling Flush on a synchronous bulk indexer session
// will wait for bulk indexing of added documents to complete,
// successfully or not.
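Since this PR removes asyncBulkIndexer, that comment could be trimmed to describe only the synchronous path. A possible rewording (a suggestion only, not the wording currently in the PR):

// Flush waits for bulk indexing of added documents to complete,
// successfully or not. With the asynchronous bulk indexer removed,
// all bulk indexer sessions are synchronous.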
vigneshshanmugam
left a comment
Batching defaults look great, thanks @lahsivjar. LGTM
axw
left a comment
Thanks @lahsivjar!
Please address the conflict.
@atoulme Conflicts are addressed.
Description
Removes the deprecated batcher configuration and makes sending_queue the default. To adhere to the previous behaviour, the sending queue defaults are kept as close to the current defaults as possible. The PR also drops asyncBulkIndexer and related code; a future PR will follow to refactor the code around bulk indexer sessions. (A rough before/after configuration sketch follows at the end of this description.)
Link to tracking issue
Part of #42718
Testing
Updated
Documentation
Updated
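As mentioned in the description, here is a rough before/after configuration sketch for users migrating away from the removed batcher block, embedded as Go string constants. The old batcher keys and the new sending_queue::batch keys are a best-effort mapping based on this PR's title and discussion, not an authoritative list.

package sketch

// Illustrative before/after exporter configuration for this change.
const (
	// Before: batching configured via the deprecated batcher block
	// (key names are assumed, shown only for illustration).
	oldConfigYAML = `
elasticsearch:
  endpoint: https://localhost:9200
  batcher:
    enabled: true
    flush_timeout: 10s
`

	// After: batching configured under sending_queue::batch, with the
	// block_on_overflow default discussed above.
	newConfigYAML = `
elasticsearch:
  endpoint: https://localhost:9200
  sending_queue:
    enabled: true
    block_on_overflow: true
    batch:
      flush_timeout: 10s
`
)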