Optimize DialogueFeignClient small response reader #3114
base: develop
Avoid InputStreamReader / HeapByteBuffer overhead for small (less than 8KiB) inputs, see FasterXML/jackson-core#1081
Force-pushed from e336971 to 2d19331
Pull Request Overview
This PR optimizes the DialogueFeignClient by avoiding InputStreamReader and HeapByteBuffer overhead for small responses (less than 8KiB). The optimization reads the entire response into memory as a string for small payloads, which can be more efficient than stream-based reading for small data.
Key changes:
- Added an optimization for small response bodies in the asReader() method
- Updated URL decoding to use StandardCharsets.UTF_8 directly instead of a string literal
- Removed the unused UnsupportedEncodingException import
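The URL-decoding change relies on the Charset overload of URLDecoder.decode added in Java 10, which removes the checked UnsupportedEncodingException forced by the String-based overload. A minimal standalone illustration (not the project's actual code):

```java
import java.net.URLDecoder;
import java.nio.charset.StandardCharsets;

public class DecodeExample {
    public static void main(String[] args) {
        // Pre-Java-10 style: passing a charset name as a String forces callers
        // to handle UnsupportedEncodingException even though UTF-8 always exists:
        //   String decoded = URLDecoder.decode("a%20b", "UTF-8"); // throws checked exception
        //
        // Java 10+ overload: passing the Charset constant avoids the checked
        // exception and the charset-name lookup.
        String decoded = URLDecoder.decode("a%20b", StandardCharsets.UTF_8);
        System.out.println(decoded); // prints "a b"
    }
}
```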
Before this PR
Small responses (<= 8KiB) would always allocate an 8KiB ByteBuffer: asInputStreamReader creates a StreamDecoder that allocates a fixed 8192-byte ByteBuffer. This allocation becomes a scalability bottleneck for high-throughput RPCs with small responses (think endpoints returning timestamps, locks, authorization results, etc.). See https://github.com/openjdk/jdk/blob/4c03e5938df0a9cb10c2379af81163795dd3a086/src/java.base/share/classes/sun/nio/cs/StreamDecoder.java#L248
After this PR
==COMMIT_MSG==
Avoid InputStreamReader / HeapByteBuffer overhead for small (less than 8KiB) inputs. See FasterXML/jackson-core#1081 and FasterXML/jackson-benchmarks#9 (comment) for benchmarks showing between 2x and 10x speedup when deserializing small values.
==COMMIT_MSG==
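The approach described above can be sketched roughly as follows. Names and the exact threshold check are illustrative, not the actual DialogueFeignClient implementation: when the body is known to be small, read it fully and decode once into a String, so the returned Reader never goes through InputStreamReader's StreamDecoder and its fixed 8 KiB buffer.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.Reader;
import java.io.StringReader;
import java.nio.charset.StandardCharsets;

public class SmallResponseReader {
    private static final int SMALL_RESPONSE_LIMIT = 8192;

    // Sketch: for a known, small Content-Length, read all bytes up front and
    // decode them in one shot; a StringReader over the result allocates no
    // per-read decode buffer. Larger or unknown-length bodies keep streaming.
    static Reader asReader(InputStream body, long contentLength) throws IOException {
        if (contentLength >= 0 && contentLength < SMALL_RESPONSE_LIMIT) {
            byte[] bytes = body.readAllBytes();
            return new StringReader(new String(bytes, StandardCharsets.UTF_8));
        }
        return new InputStreamReader(body, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws IOException {
        byte[] payload = "{\"ok\":true}".getBytes(StandardCharsets.UTF_8);
        Reader reader = asReader(new ByteArrayInputStream(payload), payload.length);
        char[] buf = new char[payload.length];
        int read = reader.read(buf);
        System.out.println(new String(buf, 0, read)); // prints {"ok":true}
    }
}
```

The trade-off is that small bodies are buffered entirely in memory before decoding, which is exactly why the fast path is gated on a size threshold.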
Possible downsides?