
Optimize message encoding buffer reuse #5

Merged
bravo1goingdark merged 1 commit into main from
introduce-pooled-allocation-strategy-for-messages
Oct 31, 2025

Conversation

@bravo1goingdark (Owner) commented Oct 31, 2025

Summary

  • reuse a thread-local FlatBufferBuilder and BytesMut so message encoding no longer allocates new buffers for every payload
  • adjust the encoder helpers to return Bytes views while updating frame writers to reuse the pooled buffers
  • extend the production benchmark with an allocation-counting encoder test and add the required dev dependency

Testing

  • cargo check (fails: `inherits` must not be specified in root profile `release`)

@bravo1goingdark self-assigned this Oct 31, 2025
@bravo1goingdark added the bug (Something isn't working), enhancement (New feature or request), core (core logic for the blipmq), and performance (Related to optimization or improving latency/throughput) labels Oct 31, 2025
@bravo1goingdark added this to the v1.0.0 milestone Oct 31, 2025
@bravo1goingdark merged commit 307e38e into main Oct 31, 2025
1 check failed

@chatgpt-codex-connector (bot) left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.


Comment on lines +34 to +35
#[global_allocator]
static GLOBAL_ALLOCATOR: CountingAllocator = CountingAllocator;


P1: Avoid defining a second global allocator in benchmarks

The new allocation-counting benchmark unconditionally declares #[global_allocator] static GLOBAL_ALLOCATOR: CountingAllocator = CountingAllocator;. The library already exposes a #[global_allocator] when the mimalloc feature (and thus the ultra/production feature sets) is enabled in src/lib.rs. Running cargo bench --features mimalloc will now fail to compile with “the #[global_allocator] has already been defined” because both the library and the benchmark provide one. Consider guarding the counting allocator behind a cfg that disables it when mimalloc (or any feature that sets a global allocator) is active, or reusing the existing allocator via a wrapper instead of redefining it.


