Conversation

@TheSafo TheSafo commented Dec 30, 2025

What does this PR do?

When constructing a block cache, the cache pre-allocates a buffer of the maximum block size for each block. Payload generators then serialize bytes into that buffer. Previously, the buffer was never resized, so it held onto its excess capacity; a cache with many small blocks therefore consumed (# blocks * max block size) bytes of memory instead of the configured cache size.

This change resizes any block that uses under half of its pre-allocated space. This bounds a cache's buffer memory at no more than 2x the maximum configured cache size.
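A minimal sketch of the resize heuristic, using `Vec<u8>` in place of the project's actual `BytesMut` buffers (the function name `maybe_shrink` and the threshold constant are illustrative, not from the PR):

```rust
/// Hypothetical helper: shrink a block buffer if the serialized payload
/// uses under half of the pre-allocated capacity. Because a buffer is
/// kept only when its payload fills at least half of it, total retained
/// capacity stays within roughly 2x the configured cache size.
fn maybe_shrink(buf: &mut Vec<u8>, max_block_size: usize) {
    if buf.len() < max_block_size / 2 {
        // Drop the excess capacity; the allocator reclaims the slack.
        buf.shrink_to_fit();
    }
}

fn main() {
    let max_block_size = 1024;
    // Pre-allocate at max block size, as the cache does on construction.
    let mut buf: Vec<u8> = Vec::with_capacity(max_block_size);
    // Serialize a small payload (100 bytes) into the buffer.
    buf.extend_from_slice(&[0u8; 100]);
    assert!(buf.capacity() >= max_block_size); // excess capacity retained
    maybe_shrink(&mut buf, max_block_size);
    assert!(buf.capacity() < max_block_size); // excess capacity released
    println!("len = {}, capacity = {}", buf.len(), buf.capacity());
}
```

The same check applied to every block during cache construction would trade a little extra up-front work for tighter memory bounds, which is the alternative discussed below.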

Motivation

Significant memory usage from a payload generator of mine that produces many small blocks.

Additional notes

We could probably resize every block, since this is only done at lading creation anyway - wdyt? The 2x threshold was an arbitrary choice here.

TheSafo commented Dec 30, 2025

This stack of pull requests is managed by Graphite.

@TheSafo TheSafo changed the title resize blocks Resize blocks Dec 30, 2025
@TheSafo TheSafo changed the title Resize blocks Resize block buffer after allocation if significant wasted space Dec 30, 2025
@TheSafo TheSafo marked this pull request as ready for review December 30, 2025 15:31
@TheSafo TheSafo requested a review from a team as a code owner December 30, 2025 15:31
@GeorgeHahn GeorgeHahn left a comment

This looks good to me. Thanks for investigating - this was an interesting finding.

@GeorgeHahn
> We could probably resize every block since this is only done on lading creation anyways - wdyt? 2x was an arbitrary decision here.

I went back and forth on this, but I do like the idea of resizing all blocks. Some extra work up front to reduce memory usage at runtime is probably the right tradeoff for lading, as it runs in SMP jobs. That said, I'm not too sad about the 50% worst-case inefficiency either; that's about the same as we could expect from allocating the BytesMut on the fly. Your call.
