Precompute the BitsetCacheKey hashCode, and account for the size of the key #132418
This actually matters: while the hashCode for a BooleanQuery is already cached, it's not eagerly computed. Precomputing the hashCode moves the initial calculation outside of any code path where we're holding a lock.
An entry in the cache can be small, but it's not fair to consider it to be *zero* (in extremely bad luck scenarios, we can end up with a lot of NULL_MARKER entries, so let's not treat them as *free*).
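As a sketch of the eager-hashCode idea (the class and field names below are hypothetical stand-ins, not the real BitsetCacheKey), the key computes its hash once in the constructor, so every later hashCode() call, including the ones made under a cache lock, is just a field read:

```java
import java.util.Objects;

// Hypothetical stand-in for a composite cache key; the real BitsetCacheKey
// wraps an index-scoped cache helper and a Lucene Query.
final class EagerHashKey {
    private final Object index;
    private final Object query;
    private final int cachedHashCode; // computed once, outside any cache lock

    EagerHashKey(Object index, Object query) {
        this.index = index;
        this.query = query;
        // Eager computation: segment selection, get, and remove all call
        // hashCode(), but only this line does the actual work.
        this.cachedHashCode = Objects.hash(index, query);
    }

    @Override
    public int hashCode() {
        return cachedHashCode;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (o instanceof EagerHashKey == false) return false;
        EagerHashKey other = (EagerHashKey) o;
        return index.equals(other.index) && query.equals(other.query);
    }
}
```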
Pinging @elastic/es-security (Team:Security)
Hi @joegallo, I've created a changelog YAML for you.
```diff
 .setExpireAfterAccess(ttl)
 .setMaximumWeight(maxWeightBytes)
-.weigher((key, bitSet) -> bitSet == NULL_MARKER ? 0 : bitSet.ramBytesUsed())
+.weigher((key, bitSet) -> BitsetCacheKey.SHALLOW_SIZE + (bitSet == NULL_MARKER ? 0 : bitSet.ramBytesUsed()))
```
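The capacity effect can be reasoned about with simple arithmetic. A rough sketch (the 24-byte key size here is an illustrative assumption for this example, not a measurement of the real BitsetCacheKey):

```java
// Rough capacity arithmetic for a weight-bounded cache. The 24-byte key
// size is an assumed value for illustration only.
public final class WeigherMath {
    static final long KEY_SHALLOW_SIZE = 24; // assumed per-key bytes

    // Mirrors the new weigher's shape: key overhead plus bitset bytes
    // (a NULL_MARKER entry contributes only the key overhead).
    static long entryWeight(long bitsetBytes, boolean isNullMarker) {
        return KEY_SHALLOW_SIZE + (isNullMarker ? 0 : bitsetBytes);
    }

    public static void main(String[] args) {
        long maxWeightBytes = 1024;
        // Before the change, NULL_MARKER entries weighed 0, so any number
        // of them fit under maximumWeight. Afterwards the same budget
        // admits at most maxWeightBytes / keySize of them.
        System.out.println(maxWeightBytes / entryWeight(0, true)); // prints 42
    }
}
```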
I get the intention, but this concerns me a little. It means that the cache will now hold fewer items than it did before (for the same configuration). That's potentially going to mean someone's node ends up caching less than it used to, even though nothing has really changed.
Have we done any analysis on what it means for the overall cache size?
I'm going to close this one for now -- I'll split out the cached hashCode component into its own PR and revisit the issue of the zero size of `NULL_MARKER` entries separately.
If you trace through `org.elasticsearch.common.cache.Cache`, you can see that it uses the `hashCode` of the key object pretty extensively (see some of the discussion on #96050). For example, in a `cache.evictEntry(entry)` call, we first call `hashCode` to get the right segment, then within the segment we call `hashCode` to do a `get`, and finally we call `hashCode` in the `remove`. So that's three `hashCode` calls to remove an entry, and the latter two are done while holding the `writeLock`. This isn't the be-all-end-all of performance, but it's a tidy little speedup, and it's easy to write.

In the same area, there's a minor bug around the handling of the `NULL_MARKER` in the bitset cache. While the `NULL_MARKER` bitset is shared, and so it's fair to count the size of that object as zero, the entries in the cache still take up space because of the size of the keys. Since we're O(segments * queries), many entries in the cache could be pointing at the `NULL_MARKER`, so let's not consider them to be completely free in terms of memory consumption. (Note: I didn't bother to update the logic around logging a warning if a bitset is created that won't fit in the cache -- so technically we could be off by 24 bytes there. I can change that if you'd prefer we remain technically correct.)

As with #132416, I've added @tvernum as a reviewer just as an FYI to him that this PR exists; I don't actually need his +1 specifically (anybody on the @elastic/es-security team is fine by me).
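To make the three-hashCode-calls point concrete, here is a stripped-down sketch of a segmented cache remove. This is a hypothetical structure loosely modeled on `org.elasticsearch.common.cache.Cache`, not the real implementation: segment selection, the inner get, and the inner remove each consult the key's hashCode, and the latter two happen while the segment's write lock is held.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Minimal sketch of a segmented cache; illustrative only.
final class SegmentedCache<K, V> {
    private static final int SEGMENT_COUNT = 16;

    @SuppressWarnings("unchecked")
    private final Segment<K, V>[] segments = new Segment[SEGMENT_COUNT];

    SegmentedCache() {
        for (int i = 0; i < SEGMENT_COUNT; i++) {
            segments[i] = new Segment<>();
        }
    }

    void put(K key, V value) {
        Segment<K, V> segment = segmentFor(key); // hashCode call #1
        segment.lock.writeLock().lock();
        try {
            segment.map.put(key, value);
        } finally {
            segment.lock.writeLock().unlock();
        }
    }

    V remove(K key) {
        Segment<K, V> segment = segmentFor(key); // hashCode call #1
        segment.lock.writeLock().lock();
        try {
            // hashCode calls #2 and #3 happen inside the map while the
            // write lock is held; a slow hashCode stalls other writers.
            V existing = segment.map.get(key);   // hashCode call #2
            if (existing != null) {
                segment.map.remove(key);         // hashCode call #3
            }
            return existing;
        } finally {
            segment.lock.writeLock().unlock();
        }
    }

    private Segment<K, V> segmentFor(K key) {
        return segments[key.hashCode() & (SEGMENT_COUNT - 1)];
    }

    private static final class Segment<K, V> {
        final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
        final Map<K, V> map = new HashMap<>();
    }
}
```

With a key whose hashCode is precomputed, all three calls become cheap field reads, which is exactly the win inside the locked region.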