Track bytes used by in-memory postings #129969
Conversation
Mostly copied from Nhat's implementation in elastic#121476
Pinging @elastic/es-storage-engine (Team:StorageEngine)
Looks good, Jordan. I do wonder a little bit about the potential overhead of TrackingPostingsInMemoryBytesCodec. Maybe check this quickly with esbench?
```diff
  iwc.setSimilarity(engineConfig.getSimilarity());
  iwc.setRAMBufferSizeMB(engineConfig.getIndexingBufferSize().getMbFrac());
- iwc.setCodec(engineConfig.getCodec());
+ iwc.setCodec(new TrackingPostingsInMemoryBytesCodec(engineConfig.getCodec()));
```
I wonder what the overhead is of always wrapping the codec in TrackingPostingsInMemoryBytesCodec. Maybe let's quickly run a benchmark? (elastic/logs?)
Additionally, I wonder whether this should be done for stateless only.
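For context on what such a wrapper involves, here is a minimal sketch of a codec that delegates everything except the postings format. The class name and the forwarding-only body are assumptions for illustration, not the actual PR code:

```java
import java.io.IOException;

import org.apache.lucene.codecs.Codec;
import org.apache.lucene.codecs.FieldsConsumer;
import org.apache.lucene.codecs.FieldsProducer;
import org.apache.lucene.codecs.FilterCodec;
import org.apache.lucene.codecs.PostingsFormat;
import org.apache.lucene.index.SegmentReadState;
import org.apache.lucene.index.SegmentWriteState;

// Sketch: wrap an existing codec, intercepting only the postings format so
// term bytes can be observed at flush/merge time.
final class TrackingCodecSketch extends FilterCodec {
    TrackingCodecSketch(Codec delegate) {
        // Reusing the delegate's name keeps segments readable later through
        // the delegate codec resolved via SPI.
        super(delegate.getName(), delegate);
    }

    @Override
    public PostingsFormat postingsFormat() {
        final PostingsFormat in = delegate.postingsFormat();
        return new PostingsFormat(in.getName()) {
            @Override
            public FieldsConsumer fieldsConsumer(SegmentWriteState state) throws IOException {
                // A tracking FieldsConsumer wrapper would be returned here;
                // this sketch just forwards to the delegate.
                return in.fieldsConsumer(state);
            }

            @Override
            public FieldsProducer fieldsProducer(SegmentReadState state) throws IOException {
                return in.fieldsProducer(state);
            }
        };
    }
}
```

Since all other formats forward directly to the delegate, the wrapping cost is confined to the postings write path, which is what the benchmark question above is probing.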
```java
Terms terms = super.terms(field);
if (terms == null) {
    return terms;
}
int fieldNum = fieldInfos.fieldInfo(field).number;
return new TrackingLengthTerms(terms, len -> maxLengths.put(fieldNum, Math.max(maxLengths.getOrDefault(fieldNum, 0), len)));
```
I wonder whether we can do this instead:
```diff
- Terms terms = super.terms(field);
- if (terms == null) {
-     return terms;
- }
- int fieldNum = fieldInfos.fieldInfo(field).number;
- return new TrackingLengthTerms(terms, len -> maxLengths.put(fieldNum, Math.max(maxLengths.getOrDefault(fieldNum, 0), len)));
+ Terms terms = super.terms(field);
+ // Only org.apache.lucene.codecs.lucene90.blocktree.FieldReader keeps min and max term in jvm heap,
+ // so only account for these cases:
+ if (terms instanceof FieldReader fieldReader) {
+     int fieldNum = fieldInfos.fieldInfo(field).number;
+     int length = fieldReader.getMin().length;
+     length += fieldReader.getMax().length;
+     maxLengths.put(fieldNum, length);
+ }
+ return terms;
```
This way there is far less wrapping. We only care about the min and max term, given that these are what is loaded in JVM heap.
Scratch that idea. The implementation provided here is different: this gets invoked during indexing / merging. During indexing, the Terms implementation is FreqProxTermsWriterPerField, and invoking getMax() on it is potentially expensive, as it causes reading ahead to figure out which term is the max; these terms later get read again via the terms enum.
```java
public BytesRef next() throws IOException {
    final BytesRef term = super.next();
    if (term != null) {
        maxTermLength = Math.max(maxTermLength, term.length);
    } else {
        onFinish.accept(maxTermLength);
    }
    return term;
}
```
Given that we need to estimate the terms that get loaded into JVM heap, would the following be more accurate?
```diff
- public BytesRef next() throws IOException {
-     final BytesRef term = super.next();
-     if (term != null) {
-         maxTermLength = Math.max(maxTermLength, term.length);
-     } else {
-         onFinish.accept(maxTermLength);
-     }
-     return term;
- }
+ int prevTermLength = 0;
+
+ @Override
+ public BytesRef next() throws IOException {
+     final BytesRef term = super.next();
+     if (term == null) {
+         maxTermLength += prevTermLength;
+         onFinish.accept(maxTermLength);
+         return term;
+     }
+     if (maxTermLength == 0) {
+         maxTermLength = term.length;
+     }
+     prevTermLength = term.length;
+     return term;
+ }
```
In the org.apache.lucene.codecs.lucene90.blocktree.FieldReader class, the lexicographically lowest and highest terms are kept in JVM heap. The current code just keeps track of the longest term and reports that, which doesn't map to the minTerm and maxTerm in FieldReader? (A TermsEnum returns terms in sorted order, so the first and last terms it yields are exactly those two.)
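To make that accounting concrete, here is a standalone toy trace of the suggested logic (the terms and lengths are made up for illustration): recording the first term's length plus the last term's length gives the combined footprint of FieldReader's minTerm and maxTerm, rather than the single longest term.

```java
// Toy trace of the suggested min+max accounting over a sorted terms stream.
public class MinMaxTermBytesDemo {
    public static void main(String[] args) {
        String[] sortedTerms = { "apple", "kiwi", "zz" }; // TermsEnum order
        int minPlusMax = 0;      // ends up as len(minTerm) + len(maxTerm)
        int prevTermLength = 0;  // length of the most recent term seen
        for (String term : sortedTerms) {
            if (minPlusMax == 0) {
                minPlusMax = term.length(); // first term is the min term
            }
            prevTermLength = term.length();
        }
        minPlusMax += prevTermLength;       // last term is the max term
        System.out.println(minPlusMax);     // 5 + 2 = 7, not max length 5
    }
}
```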
To measure the performance impact, I ran the elastic/logs indexing throughput benchmark against the latest main with both standard-mode and logsdb-mode indices; here are the results.
[chart: standard-mode indexing throughput]
[chart: logsdb-mode indexing throughput]
In both benchmarks, indexing throughput was within 2%, which looks like benchmark noise.
One comment, but LGTM. Thanks @jordan-powers
```java
static final class TrackingLengthFieldsConsumer extends FieldsConsumer {
    final SegmentWriteState state;
    final FieldsConsumer in;
    final IntIntHashMap termsBytesPerField;
```
It seems that we can eliminate this map?
Right, this can be turned into a regular variable in the write method.
If I understand correctly, the suggestion is to replace this map with a single long that is incremented by the TrackingLengthFields in the write method.
However, sometimes TrackingLengthFields#terms() is called twice for the same field (usually the _id field). If we replace this map with a single value that is incremented every time the terms are iterated, then we double-count the bytes for that field.
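A toy illustration of that concern (the field numbers and byte counts are made up, and a plain HashMap stands in for the PR's IntIntHashMap): a second pass over the same field overwrites its map entry, but would inflate a single accumulator.

```java
import java.util.HashMap;
import java.util.Map;

public class DoubleCountDemo {
    public static void main(String[] args) {
        int[] passes = { 0, 0, 1 };   // terms() runs twice for field 0 (_id)
        int bytesPerPass = 16;        // term bytes observed on each pass

        long accumulator = 0;
        Map<Integer, Integer> perField = new HashMap<>();
        for (int fieldNum : passes) {
            accumulator += bytesPerPass;          // counts field 0 twice
            perField.put(fieldNum, bytesPerPass); // idempotent per field
        }

        long fromMap = perField.values().stream().mapToLong(Integer::longValue).sum();
        System.out.println(accumulator + " vs " + fromMap); // 48 vs 32
    }
}
```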
Let's keep the map.
LGTM 👍
This patch adds a field postingsInMemoryBytes to the ShardFieldStats record which tracks the memory usage of the min and max posting, which are stored in-memory by the postings FieldReader. This postingsInMemoryBytes value is then used by the serverless autoscaler to better estimate memory requirements. Most of this was already done by @dnhatn in elastic#121476, but was never merged.
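As a rough sketch of where the stat lands (only the new field is grounded in the description above; the real ShardFieldStats record carries additional per-shard statistics):

```java
// Hypothetical shape of the stats record; everything besides the new field
// is a placeholder, not the actual Elasticsearch definition.
record ShardFieldStats(
    long postingsInMemoryBytes // heap bytes of min/max terms kept by the postings FieldReader
    /* ...other per-shard field statistics... */
) {}
```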
Drive-by comment: have we considered porting some of this back to Lucene, if it makes sense? I mean, I understand we need a codec to overcome some technical difficulties in getting the info we need; is that something that could be adjusted in Lucene, that its users would benefit from?
Follow-up to #129969 to remove the feature flag.
This patch adds a field totalPostingBytes to the ShardFields record that tracks the memory usage of the largest term, which may be stored in-memory by the postings FieldReader. Most of this was already done by @dnhatn in #121476, but was never merged.