
Conversation

@jordan-powers (Contributor):

This patch adds a field postingsInMemoryBytes to the ShardFieldStats record that tracks the memory usage of the min and max terms, which may be stored in-memory by the postings FieldReader.

Most of this was already done by @dnhatn in #121476, but was never merged.

@elasticsearchmachine (Collaborator):

Pinging @elastic/es-storage-engine (Team:StorageEngine)

@martijnvg (Member) left a comment:

Looks good, Jordan. I do wonder a little bit about the potential overhead of TrackingPostingsInMemoryBytesCodec. Maybe check this quickly with esbench?

```diff
  iwc.setSimilarity(engineConfig.getSimilarity());
  iwc.setRAMBufferSizeMB(engineConfig.getIndexingBufferSize().getMbFrac());
- iwc.setCodec(engineConfig.getCodec());
+ iwc.setCodec(new TrackingPostingsInMemoryBytesCodec(engineConfig.getCodec()));
```
Member:

I wonder what the overhead is of always wrapping the codec in TrackingPostingsInMemoryBytesCodec. Maybe let's quickly run a benchmark? (elastic/logs?)

Additionally, I wonder whether this should be done for stateless only.

Comment on lines 96 to 101
```java
Terms terms = super.terms(field);
if (terms == null) {
    return terms;
}
int fieldNum = fieldInfos.fieldInfo(field).number;
return new TrackingLengthTerms(terms, len -> maxLengths.put(fieldNum, Math.max(maxLengths.getOrDefault(fieldNum, 0), len)));
```
Member:

I wonder whether we can do this instead:

Suggested change:

```diff
- Terms terms = super.terms(field);
- if (terms == null) {
-     return terms;
- }
- int fieldNum = fieldInfos.fieldInfo(field).number;
- return new TrackingLengthTerms(terms, len -> maxLengths.put(fieldNum, Math.max(maxLengths.getOrDefault(fieldNum, 0), len)));
+ Terms terms = super.terms(field);
+ // Only org.apache.lucene.codecs.lucene90.blocktree.FieldReader keeps min and max term in jvm heap,
+ // so only account for these cases:
+ if (terms instanceof FieldReader fieldReader) {
+     int fieldNum = fieldInfos.fieldInfo(field).number;
+     int length = fieldReader.getMin().length;
+     length += fieldReader.getMax().length;
+     maxLengths.put(fieldNum, length);
+ }
+ return terms;
```

This way there is way less wrapping. We only care about min and max term, given that this is loaded in jvm heap.

Member:

Scratch that idea. The implementation provided here is different. This gets invoked during indexing / merging. During indexing, the Terms implementation is FreqProxTermsWriterPerField. Invoking getMax() is potentially expensive because it causes reading ahead to figure out which is the max term; these terms later get read again via the terms enum.

Comment on lines 129 to 137
```java
public BytesRef next() throws IOException {
    final BytesRef term = super.next();
    if (term != null) {
        maxTermLength = Math.max(maxTermLength, term.length);
    } else {
        onFinish.accept(maxTermLength);
    }
    return term;
}
```
Member:

Given that we need to estimate the terms that get loaded in jvm heap, would the following be more accurate?

Suggested change:

```diff
- public BytesRef next() throws IOException {
-     final BytesRef term = super.next();
-     if (term != null) {
-         maxTermLength = Math.max(maxTermLength, term.length);
-     } else {
-         onFinish.accept(maxTermLength);
-     }
-     return term;
- }
+ int prevTermLength = 0;
+
+ @Override
+ public BytesRef next() throws IOException {
+     final BytesRef term = super.next();
+     if (term == null) {
+         maxTermLength += prevTermLength;
+         onFinish.accept(maxTermLength);
+         return term;
+     }
+     if (maxTermLength == 0) {
+         maxTermLength = term.length;
+     }
+     prevTermLength = term.length;
+     return term;
+ }
```

In the org.apache.lucene.codecs.lucene90.blocktree.FieldReader class, the lexicographically lowest and highest terms are kept around in jvm heap. The current code just keeps track of the longest term and reports that, which doesn't map to the minTerm and maxTerm in FieldReader?
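
A tiny standalone sketch of the difference being pointed out (not the PR's code: plain strings stand in for BytesRef, and the term list stands in for a sorted TermsEnum). Since terms are enumerated in sorted order, the first term seen is FieldReader's min and the last is its max, so min.length + max.length can diverge from the single longest term's length:

```java
import java.util.List;

// Hypothetical sketch: compares "longest term" accounting with
// "min term + max term" accounting over a sorted terms list.
public class TermAccountingSketch {
    // What the current code tracks: the single longest term's length.
    static int longestTermBytes(List<String> sortedTerms) {
        int max = 0;
        for (String t : sortedTerms) {
            max = Math.max(max, t.length());
        }
        return max;
    }

    // What FieldReader actually keeps on heap: the min and max terms.
    // In sorted order, the first term is the min and the last is the max.
    static int minPlusMaxTermBytes(List<String> sortedTerms) {
        if (sortedTerms.isEmpty()) {
            return 0;
        }
        return sortedTerms.get(0).length() + sortedTerms.get(sortedTerms.size() - 1).length();
    }

    public static void main(String[] args) {
        List<String> terms = List.of("aa", "middle-very-long-term", "zz");
        System.out.println(longestTermBytes(terms));    // 21: longest term is in the middle
        System.out.println(minPlusMaxTermBytes(terms)); // 4: "aa" + "zz"
    }
}
```

The two measures only coincide when the longest term happens to be the min or max, which is why the suggested change tracks the first and last term lengths instead.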

@jordan-powers (Contributor, PR author):

To measure the performance impact, I ran the elastic/logs indexing throughput benchmark against the latest main with both standard-mode and logsdb-mode indices. Here are the results.

Standard mode:

|                                                        Metric |       Task |        Baseline |       Contender |         Diff |   Unit |   Diff % |
|--------------------------------------------------------------:|-----------:|----------------:|----------------:|-------------:|-------:|---------:|
|                    Cumulative indexing time of primary shards |            |   974.479       |   972.405       |     -2.07363 |    min |   -0.21% |
|             Min cumulative indexing time across primary shard |            |     3.26407     |     2.92043     |     -0.34363 |    min |  -10.53% |
|          Median cumulative indexing time across primary shard |            |    12.8124      |    12.9958      |      0.18338 |    min |   +1.43% |
|             Max cumulative indexing time across primary shard |            |   116.301       |   106.542       |     -9.75917 |    min |   -8.39% |
|           Cumulative indexing throttle time of primary shards |            |     0           |     0           |      0       |    min |    0.00% |
|    Min cumulative indexing throttle time across primary shard |            |     0           |     0           |      0       |    min |    0.00% |
| Median cumulative indexing throttle time across primary shard |            |     0           |     0           |      0       |    min |    0.00% |
|    Max cumulative indexing throttle time across primary shard |            |     0           |     0           |      0       |    min |    0.00% |
|                       Cumulative merge time of primary shards |            |   744.255       |   697.394       |    -46.8616  |    min |   -6.30% |
|                      Cumulative merge count of primary shards |            |   445           |   450           |      5       |        |   +1.12% |
|                Min cumulative merge time across primary shard |            |     0.438767    |     0.81735     |      0.37858 |    min |  +86.28% |
|             Median cumulative merge time across primary shard |            |     3.86583     |     4.32655     |      0.46072 |    min |  +11.92% |
|                Max cumulative merge time across primary shard |            |   143.798       |    99.8091      |    -43.9885  |    min |  -30.59% |
|              Cumulative merge throttle time of primary shards |            |   307.301       |   306.526       |     -0.77447 |    min |   -0.25% |
|       Min cumulative merge throttle time across primary shard |            |     0.17465     |     0.422367    |      0.24772 |    min | +141.84% |
|    Median cumulative merge throttle time across primary shard |            |     1.67092     |     1.91561     |      0.24468 |    min |  +14.64% |
|       Max cumulative merge throttle time across primary shard |            |    57.4162      |    41.5969      |    -15.8193  |    min |  -27.55% |
|                     Cumulative refresh time of primary shards |            |     5.5474      |     5.90067     |      0.35327 |    min |   +6.37% |
|                    Cumulative refresh count of primary shards |            |  6687           |  6622           |    -65       |        |   -0.97% |
|              Min cumulative refresh time across primary shard |            |     0.00693333  |     0.00733333  |      0.0004  |    min |   +5.77% |
|           Median cumulative refresh time across primary shard |            |     0.0431583   |     0.0485667   |      0.00541 |    min |  +12.53% |
|              Max cumulative refresh time across primary shard |            |     0.885467    |     0.796633    |     -0.08883 |    min |  -10.03% |
|                       Cumulative flush time of primary shards |            |   108.968       |   110.393       |      1.42463 |    min |   +1.31% |
|                      Cumulative flush count of primary shards |            |  6185           |  6094           |    -91       |        |   -1.47% |
|                Min cumulative flush time across primary shard |            |     0.489633    |     0.425717    |     -0.06392 |    min |  -13.05% |
|             Median cumulative flush time across primary shard |            |     1.38215     |     1.42537     |      0.04322 |    min |   +3.13% |
|                Max cumulative flush time across primary shard |            |    12.3797      |    11.083       |     -1.2967  |    min |  -10.47% |
|                                       Total Young Gen GC time |            |   277.622       |   288.329       |     10.707   |      s |   +3.86% |
|                                      Total Young Gen GC count |            |  7646           |  7628           |    -18       |        |   -0.24% |
|                                         Total Old Gen GC time |            |     0           |     0           |      0       |      s |    0.00% |
|                                        Total Old Gen GC count |            |     0           |     0           |      0       |        |    0.00% |
|                                                  Dataset size |            |   234.579       |   240.773       |      6.19464 |     GB |   +2.64% |
|                                                    Store size |            |   234.579       |   240.773       |      6.19464 |     GB |   +2.64% |
|                                                 Translog size |            |     3.39044     |     3.08131     |     -0.30913 |     GB |   -9.12% |
|                                        Heap used for segments |            |     0           |     0           |      0       |     MB |    0.00% |
|                                      Heap used for doc values |            |     0           |     0           |      0       |     MB |    0.00% |
|                                           Heap used for terms |            |     0           |     0           |      0       |     MB |    0.00% |
|                                           Heap used for norms |            |     0           |     0           |      0       |     MB |    0.00% |
|                                          Heap used for points |            |     0           |     0           |      0       |     MB |    0.00% |
|                                   Heap used for stored fields |            |     0           |     0           |      0       |     MB |    0.00% |
|                                                 Segment count |            |  1793           |  1624           |   -169       |        |   -9.43% |
|                                   Total Ingest Pipeline count |            |     4.88622e+08 |     4.88622e+08 |      0       |        |    0.00% |
|                                    Total Ingest Pipeline time |            |     3.85937e+07 |     3.93352e+07 | 741501       |     ms |   +1.92% |
|                                  Total Ingest Pipeline failed |            |     0           |     0           |      0       |        |    0.00% |
|                                                Min Throughput | bulk-index |  1074.01        |  1027.21        |    -46.8076  | docs/s |   -4.36% |
|                                               Mean Throughput | bulk-index | 33959.9         | 34119.4         |    159.485   | docs/s |   +0.47% |
|                                             Median Throughput | bulk-index | 33799.4         | 34140.5         |    341.09    | docs/s |   +1.01% |
|                                                Max Throughput | bulk-index | 37835.9         | 37203           |   -632.863   | docs/s |   -1.67% |
|                                       50th percentile latency | bulk-index |  1530.4         |  1535.73        |      5.33643 |     ms |   +0.35% |
|                                       90th percentile latency | bulk-index |  2593.55        |  2600.85        |      7.30061 |     ms |   +0.28% |
|                                       99th percentile latency | bulk-index |  3958.64        |  4005.74        |     47.1013  |     ms |   +1.19% |
|                                     99.9th percentile latency | bulk-index |  5564.52        |  5705.69        |    141.168   |     ms |   +2.54% |
|                                    99.99th percentile latency | bulk-index |  6620.68        |  6957.74        |    337.064   |     ms |   +5.09% |
|                                      100th percentile latency | bulk-index |  7958.79        |  7710.19        |   -248.605   |     ms |   -3.12% |
|                                  50th percentile service time | bulk-index |  1533.1         |  1533.07        |     -0.03075 |     ms |   -0.00% |
|                                  90th percentile service time | bulk-index |  2596.15        |  2600.26        |      4.11416 |     ms |   +0.16% |
|                                  99th percentile service time | bulk-index |  3953.17        |  4008.87        |     55.6964  |     ms |   +1.41% |
|                                99.9th percentile service time | bulk-index |  5568.05        |  5708.71        |    140.65    |     ms |   +2.53% |
|                               99.99th percentile service time | bulk-index |  6621.55        |  6972.16        |    350.609   |     ms |   +5.29% |
|                                 100th percentile service time | bulk-index |  7958.79        |  7710.19        |   -248.605   |     ms |   -3.12% |
|                                                    error rate | bulk-index |     0           |     0           |      0       |      % |    0.00% |

Logsdb mode:

|                                                        Metric |       Task |        Baseline |       Contender |             Diff |   Unit |   Diff % |
|--------------------------------------------------------------:|-----------:|----------------:|----------------:|-----------------:|-------:|---------:|
|                    Cumulative indexing time of primary shards |            |  1036.95        |   943.164       |    -93.7862      |    min |   -9.04% |
|             Min cumulative indexing time across primary shard |            |     3.29632     |     3.29893     |      0.00262     |    min |   +0.08% |
|          Median cumulative indexing time across primary shard |            |    12.9723      |    12.3318      |     -0.64048     |    min |   -4.94% |
|             Max cumulative indexing time across primary shard |            |   191.868       |   165.657       |    -26.2115      |    min |  -13.66% |
|           Cumulative indexing throttle time of primary shards |            |     0           |     0           |      0           |    min |    0.00% |
|    Min cumulative indexing throttle time across primary shard |            |     0           |     0           |      0           |    min |    0.00% |
| Median cumulative indexing throttle time across primary shard |            |     0           |     0           |      0           |    min |    0.00% |
|    Max cumulative indexing throttle time across primary shard |            |     0           |     0           |      0           |    min |    0.00% |
|                       Cumulative merge time of primary shards |            |   302.749       |   326.563       |     23.8138      |    min |   +7.87% |
|                      Cumulative merge count of primary shards |            |   467           |   643           |    176           |        |  +37.69% |
|                Min cumulative merge time across primary shard |            |     0.457717    |     0.4715      |      0.01378     |    min |   +3.01% |
|             Median cumulative merge time across primary shard |            |     2.14187     |     2.58343     |      0.44157     |    min |  +20.62% |
|                Max cumulative merge time across primary shard |            |    68.7774      |    74.7508      |      5.97343     |    min |   +8.69% |
|              Cumulative merge throttle time of primary shards |            |    84.262       |    97.6927      |     13.4307      |    min |  +15.94% |
|       Min cumulative merge throttle time across primary shard |            |     0.0927667   |     0.0876167   |     -0.00515     |    min |   -5.55% |
|    Median cumulative merge throttle time across primary shard |            |     0.496567    |     0.5361      |      0.03953     |    min |   +7.96% |
|       Max cumulative merge throttle time across primary shard |            |    17.8209      |    22.996       |      5.17515     |    min |  +29.04% |
|                     Cumulative refresh time of primary shards |            |    17.1375      |    14.5432      |     -2.59432     |    min |  -15.14% |
|                    Cumulative refresh count of primary shards |            |  6745           |  7192           |    447           |        |   +6.63% |
|              Min cumulative refresh time across primary shard |            |     0.0153333   |     0.0375      |      0.02217     |    min | +144.57% |
|           Median cumulative refresh time across primary shard |            |     0.1435      |     0.1342      |     -0.0093      |    min |   -6.48% |
|              Max cumulative refresh time across primary shard |            |     4.28473     |     3.22698     |     -1.05775     |    min |  -24.69% |
|                       Cumulative flush time of primary shards |            |   190.471       |   177.555       |    -12.9165      |    min |   -6.78% |
|                      Cumulative flush count of primary shards |            |  6314           |  6787           |    473           |        |   +7.49% |
|                Min cumulative flush time across primary shard |            |     0.670333    |     0.73965     |      0.06932     |    min |  +10.34% |
|             Median cumulative flush time across primary shard |            |     2.57422     |     2.50512     |     -0.0691      |    min |   -2.68% |
|                Max cumulative flush time across primary shard |            |    33.0024      |    29.4237      |     -3.57873     |    min |  -10.84% |
|                                       Total Young Gen GC time |            |   318.716       |   236.402       |    -82.314       |      s |  -25.83% |
|                                      Total Young Gen GC count |            |  9239           |  8878           |   -361           |        |   -3.91% |
|                                         Total Old Gen GC time |            |     0           |     0           |      0           |      s |    0.00% |
|                                        Total Old Gen GC count |            |     0           |     0           |      0           |        |    0.00% |
|                                                  Dataset size |            |    66.2624      |    67.5387      |      1.27629     |     GB |   +1.93% |
|                                                    Store size |            |    66.2624      |    67.5387      |      1.27629     |     GB |   +1.93% |
|                                                 Translog size |            |     4.21482     |     3.22785     |     -0.98697     |     GB |  -23.42% |
|                                        Heap used for segments |            |     0           |     0           |      0           |     MB |    0.00% |
|                                      Heap used for doc values |            |     0           |     0           |      0           |     MB |    0.00% |
|                                           Heap used for terms |            |     0           |     0           |      0           |     MB |    0.00% |
|                                           Heap used for norms |            |     0           |     0           |      0           |     MB |    0.00% |
|                                          Heap used for points |            |     0           |     0           |      0           |     MB |    0.00% |
|                                   Heap used for stored fields |            |     0           |     0           |      0           |     MB |    0.00% |
|                                                 Segment count |            |  1398           |  1585           |    187           |        |  +13.38% |
|                                   Total Ingest Pipeline count |            |     4.88622e+08 |     4.8861e+08  | -12000           |        |   -0.00% |
|                                    Total Ingest Pipeline time |            |     3.84848e+07 |     3.69325e+07 |     -1.55229e+06 |     ms |   -4.03% |
|                                  Total Ingest Pipeline failed |            |     0           |     0           |      0           |        |    0.00% |
|                                                Min Throughput | bulk-index |   957.789       |   603.272       |   -354.517       | docs/s |  -37.01% |
|                                               Mean Throughput | bulk-index | 32697.1         | 32115.7         |   -581.402       | docs/s |   -1.78% |
|                                             Median Throughput | bulk-index | 32754.5         | 32141.2         |   -613.383       | docs/s |   -1.87% |
|                                                Max Throughput | bulk-index | 35378.4         | 34786.9         |   -591.438       | docs/s |   -1.67% |
|                                       50th percentile latency | bulk-index |  1550.93        |   310.76        |  -1240.17        |     ms |  -79.96% |
|                                       90th percentile latency | bulk-index |  2620.51        |   550.011       |  -2070.5         |     ms |  -79.01% |
|                                       99th percentile latency | bulk-index |  4840.9         |   973.462       |  -3867.44        |     ms |  -79.89% |
|                                     99.9th percentile latency | bulk-index |  9556.57        |  5393.45        |  -4163.11        |     ms |  -43.56% |
|                                    99.99th percentile latency | bulk-index | 11747.1         |  7364.61        |  -4382.52        |     ms |  -37.31% |
|                                      100th percentile latency | bulk-index | 15008.2         | 14499           |   -509.189       |     ms |   -3.39% |
|                                  50th percentile service time | bulk-index |  1553.22        |   310.063       |  -1243.15        |     ms |  -80.04% |
|                                  90th percentile service time | bulk-index |  2619.7         |   550.747       |  -2068.95        |     ms |  -78.98% |
|                                  99th percentile service time | bulk-index |  4744.12        |   965.282       |  -3778.84        |     ms |  -79.65% |
|                                99.9th percentile service time | bulk-index |  9540.72        |  5385.13        |  -4155.59        |     ms |  -43.56% |
|                               99.99th percentile service time | bulk-index | 11788.2         |  7365.58        |  -4422.64        |     ms |  -37.52% |
|                                 100th percentile service time | bulk-index | 15008.2         | 14499           |   -509.189       |     ms |   -3.39% |
|                                                    error rate | bulk-index |     0           |     0           |      0           |      % |    0.00% |

In both benchmarks, indexing throughput was within 2%, which looks like benchmark noise.

@elasticsearchmachine added the serverless-linked label on Jul 2, 2025.
@dnhatn (Member) left a comment:

One comment, but LGTM. Thanks @jordan-powers

```java
static final class TrackingLengthFieldsConsumer extends FieldsConsumer {
    final SegmentWriteState state;
    final FieldsConsumer in;
    final IntIntHashMap termsBytesPerField;
```
Member:

It seems that we can eliminate this map?

Member:

Right, this can be turned into a regular variable in the write method.

@jordan-powers (Contributor, PR author):

If I understand correctly, the suggestion is to replace this map with a single long that is incremented by the TrackingLengthFields in the write method.

However, sometimes TrackingLengthFields#terms() is called twice for the same field (usually the _id field). If we replace this map with a single value that is incremented every time the terms are iterated, then we double-count the bytes for that field.
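
A minimal standalone sketch of that double-counting concern (hypothetical field numbers and byte counts; java.util.HashMap stands in for the PR's IntIntHashMap). Accumulating into a single counter counts a twice-iterated field twice, while a per-field map overwrites the earlier entry:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: each (fieldNums[i], bytesSeen[i]) pair represents one
// pass over a field's terms; field 0 plays the role of _id, iterated twice.
public class DoubleCountSketch {
    // Single-counter approach: every terms() pass adds to the running total,
    // so a field iterated twice contributes twice.
    static long counterTotal(int[] fieldNums, int[] bytesSeen) {
        long total = 0;
        for (int bytes : bytesSeen) {
            total += bytes;
        }
        return total;
    }

    // Per-field map: a repeated pass over the same field overwrites its
    // entry instead of accumulating, so each field is counted once.
    static long mapTotal(int[] fieldNums, int[] bytesSeen) {
        Map<Integer, Integer> bytesPerField = new HashMap<>();
        for (int i = 0; i < fieldNums.length; i++) {
            bytesPerField.put(fieldNums[i], bytesSeen[i]);
        }
        return bytesPerField.values().stream().mapToLong(Integer::longValue).sum();
    }

    public static void main(String[] args) {
        int[] fields = {0, 0, 1};   // field 0 iterated twice, field 1 once
        int[] bytes = {16, 16, 8};
        System.out.println(counterTotal(fields, bytes)); // 40: field 0 double-counted
        System.out.println(mapTotal(fields, bytes));     // 24: field 0 counted once
    }
}
```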

Member:

Let's keep the map.

@martijnvg (Member) left a comment:

LGTM 👍


@jordan-powers merged commit 29ccfb8 into elastic:main on Jul 10, 2025 (33 checks passed).
mridula-s109 pushed a commit to mridula-s109/elasticsearch that referenced this pull request Jul 17, 2025
This patch adds a field postingsInMemoryBytes to the ShardFieldStats record
which tracks the memory usage of the min and max posting, which are stored
in-memory by the postings FieldReader. This postingsInMemoryBytes value is
then used by the serverless autoscaler to better estimate
memory requirements.

Most of this was already done by @dnhatn in elastic#121476, but was never merged.
@javanna (Member) commented on Jul 22, 2025:

Drive-by comment: have we considered porting some of this back to Lucene, if it makes sense? I understand we need a codec to overcome some technical difficulties in getting the info we need; is that something that could be adjusted in Lucene itself, so that its users would benefit from it?

@jordan-powers deleted the track-field-term-length branch on July 28, 2025.
jordan-powers added a commit that referenced this pull request Aug 27, 2025