
Conversation

@shubhamvishu
Contributor

@shubhamvishu shubhamvishu commented Jul 17, 2025

Description

This change avoids creating an HNSW graph if the segment is small (we have taken the threshold for the number of vectors as 10000 for now, based on the conversation here).

Some points I'm not sure how we would want to handle:

  • All the tests pass currently, since the option to enable the optimization is false by default, but setting it to true reveals some failing unit tests that inherently assume the HNSW graph is created and KNN search is triggered (do we have some idea of how to bypass those in a clean way?)
  • I understand we might want to always keep this optimization on (also a less invasive change), but for now in this PR I made it configurable and enabled it on the KNN format, just to be cautious (I wasn't sure whether it would affect backward compatibility in some unknown way), but I'm happy to make it the default behaviour

 
TODOs:

  • Add specific unit tests
  • Benchmarks (luceneutil)

 
Closes #13447

Member

@benwtrent benwtrent left a comment

Some minor ideas.

* When enabled, segments with fewer than the threshold number of vectors will store only flat
* vectors, significantly improving indexing performance for workloads with frequent flushes.
*/
private final boolean bypassTinySegments;
Member

If we allow this to be a parameter, it should be a threshold that refers to the typical k used when querying.

Contributor Author

Makes sense

Contributor

I do wonder whether we would want to expose this as a parameter, though. Maybe it should just be a fixed value? I would have thought about setting it based on a threshold where exhaustive search is no, or only slightly, more expensive than HNSW search. I would expect this to be related to the M of the graph, maybe?

Contributor

I'd like to have it at least as a parameter to a pkg-protected constructor, so that we can pass random values in TestLucene99HnswVectorsFormat to make sure that we properly exercise both the case where there is a graph and the case where there is none.

Contributor

It may be good to be able to pass random values in RandomCodec as well to help with the test coverage of the approximate case (we have very few tests that index more than 10k vectors).

Comment on lines 338 to 341
boolean doHnsw =
    knnCollector.k() < scorer.maxOrd()
        && (bypassTinySegments == false
            || fieldEntry.size() > Lucene99HnswVectorsFormat.HNSW_GRAPH_THRESHOLD);
Member

The reader should just look to see if there is a graph.

@benwtrent
Member

I think 10K is likely way too large: 10k vector ops vs. 10 * log(10k) ops is a huge difference.

If a user typically searches for the 10 nearest neighbors, the graph should be built at around 90 vectors.
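To get a rough feel for where that crossover sits, here is a standalone sketch (not Lucene code; the method name and the cost model are my own, and real HNSW searches have constant factors such as beam width and M that this ignores) that finds the smallest segment size n at which scanning all n vectors costs more than a back-of-the-envelope k * log2(n) graph search:

```java
public class Main {
    // Smallest n (searching upward from k) where n > k * log2(n),
    // i.e. where an exhaustive scan starts to cost more than a
    // rough k * log2(n) estimate of an HNSW search. Constant
    // factors of real graph searches are deliberately ignored.
    static int crossover(int k) {
        for (int n = k; ; n++) {
            if (n > k * (Math.log(n) / Math.log(2))) {
                return n;
            }
        }
    }

    public static void main(String[] args) {
        System.out.println(crossover(10)); // prints 59
    }
}
```

For k = 10 this lands at a few dozen vectors, the same order of magnitude as the ~90 figure above.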

@msokolov
Contributor

I can think of a circumstance where we might create small segments that will probably never get searched at all, but will very quickly be merged. In that case we might want to allow a larger threshold?

@msokolov
Contributor

As far as the tests are concerned, I'm confused: wouldn't they fail with a high threshold, since we wouldn't build a graph until there are many documents? Maybe I misunderstood the meaning of the threshold, though.

@shubhamvishu
Contributor Author

shubhamvishu commented Jul 17, 2025

@msokolov Actually they do. I had missed setting byPassTinySegments=true locally in one of the constructors, so the tests didn't exercise that path. Setting it to true does reveal the failing unit tests.

I'll see if there is some clean way to override those checks in the failing unit tests.

@jpountz
Contributor

jpountz commented Jul 17, 2025

I wonder how this interacts with how AbstractKnnVectorQuery does pre-filtering by first passing the filter to KnnVectorsReader#search, and then falling back to an exact search. If the segment doesn't have a HNSW graph, this may effectively start an exact search (via KnnVectorsReader#search) and then abort it to do an exact search again? Or am I missing something?

@benwtrent
Member

I can think of a circumstance where we might create small segments that will probably never get searched at all, but will very quickly be merged. In that case we might want to allow a larger threshold?

I think that is fine. I am thinking of semi-nrt with lots of updates. In cases like that 10k is way too big a default. I think the value should be used as an input to expectedVisitedNodes that takes into account the potential graph size.

Additionally, I would assume users would want to scale quantized formats vs. non-quantized differently (as their vector ops can be much cheaper than floating point ops).

and then abort it to do an exact search again? Or am I missing something?

I would hope the format just does the right thing, and searches everything, knowing that there isn't a graph.

@shubhamvishu
Contributor Author

shubhamvishu commented Jul 18, 2025

@jpountz Ahh, I see what you are pointing towards, and here is what I think we could try:

  • We currently also fall back to exact search after the visitedLimit is breached during HNSW search, so now that same visited limit would apply while we are iterating over the docs, i.e. net-net approximateKnn (visit V nodes) + exactSearch ~== exactSearch (visit V nodes linearly) + exactSearch, which might not impact the search time much. So one option is to accept this, since we will visit a small number of docs, but I agree we can further optimize this path (more on this in the points below)

  • We could completely remove the fallback to exactSearch in AbstractKnnVectorQuery and relax the check from

    • if (knnCollector.earlyTerminated()) to
    • if (knnCollector instanceof TimeLimitingKnnCollectorManager.TimeLimitingKnnCollector && ((TimeLimitingKnnCollectorManager.TimeLimitingKnnCollector) knnCollector).shouldExit()) after making TimeLimitingKnnCollector public and exposing shouldExit()

    This would ensure we continue the exact search in the VectorsReader and don't fall back to exactSearch in AbstractKnnVectorQuery. (We can do better maybe, more on it below.)

  • [PROPOSED] Though I think AbstractKnnVectorQuery#exactSearch is better at exact search, since it uses a conjunctive DocIdSetIterator rather than iterating over all the docs. If so, then we could maybe simply add an else-if condition in the VectorsReader to straight away overwhelm the collector (forcing its earlyTerminated() to return true) and return, so it automatically falls back to the best exactSearch impl (I hope that gives us the best of both worlds?):

    else if (getGraph(fieldEntry).equals(HnswGraph.EMPTY)) {
      // Fall back to exactSearch directly
      knnCollector.incVisitedCount((int) knnCollector.visitLimit() + 1);
    }
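To see the overwhelm-the-collector idea in isolation, here is a minimal stand-in (a hypothetical SketchCollector, not Lucene's actual KnnCollector class) showing how bumping the visited count past the limit makes earlyTerminated() report true, which is what would route AbstractKnnVectorQuery to its exactSearch path:

```java
public class Main {
    // Hypothetical stand-in for the relevant slice of a KNN collector:
    // it only tracks a visit budget and whether it was exceeded.
    static class SketchCollector {
        private final long visitLimit;
        private long visited;

        SketchCollector(long visitLimit) {
            this.visitLimit = visitLimit;
        }

        long visitLimit() { return visitLimit; }

        void incVisitedCount(int count) { visited += count; }

        // True once more nodes were visited than the budget allows.
        boolean earlyTerminated() { return visited > visitLimit; }
    }

    // Mimics the proposed else-if branch: when the segment has no
    // graph, overwhelm the collector so the caller falls back.
    static boolean overwhelmed(long limit) {
        SketchCollector collector = new SketchCollector(limit);
        collector.incVisitedCount((int) collector.visitLimit() + 1);
        return collector.earlyTerminated();
    }

    public static void main(String[] args) {
        System.out.println(overwhelmed(1000)); // prints true
    }
}
```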

Let me know your thoughts or if I'm missing something here. Thanks!

@jpountz
Contributor

jpountz commented Jul 18, 2025

My recommendation would be to move the logic of switching to an exact search when the filter is selective into KnnVectorsReader#search (in a separate PR), so that the file format can make the right decision depending on whether it only has a flat index or something more sophisticated such as an HNSW index. (It doesn't feel completely straightforward, since KnnVectorsReader#search may not know how to pull an efficient iterator that matches the same docs as the Bits acceptDocs.)

@benwtrent
Member

This makes me wonder if the knn search method should accept a ScorerSupplier and the live docs Bits instead of a fully realized bit set that represents both the filter and live docs...

@jpountz
Contributor

jpountz commented Jul 19, 2025

Or some higher-level abstraction that can either be consumed in a random-access fashion (Bits) or sequential (DocIdSetIterator)?

class AcceptDocs {

  /** Random access to the accepted documents. */
  Bits getBits();

  /** Get an iterator of accepted docs. */
  DocIdSetIterator getIterator();

  /** Return an approximation of the number of accepted documents. */
  long cost();
}
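As a rough, hypothetical sketch of how such an abstraction could be backed by a bit set (using a plain boolean[] and a simplified sequential-access method in place of Lucene's Bits and DocIdSetIterator, so the example stays self-contained):

```java
public class Main {
    static final int NO_MORE_DOCS = Integer.MAX_VALUE;

    // Sketch of the proposed abstraction. A real implementation would
    // wrap a FixedBitSet and hand out a genuine DocIdSetIterator.
    static class AcceptDocs {
        private final boolean[] bits;

        AcceptDocs(boolean[] bits) { this.bits = bits; }

        // Random access to the accepted documents.
        boolean get(int doc) { return bits[doc]; }

        // Sequential access: first accepted doc at or after target.
        int nextAccepted(int target) {
            for (int doc = target; doc < bits.length; doc++) {
                if (bits[doc]) return doc;
            }
            return NO_MORE_DOCS;
        }

        // Approximation of the number of accepted documents.
        long cost() {
            long count = 0;
            for (boolean b : bits) if (b) count++;
            return count;
        }
    }

    public static void main(String[] args) {
        AcceptDocs accept = new AcceptDocs(new boolean[] {false, true, false, true});
        System.out.println(accept.cost());          // 2
        System.out.println(accept.nextAccepted(2)); // 3
    }
}
```

A flat format could compare cost() against the segment size to decide between iterating the filter sequentially and scanning all vectors with random-access checks.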

Contributor

@vigyasharma vigyasharma left a comment

How do we handle backward compatibility in this change? I noticed we don't write any metadata (e.g. in FieldEntry) about bypassTinySegments or whether a graph was built. The flag gets configured when the format is initialized from the codec.

What happens if I create an index with bypassTinySegments=true, but later read it in an application with the flag set to false? I think we need to persist information about whether a graph was built for the segment.

this.bypassTinySegments = bypassTinySegments;
this.flatFieldVectorsWriter = Objects.requireNonNull(flatFieldVectorsWriter);
if (bypassTinySegments) {
  this.bufferedVectors = new ArrayList<>();
Contributor

Since we only store up to HNSW_GRAPH_THRESHOLD vectors, beyond which we resume the regular flow of adding them to the graph, could we use an array here instead of an ArrayList?

Contributor Author

Makes sense

  replayBufferedVectors();
  bufferedVectors.clear();
}
if (hnswGraphBuilder != null) {
Contributor

Does hnswGraphBuilder != null do the same thing as graphBuilderInisialized? If so, do we need graphBuilderInisialized?

Contributor Author

Oh yes, we can drop this check

@vigyasharma
Contributor

I think we need to persist information about whether a graph was built for the segment.

Maybe we could use one of the existing fields that describe the graph. Like set numLevels=0 when there is no graph (otherwise it would at least be 1)?

@shubhamvishu
Contributor Author

Or some higher-level abstraction that can either be consumed in a random-access fashion (Bits) or sequential (DocIdSetIterator)?

Thanks @jpountz. I opened #15011 for adding such abstraction

@shubhamvishu
Contributor Author

Thanks for the review @vigyasharma!

What happens if I create an index with bypassTinySegments=true, but later read it in an application with the flag set to false? I think we need to persist information about whether a graph was built for the segment.

Maybe we could use one of the existing fields that describe the graph. Like set numLevels=0 when there is no graph (otherwise it would at least be 1)?

We write the vectorIndexLength here as meta info, which would be 0 when there is no HNSW graph, so we could use this to determine whether to do graph search (updating the doHnsw check with it). So I think there shouldn't be any issue with backward compatibility, and no need to store this bypass info explicitly?

Let me know if I'm missing something here.

* vectors will use flat storage only, improving indexing performance when having frequent
* flushes.
*/
public static final int HNSW_GRAPH_THRESHOLD = 10_000;
Contributor

I think that the comment should try to expand a bit more on this value to help future readers think through whether it's still right or whether it should be updated.

One thing we discussed on the linked issue is that the number of visited nodes is in the order of log(size) * k. So having a graph only helps if log(size) * k << size <=> size / log(size) >> k. If we arbitrarily choose k = 100, 10,000 is the first power of 10 so that size / log(size) is one order of magnitude greater than k (10/log(10) ~= 4.3, 100/log(100) ~= 22, 1000/log(1000) ~= 144, 10000 / log(10000) ~= 1085).
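Those figures use the natural log; a short sketch (not from the PR) reproduces them up to rounding (4, 22, 145, 1086):

```java
public class Main {
    // size / ln(size): the factor the comment above compares against k
    // to decide whether a graph pays off at a given segment size.
    static long sizeOverLogSize(int size) {
        return Math.round(size / Math.log(size));
    }

    public static void main(String[] args) {
        for (int size = 10; size <= 10_000; size *= 10) {
            System.out.println(size + " / ln(" + size + ") ~= " + sizeOverLogSize(size));
        }
    }
}
```

With k = 100, 10,000 is indeed the first power of ten where size / ln(size) exceeds 10 * k.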

Contributor Author

Sure, I'll elaborate on this.

@msokolov
Contributor

ooh, it's nice to avoid force-push -- recommended way to keep up with main is git merge main into your branch

@shubhamvishu
Contributor Author

shubhamvishu commented Aug 14, 2025

ooh, it's nice to avoid force-push -- recommended way to keep up with main is git merge main into your branch

Agreed. Sometimes I end up doing git commit --amend --no-edit to merge staged changes with the top commit. It changes the commit id and doesn't let me push cleanly (I should probably train my muscle memory to stop doing that).

@shubhamvishu
Contributor Author

shubhamvishu commented Sep 1, 2025

OK, I ran the luceneutil benchmarks and I see a huge improvement in indexing throughput with this PR compared to the baseline (without this change): an almost 4x improvement in the indexing rate/time. Looking forward to your thoughts. Thanks!

Note: The improvement in latency / CPU time seems to be driven by the smaller number of segments, which, as we know, also very slightly impacts the recall.

CC - @benwtrent @msokolov @jpountz @vigyasharma

With HNSW_GRAPH_THRESHOLD = 10

Baseline

Results:
recall  latency(ms)  netCPU  avgCpuCount    nDoc  topK  fanout  maxConn  beamWidth  quantized  index(s)  index_docs/s  num_segments  index_size(MB)  vec_disk(MB)  vec_RAM(MB)  indexType
 0.515       11.084  11.071        0.999  500000   100      50       64        250     4 bits    120.51       4149.14             7         1690.12      1649.857      185.013       HNSW
 0.875        9.965   9.946        0.998  500000   100      50       64        250     7 bits    119.18       4195.23             3         1871.00      1832.962      368.118       HNSW
 0.978       19.637  19.621        0.999  500000   100      50       64        250         no    127.63       3917.51             8         1501.98      1464.844     1464.844       HNSW

Candidate

Results:
recall  latency(ms)  netCPU  avgCpuCount    nDoc  topK  fanout  maxConn  beamWidth  quantized  index(s)  index_docs/s  num_segments  index_size(MB)  vec_disk(MB)  vec_RAM(MB)  indexType
 0.517        5.933   5.914        0.997  500000   100      50       64        250     4 bits     30.34      16482.07             2         1694.34      1649.857      185.013       HNSW
 0.871        9.655   9.635        0.998  500000   100      50       64        250     7 bits     27.86      17945.59             3         1869.52      1832.962      368.118       HNSW
 0.961       11.280  11.269        0.999  500000   100      50       64        250         no     31.16      16046.73             3         1503.46      1464.844     1464.844       HNSW

With HNSW_GRAPH_THRESHOLD = 100

Baseline

Results:
recall  latency(ms)  netCPU  avgCpuCount    nDoc  topK  fanout  maxConn  beamWidth  quantized  index(s)  index_docs/s  num_segments  index_size(MB)  vec_disk(MB)  vec_RAM(MB)  indexType
 0.515       11.101  11.074        0.998  500000   100      50       64        250     4 bits    118.33       4225.40             7         1690.02      1649.857      185.013       HNSW
 0.874       10.199  10.176        0.998  500000   100      50       64        250     7 bits    118.18       4230.83             3         1871.16      1832.962      368.118       HNSW
 0.977       19.990  19.979        0.999  500000   100      50       64        250         no    126.92       3939.61             8         1501.95      1464.844     1464.844       HNSW

Candidate

Results:
recall  latency(ms)  netCPU  avgCpuCount    nDoc  topK  fanout  maxConn  beamWidth  quantized  index(s)  index_docs/s  num_segments  index_size(MB)  vec_disk(MB)  vec_RAM(MB)  indexType
 0.517        5.900   5.882        0.997  500000   100      50       64        250     4 bits     30.13      16596.96             2         1694.14      1649.857      185.013       HNSW
 0.872        9.680   9.660        0.998  500000   100      50       64        250     7 bits     28.58      17495.98             3         1869.05      1832.962      368.118       HNSW
 0.964       11.640  11.619        0.998  500000   100      50       64        250         no     27.25      18349.97             3         1502.64      1464.844     1464.844       HNSW

@shubhamvishu shubhamvishu marked this pull request as ready for review September 30, 2025 19:15
@shubhamvishu
Contributor Author

@benwtrent I fixed the failing tests in the latest commit. They were assuming that we always do HNSW, and since many of them don't create high-cardinality graphs, that was causing failures. One option would have been to just increase the number of docs for some of the tests that create many vectors (but that might have increased their running time; I did that with just one test, I think). The others I mostly updated to check whether a graph would be present or not. There are some refactoring opportunities in the tests I could think of, like testing more random behaviours such as this optimization OFF vs ON (default is on) and some further simplifications, but for now I didn't focus much on that and kept it mostly to making all tests happy in the most straightforward way. Maybe we can make these tests more random in a follow-up change? Looking for your feedback. I'll also post the latest benchmark numbers shortly with the latest changes.

Member

@benwtrent benwtrent left a comment

@shubhamvishu could you confirm the benchmark results?

When I tried benchmarking, I didn't get anywhere near the numbers you got. Would be good to know the exact settings, etc. tested so we can replicate and see where the benefits are for this.

Comment on lines 3282 to 3285
public static boolean hasGraphPresent(int k, int size) {
  int expectedVisitedNodes = expectedVisitedNodes(k, size);
  return size > expectedVisitedNodes && expectedVisitedNodes > 0;
}
Member

This is very trappy. This sort of assumes that the test case knows the logic that makes a graph exist or not.

Something better might be adjust TestUtil.getDefaultCodec() to set the threshold for building the graph to 0 for now.

Contributor Author

In that case we would be testing with this optimization effectively disabled, whereas in general it would be on by default. Maybe we can add a separate test case to cover it, and leave the others with it disabled on the codec as you mentioned? I'll make those changes.

Member

Maybe we can add a separate test case to test this and leave others to be disabled on the codec as you mentioned? I'll make those changes.

I think maybe the tests that assume there MUST be a graph should have codec settings that enforce that. For all other tests, it can be the default.

Comment on lines 143 to 144
* GITHUB#15169: Add codecs for 4 and 8 bit Optimized Scalar Quantization vectors (Trevor McCulloch)
* GITHUB#15169, GITHUB#15223: Add codecs for 4 and 8 bit Optimized Scalar Quantization vectors. The new format
`Lucene104HnswScalarQuantizedVectorsFormat` replaces the now legacy `Lucene99HnswScalarQuantizedVectorsFormat`
Member

There was a bit of an issue with the main branch recently and it needed to be restored, so the commits might be out of alignment. I think your branch needs to have main merged back in or something to fix the changes.

@shubhamvishu
Contributor Author

could you confirm the benchmark results?

@benwtrent Sure, I'll share the results with the latest changes soon.

 

When I tried benchmarking, I didn't get anywhere near the numbers you got. Would be good to know the exact settings, etc. tested so we can replicate and see where the benefits are for this.

Below are the settings I tried for the above results, if I'm not mistaken. Also, as we discussed earlier, I tried with a slightly higher threshold (mentioned below), so I would also like to confirm its impact on the numbers now that we have changed it. Let me know if there is anything else I might be missing. Thanks!

Dataset : Cohere

dim = 768
doc_vectors = f"{constants.BASE_DIR}/data/cohere-wikipedia-docs-5M-{dim}d.vec"
query_vectors = f"{constants.BASE_DIR}/data/cohere-wikipedia-queries-{dim}d.vec"

Graph Threshold

private static int graphCreationThreshold(int k, int numNodes) {
  return (int)
      Math.pow(10, String.valueOf(HnswGraphSearcher.expectedVisitedNodes(k, numNodes)).length());
}
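For what it's worth, the digit-count trick in graphCreationThreshold rounds the expected visited-node count up to the next power of ten (e.g. 1085 expected visits gives a threshold of 10,000). Isolated as a hypothetical helper:

```java
public class Main {
    // Mirrors the digit-count trick above: 10^(number of decimal
    // digits of v) is the smallest power of ten strictly greater
    // than v, for positive v.
    static int nextPowerOfTen(int v) {
        return (int) Math.pow(10, String.valueOf(v).length());
    }

    public static void main(String[] args) {
        System.out.println(nextPowerOfTen(9));    // 10
        System.out.println(nextPowerOfTen(85));   // 100
        System.out.println(nextPowerOfTen(1085)); // 10000
    }
}
```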
PARAMS = {
  "ndoc": (500_000,),
  "maxConn": (64,),
  "beamWidthIndex": (250,),
  "fanout": (50,),
  "numMergeWorker": (12,),
  "numMergeThread": (4,),
  "numSearchThread": (0,),
  "encoding": ("float32",),
  "quantizeBits": (
    4,
    7,
    32,
  ),
  "topK": (100,),
  "quantizeCompress": (True,),
  "queryStartIndex": (0,),
  "forceMerge": (False,),
}

@github-actions
Contributor

This PR does not have an entry in lucene/CHANGES.txt. Consider adding one. If the PR doesn't need a changelog entry, then add the skip-changelog label to it and you will stop receiving this reminder on future updates to the PR.


@shubhamvishu
Contributor Author

shubhamvishu commented Oct 20, 2025

@benwtrent I reran the benchmarks and somehow I don't see the old performance difference any more; the results are rather similar to the baseline. Maybe the recent developments included some indexing improvements (e.g. I see 4-bit recall is much better now due to OSQ?). Net-net, against current main we are not creating that many tiny segments in our luceneutil benchmarks by default. Maybe we would still see higher benefits on older Lucene versions?

However, to simulate a more real-time update scenario, I changed luceneutil's KnnIndexer to use setMaxBufferedDocs(100) to trigger more flushes and re-ran the benchmarks. I see the results below, where indexing gets ~30% faster with 1000 as the (configurable) threshold in that particular case. So it seems the user could configure a value (one order of magnitude higher than the topK at query time?) that helps avoid the overhead of creating tiny graphs. @benwtrent @ChrisHegarty I patched the test-related changes in the latest commit; let me know if this looks good. Thanks!

Baseline :

Results:
recall  latency(ms)  netCPU  avgCpuCount    nDoc  topK  fanout  maxConn  beamWidth  quantized  visited  index(s)  index_docs/s  num_segments  index_size(MB)  vec_disk(MB)  vec_RAM(MB)  indexType
 0.901        5.941   5.923        0.997  500000   100      50       64        250     4 bits    31624     84.64       5907.09             4         1690.88      1655.579      190.735       HNSW
 0.965       13.023  13.012        0.999  500000   100      50       64        250     7 bits    40750    127.19       3931.00             7         1871.98      1838.684      373.840       HNSW
 0.971        9.997   9.986        0.999  500000   100      50       64        250         no    40086    103.35       4837.93             7         1498.13      1464.844     1464.844       HNSW

Candidate (HNSW_GRAPH_THRESHOLD = 100) [DEFAULT]:

Results:
recall  latency(ms)  netCPU  avgCpuCount    nDoc  topK  fanout  maxConn  beamWidth  quantized  visited  index(s)  index_docs/s  num_segments  index_size(MB)  vec_disk(MB)  vec_RAM(MB)  indexType
 0.898        4.598   4.587        0.998  500000   100      50       64        250     4 bits    23503     88.53       5647.55             2         1692.61      1655.579      190.735       HNSW
 0.964       13.309  13.291        0.999  500000   100      50       64        250     7 bits    41490    122.04       4097.02             8         1871.81      1838.684      373.840       HNSW
 0.964        7.809   7.788        0.997  500000   100      50       64        250         no    27908    104.95       4764.08             3         1501.41      1464.844     1464.844       HNSW

Candidate (HNSW_GRAPH_THRESHOLD = 1000) :

Results:
recall  latency(ms)  netCPU  avgCpuCount    nDoc  topK  fanout  maxConn  beamWidth  quantized  visited  index(s)  index_docs/s  num_segments  index_size(MB)  vec_disk(MB)  vec_RAM(MB)  indexType
 0.901        6.737   6.720        0.997  500000   100      50       64        250     4 bits    37284     58.96       8480.61             6         1689.44      1655.579      190.735       HNSW
 0.959       10.027  10.015        0.999  500000   100      50       64        250     7 bits    31131     86.87       5755.46             4         1874.15      1838.684      373.840       HNSW
 0.963        7.463   7.452        0.999  500000   100      50       64        250         no    27939     72.11       6933.85             3         1501.16      1464.844     1464.844       HNSW


Member

@benwtrent benwtrent left a comment

This is looking really good.

Thank you for doing the more active flushing test, that is exactly what I am looking for!

I think keeping it simple and allowing users to set k to their "expected value", or indicating that they may want it to be higher (to be more lazy), is the way to go.

Could you add a CHANGES.txt entry for 10.4 Optimizations?

@github-actions github-actions bot added this to the 10.4.0 milestone Oct 21, 2025
@shubhamvishu
Contributor Author

Thanks for the review @benwtrent. I addressed the comments and added the CHANGES entry now.

shubhamvishu and others added 2 commits October 21, 2025 10:42
…104HnswScalarQuantizedVectorsFormat.java

Co-authored-by: Benjamin Trent <ben.w.trent@gmail.com>
Contributor

@ChrisHegarty ChrisHegarty left a comment

LGTM

@benwtrent benwtrent merged commit 4a7c9c2 into apache:main Oct 22, 2025
12 checks passed
benwtrent pushed a commit that referenced this pull request Oct 22, 2025

Development

Successfully merging this pull request may close these issues.

Explore bypassing HNSW graph building for tiny segments

8 participants