docs/changelog/126385.yaml (new file, 6 additions, 0 deletions)
@@ -0,0 +1,6 @@
pr: 126385
summary: Filter out empty top docs results before merging
area: Search
type: bug
issues:
- 126118
@@ -150,11 +150,11 @@ static TopDocs mergeTopDocs(List<TopDocs> results, int topN, int from) {
return topDocs;
} else if (topDocs instanceof TopFieldGroups firstTopDocs) {
final Sort sort = new Sort(firstTopDocs.fields);
-            final TopFieldGroups[] shardTopDocs = results.toArray(new TopFieldGroups[0]);
+            final TopFieldGroups[] shardTopDocs = results.stream().filter(td -> td != Lucene.EMPTY_TOP_DOCS).toArray(TopFieldGroups[]::new);
Contributor:

It's obviously hard to justify this here in isolation, but shall we stay away from that stream magic a little? It really adds a lot of unexpected warmup overhead and is generally somewhat unpredictable.
That said, this looks like code we could optimize/design away mostly anyway, not important now :)

Member Author:

I am not sure what you'd prefer here, a loop? What do you mean with optimize/design away? I tried different approaches and this is the only one that worked, sadly. Curious to know how we could do things differently to not require this filtering.

Contributor:

I think a loop would be cheaper, but actually we should just filter this stuff out right off the bat in the QueryPhaseResultConsumer for one and also exploit our knowledge of the array type there directly and not type check here (this need not be clever, we simply know this once we have the first result).
Now that I look at this again, I'm sorry :) I think this only looks the way it does now because I was lazy when it came to the serialization of partial merge results.

But that said, I also had a kinda cool optimization in mind here. Merging top docs is super cheap actually. We could do it on literally every result and then only register search contexts with the search service if a shard's hits are needed as well as releasing those that go out of the top-hits window directly. That would save the needless complicated logic that deals with this when sending the partial result now and saves heap and such :)
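The loop-based alternative the reviewer suggests could look roughly like the sketch below. The types are stand-ins: the real code works on Lucene's TopFieldGroups/TopFieldDocs and compares against Lucene.EMPTY_TOP_DOCS, neither of which is reproduced here.

```java
import java.util.ArrayList;
import java.util.List;

// Hedged sketch of the plain-loop alternative to the stream pipeline.
// Object and a local sentinel stand in for Lucene's TopDocs types and
// for Lucene.EMPTY_TOP_DOCS.
class EmptyTopDocsFilter {
    static final Object EMPTY_TOP_DOCS = new Object(); // stand-in sentinel

    static Object[] filterEmpty(List<Object> results) {
        List<Object> nonEmpty = new ArrayList<>(results.size());
        for (Object td : results) {
            // Identity comparison, matching the `td != Lucene.EMPTY_TOP_DOCS`
            // check in the PR: empty results share one sentinel instance.
            if (td != EMPTY_TOP_DOCS) {
                nonEmpty.add(td);
            }
        }
        return nonEmpty.toArray(new Object[0]);
    }
}
```

A loop like this avoids the stream setup cost the reviewer mentions, at the price of a few more lines.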

Member Author:

I tried fixing the problem at the root, and serializing things differently, but I did not get very far. In short, in https://github.com/elastic/elasticsearch/blob/main/server/src/main/java/org/elasticsearch/action/search/QueryPhaseResultConsumer.java#L361 we have a null topDocsList, and I was not sure how we can determine what type that actually was. I will merge this as a remediation for the test failure. If there's a better way to fix this with a more extensive change, let's make it as a follow-up.

mergedTopDocs = TopFieldGroups.merge(sort, from, topN, shardTopDocs, false);
} else if (topDocs instanceof TopFieldDocs firstTopDocs) {
final Sort sort = checkSameSortTypes(results, firstTopDocs.fields);
-            final TopFieldDocs[] shardTopDocs = results.toArray(new TopFieldDocs[0]);
+            final TopFieldDocs[] shardTopDocs = results.stream().filter((td -> td != Lucene.EMPTY_TOP_DOCS)).toArray(TopFieldDocs[]::new);
Member Author:

I feel a bit uneasy that this was surfaced almost by accident by a geo-distance-related integration test. This seems to indicate that we lack proper unit testing of the merging logic.

Contributor:

++ indeed, we should have caught this deterministically for sure

Member Author:

shall we track this in a follow-up issue, to add the missing unit tests for the merging of incremental results?
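A deterministic unit test for this case might look like the following simplified sketch. It models shard results as score lists and the empty result as a shared sentinel, rather than using Lucene's actual TopDocs classes or Elasticsearch's real merge code; the names are hypothetical.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical, simplified model of the bug: merging shard results where
// some shards report a shared "empty" sentinel. A real test would exercise
// the actual mergeTopDocs path with Lucene.EMPTY_TOP_DOCS entries.
class MergeSketch {
    static final List<Float> EMPTY = new ArrayList<>(); // shared sentinel instance

    // Merge top-N scores across shards, skipping the sentinel first,
    // mirroring the filter added in this PR.
    static List<Float> merge(List<List<Float>> shardResults, int topN) {
        List<Float> all = new ArrayList<>();
        for (List<Float> shard : shardResults) {
            if (shard == EMPTY) continue; // identity check, like td != Lucene.EMPTY_TOP_DOCS
            all.addAll(shard);
        }
        all.sort((a, b) -> Float.compare(b, a)); // descending by score
        return all.subList(0, Math.min(topN, all.size()));
    }
}
```

The key property to assert is that a result list containing the empty sentinel merges without errors and yields the same top hits as if the empty entries were absent.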

mergedTopDocs = TopDocs.merge(sort, from, topN, shardTopDocs);
} else {
final TopDocs[] shardTopDocs = results.toArray(new TopDocs[numShards]);