Filter out empty top docs results before merging #126385
Changes from all commits: e279e60, 2e4d9dc, aa9556e, 84e0877, e5714ff, de683e2, 055eada, b044858
New changelog entry:

```yaml
pr: 126385
summary: Filter out empty top docs results before merging
area: Search
type: bug
issues:
  - 126118
```
```diff
@@ -150,11 +150,11 @@ static TopDocs mergeTopDocs(List<TopDocs> results, int topN, int from) {
             return topDocs;
         } else if (topDocs instanceof TopFieldGroups firstTopDocs) {
             final Sort sort = new Sort(firstTopDocs.fields);
-            final TopFieldGroups[] shardTopDocs = results.toArray(new TopFieldGroups[0]);
+            final TopFieldGroups[] shardTopDocs = results.stream().filter(td -> td != Lucene.EMPTY_TOP_DOCS).toArray(TopFieldGroups[]::new);
             mergedTopDocs = TopFieldGroups.merge(sort, from, topN, shardTopDocs, false);
         } else if (topDocs instanceof TopFieldDocs firstTopDocs) {
             final Sort sort = checkSameSortTypes(results, firstTopDocs.fields);
-            final TopFieldDocs[] shardTopDocs = results.toArray(new TopFieldDocs[0]);
+            final TopFieldDocs[] shardTopDocs = results.stream().filter(td -> td != Lucene.EMPTY_TOP_DOCS).toArray(TopFieldDocs[]::new);
```
**Member (author):** I feel a bit uneasy that this was almost inadvertently raised by a geo distance related integration test. This seems to indicate that we lack some proper unit testing of the merging logic.

**Contributor:** ++ indeed, we should have caught this deterministically for sure.

**Member (author):** Shall we track this in a follow-up issue, to add the missing unit tests for the merging of incremental results?
```diff
             mergedTopDocs = TopDocs.merge(sort, from, topN, shardTopDocs);
         } else {
             final TopDocs[] shardTopDocs = results.toArray(new TopDocs[numShards]);
```
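The new `filter` call relies on reference identity against a single shared sentinel (`Lucene.EMPTY_TOP_DOCS`), not on `equals()`. A minimal stand-alone sketch of that pattern, using a hypothetical `ShardTopDocs` record in place of the real Lucene `TopDocs` class:

```java
import java.util.List;

public class EmptyTopDocsFilter {

    // Hypothetical stand-in for Lucene's TopDocs: just per-shard hit scores.
    record ShardTopDocs(float[] scores) {}

    // One shared "no hits" sentinel, mirroring Lucene.EMPTY_TOP_DOCS in
    // Elasticsearch. Emptiness is detected by reference identity, so only
    // this exact instance is dropped by the filter.
    static final ShardTopDocs EMPTY_TOP_DOCS = new ShardTopDocs(new float[0]);

    // The same filter-then-collect shape as the patched mergeTopDocs.
    static ShardTopDocs[] filterEmpty(List<ShardTopDocs> results) {
        return results.stream()
            .filter(td -> td != EMPTY_TOP_DOCS)   // identity check, not equals()
            .toArray(ShardTopDocs[]::new);
    }
}
```

Note that a freshly allocated `new ShardTopDocs(new float[0])` would still pass the filter; only the shared sentinel instance is removed, which is why the identity comparison is safe here.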
It's obviously hard to justify this here in isolation, but shall we stay away from that stream magic a little? It really adds a lot of unexpected warmup overhead and is generally somewhat unpredictable.

That said, this looks like code we could mostly optimize/design away anyway, not important now :)
I am not sure what you'd prefer here, a loop? What do you mean by optimize/design away? I tried different approaches and this is the only one that worked, sadly. Curious to know how we could do things differently so as not to require this filtering.
I think a loop would be cheaper, but actually we should just filter this stuff out right off the bat in the QueryPhaseResultConsumer for one, and also exploit our knowledge of the array type there directly rather than type checking here (this need not be clever, we simply know the type once we have the first result).

Now that I look at this again, I'm sorry :) I think this only looks the way it does now because I was lazy when it came to the serialization of partial merge results.

But that said, I also had a kinda cool optimization in mind here. Merging top docs is actually super cheap. We could do it on literally every result and then only register search contexts with the search service if a shard's hits are needed, as well as releasing those that fall out of the top-hits window directly. That would save the needlessly complicated logic that deals with this when sending the partial result now, and saves heap and such :)
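The loop alternative mentioned above could look like the sketch below; the element type and the sentinel are stand-ins, not the real Elasticsearch types.

```java
import java.util.ArrayList;
import java.util.List;

public class LoopFilterSketch {

    // Stand-in sentinel for Lucene.EMPTY_TOP_DOCS.
    static final Object EMPTY_TOP_DOCS = new Object();

    // Single pass, no stream pipeline to set up: collect non-empty results
    // into a list, then copy into an array.
    static Object[] filterEmpty(List<Object> results) {
        List<Object> kept = new ArrayList<>(results.size());
        for (Object td : results) {
            if (td != EMPTY_TOP_DOCS) { // same identity check as the stream version
                kept.add(td);
            }
        }
        return kept.toArray(new Object[0]);
    }
}
```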
I tried fixing the problem at the root and serializing things differently, but I did not get very far. In short, at https://github.com/elastic/elasticsearch/blob/main/server/src/main/java/org/elasticsearch/action/search/QueryPhaseResultConsumer.java#L361 we have a null topDocsList, and I was not sure how we could determine what type it actually was. I will merge this as a remediation for the test failure. If there's a better way to fix this with a more extensive change, let's make it a follow-up.
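The earlier suggestion to filter empties out right off the bat in the QueryPhaseResultConsumer might look roughly like this; the consumer class and its methods below are hypothetical simplifications for illustration, not the actual Elasticsearch API.

```java
import java.util.ArrayList;
import java.util.List;

public class ConsumerSideFilterSketch {

    // Stand-in sentinel for Lucene.EMPTY_TOP_DOCS.
    static final Object EMPTY_TOP_DOCS = new Object();

    // Hypothetical result consumer that drops empty shard results as they
    // arrive, so the later merge step never needs an identity filter.
    static class ResultConsumer {
        private final List<Object> topDocsList = new ArrayList<>();

        void consume(Object shardTopDocs) {
            if (shardTopDocs != EMPTY_TOP_DOCS) { // filter at the source
                topDocsList.add(shardTopDocs);
            }
        }

        List<Object> pending() {
            return List.copyOf(topDocsList);
        }
    }
}
```

With this shape, the merge step only ever sees real results, and the array type can be fixed from the first consumed result rather than re-checked during merging.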