ESQL: Expand HeapAttack for LOOKUP #120754
Merged
Conversation
This expands the heap attack tests for LOOKUP. There are now three flavors:
1. LOOKUP a single geo_point - about 30 bytes or so.
2. LOOKUP a 1 MB string.
3. LOOKUP no fields - just JOIN to alter cardinality.

Fetching a geo_point is fine with about 500 repeated docs before it circuit breaks, which works out to about 256 MB of buffered results. That's sensible on our 512 MB heap and likely to work well for most folks. We'll flip to a streaming method eventually, and then this won't be a problem any more. But for now, we buffer.

The no-lookup-fields flavor is fine with around 7,500 matches per incoming row. That's quite a lot, really.

The 1 MB string is trouble! We circuit break properly, which is great and safe, but if you join 1 MB worth of columns in LOOKUP you are going to need bigger heaps than our test uses. Again, once we move from buffering these results to streaming them this will work better, but for now we buffer.
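The buffering behavior described above can be sketched with a toy back-of-envelope model. All the row counts, payload sizes, and the breaker budget below are illustrative assumptions for the sketch, not values taken from the tests or from the Elasticsearch source:

```python
# Toy model: if every joined row is buffered before any results stream out,
# memory use grows as rows x matches x bytes-per-match until the circuit
# breaker trips. Numbers here are illustrative, not from Elasticsearch.

def buffered_bytes(incoming_rows: int, matches_per_row: int, bytes_per_match: int) -> int:
    """Bytes held in memory if all joined rows are buffered at once."""
    return incoming_rows * matches_per_row * bytes_per_match

# Hypothetical breaker budget roughly half of a 512 MB test heap.
BREAKER_LIMIT = 256 * 1024 * 1024

# Small geo_point payloads (~30 bytes) stay under the budget even with
# hundreds of matches per incoming row.
geo = buffered_bytes(incoming_rows=10_000, matches_per_row=500, bytes_per_match=30)
print(geo <= BREAKER_LIMIT)  # True

# A 1 MB string per match blows through the budget almost immediately.
big = buffered_bytes(incoming_rows=10_000, matches_per_row=1, bytes_per_match=1024 * 1024)
print(big <= BREAKER_LIMIT)  # False
```

This is why streaming helps: once results flow out as they are produced, peak memory no longer scales with the total number of joined rows.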
Collaborator
Pinging @elastic/es-analytical-engine (Team:Analytics)
ivancea approved these changes Jan 31, 2025
Contributor
ivancea left a comment
LGTM
Member, Author
Thanks @ivancea!
nik9000 added a commit to nik9000/elasticsearch that referenced this pull request Jan 31, 2025
* ESQL: Expand HeapAttack for LOOKUP
This was referenced Jan 31, 2025
nik9000 added a commit to nik9000/elasticsearch that referenced this pull request Jan 31, 2025
* ESQL: Expand HeapAttack for LOOKUP
nik9000 added a commit to nik9000/elasticsearch that referenced this pull request Jan 31, 2025
* ESQL: Expand HeapAttack for LOOKUP
Collaborator
elasticsearchmachine pushed a commit that referenced this pull request Jan 31, 2025
* ESQL: Expand HeapAttack for LOOKUP
elasticsearchmachine pushed a commit that referenced this pull request Jan 31, 2025
* ESQL: Expand HeapAttack for LOOKUP
elasticsearchmachine pushed a commit that referenced this pull request Feb 3, 2025
* ESQL: Expand HeapAttack for LOOKUP
Labels
:Analytics/ES|QL (AKA ESQL)
auto-backport (Automatically create backport pull requests when merged)
Team:Analytics (Meta label for analytical engine team (ESQL/Aggs/Geo))
>test (Issues or PRs that are addressing/adding tests)
v8.18.0, v8.19.0, v9.0.0, v9.1.0