Optimize scan performance across the surrealmx scan path #65
Merged
tobiemh approved these changes on Feb 17, 2026
Motivation
Scan operations in surrealmx are slower than they were in SurrealDB v2, which used a radix tree with excellent cache locality. While the underlying skip list is being kept, the layers built on top of it -- the `Cursor`, the `MergeIterator`, and the merge queue materialization -- all introduced unnecessary overhead that compounded during range scans. The most critical issue was the `Cursor`, which recreated a full `MergeIterator` on every single `next()`/`prev()` call, making cursor-based iteration O(n²). Several other inefficiencies along the scan path (O(log n) join advancement, eager merge queue cloning, unnecessary value cloning on skip paths) further degraded throughput.

Changes Made
1. Persist MergeIterator in Cursor (O(n²) → O(n)) — `src/cursor.rs`

The `Cursor` previously created and destroyed a `MergeIterator` on every `next()`, `prev()`, and `seek()` call -- constructing a fresh BTreeMap, skip list range iterator, and writeset range iterator, consuming one entry, then dropping everything. The `Cursor` now stores a persistent `MergeIterator` and reuses it across sequential calls in the same direction; the iterator is only recreated on direction changes or explicit seeks. This also required switching the skip list range bounds from borrowed (`Bound<&'a Bytes>`) to owned (`Bound<Bytes>`) to avoid self-referential lifetime issues.
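As a rough illustration of the persistent-iterator pattern (a minimal, self-contained sketch over a plain `BTreeMap` with hypothetical names, not surrealmx's actual types), the cursor below keeps its range iterator alive across `next()` calls, rebuilds it only on an explicit `seek()`, and uses an owned lower bound to sidestep the self-referential lifetime problem:

```rust
use std::collections::btree_map::{BTreeMap, Range};
use std::ops::Bound;

// Minimal sketch over a plain BTreeMap (hypothetical names): the iterator
// persists across next() calls instead of being rebuilt on every step.
struct Cursor<'a> {
    map: &'a BTreeMap<String, String>,
    iter: Option<Range<'a, String, String>>,
    // Owned bound (Bound<String>, not Bound<&String>) so the cursor does
    // not borrow from itself when it has to rebuild the range.
    lower: Bound<String>,
}

impl<'a> Cursor<'a> {
    fn new(map: &'a BTreeMap<String, String>) -> Self {
        Cursor { map, iter: None, lower: Bound::Unbounded }
    }

    fn next(&mut self) -> Option<(&'a String, &'a String)> {
        // Rebuild only on the first call or after seek(); otherwise reuse.
        if self.iter.is_none() {
            self.iter = Some(self.map.range((self.lower.clone(), Bound::Unbounded)));
        }
        let entry = self.iter.as_mut().unwrap().next();
        if let Some((k, _)) = entry {
            // Remember our position so a later rebuild resumes correctly.
            self.lower = Bound::Excluded(k.clone());
        }
        entry
    }

    fn seek(&mut self, key: &str) {
        // Explicit seeks (and, in the real change, direction flips)
        // invalidate the cached iterator.
        self.iter = None;
        self.lower = Bound::Included(key.to_string());
    }
}

fn demo() -> Vec<String> {
    let map: BTreeMap<String, String> = [("a", "1"), ("b", "2"), ("c", "3")]
        .into_iter()
        .map(|(k, v)| (k.to_string(), v.to_string()))
        .collect();
    let mut cur = Cursor::new(&map);
    let mut out = Vec::new();
    out.push(cur.next().unwrap().0.clone()); // "a" (iterator built once)
    out.push(cur.next().unwrap().0.clone()); // "b" (reused, no rebuild)
    cur.seek("a");
    out.push(cur.next().unwrap().0.clone()); // "a" (rebuilt after seek)
    out
}

fn main() {
    assert_eq!(demo(), vec!["a", "b", "a"]);
}
```

The real change additionally merges the writeset and merge-queue sources and supports reverse iteration, but the shape is the same: reuse until a direction change or seek forces a rebuild.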
2. Replace O(log n) join advancement with O(1) VecDeque pops — `src/iter.rs`

`MergeIterator::advance_join()` previously re-searched the BTreeMap with a `range((Excluded(current_key.clone()), Unbounded))` query on every step -- an O(log n) lookup plus a key clone. The join source is now stored as a `VecDeque` (converted from the BTreeMap at construction time), and advancement is a single `pop_front()` or `pop_back()` -- O(1) with no cloning.
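A toy comparison of the two advancement strategies (hypothetical reduction, using plain `String` keys rather than `Bytes`): converting the BTreeMap into a `VecDeque` once up front turns each step into a constant-time pop:

```rust
use std::collections::{BTreeMap, VecDeque};
use std::ops::Bound::{Excluded, Unbounded};

// Old style (hypothetical reduction): re-search the tree on every step,
// an O(log n) lookup that also forces a key clone.
fn advance_via_range(map: &BTreeMap<String, ()>, current: &str) -> Option<String> {
    map.range::<str, _>((Excluded(current), Unbounded))
        .next()
        .map(|(k, _)| k.clone())
}

// New style: convert once at construction, then advance with O(1) pops.
fn drain_via_deque(map: BTreeMap<String, ()>) -> Vec<String> {
    // into_keys() yields keys in sorted order, so order is preserved.
    let mut join: VecDeque<String> = map.into_keys().collect();
    let mut out = Vec::new();
    // pop_front() for forward iteration; pop_back() would serve reverse.
    while let Some(key) = join.pop_front() {
        out.push(key);
    }
    out
}

fn main() {
    let map: BTreeMap<String, ()> =
        ["a", "b", "c"].into_iter().map(|k| (k.to_string(), ())).collect();
    assert_eq!(advance_via_range(&map, "a").as_deref(), Some("b"));
    assert_eq!(drain_via_deque(map), vec!["a", "b", "c"]);
}
```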
3. Fast-path skip of merge queue materialization — `src/tx.rs`, `src/cursor.rs`

Before every scan, a `BTreeMap` was eagerly built by cloning all overlapping entries from the merge queue -- even when the queue was empty, which is the common case since the merge window is very short. An `is_empty()` guard now skips the entire materialization loop when the merge queue has no entries, eliminating unnecessary iteration and allocation in the steady state.
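In sketch form (hypothetical signature; the real code works over the merge queue's own entry type), the guard simply short-circuits before any iteration or cloning happens:

```rust
use std::collections::BTreeMap;

// Hypothetical shape of the fast path: only materialize merge-queue
// entries overlapping [lo, hi) when the queue actually has any.
fn materialize(queue: &[(String, String)], lo: &str, hi: &str) -> BTreeMap<String, String> {
    // Fast path: an empty queue (the common case, given the short merge
    // window) skips the filtering loop and its clones entirely.
    if queue.is_empty() {
        return BTreeMap::new();
    }
    queue
        .iter()
        .filter(|(k, _)| k.as_str() >= lo && k.as_str() < hi)
        .cloned()
        .collect()
}

fn main() {
    assert!(materialize(&[], "a", "z").is_empty());
    let q = vec![("b".to_string(), "1".to_string())];
    assert_eq!(materialize(&q, "a", "z").len(), 1);
}
```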
4. Eliminate unnecessary clones on skip and key-only paths — `src/iter.rs`

In `next_key()`'s Committed branch, an `Option<Bytes>` value was cloned only to call `.is_some()` on the copy; the code now reads the boolean through the reference before advancing.
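A reduced example of the pattern (using `Vec<u8>` in place of `Bytes`): the fix asks about presence through the reference instead of cloning the value first:

```rust
// Old style: clone the whole value just to ask whether it exists.
fn has_value_cloning(v: &Option<Vec<u8>>) -> bool {
    v.clone().is_some() // copies the byte buffer, then throws it away
}

// New style: read the discriminant through the reference -- no allocation.
fn has_value(v: &Option<Vec<u8>>) -> bool {
    v.is_some()
}

fn main() {
    let v = Some(vec![1u8, 2, 3]);
    assert_eq!(has_value_cloning(&v), has_value(&v));
    assert!(!has_value(&None));
}
```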
Test adjustments — `src/db.rs`, `tests/iterators.rs`, `tests/iteration_edge.rs`

Four cursor tests needed `drop(cursor)` before `tx.cancel()`/`tx.commit()` because the persistent `MergeIterator` now holds a `SkipRange` with a `Drop` impl, which keeps the immutable borrow of `tx` alive through the cursor's destructor.
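The borrow issue behind these test changes reduces to a few lines (hypothetical types; `SkipRange`'s `Drop` impl plays the role of the guard here): once the borrowing value has a `Drop` impl, its borrow lasts until the destructor runs, so the owner cannot be moved before an explicit `drop`:

```rust
// Hypothetical reduction of the borrow problem: a guard type with a Drop
// impl keeps its borrow of the owner alive until the guard is dropped.
struct Tx {
    data: Vec<u8>,
}

impl Tx {
    // Consumes the transaction, like tx.cancel()/tx.commit().
    fn cancel(self) -> usize {
        self.data.len()
    }
}

struct Cursor<'a> {
    tx: &'a Tx,
}

impl Drop for Cursor<'_> {
    // Because a Drop impl exists, the compiler must keep `tx` borrowed
    // until this destructor runs; without it, the borrow could end at the
    // cursor's last use under non-lexical lifetimes.
    fn drop(&mut self) {}
}

fn main() {
    let tx = Tx { data: vec![1, 2, 3] };
    let cursor = Cursor { tx: &tx };
    let _ = cursor.tx.data.len();
    // Removing this drop() makes the next line a compile error: `tx` is
    // still immutably borrowed by `cursor`.
    drop(cursor);
    assert_eq!(tx.cancel(), 3);
}
```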
New benchmarks — `benches/operations.rs`

Added `cursor_scan`, `scan_iter_ops`, and `keys_iter_ops` benchmark groups that exercise the `Cursor`, `ScanIterator`, and `KeyIterator` paths respectively. The existing benchmarks only covered the direct `tx.scan()`/`tx.keys()`/`tx.total()` methods.
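Purely as an illustration of what the new groups measure (a std-only stand-in; the actual benchmarks in `benches/operations.rs` run under the project's benchmark harness against real iterators), each group walks the first `limit` entries of a sorted key space, mirroring the `cursor_fwd` shape:

```rust
use std::time::Instant;

// Std-only stand-in for the new benchmark groups: scan the first `limit`
// keys and fold them into a checksum so the work cannot be optimized away.
fn bench_scan(keys: &[String], limit: usize) -> usize {
    keys.iter().take(limit).map(|k| k.len()).sum()
}

fn main() {
    let keys: Vec<String> = (0..1000).map(|i| format!("key{:06}", i)).collect();
    for limit in [10, 100, 1000] {
        let start = Instant::now();
        let checksum = bench_scan(&keys, limit);
        println!("limit {:4}: {:?} (checksum {})", limit, start.elapsed(), checksum);
    }
}
```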
Benchmark Results

Cursor and iterator-based scans (the paths these changes target):

- `cursor_fwd` (limit 10)
- `cursor_fwd` (limit 100)
- `cursor_fwd` (limit 1000)
- `scan_iter` (limit 1000)
- `keys_iter` (limit 1000)

All other operations (put, get, exists, delete, direct scan/keys/total, concurrent read/write) are within noise of main (±2%).