Optimize scan performance across the surrealmx scan path #65

Merged
tobiemh merged 4 commits into main from dev/stu/optimisations on Feb 17, 2026
Conversation

@ssttuu
Contributor

@ssttuu commented Feb 16, 2026

Motivation

Scan operations in surrealmx are slower than they were in SurrealDB v2, which used a radix tree with excellent cache locality. While the underlying skip list is being kept, the layers built on top of it -- the Cursor, the MergeIterator, and the merge queue materialization -- all introduced unnecessary overhead that compounded during range scans. The most critical issue was the Cursor, which recreated a full MergeIterator on every single next()/prev() call, making cursor-based iteration O(n²). Several other inefficiencies along the scan path (O(log n) join advancement, eager merge queue cloning, unnecessary value cloning on skip paths) further degraded throughput.

Changes Made

1. Persist MergeIterator in Cursor (O(n²) → O(n)) - src/cursor.rs

The Cursor previously created and destroyed a MergeIterator on every next(), prev(), and seek() call -- constructing a fresh BTreeMap, skip list range iterator, and writeset range iterator, consuming one entry, then dropping everything. The Cursor now stores a persistent MergeIterator and reuses it across sequential calls in the same direction. The iterator is only recreated on direction changes or explicit seeks. This also required switching the skip list range bounds from borrowed (Bound<&'a Bytes>) to owned (Bound<Bytes>) to avoid self-referential lifetime issues.
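The shape of this change can be sketched with simplified, hypothetical types: a plain BTreeMap stands in for the merged skip list and writeset sources, and the cursor keeps one live range iterator instead of rebuilding it per call. The real Cursor also handles prev() and direction changes, which this forward-only sketch omits.

```rust
use std::collections::BTreeMap;
use std::ops::Bound;

/// Simplified cursor: the map stands in for the merged sources, and the
/// range iterator stands in for the persistent MergeIterator.
struct Cursor<'a> {
    data: &'a BTreeMap<Vec<u8>, Vec<u8>>,
    // Persistent iterator, reused across sequential next() calls instead
    // of being rebuilt (and torn down) on every call.
    iter: Option<std::collections::btree_map::Range<'a, Vec<u8>, Vec<u8>>>,
}

impl<'a> Cursor<'a> {
    fn new(data: &'a BTreeMap<Vec<u8>, Vec<u8>>) -> Self {
        Cursor { data, iter: None }
    }

    /// Explicit seeks are one of the cases that rebuild the iterator.
    fn seek(&mut self, key: &[u8]) {
        self.iter =
            Some(self.data.range((Bound::Included(key.to_vec()), Bound::Unbounded)));
    }

    /// Lazily build the iterator once; later calls only advance it, so a
    /// full forward scan is O(n) instead of O(n²).
    fn next(&mut self) -> Option<(&'a Vec<u8>, &'a Vec<u8>)> {
        let data = self.data;
        let iter = self.iter.get_or_insert_with(|| data.range::<Vec<u8>, _>(..));
        iter.next()
    }
}
```

The old behaviour corresponds to calling `seek` before every `next`, which pays the full iterator-construction cost per entry.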

2. Replace O(log n) join advancement with O(1) VecDeque pops - src/iter.rs

MergeIterator::advance_join() previously re-searched the BTreeMap with a range(Excluded(current_key.clone()), Unbounded) query on every step -- an O(log n) lookup plus a key clone. The join source is now stored as a VecDeque (converted from the BTreeMap at construction time), and advancement is a single pop_front() or pop_back() -- O(1) with no cloning.
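A minimal sketch of the before/after, using simplified stand-in types (the real join source holds the materialized merge-queue entries):

```rust
use std::collections::{BTreeMap, VecDeque};
use std::ops::Bound::{Excluded, Unbounded};

/// Old approach: re-search the tree past the current key on every step,
/// paying an O(log n) lookup plus per-step cloning.
fn advance_old(
    map: &BTreeMap<Vec<u8>, Vec<u8>>,
    current: &[u8],
) -> Option<(Vec<u8>, Vec<u8>)> {
    map.range::<[u8], _>((Excluded(current), Unbounded))
        .next()
        .map(|(k, v)| (k.clone(), v.clone()))
}

/// New approach: convert the sorted map once at construction, then
/// advance with O(1) pops from either end.
struct JoinSource {
    entries: VecDeque<(Vec<u8>, Vec<u8>)>,
}

impl JoinSource {
    fn new(map: BTreeMap<Vec<u8>, Vec<u8>>) -> Self {
        // BTreeMap iterates in key order, so the deque is already sorted.
        JoinSource { entries: map.into_iter().collect() }
    }
    fn advance_forward(&mut self) -> Option<(Vec<u8>, Vec<u8>)> {
        self.entries.pop_front() // O(1), no key clone
    }
    fn advance_backward(&mut self) -> Option<(Vec<u8>, Vec<u8>)> {
        self.entries.pop_back() // O(1), no key clone
    }
}
```

Popping from either end is what lets the same structure serve forward and reverse iteration without re-deriving a position from the current key.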

3. Fast-path skip of merge queue materialization - src/tx.rs, src/cursor.rs

Before every scan, a BTreeMap was eagerly built by cloning all overlapping entries from the merge queue -- even when the queue was empty (the common case, since the merge window is very short). An is_empty() guard now skips the entire materialization loop when the merge queue has no entries, eliminating unnecessary iteration and allocation in the steady state.
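The guard is a one-branch change; a sketch with hypothetical simplified types:

```rust
use std::collections::BTreeMap;
use std::ops::Range;

/// Build the scan's view of the merge queue. The function name and
/// signature are illustrative, not the crate's actual API.
fn materialize_overlap(
    merge_queue: &BTreeMap<Vec<u8>, Vec<u8>>,
    range: Range<Vec<u8>>,
) -> BTreeMap<Vec<u8>, Vec<u8>> {
    // Fast path: nothing is queued (the steady state, since the merge
    // window is short), so skip all iteration and allocation.
    if merge_queue.is_empty() {
        return BTreeMap::new();
    }
    // Slow path: clone only the entries overlapping the scan range.
    merge_queue
        .range(range)
        .map(|(k, v)| (k.clone(), v.clone()))
        .collect()
}
```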

4. Eliminate unnecessary clones on skip and key-only paths - src/iter.rs

In next_key()'s Committed branch, an Option<Bytes> value was cloned only to call .is_some() on the clone. The presence check is now done on a reference before advancing, so no value bytes are cloned on the key-only path.
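The pattern in isolation, with simplified stand-in types (Vec<u8> in place of Bytes, and an illustrative entry struct rather than the crate's real one):

```rust
/// Simplified committed entry: `None` marks a tombstone.
struct Committed {
    key: Vec<u8>,
    value: Option<Vec<u8>>,
}

/// Before: clones the value bytes only to test presence on the clone.
fn next_key_old(entry: &Committed) -> Option<Vec<u8>> {
    let value = entry.value.clone(); // allocation wasted on every entry
    if value.is_some() { Some(entry.key.clone()) } else { None }
}

/// After: test presence through a reference; no value clone at all.
fn next_key_new(entry: &Committed) -> Option<Vec<u8>> {
    if entry.value.is_some() { Some(entry.key.clone()) } else { None }
}
```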

Test adjustments - src/db.rs, tests/iterators.rs, tests/iteration_edge.rs

Four cursor tests needed drop(cursor) before tx.cancel()/tx.commit() because the persistent MergeIterator now holds a SkipRange with a Drop impl, keeping the immutable borrow of tx alive through the cursor's destructor.
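The borrow-order requirement can be reproduced with toy types: when a value with a Drop impl borrows the transaction, the compiler keeps that borrow alive until the value is dropped, so the cursor must be dropped before the transaction is consumed by value. Types below are hypothetical stand-ins.

```rust
struct Tx { data: Vec<u8> }

impl Tx {
    fn cursor(&self) -> Cursor<'_> { Cursor { tx: self } }
    fn cancel(self) {} // consumes the transaction by value
}

struct Cursor<'a> { tx: &'a Tx }

impl Drop for Cursor<'_> {
    fn drop(&mut self) {
        // The Drop impl reads through the borrow, so the borrow of `tx`
        // must stay live for the cursor's entire lifetime.
        let _ = self.tx.data.len();
    }
}

fn scan_then_cancel(tx: Tx) -> usize {
    let cursor = tx.cursor();
    let n = cursor.tx.data.len(); // stand-in for iterating
    drop(cursor); // release the borrow explicitly
    tx.cancel(); // only now can `tx` be moved
    n
}
```

Removing the `drop(cursor)` line makes this fail to compile with "cannot move out of `tx` because it is borrowed", which is the error the four tests would hit without the adjustment.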

New benchmarks - benches/operations.rs

Added cursor_scan, scan_iter_ops, and keys_iter_ops benchmark groups that exercise the Cursor, ScanIterator, and KeyIterator paths respectively. The existing benchmarks only covered the direct tx.scan()/tx.keys()/tx.total() methods.

Benchmark Results

Cursor and iterator-based scans (the paths these changes target):

Benchmark                   Change
cursor_fwd (limit 10)       -39% to -43%
cursor_fwd (limit 100)      -62% to -69%
cursor_fwd (limit 1000)     -66% to -74%
scan_iter (limit 1000)      -57% to -59%
keys_iter (limit 1000)      -63% to -65%

All other operations (put, get, exists, delete, direct scan/keys/total, concurrent read/write) are within noise of main (±2%).


@tobiemh tobiemh merged commit ce3fa91 into main Feb 17, 2026
6 checks passed
@tobiemh tobiemh deleted the dev/stu/optimisations branch February 17, 2026 00:52