
Refactor WAL appends to async background writer with batching and backpressure #12

Merged
bravo1goingdark merged 1 commit into main from refactor-writeaheadlog-append-for-async-processing on Feb 3, 2026

Conversation


@bravo1goingdark (Owner) commented on Feb 3, 2026

Motivation

  • Reduce contention on the WAL's inner mutex by offloading file I/O to a dedicated background writer task, and enable batching for higher throughput.
  • Provide immediate per-record logical IDs at enqueue time and expose clear behavior when the WAL becomes overloaded.

Description

  • Replaced the synchronous WalInner/inner-mutex model with a bounded tokio::mpsc channel and a WalWriter background task that owns the file and performs batched writes and fsyncs according to WalConfig (see the sketches after this list).
  • Reserved logical IDs at enqueue time with an AtomicU64 (next_id), so append returns the assigned id immediately; append now enqueues a WalWriteRequest via try_send and returns WalError::Backpressure when the channel is full or WalError::WriterStopped when the writer has shut down.
  • Moved the in-memory index to an Arc<Mutex<HashMap<u64, u64>>>, which the writer updates after each batch; flush is implemented by sending a WalMessage::Flush with a oneshot responder to the writer.
  • Moved append_count and bytes_written to atomics updated by the writer, added channel_capacity to WalConfig, and added the error variants Backpressure, WriterStopped, and InvalidConfig.
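
Below is a minimal sketch of the enqueue path under these changes. It assumes the names mentioned above (WalWriteRequest, WalMessage, WalError, next_id); the struct layouts, error payloads, and the payload type are illustrative, not the actual API.

```rust
// Sketch of the enqueue side, assuming the names used in this PR
// (WalWriteRequest, WalMessage, WalError); struct layouts are illustrative.
use std::sync::atomic::{AtomicU64, Ordering};
use tokio::sync::{mpsc, oneshot};

pub struct WalWriteRequest {
    pub id: u64,
    pub payload: Vec<u8>,
}

pub enum WalMessage {
    Write(WalWriteRequest),
    Flush(oneshot::Sender<()>), // acked by the writer after fsync
}

#[derive(Debug)]
pub enum WalError {
    Backpressure,
    WriterStopped,
    InvalidConfig(String),
}

pub struct Wal {
    next_id: AtomicU64,           // reserves logical ids at enqueue time
    tx: mpsc::Sender<WalMessage>, // bounded by WalConfig::channel_capacity
}

impl Wal {
    /// Reserve a logical id and enqueue the record without blocking.
    pub fn append(&self, payload: Vec<u8>) -> Result<u64, WalError> {
        let id = self.next_id.fetch_add(1, Ordering::SeqCst);
        self.tx
            .try_send(WalMessage::Write(WalWriteRequest { id, payload }))
            .map_err(|e| match e {
                mpsc::error::TrySendError::Full(_) => WalError::Backpressure,
                mpsc::error::TrySendError::Closed(_) => WalError::WriterStopped,
            })?;
        Ok(id)
    }

    /// Ask the background writer to fsync everything enqueued so far.
    pub async fn flush(&self) -> Result<(), WalError> {
        let (ack_tx, ack_rx) = oneshot::channel();
        self.tx
            .send(WalMessage::Flush(ack_tx))
            .await
            .map_err(|_| WalError::WriterStopped)?;
        ack_rx.await.map_err(|_| WalError::WriterStopped)
    }
}
```

Note that in this sketch an id is consumed even when try_send fails, so the logical sequence can have gaps under backpressure.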

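And a sketch of the writer side, building on the types above: a WalWriter task that owns the file, drains the channel into batches, updates the shared index and the atomic counters after each batch, and acknowledges Flush after an fsync. The batching knob (max_batch), the record framing, and the use of blocking std::fs I/O inside the task are assumptions for illustration, not the project's actual code.

```rust
// Sketch of the WalWriter background task; max_batch and the blocking
// std::fs I/O are illustrative assumptions.
use std::collections::HashMap;
use std::fs::File;
use std::io::Write;
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::{Arc, Mutex};
use tokio::sync::mpsc;

struct WalWriter {
    file: File,                           // owned exclusively by this task
    rx: mpsc::Receiver<WalMessage>,
    index: Arc<Mutex<HashMap<u64, u64>>>, // logical id -> file offset, shared with the Wal handle
    append_count: Arc<AtomicU64>,
    bytes_written: Arc<AtomicU64>,
    max_batch: usize,                     // derived from WalConfig (assumed knob)
}

impl WalWriter {
    async fn run(mut self) {
        let mut batch = Vec::with_capacity(self.max_batch);
        while let Some(msg) = self.rx.recv().await {
            match msg {
                WalMessage::Write(req) => {
                    batch.push(req);
                    // Opportunistically drain whatever is already queued, up to the batch limit.
                    while batch.len() < self.max_batch {
                        match self.rx.try_recv() {
                            Ok(WalMessage::Write(req)) => batch.push(req),
                            Ok(WalMessage::Flush(ack)) => {
                                self.write_batch(&mut batch);
                                let _ = self.file.sync_data();
                                let _ = ack.send(());
                            }
                            Err(_) => break, // channel empty (or closed); stop draining
                        }
                    }
                    self.write_batch(&mut batch);
                    // A real implementation would also fsync here according to WalConfig's sync policy.
                }
                WalMessage::Flush(ack) => {
                    let _ = self.file.sync_data();
                    let _ = ack.send(());
                }
            }
        }
    }

    fn write_batch(&mut self, batch: &mut Vec<WalWriteRequest>) {
        let mut index = self.index.lock().unwrap();
        for req in batch.drain(..) {
            // Record framing (length prefix, checksum, ...) omitted for brevity.
            let offset = self.bytes_written.load(Ordering::SeqCst);
            self.file.write_all(&req.payload).expect("WAL write failed");
            index.insert(req.id, offset);
            self.bytes_written
                .fetch_add(req.payload.len() as u64, Ordering::SeqCst);
            self.append_count.fetch_add(1, Ordering::SeqCst);
        }
    }
}
```

The writer would typically be spawned when the WAL is opened, e.g. tokio::spawn(writer.run()), with the channel created as mpsc::channel(config.channel_capacity).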
Testing

  • No automated tests were run for this change; the existing unit tests in wal::tests remain in place but were not executed as part of this rollout.

bravo1goingdark merged commit b3b9183 into main on Feb 3, 2026
1 check failed