Add streaming WAL replay with scheduler yields #16

Closed

bravo1goingdark wants to merge 1 commit into main from add-async-streaming-iterator-for-replay-wzcnpt

Conversation

@bravo1goingdark (Owner) commented Feb 3, 2026

Motivation

  • Reduce long, blocking pauses during WAL-based recovery by processing records incrementally and yielding to the scheduler.
  • Provide a streaming API to consume WAL records without materializing the full Vec, enabling lower memory usage and backpressure-friendly recovery.
  • Preserve the existing iterate_from behavior for tests and callers by implementing it on top of the new streaming API.

Description

  • Add a new WriteAheadLog::iterate_from_stream method that takes an async callback of shape (WalRecord) -> Future<Output = Result<(), E>> and streams records from the WAL starting at a given id. WAL I/O errors are mapped into the callback's error type via E: From<WalError>, so callers can process records incrementally (see the first sketch after this list).
  • Route the existing iterate_from implementation through iterate_from_stream so the old vector-based API remains available for tests and callers.
  • Update Broker::replay_from_wal to use iterate_from_stream and process records one by one, calling tokio::task::yield_now().await once every yield_every records (set to 1024) so replay yields to the scheduler instead of monopolizing the runtime for long stretches (see the second sketch below).
  • Remove an unused WalRecord import from core::lib.rs as the broker now decodes records inside the streaming callback.
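
To make the API shape concrete, here is a minimal sketch of the streaming method and the delegation described above. The WalRecord and WalError definitions, the in-memory records field, and all struct bodies are hypothetical stand-ins; only the method names, the callback signature, and the E: From<WalError> bound are taken from this PR.

```rust
use std::future::Future;

// Hypothetical stand-ins for the crate's actual types.
pub struct WalRecord {
    pub id: u64,
    pub payload: Vec<u8>,
}

#[derive(Debug)]
pub struct WalError(pub String);

pub struct WriteAheadLog {
    records: Vec<WalRecord>, // the real WAL reads from segment files
}

impl WriteAheadLog {
    /// Stream records with id >= start_id into an async callback,
    /// mapping WAL I/O errors into the callback's error type.
    pub async fn iterate_from_stream<F, Fut, E>(
        &self,
        start_id: u64,
        mut callback: F,
    ) -> Result<(), E>
    where
        F: FnMut(WalRecord) -> Fut,
        Fut: Future<Output = Result<(), E>>,
        E: From<WalError>,
    {
        for record in self.records.iter().filter(|r| r.id >= start_id) {
            // A real implementation would decode the next record from disk
            // here and surface I/O failures as WalError, converted into E
            // via From before returning.
            callback(WalRecord {
                id: record.id,
                payload: record.payload.clone(),
            })
            .await?;
        }
        Ok(())
    }

    /// The old vector-based API, routed through the streaming one so its
    /// observable behavior is unchanged.
    pub async fn iterate_from(&self, start_id: u64) -> Result<Vec<WalRecord>, WalError> {
        let mut out = Vec::new();
        self.iterate_from_stream(start_id, |record| {
            out.push(record);
            async { Ok::<(), WalError>(()) }
        })
        .await?;
        Ok(out)
    }
}
```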

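And a sketch of the reworked replay loop, assuming the types above. The Broker field and the state-application step are placeholders; only the yield cadence mirrors the yield_every = 1024 described in the bullets.

```rust
use tokio::task;

// Hypothetical broker shape; only replay_from_wal's structure matters here.
pub struct Broker {
    wal: WriteAheadLog,
}

impl Broker {
    pub async fn replay_from_wal(&mut self, start_id: u64) -> Result<(), WalError> {
        const YIELD_EVERY: usize = 1024; // matches the PR's yield_every
        let mut processed: usize = 0;

        self.wal
            .iterate_from_stream(start_id, |record| {
                processed += 1;
                let should_yield = processed % YIELD_EVERY == 0;
                async move {
                    // Decode and apply `record` to in-memory state here; in
                    // the PR, decoding now happens inside this callback.
                    let _ = record;
                    if should_yield {
                        // Hand control back to the scheduler so a long
                        // replay does not starve other tasks.
                        task::yield_now().await;
                    }
                    Ok::<(), WalError>(())
                }
            })
            .await
    }
}
```
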
Testing

  • No automated tests were executed as part of this change.
  • Existing WAL unit tests remain in the codebase, and the public iterate_from behavior is preserved because it delegates to the new streaming API; a hypothetical equivalence test is sketched below.
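
Given that delegation, an equivalence test could pin the preserved behavior down. This is a hypothetical sketch built on the types sketched earlier; the sample_wal fixture is made up.

```rust
// Hypothetical fixture: a WAL pre-populated with known records.
fn sample_wal() -> WriteAheadLog {
    WriteAheadLog {
        records: (0..3).map(|id| WalRecord { id, payload: vec![] }).collect(),
    }
}

#[tokio::test]
async fn iterate_from_matches_streaming() -> Result<(), WalError> {
    let wal = sample_wal();

    // Collect record ids via the streaming API...
    let mut streamed_ids = Vec::new();
    wal.iterate_from_stream(0, |record| {
        streamed_ids.push(record.id);
        async { Ok::<(), WalError>(()) }
    })
    .await?;

    // ...and via the vector-based API; both must agree.
    let collected: Vec<u64> = wal
        .iterate_from(0)
        .await?
        .into_iter()
        .map(|r| r.id)
        .collect();
    assert_eq!(streamed_ids, collected);
    Ok(())
}
```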
