toyrcu is a Read-Copy-Update (RCU) implementation written in Rust. This project is meant for educational purposes and is not suitable for real-world production use.
Read-Copy-Update (RCU) is a synchronization mechanism that allows multiple readers to access data concurrently while a single writer updates it without blocking the readers. It's particularly useful in scenarios where reads are much more frequent than writes.
Key characteristics of RCU:
- Readers can access data without locks
- Writers make a copy of the data to modify
- Updates become visible to new readers once they complete
- Old versions are cleaned up when no readers are using them
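The characteristics above can be illustrated with a deliberately simplified sketch. `LeakyRcu` below is a hypothetical toy type (not part of this crate): readers do a plain atomic load with no lock, and the writer publishes a fresh copy via an atomic swap. It leaks every old version on purpose, because safe reclamation of old data is exactly the hard part that a real RCU (and this crate) must solve.

```rust
use std::sync::atomic::{AtomicPtr, Ordering};

// Toy copy-on-write cell: lock-free reads, swap-to-publish writes,
// and NO reclamation -- old versions are intentionally leaked.
struct LeakyRcu<T> {
    ptr: AtomicPtr<T>,
}

impl<T> LeakyRcu<T> {
    fn new(value: T) -> Self {
        Self { ptr: AtomicPtr::new(Box::into_raw(Box::new(value))) }
    }

    // Lock-free read: a single atomic load plus a dereference.
    fn read(&self) -> &T {
        unsafe { &*self.ptr.load(Ordering::Acquire) }
    }

    // Copy-on-write update: build the new version off to the side,
    // then publish it in one atomic swap. The old allocation leaks,
    // which is what keeps existing readers' references valid here.
    fn write(&self, value: T) {
        let new = Box::into_raw(Box::new(value));
        self.ptr.swap(new, Ordering::AcqRel);
    }
}

fn main() {
    let rcu = LeakyRcu::new(1);
    let old_reader = rcu.read(); // keeps seeing the version it loaded
    rcu.write(2);                // new readers see the new version
    println!("old reader: {}, new reader: {}", old_reader, rcu.read());
}
```

The sketch shows why the read path can be lock-free and why updates never block readers, but also why memory management dominates a real design: someone must eventually decide when the old versions are safe to free.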
An important aspect of this RCU implementation is memory management:
- When you clone an `Rcu<T>` instance (or use `read_lock()`), old data is not automatically cleaned up until you explicitly call `clean()` or `try_clean_fast()` on a mutable reference.
- The `Drop` implementation will attempt to clean up automatically when an instance is dropped, but this is only possible when there are no other active references.
- For best results:
  - Keep clones short-lived when possible
  - Call `clean()` or `try_clean_fast()` periodically on long-lived clones
  - Be aware that failing to clean up can result in memory leaks, as old versions of the data will be retained
This implementation provides:
- Lock-free read access
- Copy-on-write updates
- Safe memory reclamation
- Thread-safe operation
For short-lived RCU instances, the automatic cleanup in the Drop implementation will handle memory management:

```rust
use toyrcu::Rcu;

fn main() {
    // Create a new RCU with an initial value
    let mut rcu = Rcu::new(1);

    // Make some updates
    for i in 2..5 {
        let mut guard = rcu.assign_pointer().unwrap();
        *guard = i;
        println!("Updated value to: {}", *guard);
    }

    // When rcu goes out of scope, Drop will automatically try to clean up
    println!("Final value before drop: {}", *rcu);
    // End of scope - automatic cleanup happens here
}
```

For long-lived clones, especially in multi-threaded contexts, explicit cleanup is necessary:
```rust
use toyrcu::Rcu;
use std::{thread, time::Duration};

fn main() {
    // Create a new RCU with an initial value
    let rcu = Rcu::new(10);

    // Reader thread example
    let reader = thread::spawn({
        let rcu = rcu.read_lock();
        move || {
            println!("Reader thread started");
            for _ in 0..5 {
                // Read the value
                let value = *rcu;
                println!("Reader: {}", value);
                thread::sleep(Duration::from_millis(500));
            }
            println!("Reader thread finished");
        }
    });

    // Writer thread example
    let writer = thread::spawn({
        let mut rcu = rcu.read_lock();
        move || {
            println!("Writer thread started");
            for i in 1..6 {
                {
                    let mut rcu_guard = match rcu.assign_pointer() {
                        Ok(guard) => guard,
                        Err(e) => {
                            eprintln!("Error: {}", e);
                            return;
                        }
                    };
                    *rcu_guard += i;
                    println!("Writer: updated to {}", *rcu_guard);
                }
                // IMPORTANT: explicitly clean up after updates.
                // This is necessary for long-lived clones.
                let cleaned = rcu.try_clean_fast();
                println!("Cleanup performed: {}", cleaned);
                thread::sleep(Duration::from_millis(1000));
            }
            println!("Writer thread finished");
        }
    });

    reader.join().unwrap();
    writer.join().unwrap();
    println!("All threads completed. Final value: {}", *rcu);
}
```

This RCU implementation is optimized for:
- Fast, lock-free reads
- Minimal contention between readers and writers
- Efficient memory management with explicit cleanup
- Exponential backoff for high-contention scenarios
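Exponential backoff, mentioned above, is a common shape for contention handling: retry quickly at first, then wait progressively longer so contending threads spread out. The helper below (`retry_with_backoff` is a hypothetical name, not this crate's API, and the crate's actual strategy may differ) sketches the pattern using only the standard library.

```rust
use std::{hint, thread, time::Duration};

// Retry `attempt` until it succeeds, spinning briefly at first and then
// sleeping with exponentially growing (capped) waits between attempts.
fn retry_with_backoff<T>(mut attempt: impl FnMut() -> Option<T>) -> T {
    let mut backoff = 1u32;
    loop {
        if let Some(value) = attempt() {
            return value;
        }
        if backoff < 64 {
            // Early retries: busy-spin, cheap if contention is short-lived.
            for _ in 0..backoff {
                hint::spin_loop();
            }
        } else {
            // Later retries: yield the CPU for progressively longer.
            thread::sleep(Duration::from_micros(backoff as u64));
        }
        backoff = (backoff * 2).min(1024); // exponential growth, capped
    }
}

fn main() {
    // Simulated contended operation that fails three times, then succeeds.
    let mut tries = 0;
    let result = retry_with_backoff(|| {
        tries += 1;
        if tries < 4 { None } else { Some(tries) }
    });
    println!("succeeded after {} tries", result);
}
```

Capping the backoff keeps worst-case latency bounded; the spin-then-sleep split avoids burning CPU when contention persists.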
Benchmark results on a typical system:
- Sequential reads: ~10-20 million ops/sec
- Sequential writes: ~1 million ops/sec
- Parallel reads (8 threads): ~50-100 million total ops/sec
- Parallel writes (4 threads): ~300-400K total ops/sec with contention handling
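Throughput numbers like those above are typically gathered by timing a tight loop of the operation under test. The sketch below shows that general shape with a hypothetical `ops_per_sec` helper; it is not the benchmark harness this project uses, and the workload here is a stand-in rather than an actual RCU read.

```rust
use std::hint::black_box;
use std::time::Instant;

// Run `op` for `iters` iterations and report operations per second.
fn ops_per_sec<F: FnMut()>(mut op: F, iters: u32) -> f64 {
    let start = Instant::now();
    for _ in 0..iters {
        op();
    }
    iters as f64 / start.elapsed().as_secs_f64()
}

fn main() {
    let data = vec![1u64; 16];
    // Stand-in workload: touch shared data; black_box prevents the
    // compiler from optimizing the loop body away.
    let rate = ops_per_sec(|| { black_box(&data); }, 1_000_000);
    println!("~{:.0} ops/sec", rate);
}
```

Real benchmarks also need warm-up runs and multiple samples; single-loop timings like this are only a rough indication.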
To build and run:

```sh
cargo build
cargo run
```
The main program includes several examples and benchmarks that demonstrate RCU usage.
- This is a simplified implementation for learning purposes
- Not optimized for production workloads
- Lacks some features found in industrial-strength RCU implementations
- Requires manual memory management through `clean()` or `try_clean_fast()` for long-lived clones