
Commit e73bedd: "Readme and no_std support" (1 parent: df3cff6)

File tree: 5 files changed (+101, -24 lines)

`.github/workflows/ci.yml` (1 addition, 0 deletions)

```diff
@@ -16,5 +16,6 @@ jobs:
       - uses: actions/checkout@v4
       - uses: dtolnay/rust-toolchain@stable
       - run: cargo test
+      - run: cargo check --no-default-features
       - run: cargo run --example raw
       - run: cargo run --example intrusive
```

`Cargo.toml` (4 additions, 0 deletions)

```diff
@@ -3,4 +3,8 @@ name = "intrusivelock"
 version = "0.1.0"
 edition = "2024"
 
+[features]
+default = ["std"]
+std = []
+
 [dependencies]
```
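For context on the new `std` feature above: a downstream crate that wants to stay `no_std`-capable itself would conventionally mirror the feature and forward it, rather than hard-disabling defaults. A sketch of a hypothetical consumer manifest (the crate name `mycrate` and version pin are assumptions):

```toml
# Hypothetical consumer Cargo.toml: forwarding the `std` feature so that
# enabling `mycrate/std` transitively enables `intrusivelock/std`.
[features]
default = ["std"]
std = ["intrusivelock/std"]

[dependencies]
intrusivelock = { version = "0.1", default-features = false }
```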

`README.md` (89 additions, 22 deletions)

````diff
@@ -1,38 +1,105 @@
-# Intrusive Locks for Rust
+# intrusivelock
 
-The purpose of this crate is to provide intrusive locks (locks that are contained inside the memory region
-they protect). For example, data that's kept in an `mmap` - where you want the lock and the data it protects
-to live side-by-side.
+Intrusive locks for Rust — locks that live inside the memory region they protect.
 
-Currently only `SpinRwLock` is implemented. Unlike rust's canonical locks/mutexes, it does not hold an `UnsafeCell<T>` -
-it's a standalone lock object. You can use `IntrusiveSpinRwLock` to encompass the value to protect, which must contain
-the lock inside (use `HasIntrusiveSpinRwLock` to return the contained lock).
+This is useful when the lock and the data must be co-located in the same allocation, for example inside an `mmap`-backed region, a shared-memory arena, or a custom cache-line-aligned struct.
 
-Notes:
-* Being a spin lock, it is expected to work under low-to-moderate contention.
-* It is not intended for multiprocessing, only multithreading. You need kernel support
+## Features
+
+- **`SpinRwLock`** — a standalone, 8-byte reader-writer spin lock (`AtomicU64`). Unlike `std::sync::RwLock`, it does not wrap data in an `UnsafeCell<T>`. You manage the protected data yourself.
+- **`IntrusiveSpinRwLock<T>`** — a safe wrapper that pairs a value `T` (which contains a `SpinRwLock`) with RAII read/write guards that `Deref`/`DerefMut` into `T`.
+- **`try_read` / `try_write`** — non-blocking variants that return `None` on contention.
+- **Writer-preferring** — once a writer is pending, new readers are blocked until the writer completes. This prevents writer starvation under read-heavy workloads.
+
+### Design notes
+
+- Being a spin lock, it is intended for **low-to-moderate contention** with short critical sections.
+- It is **not** suitable for multi-process synchronization (which requires kernel-assisted locks like `pthread_rwlock` or futexes).
+- There is **no poisoning**: if a thread panics inside a critical section, subsequent lock acquisitions will succeed without any indication of the prior panic.
+
+## Usage
+
+### Intrusive (recommended)
+
+Embed the lock inside your data structure and implement `HasIntrusiveSpinRwLock`:
 
-## Example
 ```rust
+use intrusivelock::spin_rwlock::*;
+
 #[derive(Default)]
-#[repr(C, align(64))]
-struct _MyCacheLine {
-    val2: u128,
-    val1: u64,
+struct MyData {
     _lock: SpinRwLock,
-    val3: [u8; 32],
+    counter: u64,
 }
 
-unsafe impl HasIntrusiveSpinRwLock for _MyCacheLine {
-    fn lock<'a>(&'a self) -> &'a SpinRwLock {
+unsafe impl HasIntrusiveSpinRwLock for MyData {
+    fn lock(&self) -> &SpinRwLock {
         &self._lock
     }
 }
 
-type MyCacheLine = IntrusiveSpinRwLock<_MyCacheLine>;
+type Protected = IntrusiveSpinRwLock<MyData>;
+
+let data = Protected::default();
 
-let cache_line = MyCacheLine::new(_MyCacheLine::default());
+// Write access — returns a guard that DerefMuts into MyData
+data.write().counter += 1;
 
-cache_line.write().val2 = 12345;
-assert_eq!(cache_line.read().val2, 12345);
+// Read access
+assert_eq!(data.read().counter, 1);
+
+// Non-blocking
+if let Some(guard) = data.try_write() {
+    // got exclusive access
+}
 ```
+
+### Standalone lock
+
+Use `SpinRwLock` directly when you need the lock separate from the data (e.g. protecting a `static mut`):
+
+```rust
+use intrusivelock::spin_rwlock::SpinRwLock;
+
+let lock = SpinRwLock::new();
+
+{
+    let _r = lock.read(); // shared access
+}
+{
+    let _w = lock.write(); // exclusive access
+}
+```
+
+## API
+
+### `SpinRwLock`
+
+| Method | Description |
+|--------|-------------|
+| `new() -> Self` | Create a new unlocked `SpinRwLock` |
+| `read() -> ReadGuard` | Acquire shared read access (spins until available) |
+| `write() -> WriteGuard` | Acquire exclusive write access (spins until available) |
+| `try_read() -> Option<ReadGuard>` | Try to acquire read access; returns `None` if contended |
+| `try_write() -> Option<WriteGuard>` | Try to acquire write access; returns `None` if contended |
+
+### `IntrusiveSpinRwLock<T>`
+
+| Method | Description |
+|--------|-------------|
+| `new(value: T) -> Self` | Wrap a value containing an embedded lock |
+| `read() -> IntrusiveReadGuard<T>` | Shared access; guard derefs to `&T` |
+| `write() -> IntrusiveWriteGuard<T>` | Exclusive access; guard derefs to `&mut T` |
+| `try_read() -> Option<IntrusiveReadGuard<T>>` | Non-blocking read attempt |
+| `try_write() -> Option<IntrusiveWriteGuard<T>>` | Non-blocking write attempt |
+
+## `no_std` support
+
+The crate is `no_std`-compatible. The `std` feature is enabled by default (providing `thread::yield_now()` in the spin backoff). To use in a `no_std` environment, disable default features:
+
+```toml
+[dependencies]
+intrusivelock = { version = "0.1", default-features = false }
+```
+
+Under `no_std`, the backoff strategy falls back to `core::hint::spin_loop()` instead of yielding.
````
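The README's "writer-preferring" bullet can be illustrated with a minimal, self-contained sketch. The state encoding here is a hypothetical one (a single `AtomicU64` with the low bits as a reader count and one high bit for the writer), not necessarily the crate's actual layout, and `SpinRwLockSketch` is an invented name:

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// Hypothetical state encoding: low 32 bits hold the reader count,
// one high bit marks a pending/active writer.
const WRITER: u64 = 1 << 32;

struct SpinRwLockSketch {
    state: AtomicU64,
}

impl SpinRwLockSketch {
    const fn new() -> Self {
        Self { state: AtomicU64::new(0) }
    }

    /// Readers enter only while the writer bit is clear, so a pending
    /// writer blocks new readers: the writer-preferring rule.
    fn try_read(&self) -> bool {
        let s = self.state.load(Ordering::Relaxed);
        s & WRITER == 0
            && self
                .state
                .compare_exchange(s, s + 1, Ordering::Acquire, Ordering::Relaxed)
                .is_ok()
    }

    fn read_unlock(&self) {
        self.state.fetch_sub(1, Ordering::Release);
    }

    /// A writer succeeds only when no readers and no other writer
    /// hold the lock (state == 0).
    fn try_write(&self) -> bool {
        self.state
            .compare_exchange(0, WRITER, Ordering::Acquire, Ordering::Relaxed)
            .is_ok()
    }

    fn write_unlock(&self) {
        self.state.fetch_and(!WRITER, Ordering::Release);
    }
}

fn main() {
    let lock = SpinRwLockSketch::new();
    assert!(lock.try_read());   // shared access granted
    assert!(!lock.try_write()); // writer excluded by active reader
    lock.read_unlock();
    assert!(lock.try_write());  // exclusive access granted
    assert!(!lock.try_read());  // reader excluded by active writer
    lock.write_unlock();
}
```

A blocking `read()`/`write()` would simply retry these operations in a loop with backoff; the guard types in the real crate do the unlock in `Drop`.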

`src/lib.rs` (2 additions, 0 deletions)

```diff
@@ -1 +1,3 @@
+#![cfg_attr(not(feature = "std"), no_std)]
+
 pub mod spin_rwlock;
```

`src/spin_rwlock.rs` (5 additions, 2 deletions)

```diff
@@ -1,4 +1,4 @@
-use std::{
+use core::{
     cell::UnsafeCell,
     ops::{Deref, DerefMut},
     sync::atomic::{AtomicU64, Ordering},
@@ -33,11 +33,14 @@ impl Backoff {
     fn spin(&mut self) {
         if self.spins < 200 {
             // Phase 1: busy-spin for the first 200 attempts.
-            std::hint::spin_loop();
+            core::hint::spin_loop();
             self.spins += 1;
         } else {
             // Phase 2: yield then do 10 more spins before yielding again.
+            #[cfg(feature = "std")]
             std::thread::yield_now();
+            #[cfg(not(feature = "std"))]
+            core::hint::spin_loop();
             self.spins = 190;
         }
     }
```
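The two-phase backoff in the hunk above reads as a standalone unit: busy-spin 200 times, then yield once per 10 further attempts (the reset to 190 leaves 10 spins before the threshold is hit again). A self-contained sketch of that counter logic under `std` (the `no_std` branch would replace the yield with another `spin_loop()` hint):

```rust
use std::hint::spin_loop;
use std::thread::yield_now;

struct Backoff {
    spins: u32,
}

impl Backoff {
    fn new() -> Self {
        Backoff { spins: 0 }
    }

    fn spin(&mut self) {
        if self.spins < 200 {
            // Phase 1: busy-spin for the first 200 attempts.
            spin_loop();
            self.spins += 1;
        } else {
            // Phase 2: yield to the scheduler, then reset the counter
            // so the next 10 calls busy-spin before yielding again.
            yield_now();
            self.spins = 190;
        }
    }
}

fn main() {
    let mut b = Backoff::new();
    for _ in 0..200 {
        b.spin(); // all busy-spins
    }
    b.spin(); // 201st call: yields and resets
    assert_eq!(b.spins, 190);
}
```

The design trades latency for fairness: short waits stay on-CPU for minimal wakeup cost, while long waits periodically cede the core so the lock holder can make progress.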
