# Spillway
A high-throughput multi-producer, single-consumer (MPSC) channel.

This channel was made to relieve a performance bottleneck in the
`protosocket` ecosystem.

## Usage
```rust
tokio_test::block_on(async {
    use spillway::channel;

    let (sender_1, mut receiver) = channel();

    // Send is synchronous
    sender_1.send(1);

    // Cloning a sender results in a different cursor with a separate order
    let sender_2 = sender_1.clone();

    // interleave messages to illustrate ordering
    sender_2.send(2);
    sender_1.send(3);
    sender_2.send(4);

    // Only per-sender order is guaranteed
    assert_eq!(Some(1), receiver.next().await, "sender 1 is consumed first");
    sender_1.send(5);
    assert_eq!(Some(3), receiver.next().await, "sender 1 had 1-3-5, but 5 is in another delivery batch.");

    assert_eq!(Some(2), receiver.next().await, "sender 2 is consumed next");
    assert_eq!(Some(4), receiver.next().await);

    let sender_3 = sender_1.clone();
    sender_3.send(6);
    assert_eq!(Some(6), receiver.next().await, "yes, sender_1 sent 5 first, but lanes are serviced round-robin.");
    assert_eq!(Some(5), receiver.next().await, "and finally, we made it back around to 5");
})
```

## About Ordering
Many MPSC channel implementations effectively guarantee ordering across senders
when you externally establish a happens-before relationship. They may not claim
it, but their implementations are often optimistic-concurrency loops that fix
the final order at `send()` time.

Spillway ordering is determined only per-sender, at `send()` time. This is
what lets many senders send with very little contention between them.
| 48 | + |
For each `Sender`, its messages will appear in order to the `Receiver`. Any
other `Sender`'s messages may appear before, between, or after this `Sender`'s
messages, but each `Sender`'s messages will appear only in the order in which
they were submitted to that `Sender`.
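
One way to state this guarantee: the sequence the `Receiver` observes is always some interleaving of the per-sender sequences. Below is a small property check for that invariant; `is_valid_interleaving` is a hypothetical helper written for this README, not part of the crate, and it assumes all message values are distinct:

```rust
// Returns true if `received` is an interleaving of the `sent` sequences,
// i.e. every sender's messages appear in their original relative order.
// Assumes message values are distinct, so greedy matching is sound.
fn is_valid_interleaving(sent: &[Vec<i32>], received: &[i32]) -> bool {
    let mut cursors = vec![0usize; sent.len()];
    'outer: for &msg in received {
        for (lane, seq) in sent.iter().enumerate() {
            if cursors[lane] < seq.len() && seq[cursors[lane]] == msg {
                cursors[lane] += 1;
                continue 'outer;
            }
        }
        return false; // `msg` is out of order for its sender, or unknown
    }
    // Every sent message must have been received.
    cursors.iter().zip(sent).all(|(&c, seq)| c == seq.len())
}

fn main() {
    // Mirrors the usage example: sender 1 sent 1, 3, 5; sender 2 sent 2, 4;
    // sender 3 sent 6.
    let sent = vec![vec![1, 3, 5], vec![2, 4], vec![6]];
    // The delivery order observed in the usage example:
    assert!(is_valid_interleaving(&sent, &[1, 3, 2, 4, 6, 5]));
    // Receiving 3 before 1 would violate sender 1's order:
    assert!(!is_valid_interleaving(&sent, &[3, 1, 2, 4, 6, 5]));
    println!("ok");
}
```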

## How it works
There's no unsafe code in this channel. It's basically a `Vec<Mutex<VecDeque<T>>>`.
Each `Sender` gets an index `i` at creation time, and that `i` is valid for the outer
`Vec`. That index decides which `Mutex` can block this `Sender`, and which "chute" (or
"shard", or "lane") will order the messages from this `Sender`.

The `Receiver` holds a buffer of messages. When the buffer is empty, it advances to the
next chute with messages in it and swaps its empty buffer for the chute's messages. It
then resumes fulfilling `next()` by calling `pop_front()` on the current buffered `VecDeque<T>`.
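
The structure described above can be sketched in plain std Rust. This is a synchronous, simplified model (fixed lane count, no async wakeups, and all names invented for illustration), not the crate's actual implementation:

```rust
use std::collections::VecDeque;
use std::sync::{Arc, Mutex};

// The shared "chutes": one mutex-guarded queue per lane.
struct Chutes<T> {
    lanes: Vec<Mutex<VecDeque<T>>>,
}

// Each sender owns a lane index; only senders on the same lane contend.
struct Sender<T> {
    chutes: Arc<Chutes<T>>,
    lane: usize,
}

impl<T> Sender<T> {
    fn send(&self, item: T) {
        self.chutes.lanes[self.lane].lock().unwrap().push_back(item);
    }
}

struct Receiver<T> {
    chutes: Arc<Chutes<T>>,
    buffer: VecDeque<T>,
    cursor: usize,
}

impl<T> Receiver<T> {
    // Synchronous stand-in for the async `next()`.
    fn try_next(&mut self) -> Option<T> {
        if let Some(item) = self.buffer.pop_front() {
            return Some(item);
        }
        // Round-robin: advance to the next lane with messages, and swap
        // the empty buffer for that lane's entire queue in one lock hold.
        for _ in 0..self.chutes.lanes.len() {
            self.cursor = (self.cursor + 1) % self.chutes.lanes.len();
            let mut lane = self.chutes.lanes[self.cursor].lock().unwrap();
            if !lane.is_empty() {
                std::mem::swap(&mut *lane, &mut self.buffer);
                return self.buffer.pop_front();
            }
        }
        None // every lane is currently empty
    }
}

fn channel<T>(lanes: usize) -> (Vec<Sender<T>>, Receiver<T>) {
    let chutes = Arc::new(Chutes {
        lanes: (0..lanes).map(|_| Mutex::new(VecDeque::new())).collect(),
    });
    let senders = (0..lanes)
        .map(|lane| Sender { chutes: Arc::clone(&chutes), lane })
        .collect();
    let receiver = Receiver { chutes, buffer: VecDeque::new(), cursor: 0 };
    (senders, receiver)
}

fn main() {
    let (senders, mut receiver) = channel(2);
    senders[0].send(1);
    senders[1].send(2);
    senders[0].send(3);

    let mut seen = Vec::new();
    while let Some(item) = receiver.try_next() {
        seen.push(item);
    }
    // Lanes are drained round-robin, but per-lane order is preserved:
    // 1 always arrives before 3.
    assert!(seen.iter().position(|&x| x == 1) < seen.iter().position(|&x| x == 3));
    println!("{:?}", seen); // [2, 1, 3]
}
```

Note how `try_next` swaps the whole queue out rather than popping one item per lock acquisition; amortizing lock traffic over whole batches is where the throughput comes from.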

## Performance
In benchmark tests across various processor architectures, the
`channel` bench in this repository shows the `tokio` MPSC channel
producing around 5-10 MHz of throughput (millions of messages per second).
The `spillway` channel typically outperforms it by a factor of around 20x.
For example, on my older M1 MacBook:

||tokio|spillway|
|-|-|-|
|throughput|4.5 MHz|92.7 MHz|
|latency|219.4 ns|10.8 ns|
|std deviation|16.6 ns|2.5 ns|