WIP: Add Rust implementation for DABA #24

base: master
@@ -0,0 +1,151 @@

```rust
use crate::FifoWindow;
use alga::general::AbstractMonoid;
use alga::general::Operator;
use std::collections::VecDeque;
use std::marker::PhantomData;

#[derive(Clone)]
pub struct DABA<Value, BinOp>
where
    Value: AbstractMonoid<BinOp> + Clone,
    BinOp: Operator,
{
    // The ith oldest value in FIFO order is stored at v_i = vals[i]
    vals: VecDeque<Value>,
    aggs: VecDeque<Value>,
    // Invariant: 0 ≤ l ≤ r ≤ a ≤ b ≤ aggs.len()
    l: usize, // Left:  ∀p ∈ l...r−1 : aggs[p] = vals[p] ⊕ ... ⊕ vals[r−1]
    r: usize, // Right: ∀p ∈ r...a−1 : aggs[p] = vals[r] ⊕ ... ⊕ vals[p]
    a: usize, // Accum: ∀p ∈ a...b−1 : aggs[p] = vals[p] ⊕ ... ⊕ vals[b−1]
    b: usize, // Back:  ∀p ∈ b...e−1 : aggs[p] = vals[b] ⊕ ... ⊕ vals[p]
    op: PhantomData<BinOp>,
}

impl<Value, BinOp> FifoWindow<Value, BinOp> for DABA<Value, BinOp>
where
    Value: AbstractMonoid<BinOp> + Clone,
    BinOp: Operator,
{
    fn new() -> Self {
        Self {
            vals: VecDeque::new(),
            aggs: VecDeque::new(),
            l: 0,
            r: 0,
            a: 0,
            b: 0,
            op: PhantomData,
        }
    }
    fn push(&mut self, v: Value) {
        self.aggs.push_back(self.agg_b().operate(&v));
        self.vals.push_back(v);
        self.fixup();
    }
    fn pop(&mut self) {
        if self.vals.pop_front().is_some() {
            self.aggs.pop_front();
            self.l -= 1;
```
Collaborator: I'm not sure decrementing is the right thing here - it may be, but it should be carefully considered. You've implemented […]

Contributor (Author): In Rust it's possible to overload the indexing operator. The indexing for […]

Collaborator: Yes, now I see what you mean. Calling […] I was wondering if it would be easier to just use an iterator, and I don't think it is. From what I can tell, there's no easy way to "decrement" a standard Rust iterator as is required in the shrink case. So, in effect, you have to do your own iterator management.

Contributor (Author): I think what we want is a linked list of arrays with "cursors".

Collaborator: Yes, the concept of a cursor is essentially what C++-style iterators are, and how we ended up presenting the algorithm. The chunked array queue is a linked list, it just has many elements in each node.
```rust
            self.r -= 1;
            self.a -= 1;
            self.b -= 1;
            self.fixup();
        }
    }
    fn query(&self) -> Value {
        self.agg_f().operate(&self.agg_b())
    }
    fn len(&self) -> usize {
        self.vals.len()
    }
    fn is_empty(&self) -> bool {
        self.vals.is_empty()
    }
}

impl<Value, BinOp> DABA<Value, BinOp>
where
    Value: AbstractMonoid<BinOp> + Clone,
    BinOp: Operator,
{
    // Aggregate stored at the front of `aggs` (identity if empty).
    #[inline(always)]
    fn agg_f(&self) -> Value {
        if self.aggs.is_empty() {
            Value::identity()
        } else {
            self.aggs.front().unwrap().clone()
        }
    }
    // Aggregate of the back segment [b, e) (identity if that segment is empty).
    #[inline(always)]
    fn agg_b(&self) -> Value {
        if self.b == self.aggs.len() {
            Value::identity()
        } else {
            self.aggs.back().unwrap().clone()
        }
    }
    // Aggregate of the L segment: aggs[l] = vals[l] ⊕ ... ⊕ vals[r−1].
    #[inline(always)]
    fn agg_l(&self) -> Value {
        if self.l == self.r {
            Value::identity()
        } else {
            self.aggs[self.l].clone()
        }
    }
    // Aggregate of the R segment: aggs[a−1] = vals[r] ⊕ ... ⊕ vals[a−1].
    #[inline(always)]
    fn agg_r(&self) -> Value {
        if self.r == self.a {
            Value::identity()
        } else {
            self.aggs[self.a - 1].clone()
        }
    }
    // Aggregate of the A segment: aggs[a] = vals[a] ⊕ ... ⊕ vals[b−1].
    #[inline(always)]
    fn agg_a(&self) -> Value {
        if self.a == self.b {
            Value::identity()
        } else {
            self.aggs[self.a].clone()
        }
    }
    // Restore the invariants on l, r, a, b after a push or pop.
    fn fixup(&mut self) {
        if self.b == 0 {
            self.singleton()
        } else {
            if self.l == self.b {
                self.flip()
            }
            if self.l == self.r {
                self.shift()
            } else {
                self.shrink()
            }
        }
    }
    // Reset all four split points to the end of the deque.
    #[inline(always)]
    fn singleton(&mut self) {
        self.l = self.aggs.len();
        self.r = self.l;
        self.a = self.l;
        self.b = self.l;
    }
    // l reached b: move l back to the front; A and B restart (empty) at the end.
    #[inline(always)]
    fn flip(&mut self) {
        self.l = 0;
        self.a = self.aggs.len();
        self.b = self.a;
    }
    // L is empty: advance l, r, and a by one position.
    #[inline(always)]
    fn shift(&mut self) {
        self.a += 1;
        self.r += 1;
        self.l += 1;
    }
    // Extend aggs[l] to cover vals[l] ⊕ ... ⊕ vals[b−1], then fold vals[a−1]
    // into the A segment, shrinking L and R by one element each.
    #[inline(always)]
    fn shrink(&mut self) {
        self.aggs[self.l] = self.agg_l().operate(&self.agg_r()).operate(&self.agg_a());
        self.l += 1;
        self.aggs[self.a - 1] = self.vals[self.a - 1].operate(&self.agg_a());
        self.a -= 1;
    }
}
```
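To make the trait bounds concrete, here is a hedged usage sketch: a simple sum monoid wired through alga's trait hierarchy and pushed through the window. The `Plus` and `Sum` names are made up for illustration and are not part of this PR, and the sketch assumes `DABA` and the `FifoWindow` trait are in scope (their import paths depend on the crate layout).

```rust
// Hypothetical example: a sum monoid over i32 implementing alga's traits.
use alga::general::{AbstractMagma, AbstractMonoid, AbstractSemigroup, Identity, Operator};

#[derive(Copy, Clone)]
struct Plus;

impl Operator for Plus {
    fn operator_token() -> Self {
        Plus
    }
}

#[derive(Clone, PartialEq, Debug)]
struct Sum(i32);

impl Identity<Plus> for Sum {
    fn identity() -> Self {
        Sum(0)
    }
}

impl AbstractMagma<Plus> for Sum {
    fn operate(&self, rhs: &Self) -> Self {
        Sum(self.0 + rhs.0)
    }
}

// Marker traits: addition on i32 is associative and Sum(0) is its identity.
impl AbstractSemigroup<Plus> for Sum {}
impl AbstractMonoid<Plus> for Sum {}

fn main() {
    let mut window: DABA<Sum, Plus> = DABA::new();
    window.push(Sum(1));
    window.push(Sum(2));
    window.push(Sum(3));
    assert_eq!(window.query(), Sum(6)); // 1 + 2 + 3
    window.pop();                       // evict the oldest value (1)
    assert_eq!(window.query(), Sum(5)); // 2 + 3
}
```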
Collaborator: We implemented the `vals` and `aggs` using a single underlying queue. We created an internal struct that wrapped both a value and a partial aggregate, and then made the queue contain that struct. I think that will probably have better performance, both because of locality and by just doing less total work.

Contributor (Author): Good point, that I should definitely fix.
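A minimal sketch of the fused layout being suggested might look like the following; the `Slot` struct and the field names are purely illustrative and are not taken from the reviewers' C++ code.

```rust
// Sketch only (assumed names): one queue whose slots carry both the value and
// its partial aggregate, replacing the parallel `vals` and `aggs` deques.
use std::collections::VecDeque;

#[derive(Clone)]
struct Slot<Value> {
    val: Value, // the element itself, in FIFO order
    agg: Value, // the partial aggregate previously stored in `aggs`
}

#[derive(Clone)]
struct DabaFused<Value> {
    slots: VecDeque<Slot<Value>>, // one allocation: better locality, one push/pop per operation
    l: usize,
    r: usize,
    a: usize,
    b: usize,
}
```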
Collaborator: I just loaded up our 2017 paper, and we presented it with parallel queues. :) So I understand why you implemented it that way. Our soon-to-be-submitted journal article presents it in a way closer to our C++ implementation.

Contributor (Author): I still do not think I fully get this algorithm, but thankfully it was pretty easy to implement 😅 Looking forward to seeing the paper.