996: par_bridge: use naive locking of the Iterator r=cuviper a=njaard
Fix issue #795
Instead of using `crossbeam_deque`, just lock the `Iterator` and get the next item, naively. The original code would spin until more data became available, which made `par_bridge` unhelpful in cases where the source `Iterator` did slow I/O or was itself computationally costly.
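The locking approach can be sketched roughly like this (a minimal standalone sketch of the idea, not rayon's actual internals; `process_in_parallel` and its signature are hypothetical): each worker thread takes the lock only long enough to pull the next item, then processes it without holding the lock.

```rust
use std::sync::Mutex;

// Sketch of naive iterator locking: wrap the source iterator in a Mutex
// and let each worker lock it briefly to fetch the next item.
fn process_in_parallel<I, F>(iter: I, num_workers: usize, work: F)
where
    I: Iterator + Send,
    I::Item: Send,
    F: Fn(I::Item) + Sync,
{
    let iter = Mutex::new(iter);
    std::thread::scope(|s| {
        for _ in 0..num_workers {
            s.spawn(|| loop {
                // Hold the lock only while fetching the next item.
                let item = match iter.lock().unwrap().next() {
                    Some(item) => item,
                    None => break, // iterator exhausted
                };
                work(item); // process without holding the lock
            });
        }
    });
}

fn main() {
    use std::sync::atomic::{AtomicUsize, Ordering};
    let sum = AtomicUsize::new(0);
    process_in_parallel(1..=100usize, 4, |n| {
        sum.fetch_add(n, Ordering::Relaxed);
    });
    println!("{}", sum.load(Ordering::Relaxed)); // prints 5050
}
```

Because a blocked `next()` only blocks the one worker waiting on the lock, slow I/O in the source never causes the other workers to spin.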
In many cases this causes virtually no change in runtime, but on machines with a lot of CPUs and even slightly slower I/O, runtime is significantly lower.
In pretty much every case, total CPU time was much lower.
I ran the test program below on a 1.5GiB file from a fast SSD on a machine with 64 cores, and mean runtime went from 44s to 30s. In fact, according to hyperfine, the average runtime with this patch was lower than the _minimum_ runtime without it (25s and 41s, respectively). More importantly, total CPU time was almost 10x lower with this patch.
I'm not sure what realistic usage patterns could be worse with this patch, and therefore I'd love to hear other opinions.
I wrote a basic test program:
```rust
use std::io::{BufRead, BufReader};
use std::os::unix::io::FromRawFd;
use std::sync::atomic::{AtomicUsize, Ordering};

use rayon::prelude::*;

fn main() {
    let counter = AtomicUsize::new(0);
    // Read fd 0 as a File so the input stream is Send.
    let stdin = BufReader::new(unsafe { std::fs::File::from_raw_fd(0) });
    stdin.lines().par_bridge().for_each(|row| {
        let row = row.unwrap();
        for _ in 0..1000 {
            let c = row.len();
            counter.fetch_add(c, Ordering::Relaxed);
        }
    });
    println!("{}", counter.load(Ordering::Relaxed));
}
```
It deliberately wastes some CPU time per line (`for _ in 0..1000`). The `unsafe` is necessary to make the input stream `Send`. As a test, I feed this program a multigigabyte file: once over a relatively slow network and once off an SSD.
Co-authored-by: Charles Samuels <[email protected]>