Description
Issue #4 brought up using ndarray and a parallelization library to speed up calculations. It makes sense to use an external crate for matrix features instead of rolling our own, but I'm not sure how much performance there is to gain, so I'd like to see measurements; the algorithms may not be amenable to easy parallelization.
Benching/Testing
I'm fine with using the nightly cargo bench instead of bringing in criterion for now. Some experimentation will be needed to wire up the benches and to figure out what size of image makes sense to iterate on (CC0 images preferred). The benches shouldn't take too long to run, but they should run long enough to support reasonable deductions about performance changes.
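While the nightly `cargo bench` wiring is being decided, a plain `std::time::Instant` loop is enough to get rough numbers. This is a hedged sketch: `process_image` is a hypothetical stand-in for one of the crate's quantization passes, and the buffer size and iteration count are placeholders to tune against a real CC0 test image.

```rust
use std::time::Instant;

// Hypothetical stand-in for a quantization pass: applies a cheap
// per-pixel weighting over a WIDTH x HEIGHT buffer. A real bench
// would call into the crate's `quant` functions instead.
fn process_image(pixels: &[u8]) -> u64 {
    pixels.iter().map(|&p| p as u64 * 3).sum()
}

fn main() {
    const WIDTH: usize = 512;
    const HEIGHT: usize = 512;
    let pixels = vec![128u8; WIDTH * HEIGHT];

    // Warm up once, then average over several iterations so a single
    // noisy run doesn't dominate the measurement.
    let _ = process_image(&pixels);
    let iters: u32 = 100;
    let start = Instant::now();
    let mut acc = 0u64;
    for _ in 0..iters {
        acc = acc.wrapping_add(process_image(&pixels));
    }
    println!("avg per iteration: {:?}", start.elapsed() / iters);
    assert!(acc > 0); // keep the work from being optimized away
}
```

The same function bodies can later be moved behind `#[bench]` (nightly) or criterion groups without changing the measured code.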
Many of the calculations operate on nested loop indices rather than on the matrix elements themselves, so I'm not sure whether an external matrix library would add overhead or improve performance. For multi-threading, we need to make sure we don't change calculations that rely on being computed in a particular order.
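To make the ordering concern concrete, here is a hypothetical 1-D error-diffusion sketch (not necessarily what this crate's quantization does): each output depends on the error carried forward from the previous pixel, so the loop body at index `i` reads state written at `i - 1` and cannot be handed to something like rayon's `par_iter` without changing the result.

```rust
// Hypothetical order-dependent pass: quantize each value to 0 or 255
// and push the quantization error onto the next element. The `carry`
// variable creates a sequential dependency between iterations.
fn diffuse_1d(row: &[u8]) -> Vec<u8> {
    let mut out = Vec::with_capacity(row.len());
    let mut carry: i16 = 0;
    for &p in row {
        let adjusted = (p as i16 + carry).clamp(0, 255);
        let q: i16 = if adjusted >= 128 { 255 } else { 0 };
        carry = adjusted - q; // error carried into the next iteration
        out.push(q as u8);
    }
    out
}

fn main() {
    // 100 alone rounds down, but accumulated error flips later pixels.
    println!("{:?}", diffuse_1d(&[100, 100, 100, 100]));
}
```

Passes without such a carried value (pure per-pixel maps) are the safe candidates for parallelizing, e.g. by splitting the image into row chunks.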
The crate could use some better test coverage but the quantization functions are fairly opaque to me.
Things to do
Avenues of exploration
- Add benchmarks, add image file(s), tests
- Exclude the image data folder from Cargo.toml
- External crates like `ndarray` and `rayon` should be behind optional feature gates at first
- Investigate where parallelization would help in the `quant` or `quant::utility` functions
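The Cargo.toml side of the first three items could look something like the sketch below. The `benches/data` path, version numbers, and feature names are all placeholders; the `dep:` syntax for optional dependencies requires a reasonably recent Cargo (1.60+).

```toml
[package]
# Keep the benchmark image folder out of the published crate
# (path is hypothetical -- match wherever the images land).
exclude = ["benches/data/*"]

[dependencies]
ndarray = { version = "0.15", optional = true }
rayon = { version = "1", optional = true }

[features]
# Opt-in gates so the default build stays dependency-light
# while we measure whether these crates actually help.
matrix = ["dep:ndarray"]
parallel = ["dep:rayon"]
```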
This comment will be updated with any changes or suggestions.