Is your feature request related to a problem? Please describe.
Not really a problem, more like a potential optimization (I haven't worked out the details to see if it actually works).
If I've understood how the algorithm works under the hood, we move the points in the dataset one at a time and, at each iteration, compute the mean, the standard deviation, and the correlation coefficient of the perturbed dataset.
One thing that stands out performance-wise is that we currently use all of the points to compute the statistics at each step, even though only a single point has changed, which seems wasteful.
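A toy model of the per-iteration cost as I understand it (not the project's actual code, just an illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
x, y = rng.normal(size=100), rng.normal(size=100)

for _ in range(10_000):
    i = rng.integers(len(x))        # pick one point...
    x[i] += rng.normal(scale=0.1)   # ...and move it slightly
    y[i] += rng.normal(scale=0.1)
    # every statistic is then recomputed over all N points: O(N) per step
    mean_x, mean_y = x.mean(), y.mean()
    std_x, std_y = x.std(), y.std()
    r = np.corrcoef(x, y)[0, 1]
```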
Describe the solution you'd like
Instead of computing the statistics of the whole dataset, which requires at least iterating over all $N$ points at every step, we could update them incrementally: each iteration moves only a single point, say $(x, y) \to (x', y')$, so the running sums can be adjusted in constant time,

$$S_x' = S_x + (x' - x), \qquad Q_x' = Q_x + (x'^2 - x^2),$$

where $S_x = \sum_i x_i$ and $Q_x = \sum_i x_i^2$, giving $\mu_x = S_x / N$ and $\sigma_x^2 = Q_x / N - \mu_x^2$, and probably for the correlation coefficient (or better, its square) as well, via the cross sum $S_{xy} = \sum_i x_i y_i$. This would allow us to compute all of the statistics in basically $O(1)$ per iteration instead of $O(N)$.
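A minimal sketch of that bookkeeping (hypothetical names, not tied to the current code; note this is the naive sums-of-squares form, whose stability is exactly the open question below):

```python
import math

class RunningStats:
    """O(1) replacement updates via the raw sums S_x, S_y, Q_x, Q_y, S_xy."""

    def __init__(self, xs, ys):
        self.n = len(xs)
        self.sx, self.sy = sum(xs), sum(ys)
        self.qx = sum(v * v for v in xs)
        self.qy = sum(v * v for v in ys)
        self.sxy = sum(a * b for a, b in zip(xs, ys))

    def replace(self, x, y, x_new, y_new):
        # Moving one point only touches its own contribution to each sum.
        self.sx += x_new - x
        self.sy += y_new - y
        self.qx += x_new * x_new - x * x
        self.qy += y_new * y_new - y * y
        self.sxy += x_new * y_new - x * y

    def summary(self):
        n = self.n
        mx, my = self.sx / n, self.sy / n
        vx = self.qx / n - mx * mx  # population variance; the code may want ddof=1
        vy = self.qy / n - my * my
        r = (self.sxy / n - mx * my) / math.sqrt(vx * vy)
        return mx, math.sqrt(vx), my, math.sqrt(vy), r
```

A point move then costs one `replace` plus one `summary` call, both constant-time regardless of $N$.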
There's at least one problem I haven't worked out yet: is this numerically stable? Numerical accuracy is paramount for the code to work properly, so a large loss of accuracy would make the approach useless; if it's stable, though, it could be worthwhile to explore implementing it.
Some references that could be of use (regarding both the computation and numerical stability):
- https://changyaochen.github.io/welford/
- https://en.wikipedia.org/wiki/Algorithms_for_calculating_variance#Welford's_online_algorithm
- https://stackoverflow.com/questions/5147378/rolling-variance-algorithm
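If the naive sums lose too much precision, the Welford-style updates from the links above can, I believe, be inverted to remove a point, so a single-point move becomes remove-then-add while staying $O(1)$. A hypothetical sketch (whether the remove step stays accurate over many iterations is the same stability question):

```python
import math

class WelfordStats:
    """Welford-style running mean/variance/correlation with point removal."""

    def __init__(self):
        self.n = 0
        self.mean_x = self.mean_y = 0.0
        self.m2x = self.m2y = self.cxy = 0.0

    def add(self, x, y):
        self.n += 1
        dx = x - self.mean_x
        self.mean_x += dx / self.n
        dy = y - self.mean_y
        self.mean_y += dy / self.n
        self.m2x += dx * (x - self.mean_x)
        self.m2y += dy * (y - self.mean_y)
        self.cxy += dx * (y - self.mean_y)

    def remove(self, x, y):
        # Inverse of add(); assumes (x, y) is in the dataset and n >= 2.
        self.n -= 1
        dx = x - self.mean_x
        self.mean_x -= dx / self.n
        dy = y - self.mean_y
        self.mean_y -= dy / self.n
        self.m2x -= (x - self.mean_x) * dx
        self.m2y -= (y - self.mean_y) * dy
        self.cxy -= (x - self.mean_x) * dy

    def move(self, x, y, x_new, y_new):
        self.remove(x, y)
        self.add(x_new, y_new)

    def summary(self):
        sx = math.sqrt(self.m2x / self.n)  # population std, as a placeholder
        sy = math.sqrt(self.m2y / self.n)
        r = self.cxy / math.sqrt(self.m2x * self.m2y)
        return self.mean_x, sx, self.mean_y, sy, r
```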
Describe alternatives you've considered
None.
Additional context
None.