slight optimization of exp, sqrt, ^ for positive semidefinite output #1359
Conversation
src/symmetric.jl (outdated)

@inbounds for j ∈ axes(U, 2), i ∈ axes(U, 1)
    U[i, j] *= v[j]
end
Perhaps this could be rmul!(U, Diagonal(v))?
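For context, the two forms are equivalent when U is mutable: scaling column j of U by v[j] is exactly right-multiplication by Diagonal(v). A minimal standalone sketch of that equivalence (illustrative only, not code from this PR):

```julia
using LinearAlgebra

U = rand(3, 3)
v = [1.0, 2.0, 3.0]
U2 = copy(U)

# column-scaling loop, as in the diff above
for j in axes(U, 2), i in axes(U, 1)
    U[i, j] *= v[j]
end

# the same operation expressed as right-multiplication by a diagonal matrix
rmul!(U2, Diagonal(v))

U ≈ U2  # true
```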
We also need to handle the case where U is immutable. Perhaps this should only be used when U is a StridedArray.
rmul! is done, thanks for noticing it. I'm not sure what you mean by immutable U; could you show an example where it would fail?
I don't have an example immediately, but given that these accept generic matrices, we probably shouldn't assume mutability of either the values or the vectors. E.g., an operation like
julia> eigvals(Symmetric(Diagonal(1:4)))
4-element Vector{Float64}:
 1.0
 2.0
 3.0
 4.0

returns a Vector currently, but it may return a range in the future as a performance enhancement. Similarly,
julia> eigvecs(Symmetric(Diagonal(1:4)))
4×4 Matrix{Float64}:
 1.0  0.0  0.0  0.0
 0.0  1.0  0.0  0.0
 0.0  0.0  1.0  0.0
 0.0  0.0  0.0  1.0

This is the identity matrix, and e.g. FillArrays may return a non-materialized matrix type here. I'd suggest having a fallback method that preserves the current behavior, and having this fast path only for StridedArrays. This way, we avoid any potential regression.
Such issues, unfortunately, occasionally crop up with infinite matrices, where a special non-materialized type needs to be returned.
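A rough sketch of the dispatch pattern being suggested here (the helper name is hypothetical, not code from this PR): keep an allocating generic method that never mutates its arguments, and add an in-place fast path only for strided storage, where mutation is known to be safe.

```julia
using LinearAlgebra

# generic fallback: allocates a new matrix, so it also works when the
# eigenvector matrix is immutable (e.g. an SMatrix or a lazy FillArrays type)
_scale_cols(U::AbstractMatrix, v::AbstractVector) = U * Diagonal(v)

# fast path: scale the columns in place when the storage is mutable and strided
_scale_cols(U::StridedMatrix, v::AbstractVector) = rmul!(U, Diagonal(v))
```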
Then I don't think doing this is a good idea; the code would become more complicated for no benefit, and we wouldn't even have a test for it.
When eigen starts returning immutable eigenvalues or eigenvectors, I think it would make sense. But I suspect that would break a lot of things.
I'm not sure if I follow: any package may add a method to return an immutable array, and the currently working code that does not assume mutability will break after this PR. We shouldn't rely on packages returning mutable arrays. E.g., Symmetric tridiagonal Toeplitz matrices have a special structure to their eigenvalues and eigenvectors, and the fact that ToeplitzMatrices currently returns mutable arrays is an implementation detail.
Since there has never been a release of LinearAlgebra that worked with immutable objects, I'm certain this PR won't break anybody's code.
But let me put it another way: we can't code for hypothetical failures. We need a concrete case to fail in the tests; otherwise, the capability of handling immutable objects will be lost sooner or later. We can't rely on you reviewing every single PR to make sure it doesn't happen.
I'm not sure, but perhaps StaticArrays are such an (immutable) case? Though they may have their own implementations anyway.
> Since there has never been a release of LinearAlgebra that worked with immutable objects, I'm certain this PR won't break anybody's code.
If I understand correctly, the special functions here already work with immutable arrays. Currently, eigen returns mutable arrays for standard cases, but it's possible for a package to add methods to return immutable arrays. StaticArrays is indeed such an example:
julia> S = Symmetric(@SMatrix [1 2; 3 4])
2×2 Symmetric{Int64, SMatrix{2, 2, Int64, 4}} with indices SOneTo(2)×SOneTo(2):
 1  2
 2  4

julia> eigen(S)
Eigen{Float64, Float64, SMatrix{2, 2, Float64, 4}, SVector{2, Float64}}
values:
2-element SVector{2, Float64} with indices SOneTo(2):
 0.0
 5.0
vectors:
2×2 SMatrix{2, 2, Float64, 4} with indices SOneTo(2)×SOneTo(2):
 -0.894427  0.447214
  0.447214  0.894427

This PR currently breaks exp(S).
Whereas on 1.12:
julia> H = Hermitian(@SMatrix [1 2 + im; 3 4]); exp(H)
ERROR: setindex!(::SMatrix{2, 2, ComplexF64, 4}, value, ::Int) is not defined.

But fair enough: it would start working on 1.13, and my PR would prevent that. Instead of creating a special branch for StridedArray, which would be a horrible mess, I'd rather allocate new arrays. The main point of the optimization is using syrk, and that would stay.
And I really think you should add tests for this.
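For illustration, a minimal sketch of the approach described above (the function name is hypothetical; this is not the PR's actual code): allocate a fresh scaled copy of the eigenvectors instead of mutating them, and let X * X' dispatch to a rank-k update (syrk/herk), which also keeps the result exactly Hermitian.

```julia
using LinearAlgebra

function exp_psd(A::Hermitian)
    λ, U = eigen(A)                    # A = U * Diagonal(λ) * U'
    X = U * Diagonal(sqrt.(exp.(λ)))   # new allocation, so an immutable U is fine
    return Hermitian(X * X')           # X * X' hits syrk/herk; valid since exp.(λ) .≥ 0
end
```

Compared to scaling U in place, this costs one extra matrix allocation, but it works for immutable eigenvector types and still keeps the syrk speedup.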
Codecov Report: All modified and coverable lines are covered by tests ✅

@@ Coverage Diff @@
##           master    #1359   +/-   ##
=======================================
  Coverage   93.75%   93.75%
=======================================
  Files          34       34
  Lines       15724    15728    +4
=======================================
+ Hits        14742    14746    +4
  Misses        982      982
I have no clue what this test failure is about. It is failing type inference with …
For small matrices it may be slower because of the extra allocations?
Where do you see extra allocations? I don't think there are any. Note that in the original version I was reducing the number of allocations, but @jishnub complained that this broke immutable arrays, so now I removed that aspect of the optimization.
Oh, you're right, I misread the code; same number of allocs. Carry on.
As it turns out, the test failure was real; my code did touch …
Ok, now the test failure really has nothing to do with my PR: it's the usual Windows failure; it doesn't even start. On Linux and Mac, all tests pass.
Tests are passing. Let's merge for now, and further optimization can be done once immutable tests are added.
The speedup is small (my benchmarks show 2-5%), but since these functions are used so often, I think it's worth it. The same optimization also applies to cosh, acosh, acos, and the positive part of other functions, but I didn't do it because I don't think the speedup is worth the complexity.
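To illustrate that last point, the same X * X' factorization works for any function that is nonnegative on the real eigenvalues. A hedged sketch of the general pattern (hypothetical name, not part of this PR):

```julia
using LinearAlgebra

# f must satisfy f(x) ≥ 0 for real x, e.g. exp, or cosh (cosh(x) ≥ 1 on ℝ)
function apply_psd(f, A::Hermitian)
    λ, U = eigen(A)
    X = U * Diagonal(sqrt.(f.(λ)))
    return Hermitian(X * X')
end

apply_psd(cosh, Hermitian([2.0 1.0; 1.0 3.0]))  # positive definite result
```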