Merged
Changes from 1 commit
23 changes: 20 additions & 3 deletions src/symmetric.jl
@@ -833,6 +833,14 @@ function svdvals!(A::RealHermSymComplexHerm)
return sort!(vals, rev = true)
end

#computes U * Diagonal(abs2.(v)) * U', destroying U
function _psd_spectral_product!(v, U)
@inbounds for j ∈ axes(U, 2), i ∈ axes(U, 1)
U[i, j] *= v[j]
end
Member: Perhaps this may be rmul!(U, Diagonal(v))?

Member: We also need to handle the case where U is immutable. Perhaps this should only be used when U is a StridedArray.

Collaborator Author: rmul! is done, thanks for noticing it. I'm not sure what you mean by an immutable U; could you show an example where it would fail?

Member (@jishnub, May 13, 2025): I don't have an example immediately, but given that these functions accept generic matrices, we probably shouldn't assume mutability of either the values or the vectors. E.g., an operation like

julia> eigvals(Symmetric(Diagonal(1:4)))
4-element Vector{Float64}:
 1.0
 2.0
 3.0
 4.0

returns a Vector currently, but it may return a range in the future as a performance enhancement. Similarly,

julia> eigvecs(Symmetric(Diagonal(1:4)))
4×4 Matrix{Float64}:
 1.0  0.0  0.0  0.0
 0.0  1.0  0.0  0.0
 0.0  0.0  1.0  0.0
 0.0  0.0  0.0  1.0

This is the identity matrix, and e.g. FillArrays may return a non-materialized matrix type here. I'd suggest adding a fallback method that preserves the current behavior, and using this fast path only for StridedArrays. That way, we avoid any potential regression.

Such issues, unfortunately, occasionally crop up with infinite matrices, where a special non-materialized type needs to be returned.
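A fallback-plus-fast-path split along these lines could be sketched as follows. This is hypothetical illustration code, not the PR's implementation; `_spectral_product` is an invented name, and both methods compute U * Diagonal(abs2.(v)) * U':

```julia
using LinearAlgebra

# Generic fallback: allocates and never mutates, so immutable v and U
# (e.g. ranges or non-materialized matrix types) work unchanged.
_spectral_product(v, U::AbstractMatrix) = (U * Diagonal(abs2.(v))) * U'

# Fast path for mutable BLAS-compatible storage: scale the columns of a
# copy in place, then form W * W', which dispatches to BLAS syrk/herk.
function _spectral_product(v, U::StridedMatrix{<:LinearAlgebra.BlasFloat})
    W = rmul!(copy(U), Diagonal(v))
    return W * W'
end
```

Both methods return the same matrix; the dispatch merely selects the in-place route when mutation is known to be safe.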

Collaborator Author: Then I don't think doing this is a good idea; the code would become more complicated for no benefit, and we wouldn't even have a test for it.

When eigen starts returning immutable eigenvalues or eigenvectors then I think it would make sense. But I suspect that would break a lot of things.

Member: I'm not sure I follow: any package may add a method that returns an immutable array, and the current code, which works because it does not assume mutability, will break after this PR. We shouldn't rely on packages returning mutable arrays. E.g., symmetric tridiagonal Toeplitz matrices have a special structure to their eigenvalues and eigenvectors, and the fact that ToeplitzMatrices currently returns mutable arrays is an implementation detail.

Collaborator Author: Since there has never been a release of LinearAlgebra that worked with immutable objects, I'm certain this PR won't break anybody's code.

But let me put it another way: we can't code for hypothetical failures. We need a concrete case to fail in the tests, otherwise the capability of handling immutable objects will be lost sooner or later. We can't rely on you reviewing every single PR to make sure it doesn't happen.

Member: I'm not sure, but perhaps StaticArrays is such an (immutable) case? Though they may have their own implementations anyway.

Member: Quoting the above: "Since there has never been a release of LinearAlgebra that worked with immutable objects, I'm certain this PR won't break anybody's code."

If I understand correctly, the special functions here already work with immutable arrays. Currently, eigen returns mutable arrays for standard cases, but it's possible for a package to add methods to return immutable arrays. StaticArrays is indeed such an example:

julia> S = Symmetric(@SMatrix [1 2; 3 4])
2×2 Symmetric{Int64, SMatrix{2, 2, Int64, 4}} with indices SOneTo(2)×SOneTo(2):
 1  2
 2  4

julia> eigen(S)
Eigen{Float64, Float64, SMatrix{2, 2, Float64, 4}, SVector{2, Float64}}
values:
2-element SVector{2, Float64} with indices SOneTo(2):
 0.0
 5.0
vectors:
2×2 SMatrix{2, 2, Float64, 4} with indices SOneTo(2)×SOneTo(2):
 -0.894427  0.447214
  0.447214  0.894427

This PR currently breaks exp(S).

Collaborator Author: Whereas on 1.12:

julia> H = Hermitian(@SMatrix [1 2 + im; 3 4]); exp(H)
ERROR: setindex!(::SMatrix{2, 2, ComplexF64, 4}, value, ::Int) is not defined.

But fair enough: it would start working on 1.13, and my PR would prevent it. Instead of creating a special branch for StridedArray, which would be a horrible mess, I'd rather allocate new arrays. The main point of the optimization is using syrk, and that would stay.

And I really think you should add tests for this.

return U * U'
end
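The non-mutating alternative the author settles on above (allocate a fresh product rather than branch on StridedArray) could look roughly like this hypothetical sketch; `_psd_spectral_product` without the bang is an invented name, not the PR's code:

```julia
using LinearAlgebra

# Hypothetical allocating variant: U * Diagonal(v) builds a new matrix, so
# immutable inputs (e.g. StaticArrays) would work, while the final W * W'
# still reaches BLAS syrk/herk for ordinary dense matrices.
function _psd_spectral_product(v, U)
    W = U * Diagonal(v)
    return W * W'
end
```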

# Matrix functions
^(A::SymSymTri{<:Complex}, p::Integer) = sympow(A, p)
^(A::SelfAdjoint, p::Integer) = sympow(A, p)
@@ -848,7 +856,8 @@ function ^(A::SelfAdjoint, p::Real)
isinteger(p) && return integerpow(A, p)
F = eigen(A)
if all(λ -> λ ≥ 0, F.values)
retmat = (F.vectors * Diagonal((F.values).^p)) * F.vectors'
map!(λ -> λ^0.5p, F.values)
retmat = _psd_spectral_product!(F.values, F.vectors)
return wrappertype(A)(retmat)
else
retmat = (F.vectors * Diagonal(complex.(F.values).^p)) * F.vectors'
@@ -860,7 +869,7 @@ function ^(A::SymSymTri{<:Complex}, p::Real)
return Symmetric(schurpow(A, p))
end

for func in (:exp, :cos, :sin, :tan, :cosh, :sinh, :tanh, :atan, :asinh, :atanh, :cbrt)
for func in (:cos, :sin, :tan, :cosh, :sinh, :tanh, :atan, :asinh, :atanh, :cbrt)
@eval begin
function ($func)(A::SelfAdjoint)
F = eigen(A)
@@ -870,6 +879,13 @@ for func in (:exp, :cos, :sin, :tan, :cosh, :sinh, :tanh, :atan, :asinh, :atanh, :cbrt)
end
end

function exp(A::SelfAdjoint)
F = eigen(A)
map!(λ -> exp(0.5λ), F.values)
retmat = _psd_spectral_product!(F.values, F.vectors)
return wrappertype(A)(retmat)
end

function cis(A::SelfAdjoint)
F = eigen(A)
retmat = F.vectors .* cis.(F.values') * F.vectors'
@@ -929,7 +945,8 @@ function sqrt(A::SelfAdjoint; rtol = eps(real(float(eltype(A)))) * size(A, 1))
F = eigen(A)
λ₀ = -maximum(abs, F.values) * rtol # treat λ ≥ λ₀ as "zero" eigenvalues up to roundoff
if all(λ -> λ ≥ λ₀, F.values)
retmat = (F.vectors * Diagonal(sqrt.(max.(0, F.values)))) * F.vectors'
map!(λ -> λ < 0 ? zero(λ) : fourthroot(λ), F.values)
retmat = _psd_spectral_product!(F.values, F.vectors)
return wrappertype(A)(retmat)
else
retmat = (F.vectors * Diagonal(sqrt.(complex.(F.values)))) * F.vectors'
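All of the rewritten branches above lean on the same identity: pick w so that abs2.(w) equals the desired spectral function of the eigenvalues, and then (V * Diagonal(w)) * (V * Diagonal(w))' equals V * Diagonal(abs2.(w)) * V'. A quick standalone check of the exp case (illustration only, not PR code):

```julia
using LinearAlgebra

# With w = exp.(λ / 2) we have abs2.(w) == exp.(λ), so the symmetric
# product W * W' with W = V * Diagonal(w) reproduces exp(A).
A = Symmetric([2.0 1.0 0.0; 1.0 2.0 1.0; 0.0 1.0 2.0])
F = eigen(A)
w = exp.(0.5 .* F.values)
W = F.vectors * Diagonal(w)
@assert W * W' ≈ exp(Matrix(A))
```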