
Blockdiagonal factorizations refactor #166

Open
lkdvos wants to merge 18 commits into main from blockdiagonal-factorization

Conversation

@lkdvos (Contributor) commented Aug 4, 2025

This PR is the start of the refactor required to implement the factorizations by first mapping from block-permuted-diagonal to block-diagonal form, and then implementing the factorizations on the block-diagonal representation.

To make this work, I changed a few things:

I created a second algorithm type: BlockDiagonalAlgorithm and BlockPermutedDiagonalAlgorithm now both exist.
The latter simply permutes to block-diagonal form, calls the implementation of the former, and then permutes back (see the sketch below).
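Roughly, the layering looks like this. This is an illustrative, standalone sketch rather than the actual API in this PR: the struct names mirror the ones above, but the function names, signatures, and the dictionary-based block storage are made up for illustration.

```julia
using LinearAlgebra

struct BlockDiagonalAlgorithm end
struct BlockPermutedDiagonalAlgorithm end

# Block-diagonal case: factorize each diagonal block independently.
function blockdiag_svd(blocks::Vector{<:AbstractMatrix}, ::BlockDiagonalAlgorithm)
    fs = map(svd, blocks)
    return map(f -> f.U, fs), map(f -> f.S, fs), map(f -> f.Vt, fs)
end

# Block-permuted-diagonal case: rowperm[i] / colperm[i] record which row and
# column block the i-th nonzero block sits in. "Permuting" amounts to gathering
# the blocks in diagonal order, delegating, and scattering the factors back.
function blockpermdiag_svd(blocks, rowperm, colperm, ::BlockPermutedDiagonalAlgorithm)
    Us, Ss, Vts = blockdiag_svd(blocks, BlockDiagonalAlgorithm())
    # A = U * S * Vt with U block (rowperm[i], i), S block (i, i), Vt block (i, colperm[i]).
    U = Dict((rowperm[i], i) => Us[i] for i in eachindex(Us))
    S = Dict((i, i) => Ss[i] for i in eachindex(Ss))
    Vt = Dict((i, colperm[i]) => Vts[i] for i in eachindex(Vts))
    return U, S, Vt
end

# Example: nonzero blocks at block positions (1, 2) and (2, 1).
U, S, Vt = blockpermdiag_svd([randn(3, 4), randn(2, 2)], [1, 2], [2, 1],
                             BlockPermutedDiagonalAlgorithm())
```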

This approach seems quite elegant and works nicely, but as a consequence I haven't really figured out how to deal with pre-allocating the output. For now, BlockPermutedDiagonalAlgorithm simply ignores the initialized output, and it is BlockDiagonalAlgorithm that actually does something in initialize_output.

I also went back and forth on whether to pre-allocate the blocks of U and V for the missing rows/columns. It doesn't seem to make a big difference, and not doing so simplifies the code a bit, so skipping it might be reasonable.

Given this, it might even be reasonable to avoid filling in the blocks at all: we could just call svd_*!(@view(A[]), alg) and let the allocation be handled by the lower-level functions. Since we are currently not really in a position to reuse memory anyway, and the two-algorithm design more or less prevents us from providing output arrays, this could be a reasonable change as well.
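For reference, the "let the lower level allocate" option would look something like this. Again purely illustrative: `svd` stands in for the block-level `svd_*!` call, and the explicit block ranges are a stand-in for however the block structure is actually queried.

```julia
using LinearAlgebra

# Hypothetical helper: factorize each diagonal block through a view, letting
# every block-level call allocate its own output instead of pre-allocating
# U, S, V up front.
function svd_blockdiagonal_views(A::AbstractMatrix, rowranges, colranges)
    return map(zip(rowranges, colranges)) do (rs, cs)
        svd(@view(A[rs, cs]))  # output allocated by the lower-level call
    end
end

A = zeros(5, 6)
A[1:2, 1:3] .= randn(2, 3)
A[3:5, 4:6] .= randn(3, 3)
factors = svd_blockdiagonal_views(A, [1:2, 3:5], [1:3, 4:6])
```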

I'd appreciate some feedback before I push this through to the other factorizations as well!

Fixes #162 and replaces #164.

@lkdvos requested a review from mtfishman on August 4, 2025

codecov bot commented Aug 4, 2025

Codecov Report

❌ Patch coverage is 87.76119% with 41 lines in your changes missing coverage. Please review.
✅ Project coverage is 73.41%. Comparing base (c96c064) to head (ca1350c).

Files with missing lines           Patch %   Missing lines
src/factorizations/svd.jl          72.00%    28
src/factorizations/utility.jl      80.43%     9
src/factorizations/eig.jl          96.22%     2
src/factorizations/truncation.jl   88.88%     2
Additional details and impacted files
@@            Coverage Diff             @@
##             main     #166      +/-   ##
==========================================
- Coverage   75.37%   73.41%   -1.96%     
==========================================
  Files          37       37              
  Lines        2091     2046      -45     
==========================================
- Hits         1576     1502      -74     
- Misses        515      544      +29     
Flag   Coverage Δ
docs   7.34% <0.00%> (+0.16%) ⬆️

Flags with carried forward coverage won't be shown.

@lkdvos force-pushed the blockdiagonal-factorization branch from aa3d144 to 6a1152e on August 4, 2025
@lkdvos force-pushed the blockdiagonal-factorization branch from 36e3865 to d63cc94 on August 13, 2025
@lkdvos force-pushed the blockdiagonal-factorization branch from d63cc94 to d926f52 on August 15, 2025
@lkdvos (Contributor, Author) commented Aug 15, 2025

@mtfishman, I think I've more or less covered most of the functions now.

I had to step back from the transform_rows closures, since they weren't easy to make compatible with handling both the matrix-valued D for eig and the vector-valued S for svd. However, I think I found a way to keep the option to specialize: the blockdiagonalize function returns custom invperm objects, and the transform_rows and transform_cols functions can then be specialized on them.
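Roughly, the idea is something like the following. This is only an illustrative sketch; the actual types and signatures in the PR differ, and the names here are just placeholders.

```julia
# Permutation objects that blockdiagonalize could return, so that
# transform_rows / transform_cols can dispatch on their type.
struct BlockPermutation
    perm::Vector{Int}
end
struct TrivialBlockPermutation end  # returned when no reordering is needed

Base.invperm(p::BlockPermutation) = BlockPermutation(invperm(p.perm))

# Generic fallbacks: reorder a list of blocks. Rows and columns are handled
# separately so eig (matrix D) and svd (vector S) can be treated uniformly.
transform_rows(p::BlockPermutation, blocks::AbstractVector) = blocks[p.perm]
transform_cols(p::BlockPermutation, blocks::AbstractVector) = blocks[p.perm]

# Specializing on the permutation type lets the trivial case short-circuit.
transform_rows(::TrivialBlockPermutation, blocks::AbstractVector) = blocks
transform_cols(::TrivialBlockPermutation, blocks::AbstractVector) = blocks

transform_rows(BlockPermutation([2, 1]), [[1.0, 2.0], [3.0]])   # reorders
transform_rows(TrivialBlockPermutation(), [[1.0, 2.0], [3.0]])  # no-op
```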

I also tackled the truncated decompositions, which I think have simplified as well, since they now only need to handle block-diagonal arrays.
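The simplification is essentially that the singular values of all diagonal blocks can be pooled, a global cutoff chosen, and each block truncated independently. A rough standalone sketch with hypothetical names, showing fixed-rank truncation only (not the PR's truncation interface):

```julia
using LinearAlgebra

# Hypothetical fixed-rank truncation of a block-diagonal SVD: pool all
# singular values, pick a global cutoff, truncate each block against it.
function truncated_blockdiag_svd(blocks::Vector{<:AbstractMatrix}, maxrank::Int)
    fs = map(svd, blocks)
    allS = sort(reduce(vcat, (f.S for f in fs)); rev = true)
    cutoff = length(allS) > maxrank ? allS[maxrank] : zero(eltype(allS))
    return map(fs) do f
        r = count(>=(cutoff), f.S)  # per-block rank kept (ties may overshoot)
        (U = f.U[:, 1:r], S = f.S[1:r], Vt = f.Vt[1:r, :])
    end
end

truncated_blockdiag_svd([randn(3, 3), randn(4, 2)], 3)
```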

I'm now a bit confused about what it means for eig_full! to have a D that is not diagonal: I would need to permute the rows for that to make sense, which would give a non-diagonal matrix of eigenvalues, and that sounds weird to me.

I hope to start tackling the gradedarrays version next week, but in principle I think this is good to go if all tests pass. (or at least ready for another round of review)

@mtfishman (Member) commented:

That all sounds great, thanks. I'll take a look.

> I'm now a bit confused about what it means for eig_full! to have a D that is not diagonal: I would need to permute the rows for that to make sense, which would give a non-diagonal matrix of eigenvalues, and that sounds weird to me.

I thought we would require the input to eig_full! to be block diagonal to begin with, so that D would always be block diagonal.

@lkdvos (Contributor, Author) commented Aug 16, 2025

OK, I'll adapt this too.


Successfully merging this pull request may close these issues:

[BUG] SVD of rectangular block structure