
Generalize sparseL as sparsemat #880

Merged
palday merged 9 commits into main from db/sparseA on Mar 9, 2026

Conversation

@dmbates (Collaborator) commented Mar 3, 2026

Should we release your changes right away? If so, bump the version:

  • I've bumped the version appropriately

@dmbates (Collaborator, Author) commented Mar 3, 2026

I seem to be in a race condition with the Style-Enforcer. If I run JuliaFormatter.format on the src directory, it removes white-space to create expressions like view(A,:,j); then the Style-Enforcer wants to put the white-space back to create view(A, :, j). It doesn't really matter to me which style is used, but it would be helpful if I could run a version of format that would produce the style the Style-Enforcer wants. @palday Is there something I should change in my local setup?
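[Editor's note] One way to keep a local format run in sync with a CI formatter is to pass an explicit style. This is a hypothetical sketch, not a command from this thread: the choice of YASStyle and the existence of a repo-level .JuliaFormatter.toml are assumptions to be checked against the actual repository.

```julia
# Hypothetical sketch: run JuliaFormatter with an explicit style so local
# formatting matches what a CI style check expects. YASStyle() here is an
# assumption; if the repository root contains a .JuliaFormatter.toml,
# running format() from that directory picks up its settings automatically,
# which is usually the simplest way to match CI.
using JuliaFormatter

format("src"; style=YASStyle())
```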

@dmbates dmbates requested a review from palday March 3, 2026 19:05
codecov bot commented Mar 3, 2026

Codecov Report

❌ Patch coverage is 93.33% with 2 lines in your changes missing coverage. Please review.
✅ Project coverage is 95.60%. Comparing base (7ae8df4) to head (17258ab).
⚠️ Report is 1 commit behind head on main.

Files with missing lines | Patch %  | Lines
src/linearmixedmodel.jl  | 93.33%   | 2 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main     #880      +/-   ##
==========================================
- Coverage   95.62%   95.60%   -0.03%     
==========================================
  Files          38       38              
  Lines        3701     3706       +5     
==========================================
+ Hits         3539     3543       +4     
- Misses        162      163       +1     
Flag     | Coverage Δ
current  | 95.27% <93.33%> (-0.03%) ⬇️
minimum  | 95.54% <93.33%> (-0.03%) ⬇️
nightly  | 95.27% <93.33%> (?)

@dmbates (Collaborator, Author) commented Mar 3, 2026

At present the unexported sparsemat utility applies dropzeros! to the returned value. I am beginning to think that is not appropriate for the result of sparseA but does make sense for the result of sparseL. The example using models(:insteval)[2] in test/pls.jl shows that you can get values of zero in the potentially nonzero positions of A or of L. I think these come about because of the 0/1 coding for service in that model, resulting in values that are genuine inner products of columns of the model matrices for the vector-valued random effects but just happen to have all zeros for service for a particular s and d combination.

I am thinking that I should move the dropzeros! call from sparsemat to sparseL, where dropping the zeros makes sense. Dropping those zeros in sparseA would remove potentially nonzero positions that just happen to be zero in the returned value. I'm not sure whether those potentially nonzero values should be retained in sparseA so that multiplication by Lambda can be done in place.

@palday Does any of this make sense to you or should I maybe put down my laptop for a few hours and take deep cleansing breaths?
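[Editor's note] A minimal stdlib-only illustration (not code from this PR) of why the placement of dropzeros! matters: an explicitly stored zero is a "potentially nonzero" position, and once dropped, that structural slot is gone and cannot be written to in place.

```julia
using SparseArrays

# Build a 3x3 sparse matrix with an explicitly stored zero at (2, 2);
# sparse() keeps explicit zeros in the stored structure by default.
A = sparse([1, 2, 3], [1, 2, 3], [1.0, 0.0, 2.0])
println(nnz(A))   # 3 stored entries, one of them an explicit zero

# dropzeros! removes the stored zero; the structural position is lost,
# so a later in-place update can no longer target that slot.
dropzeros!(A)
println(nnz(A))   # 2 stored entries
```

This is the sense in which a caller "can't uncall" dropzeros!: the sparsity pattern itself shrinks, not just the values.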

dmbates and others added 2 commits March 4, 2026 09:04
@palday (Member) commented Mar 9, 2026

> At present the unexported sparsemat utility applies dropzeros! to the returned value. I am beginning to think that is not appropriate for the result of sparseA but does make sense for the result of sparseL. The example using models(:insteval)[2] in test/pls.jl shows that you can get values of zero in the potentially nonzero positions of A or of L. I think these come about because of the 0/1 coding for service in that model, resulting in values that are genuine inner products of columns of the model matrices for the vector-valued random effects but just happen to have all zeros for service for a particular s and d combination.
>
> I am thinking that I should move the dropzeros! call from sparsemat to sparseL, where dropping the zeros makes sense. Dropping those zeros in sparseA would remove potentially nonzero positions that just happen to be zero in the returned value. I'm not sure whether those potentially nonzero values should be retained in sparseA so that multiplication by Lambda can be done in place.
>
> @palday Does any of this make sense to you or should I maybe put down my laptop for a few hours and take deep cleansing breaths?

I was slow to reply, but it looks like you did this and I agree. 😄

The caller can always call dropzeros! if they want to, but they can't uncall it.

@palday (Member) commented Mar 9, 2026

I've changed the version bump to be minor instead of patch -- in a very strict sense there is some new functionality here (although it's mostly private), so I think it's worthwhile to call it a minor version.

@palday palday merged commit b5ba3ec into main Mar 9, 2026
9 checks passed
@palday palday deleted the db/sparseA branch March 9, 2026 23:08

Development

Successfully merging this pull request may close these issues.

Replace _coord for generating sparseL with SparseArrays.findnz
