
Conversation


@rsenne rsenne commented Jul 10, 2025

This PR addresses issue #1170. To solve it, I have edited the interface to allow a user to pass a generic solver. In principle this allows any matrix type to be used, provided the user has the relevant package loaded and a solver exists for that matrix type. It also allows writing a custom solver (e.g., a problem-specific factorization or solving strategy) to circumvent the default behavior. I have included examples of this in the tests, including:

  1. Different factorizations (allows users to bypass the default PositiveFactorizations.jl Cholesky implementation)
  2. Use of a matrix type not originally supported (StaticArrays)
  3. Type preservation of the Hessian in a TwiceDifferentiable object, using BlockArrays.jl
  4. A more realistic, non-trivial use case (a block-tridiagonal Hessian); this implementation is a proof of principle and not fully optimized

Let me know if there are any other tests you would like me to include, or any changes you would like to the documentation.
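For illustration, here is a minimal sketch of how such a generic solver might be passed. This assumes the option is a keyword named `solve` on `Newton`, as the diff excerpt in this thread (`method.solve(H, g)`) suggests; the exact interface is defined by this PR and may differ:

```julia
using Optim, LinearAlgebra

# Hypothetical sketch -- the `solve` keyword name is an assumption based on
# this PR's diff. A solver maps (H, g) to the Newton step s with H * s == g.
lu_solve(H, g) = lu(H) \ g   # e.g. swap the default Cholesky for an LU factorization

rosenbrock(x) = (1.0 - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2
res = optimize(rosenbrock, zeros(2), Newton(solve = lu_solve))
```

The same mechanism would let a user dispatch on a custom matrix type (StaticArrays, BlockArrays, a banded block-tridiagonal structure) simply by writing a `solve` function specialized for it.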


github-actions bot commented Jul 10, 2025

Benchmark Results

| Benchmark | master | 69f9d02... | master / 69f9d02... |
| --- | --- | --- | --- |
| multivariate/solvers/first_order/AdaMax | 0.646 ± 0.0079 ms | 0.646 ± 0.0077 ms | 0.999 ± 0.017 |
| multivariate/solvers/first_order/Adam | 0.645 ± 0.0076 ms | 0.647 ± 0.0076 ms | 0.998 ± 0.017 |
| multivariate/solvers/first_order/BFGS | 0.224 ± 0.0043 ms | 0.224 ± 0.0041 ms | 0.999 ± 0.026 |
| multivariate/solvers/first_order/ConjugateGradient | 0.048 ± 0.00066 ms | 0.0483 ± 0.00065 ms | 0.994 ± 0.019 |
| multivariate/solvers/first_order/GradientDescent | 1.71 ± 0.016 ms | 1.71 ± 0.013 ms | 0.999 ± 0.012 |
| multivariate/solvers/first_order/LBFGS | 0.219 ± 0.0038 ms | 0.22 ± 0.0036 ms | 0.997 ± 0.024 |
| multivariate/solvers/first_order/MomentumGradientDescent | 2.51 ± 0.016 ms | 2.51 ± 0.016 ms | 1 ± 0.0092 |
| multivariate/solvers/first_order/NGMRES | 0.554 ± 0.011 ms | 0.554 ± 0.01 ms | 0.999 ± 0.027 |
| time_to_load | 0.505 ± 0.0083 s | 0.514 ± 0.0049 s | 0.983 ± 0.019 |

Benchmark Plots

A plot of the benchmark results has been uploaded as an artifact to the workflow run for this PR.
Go to "Actions"->"Benchmark a pull request"->[the most recent run]->"Artifacts" (at the bottom).


pkofod commented Jul 17, 2025

Thanks, I'll review :)

codecov bot commented Jul 17, 2025

Codecov Report

❌ Patch coverage is 90.00000% with 1 line in your changes missing coverage. Please review.
✅ Project coverage is 85.75%. Comparing base (6850998) to head (69f9d02).
⚠️ Report is 4 commits behind head on master.

| Files with missing lines | Patch % | Lines |
| --- | --- | --- |
| src/multivariate/solvers/second_order/newton.jl | 90.00% | 1 Missing ⚠️ |
Additional details and impacted files
@@            Coverage Diff             @@
##           master    #1176      +/-   ##
==========================================
+ Coverage   85.70%   85.75%   +0.04%     
==========================================
  Files          46       46              
  Lines        3596     3601       +5     
==========================================
+ Hits         3082     3088       +6     
+ Misses        514      513       -1     

g = gradient(d)

# Clean and simple - just call the user's solve function
state.s .= method.solve(H, g)
A repository contributor commented on this diff:
For the default solver, this seems less efficient than the current implementation. AFAICT this would introduce a regression: currently, for Arrays, the Cholesky decomposition of the Hessian is computed in place and the in-place ldiv! solver is used.
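For reference, the in-place pattern the comment describes looks roughly like this (a sketch of the idea, not the exact Optim.jl source):

```julia
using LinearAlgebra, PositiveFactorizations

H = [4.0 1.0; 1.0 3.0]   # example Hessian
g = [1.0, 2.0]           # example gradient
s = similar(g)           # preallocated step vector

# Factor the Hessian in place (PositiveFactorizations also handles
# indefinite H), then solve in place into s, avoiding extra allocations.
F = cholesky!(Positive, H)
ldiv!(s, F, g)
```

By contrast, `state.s .= method.solve(H, g)` allocates a fresh step vector on every iteration unless the user-supplied solver itself works in place.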
