
Use gradient!(d, x) in perform_linesearch #1207

Merged: pkofod merged 2 commits into master from pkofod-patch-17 on Nov 13, 2025

Conversation

@pkofod (Member) commented Nov 13, 2025

This shouldn't matter, but if someone somehow happened to be calling the objective function in the callback, or in some other way, this would now produce the correct result. Previously, it used the most recent evaluation. Now, it will try to evaluate at state.x, and if that was the last point evaluated we'll just use the cached result. This should always be the case unless someone messed with the objective outside. We could even assert that g_calls is constant around the call to be sure to catch that, but then users wouldn't be able to call the gradient outside at their own runtime expense.
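For context, a minimal sketch of the caching pattern the PR relies on. The struct and field names here are hypothetical, not the NLSolversBase internals: the point is only that asking for the gradient at `state.x` is a cache hit when `state.x` was the last point evaluated, and a correct re-evaluation otherwise.

```julia
# Hedged sketch (hypothetical names) of gradient!(d, x)-style caching:
# re-evaluate only when x differs from the last evaluated point.
mutable struct CachedObjective
    g!::Function               # in-place gradient, g!(G, x)
    last_x::Vector{Float64}    # last point evaluated
    G::Vector{Float64}         # cached gradient
    g_calls::Int               # evaluation counter
end

function gradient!(d::CachedObjective, x::AbstractVector)
    if x != d.last_x           # only pay for a genuinely new point
        d.g!(d.G, x)
        d.last_x = copy(x)
        d.g_calls += 1
    end
    return d.G                 # cache hit if x was the last point seen
end
```

Under this sketch, calling `gradient!(d, state.x)` twice in a row performs one evaluation; a callback that evaluated at some other point in between would change `last_x`, and the next call would correctly re-evaluate at `state.x` instead of returning a stale gradient.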

@github-actions bot (Contributor) commented Nov 13, 2025

Benchmark Results (Julia vlts)

Time benchmarks

| Benchmark | master | 5f2a98e... | master / 5f2a98e... |
| --- | --- | --- | --- |
| multivariate/solvers/first_order/AdaMax | 0.543 ± 0.009 ms | 0.543 ± 0.0095 ms | 1 ± 0.024 |
| multivariate/solvers/first_order/Adam | 0.543 ± 0.0092 ms | 0.542 ± 0.0093 ms | 1 ± 0.024 |
| multivariate/solvers/first_order/BFGS | 0.263 ± 0.008 ms | 0.263 ± 0.0083 ms | 1 ± 0.044 |
| multivariate/solvers/first_order/ConjugateGradient | 0.174 ± 0.0028 ms | 0.175 ± 0.003 ms | 0.999 ± 0.023 |
| multivariate/solvers/first_order/GradientDescent | 1.55 ± 0.014 ms | 1.55 ± 0.012 ms | 0.998 ± 0.012 |
| multivariate/solvers/first_order/LBFGS | 0.235 ± 0.0072 ms | 0.234 ± 0.0076 ms | 1 ± 0.045 |
| multivariate/solvers/first_order/MomentumGradientDescent | 2.18 ± 0.016 ms | 2.17 ± 0.015 ms | 1 ± 0.0099 |
| multivariate/solvers/first_order/NGMRES | 0.43 ± 0.011 ms | 0.435 ± 0.011 ms | 0.989 ± 0.035 |
| time_to_load | 0.412 ± 0.0042 s | 0.408 ± 0.0036 s | 1.01 ± 0.014 |
Memory benchmarks

| Benchmark | master | 5f2a98e... | master / 5f2a98e... |
| --- | --- | --- | --- |
| multivariate/solvers/first_order/AdaMax | 0.34 k allocs: 7.16 kB | 0.34 k allocs: 7.16 kB | 1 |
| multivariate/solvers/first_order/Adam | 0.34 k allocs: 7.16 kB | 0.34 k allocs: 7.16 kB | 1 |
| multivariate/solvers/first_order/BFGS | 0.336 k allocs: 15 kB | 0.36 k allocs: 15.5 kB | 0.964 |
| multivariate/solvers/first_order/ConjugateGradient | 0.332 k allocs: 13.5 kB | 0.362 k allocs: 14.2 kB | 0.95 |
| multivariate/solvers/first_order/GradientDescent | 1.89 k allocs: 0.0713 MB | 2.09 k allocs: 0.0759 MB | 0.94 |
| multivariate/solvers/first_order/LBFGS | 0.317 k allocs: 14.2 kB | 0.341 k allocs: 14.7 kB | 0.962 |
| multivariate/solvers/first_order/MomentumGradientDescent | 2.24 k allocs: 0.077 MB | 2.44 k allocs: 0.0815 MB | 0.944 |
| multivariate/solvers/first_order/NGMRES | 1.51 k allocs: 0.117 MB | 1.58 k allocs: 0.119 MB | 0.986 |
| time_to_load | 0.153 k allocs: 14.5 kB | 0.153 k allocs: 14.5 kB | 1 |

A plot of the benchmark results has been uploaded as an artifact.

```diff
-dphi_0 = real(dot(gradient(d), state.s)) # update after direction reset
+dphi_0 = real(dot(gx, state.s)) # update after direction reset
 end
 phi_0 = value(d)
```
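For context, these are the two scalars a line search needs at the current iterate: `phi_0`, the objective value, and `dphi_0`, the directional derivative along the search direction `state.s`. A self-contained sketch with a toy objective (the names here are illustrative stand-ins, not Optim internals):

```julia
using LinearAlgebra: dot

# Toy objective f(x) = ||x||^2 and its gradient, standing in for d and state.
f(x) = sum(abs2, x)
g(x) = 2 .* x

x = [1.0, 2.0]          # current iterate (state.x)
s = -g(x)               # search direction (state.s), here steepest descent

phi_0  = f(x)                  # objective value at x
dphi_0 = real(dot(g(x), s))    # slope of t -> f(x + t*s) at t = 0
# dphi_0 < 0 confirms s is a descent direction
```

The `real(...)` wrapper matters when the decision variables are complex: `dot` then returns a complex number whose real part is the directional derivative.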
A Contributor commented:

I guess this could be made safer as well by calling

```diff
-phi_0 = value(d)
+phi_0 = value!(d, state.x)
```

But possibly you'd want to combine it with the `gradient!` call above.
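The distinction this suggestion relies on: `value(d)` returns whatever is cached from the most recent evaluation, wherever that happened, while `value!(d, x)` evaluates at `x` and only falls back to the cache when `x` matches the last evaluated point. A hedged sketch (hypothetical names, not the NLSolversBase implementation):

```julia
# Hedged sketch of value(d) vs value!(d, x) semantics.
mutable struct ScalarObjective
    f::Function                # objective function
    last_x::Vector{Float64}    # last point evaluated
    f_x::Float64               # cached objective value
end

value(d::ScalarObjective) = d.f_x      # cached; may be stale relative to x

function value!(d::ScalarObjective, x::AbstractVector)
    if x != d.last_x                   # evaluate only at a new point
        d.f_x = d.f(x)
        d.last_x = copy(x)
    end
    return d.f_x
end
```

If a callback evaluated the objective at some other point, `value(d)` would return that point's value, while `value!(d, state.x)` would correctly re-evaluate at `state.x`.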

@pkofod (Member, Author) commented Nov 13, 2025

Interesting that fminbox fails here; that would indicate that it was doing some funny business in the resets.

@codecov bot commented Nov 13, 2025

Codecov Report

❌ Patch coverage is 88.88889% with 1 line in your changes missing coverage. Please review.
✅ Project coverage is 87.51%. Comparing base (b367149) to head (5f2a98e).
⚠️ Report is 3 commits behind head on master.

| Files with missing lines | Patch % | Lines |
| --- | --- | --- |
| src/utilities/perform_linesearch.jl | 88.88% | 1 Missing ⚠️ |
Additional details and impacted files
```
@@            Coverage Diff             @@
##           master    #1207      +/-   ##
==========================================
- Coverage   87.56%   87.51%   -0.05%
==========================================
  Files          45       45
  Lines        3514     3517       +3
==========================================
+ Hits         3077     3078       +1
- Misses        437      439       +2
```

☔ View full report in Codecov by Sentry.

@pkofod pkofod merged commit c6f69d6 into master Nov 13, 2025
21 of 22 checks passed