
More generic safe similar #550

Merged · 5 commits · Mar 24, 2025
Changes from 3 commits
2 changes: 1 addition & 1 deletion docs/src/basics/diagnostics_api.md
@@ -20,7 +20,7 @@ solve process. This is controlled by 3 keyword arguments to `solve`:

All the native NonlinearSolve.jl algorithms come with in-built
[TimerOutputs.jl](https://github.com/KristofferC/TimerOutputs.jl) support. However, this
-is disabled by default and can be enabled via [`NonlinearSolve.enable_timer_outputs`](@ref).
+is disabled by default and can be enabled via [`NonlinearSolveBase.enable_timer_outputs`](@ref).

Note that you will have to restart Julia to disable the timer outputs once enabled.
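As a hedged usage sketch of the renamed entry point (it assumes NonlinearSolveBase can be loaded directly alongside NonlinearSolve; how the collected timings are displayed is not shown here):

```julia
# Sketch only: switch on the built-in TimerOutputs instrumentation once per
# session; per the note above, a Julia restart is needed to turn it back off.
using NonlinearSolve, NonlinearSolveBase

NonlinearSolveBase.enable_timer_outputs()

f(u, p) = u .* u .- p
prob = NonlinearProblem(f, [1.0, 1.0], 2.0)
solve(prob, NewtonRaphson())   # native solvers now record timings internally
```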

46 changes: 23 additions & 23 deletions docs/src/basics/faq.md
@@ -71,29 +71,29 @@ differentiate the function based on the input types. However, this function has
`xx = [1.0, 2.0, 3.0, 4.0]` followed by a `xx[1] = var[1] - v_true[1]` where `var` might
be a Dual number. This causes the error. To fix it:

-1. Specify the `autodiff` to be `AutoFiniteDiff`
-
-```@example dual_error_faq
-sol = solve(prob_oop, LevenbergMarquardt(; autodiff = AutoFiniteDiff());
-maxiters = 10000, abstol = 1e-8)
-```
-
-This worked but, Finite Differencing is not the recommended approach in any scenario.
-
-2. Rewrite the function to use
-[PreallocationTools.jl](https://github.com/SciML/PreallocationTools.jl) or write it as
-
-```@example dual_error_faq
-function fff_correct(var, p)
-v_true = [1.0, 0.1, 2.0, 0.5]
-xx = eltype(var)[1.0, 2.0, 3.0, 4.0]
-xx[1] = var[1] - v_true[1]
-return xx - v_true
-end
-
-prob_oop = NonlinearLeastSquaresProblem{false}(fff_correct, v_init)
-sol = solve(prob_oop, LevenbergMarquardt(); maxiters = 10000, abstol = 1e-8)
-```
+1. Specify the `autodiff` to be `AutoFiniteDiff`
+```@example dual_error_faq
+sol = solve(prob_oop, LevenbergMarquardt(; autodiff = AutoFiniteDiff());
+maxiters = 10000, abstol = 1e-8)
+```
+This worked but, Finite Differencing is not the recommended approach in any scenario.
+
+2. Rewrite the function to use
+[PreallocationTools.jl](https://github.com/SciML/PreallocationTools.jl) or write it as
+```@example dual_error_faq
+function fff_correct(var, p)
+v_true = [1.0, 0.1, 2.0, 0.5]
+xx = eltype(var)[1.0, 2.0, 3.0, 4.0]
+xx[1] = var[1] - v_true[1]
+return xx - v_true
+end
+prob_oop = NonlinearLeastSquaresProblem{false}(fff_correct, v_init)
+sol = solve(prob_oop, LevenbergMarquardt(); maxiters = 10000, abstol = 1e-8)
+```
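For contrast with `fff_correct`, a hypothetical sketch of the failing pattern the FAQ text describes (the name `fff_incorrect` is illustrative and not part of the docs):

```julia
# `xx` here is always a Vector{Float64}, so writing a ForwardDiff Dual into it
# errors during automatic differentiation; `eltype(var)[...]` above avoids this.
function fff_incorrect(var, p)
    v_true = [1.0, 0.1, 2.0, 0.5]
    xx = [1.0, 2.0, 3.0, 4.0]      # fixed Float64 buffer
    xx[1] = var[1] - v_true[1]     # fails when `var` carries Dual numbers
    return xx - v_true
end
```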

## I thought NonlinearSolve.jl was type-stable and fast. But it isn't, why?

2 changes: 1 addition & 1 deletion docs/src/tutorials/iterator_interface.md
@@ -20,7 +20,7 @@ to iterate the solver.
The iterator interface supports:

```@docs
-step!(nlcache::NonlinearSolve.AbstractNonlinearSolveCache, args...; kwargs...)
+step!(nlcache::NonlinearSolveBase.AbstractNonlinearSolveCache, args...; kwargs...)
```

We can perform 10 steps of the Newton-Raphson solver with the following:
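The tutorial snippet itself is not part of this hunk; as a stand-in, here is a hedged sketch consistent with that sentence (the problem definition is illustrative, not the tutorial's):

```julia
# Build a cache with `init`, advance it manually with `step!`, then finalize.
using NonlinearSolve

f(u, p) = u .* u .- p
prob = NonlinearProblem(f, [1.0, 1.0], 2.0)
nlcache = init(prob, NewtonRaphson())
for _ in 1:10
    step!(nlcache)
end
sol = solve!(nlcache)   # wrap up and return the solution
sol.u                   # ≈ [sqrt(2), sqrt(2)]
```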
3 changes: 2 additions & 1 deletion lib/NonlinearSolveBase/src/NonlinearSolveBase.jl
@@ -19,7 +19,8 @@ using RecursiveArrayTools: AbstractVectorOfArray, ArrayPartition
using SciMLBase: SciMLBase, ReturnCode, AbstractODEIntegrator, AbstractNonlinearProblem,
AbstractNonlinearAlgorithm, AbstractNonlinearFunction,
NonlinearProblem, NonlinearLeastSquaresProblem, StandardNonlinearProblem,
-NonlinearFunction, NullParameters, NLStats, LinearProblem, LinearAliasSpecifier
+NonlinearFunction, NullParameters, NLStats, LinearProblem,
+LinearAliasSpecifier
using SciMLJacobianOperators: JacobianOperator, StatefulJacobianOperator
using SciMLOperators: AbstractSciMLOperator, IdentityOperator
using SymbolicIndexingInterface: SymbolicIndexingInterface
6 changes: 3 additions & 3 deletions lib/NonlinearSolveBase/src/jacobian.jl
@@ -41,7 +41,7 @@ function construct_jacobian_cache(
(linsolve === nothing || needs_concrete_A(linsolve)))
needs_jac = linsolve_needs_jac || concrete_jac(alg)

-@bb fu_cache = similar(fu)
+fu_cache = Utils.safe_similar(fu)

if !has_analytic_jac && needs_jac
if autodiff === nothing
@@ -80,9 +80,9 @@ end
end
else
if eltype(f.jac_prototype) <: Bool
-similar(f.jac_prototype, promote_type(eltype(fu), eltype(u)))
+Utils.safe_similar(f.jac_prototype, promote_type(eltype(fu), eltype(u)))
else
-similar(f.jac_prototype)
+Utils.safe_similar(f.jac_prototype)
end
end
end
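For context on the `Bool` branch above, a small standalone sketch (not package internals): a `Bool` `jac_prototype` presumably carries only the Jacobian's structure, so the cache is allocated with the promoted numeric element type of `fu` and `u`, and the zero-fill is what `safe_similar` adds on top of `similar`.

```julia
jac_prototype = Bool[1 0; 1 1]                    # structure-only prototype
fu, u = zeros(Float32, 2), zeros(Float64, 2)
J = similar(jac_prototype, promote_type(eltype(fu), eltype(u)))  # Matrix{Float64}, uninitialized
fill!(J, 0)                                       # the zero-fill safe_similar provides
```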
3 changes: 2 additions & 1 deletion lib/NonlinearSolveBase/src/linear_solve.jl
@@ -74,7 +74,8 @@ function construct_linear_solver(alg, linsolve, A, b, u; stats, kwargs...)
linprob = LinearProblem(A, b; u0 = u_cache, kwargs...)

# unlias here, we will later use these as caches
-lincache = init(linprob, linsolve; alias = LinearAliasSpecifier(alias_A = false, alias_b = false))
+lincache = init(
+linprob, linsolve; alias = LinearAliasSpecifier(alias_A = false, alias_b = false))
return LinearSolveJLCache(lincache, linsolve, nothing, stats)
end
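A hedged standalone sketch of the aliasing control used here (the LinearSolve.jl solver choice and sizes are illustrative): with `alias_A = false` and `alias_b = false` the linear cache keeps its own copies of `A` and `b`, so the nonlinear solver can keep mutating the originals.

```julia
using LinearAlgebra, LinearSolve
using SciMLBase: LinearAliasSpecifier

A = rand(4, 4) + 4I
b = rand(4)
linprob = LinearProblem(A, b)
lincache = init(linprob, LUFactorization();
    alias = LinearAliasSpecifier(alias_A = false, alias_b = false))
solve!(lincache)   # solves using the cache's un-aliased copies
```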

8 changes: 4 additions & 4 deletions lib/NonlinearSolveBase/src/utils.jl
@@ -101,13 +101,13 @@ restructure(y, x) = ArrayInterface.restructure(y, x)

function safe_similar(x, args...; kwargs...)
y = similar(x, args...; kwargs...)
-return init_bigfloat_array!!(y)
+return init_similar_array!!(y)
end

-init_bigfloat_array!!(x) = x
+init_similar_array!!(x) = x

-function init_bigfloat_array!!(x::AbstractArray{<:BigFloat})
-ArrayInterface.can_setindex(x) && fill!(x, BigFloat(0))
+function init_similar_array!!(x::AbstractArray{<:T}) where {T}
+ArrayInterface.can_setindex(x) && fill!(x, T(0))
return x
end
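To make the motivation concrete, a standalone sketch (local code, not the package's `Utils` module): `similar` leaves `#undef` entries for non-bits element types such as `BigFloat`, and the generalized method above now zero-fills any writable array rather than only `BigFloat` ones.

```julia
a = BigFloat[1.0, 2.0, 3.0]
b = similar(a)         # entries are #undef because BigFloat is not a bits type
# b[1] + 1             # would throw UndefRefError before any assignment
fill!(b, BigFloat(0))  # the zero-fill safe_similar performs
b[1] + 1               # now fine
```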

2 changes: 1 addition & 1 deletion lib/SimpleNonlinearSolve/src/utils.jl
@@ -160,7 +160,7 @@ end

function compute_hvvp(prob, autodiff, _, x::Number, dir::Number)
H = DI.second_derivative(prob.f, autodiff, x, Constant(prob.p))
-return H*dir
+return H * dir
end
function compute_hvvp(prob, autodiff, fx, x, dir)
jvp_fn = if SciMLBase.isinplace(prob)
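Since the scalar `compute_hvvp` method above leans on DifferentiationInterface, here is a hedged standalone sketch of that call pattern (the function, backend, and values are illustrative, and the `Constant` context is assumed to match the DifferentiationInterface version in use):

```julia
# For a scalar map the Hessian is just f''(x), so the Hessian-vector-vector
# product in direction `dir` reduces to f''(x) * dir.
using DifferentiationInterface: second_derivative, Constant
using ADTypes: AutoForwardDiff
using ForwardDiff   # backend implementation

f(x, p) = sin(p * x)
x, p, dir = 0.3, 2.0, 0.5
H = second_derivative(f, AutoForwardDiff(), x, Constant(p))   # -p^2 * sin(p * x)
hvvp = H * dir
```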
3 changes: 2 additions & 1 deletion test/qa_tests.jl
@@ -1,7 +1,8 @@
@testitem "Aqua" tags=[:misc] begin
using NonlinearSolve, SimpleNonlinearSolve, Aqua

-Aqua.test_all(NonlinearSolve; ambiguities = false, piracies = false, stale_deps = false, deps_compat = false)
+Aqua.test_all(NonlinearSolve; ambiguities = false, piracies = false,
+stale_deps = false, deps_compat = false)
Aqua.test_ambiguities(NonlinearSolve; recursive = false)
Aqua.test_stale_deps(SimpleNonlinearSolve; ignore = [:SciMLJacobianOperators])
Aqua.test_deps_compat(SimpleNonlinearSolve; ignore = [:SciMLJacobianOperators])