
Commit e048bfe

format
1 parent 3da7d9f commit e048bfe

File tree

3 files changed: +23 −23 lines changed

docs/src/optimization_packages/optimization.md

Lines changed: 20 additions & 20 deletions
@@ -4,28 +4,28 @@ There are some solvers that are available in the Optimization.jl package directly

(Whitespace-only change: lines 7–28 below are re-indented; their text is otherwise unchanged.)

## Methods

  - `LBFGS`: The popular quasi-Newton method that uses a limited-memory BFGS approximation of the inverse of the Hessian, via a wrapper over the [L-BFGS-B](https://users.iems.northwestern.edu/%7Enocedal/lbfgsb.html) Fortran routine accessed from the [LBFGSB.jl](https://github.com/Gnimuc/LBFGSB.jl/) package. It directly supports box constraints.

    It can also handle arbitrary nonlinear constraints through an Augmented Lagrangian method with bound constraints, described in Section 17.4 of Numerical Optimization by Nocedal and Wright, and thus serves as a general-purpose nonlinear optimization solver available directly in Optimization.jl.

  - `Sophia`: Based on the paper https://arxiv.org/abs/2305.14342. It incorporates second-order information in the form of the diagonal of the Hessian matrix, avoiding the need to compute the complete Hessian. It has been shown to converge faster than first-order methods such as Adam and SGD.

    + `solve(problem, Sophia(; η, βs, ϵ, λ, k, ρ))`

    + `η` is the learning rate
    + `βs` are the decay rates of the momentum estimates
    + `ϵ` is the epsilon value
    + `λ` is the weight decay parameter
    + `k` is the number of iterations between re-computations of the diagonal of the Hessian matrix
    + `ρ` is the momentum
    + Defaults:

      * `η = 0.001`
      * `βs = (0.9, 0.999)`
      * `ϵ = 1e-8`
      * `λ = 0.1`
      * `k = 10`
      * `ρ = 0.04`

## Examples
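As a usage note for the two methods documented in this hunk, here is a minimal sketch of how they are invoked through the standard `OptimizationProblem`/`solve` interface. It assumes the `Optimization.LBFGS` and `Optimization.Sophia` constructors named above; the objective, bounds, and parameter overrides are illustrative only, and the choice of AD backend is an assumption.

```julia
using Optimization, ForwardDiff

# Illustrative objective: the Rosenbrock function with parameters p.
rosenbrock(u, p) = (p[1] - u[1])^2 + p[2] * (u[2] - u[1]^2)^2
optf = OptimizationFunction(rosenbrock, Optimization.AutoForwardDiff())

# LBFGS directly supports box constraints (lb/ub), per the docs above.
prob_box = OptimizationProblem(optf, zeros(2), [1.0, 100.0];
    lb = [-1.0, -1.0], ub = [1.5, 1.5])
sol_lbfgs = solve(prob_box, Optimization.LBFGS())

# Sophia on the unconstrained problem, overriding two of the documented
# defaults (η = 0.001, k = 10).
prob = OptimizationProblem(optf, zeros(2), [1.0, 100.0])
sol_sophia = solve(prob, Optimization.Sophia(η = 0.01, k = 10); maxiters = 1000)
```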

lib/OptimizationOptimJL/src/OptimizationOptimJL.jl

Lines changed: 1 addition & 2 deletions
@@ -38,13 +38,12 @@ function __map_optimizer_args(cache::OptimizationCache,
         abstol::Union{Number, Nothing} = nothing,
         reltol::Union{Number, Nothing} = nothing,
         kwargs...)
-
     mapped_args = (; extended_trace = true, kwargs...)

     if !isnothing(abstol)
         mapped_args = (; mapped_args..., f_abstol = abstol)
     end
-
+
     if !isnothing(callback)
         mapped_args = (; mapped_args..., callback = callback)
     end
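An aside on the pattern in this hunk: `(; nt..., key = value)` splats an existing NamedTuple and appends (or overrides) a field, which is how `__map_optimizer_args` accumulates solver options. A self-contained sketch of just that idiom, with illustrative values:

```julia
# Start from a base set of options, then conditionally add more.
mapped_args = (; extended_trace = true)

abstol = 1e-8  # stand-in for the function argument
if !isnothing(abstol)
    # Splatting mapped_args and naming f_abstol yields a new NamedTuple
    # with the extra field (an existing field of the same name would be
    # overridden by the later entry).
    mapped_args = (; mapped_args..., f_abstol = abstol)
end

@show mapped_args  # (extended_trace = true, f_abstol = 1.0e-8)
```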

test/diffeqfluxtests.jl

Lines changed: 2 additions & 1 deletion
@@ -70,7 +70,8 @@ ode_data = Array(solve(prob_trueode, Tsit5(), saveat = tsteps))
 dudt2 = Lux.Chain(x -> x .^ 3,
     Lux.Dense(2, 50, tanh),
     Lux.Dense(50, 2))
-prob_neuralode = NeuralODE(dudt2, tspan, Tsit5(), saveat = tsteps, abstol = 1e-8, reltol = 1e-8)
+prob_neuralode = NeuralODE(
+    dudt2, tspan, Tsit5(), saveat = tsteps, abstol = 1e-8, reltol = 1e-8)
 pp, st = Lux.setup(rng, dudt2)
 pp = ComponentArray(pp)
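For context on what the reformatted line constructs: a hypothetical sketch of how such a `NeuralODE` object is typically called in a test like this, assuming DiffEqFlux's Lux-style layer interface. Here `u0` and `ode_data` are assumed to be defined earlier in the test file, and the loss shown is illustrative, not quoted from the diff.

```julia
# Hypothetical continuation (not part of the diff): a NeuralODE acts as a
# Lux layer, so calling it with an initial state, parameters, and layer
# state returns (solution, new_state).
predict_neuralode(p) = first(prob_neuralode(u0, p, st))

function loss_neuralode(p)
    pred = Array(predict_neuralode(p))
    sum(abs2, ode_data .- pred)  # squared error against the true ODE data
end
```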
