@@ -95,11 +95,7 @@ optf = Optimization.OptimizationFunction((x, p) -> loss_adjoint(x), adtype)
 
 optprob = Optimization.OptimizationProblem(optf, θ)
 res1 = Optimization.solve(
-    optprob, OptimizationOptimisers.Adam(0.01), callback = cb, maxiters = 100)
-
-optprob2 = Optimization.OptimizationProblem(optf, res1.u)
-res2 = Optimization.solve(
-    optprob2, OptimizationOptimJL.BFGS(), callback = cb, maxiters = 100)
+    optprob, OptimizationOptimisers.Adam(0.01), callback = cb, maxiters = 300)
 ```
 
 Now that the system is in a better behaved part of parameter space, we return to
@@ -114,8 +110,8 @@ function loss_adjoint(θ)
 end
 optf3 = Optimization.OptimizationFunction((x, p) -> loss_adjoint(x), adtype)
 
-optprob3 = Optimization.OptimizationProblem(optf3, res2.u)
-res3 = Optimization.solve(optprob3, OptimizationOptimJL.BFGS(), maxiters = 100)
+optprob3 = Optimization.OptimizationProblem(optf3, res1.u)
+res3 = Optimization.solve(optprob3, OptimizationOptimisers.Adam(0.01), maxiters = 100)
 ```
 
 Now let's see what we received: