Commit 0cbcc18

Merge pull request #1256 from ChrisRackauckas/fix-formatting
Apply JuliaFormatter to fix code formatting
2 parents 3eb1ccc + 9913b4e commit 0cbcc18

Some content is hidden: large commits hide parts of the diff by default, so only a subset of the 64 changed files is shown below.

64 files changed: +1531 additions, -858 deletions
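According to the commit message, these diffs were produced by running JuliaFormatter over the repository. A minimal sketch of how such a pass is typically reproduced locally, assuming the repository root carries a `.JuliaFormatter.toml` (SciML repositories usually set `style = "sciml"` and `format_markdown = true` there; that file is not shown in this diff):

```julia
# Hypothetical reproduction of the formatting pass; the exact configuration is an assumption.
using JuliaFormatter

# format(".") walks the tree, picks up .JuliaFormatter.toml if present, and rewrites files
# in place. With format_markdown enabled it also formats the fenced Julia blocks inside the
# docs' .md files, which is why markdown files appear in the diffs below.
already_formatted = format("."; verbose = true)
println(already_formatted ? "nothing to do" : "files were rewritten")
```

The characteristic changes below (wrapping long call lines, `key = value` spacing in keyword arguments, and normalized `Float32` literals) are what such a pass produces.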

docs/pages.jl

Lines changed: 3 additions & 3 deletions
@@ -21,9 +21,9 @@ pages = ["index.md",
 "examples/sde/SDE_control.md"],
 "Delay Differential Equations (DDEs)" => Any["examples/dde/delay_diffeq.md"],
 "Partial Differential Equations (PDEs)" => Any[
-"examples/pde/pde_constrained.md",
-"examples/pde/brusselator.md"
-],
+"examples/pde/pde_constrained.md",
+"examples/pde/brusselator.md"
+],
 "Hybrid and Jump Equations" => Any["examples/hybrid_jump/hybrid_diffeq.md",
 "examples/hybrid_jump/bouncing_ball.md"],
 "Bayesian Estimation" => Any["examples/bayesian/turing_bayesian.md"],

docs/src/examples/neural_ode/simplechains.md

Lines changed: 2 additions & 1 deletion
@@ -7,7 +7,8 @@
 First, we'll need data for training the NeuralODE, which can be obtained by solving the ODE `u' = f(u,p,t)` numerically using the SciML ecosystem in Julia.
 
 ```@example sc_neuralode
-import SimpleChains, OrdinaryDiffEq as ODE, SciMLSensitivity as SMS, Optimization as OPT, OptimizationOptimisers as OPO, Plots
+import SimpleChains, OrdinaryDiffEq as ODE, SciMLSensitivity as SMS, Optimization as OPT,
+OptimizationOptimisers as OPO, Plots
 using StaticArrays: @SArray, @SMatrix
 
 u0 = @SArray Float32[2.0, 0.0]

docs/src/examples/optimal_control/optimal_control.md

Lines changed: 6 additions & 3 deletions
@@ -87,9 +87,11 @@ cb = function (state, l; doplot = true)
 ps = CA.ComponentArray(state.u, ax)
 
 if doplot
-p = Plots.plot(ODE.solve(ODE.remake(prob, p = state.u), ODE.Tsit5(), saveat = 0.01),
+p = Plots.plot(
+ODE.solve(ODE.remake(prob, p = state.u), ODE.Tsit5(), saveat = 0.01),
 ylim = (-6, 6), lw = 3)
-Plots.plot!(p, ts, [first(first(ann([t], ps, st))) for t in ts], label = "u(t)", lw = 3)
+Plots.plot!(
+p, ts, [first(first(ann([t], ps, st))) for t in ts], label = "u(t)", lw = 3)
 display(p)
 end
@@ -132,7 +134,8 @@ Now let's see what we received:
 ```@example neuraloptimalcontrol
 l = loss_adjoint(res3.u)
 cb(res3, l)
-p = Plots.plot(ODE.solve(ODE.remake(prob, p = res3.u), ODE.Tsit5(), saveat = 0.01), ylim = (-6, 6), lw = 3)
+p = Plots.plot(ODE.solve(ODE.remake(prob, p = res3.u), ODE.Tsit5(), saveat = 0.01), ylim = (
+-6, 6), lw = 3)
 Plots.plot!(p, ts, [first(first(ann([t], CA.ComponentArray(res3.u, ax), st))) for t in ts],
 label = "u(t)", lw = 3)
 ```

docs/src/examples/pde/brusselator.md

Lines changed: 43 additions & 39 deletions
@@ -56,7 +56,6 @@ giving us a field tensor of shape $(N, N, 2)$. This structure is flexible and ex
 
 ## Finite Difference Laplacian and Forcing
 
-
 For spatial derivatives, we apply a second-order central difference scheme using a three-point stencil. The Laplacian is discretized as:
 
 $$[\ 1,\ -2,\ 1\ ]$$
@@ -65,19 +64,19 @@ in both the $ x $ and $ y $ directions, forming a tridiagonal structure in both
 
 ## Generating Training Data
 
-This provides us with an `ODEProblem` that can be solved to obtain training data.
+This provides us with an `ODEProblem` that can be solved to obtain training data.
 
 ```@example bruss
 import ComponentArrays as CA, Random, Plots, OrdinaryDiffEq as ODE
 import SciMLBase
 
 N_GRID = 16
-XYD = range(0f0, stop = 1f0, length = N_GRID)
+XYD = range(0.0f0, stop = 1.0f0, length = N_GRID)
 dx = step(XYD)
 T_FINAL = 11.5f0
 SAVE_AT = 0.5f0
 tspan = (0.0f0, T_FINAL)
-t_points = range(tspan[1], stop=tspan[2], step=SAVE_AT)
+t_points = range(tspan[1], stop = tspan[2], step = SAVE_AT)
 A, B, alpha = 3.4f0, 1.0f0, 10.0f0
 
 brusselator_f(x, y, t) = (((x - 0.3f0)^2 + (y - 0.6f0)^2) <= 0.01f0) * (t >= 1.1f0) * 5.0f0
@@ -88,8 +87,8 @@ function init_brusselator(xyd)
 u0 = zeros(Float32, N_GRID, N_GRID, 2)
 for I in CartesianIndices((N_GRID, N_GRID))
 x, y = xyd[I[1]], xyd[I[2]]
-u0[I,1] = 22f0 * (y * (1f0 - y))^(3f0/2f0)
-u0[I,2] = 27f0 * (x * (1f0 - x))^(3f0/2f0)
+u0[I, 1] = 22.0f0 * (y * (1.0f0 - y))^(3.0f0/2.0f0)
+u0[I, 2] = 27.0f0 * (x * (1.0f0 - x))^(3.0f0/2.0f0)
 end
 println("[Init] Done.")
 return u0
@@ -104,22 +103,23 @@ function pde_truth!(du, u, p, t)
 x, y = XYD[i], XYD[j]
 ip1, im1 = limit(i+1, N_GRID), limit(i-1, N_GRID)
 jp1, jm1 = limit(j+1, N_GRID), limit(j-1, N_GRID)
-U, V = u[i,j,1], u[i,j,2]
-ΔU = u[im1,j,1] + u[ip1,j,1] + u[i,jp1,1] + u[i,jm1,1] - 4f0 * U
-ΔV = u[im1,j,2] + u[ip1,j,2] + u[i,jp1,2] + u[i,jm1,2] - 4f0 * V
-du[i,j,1] = αdx*ΔU + B + U^2 * V - (A+1f0)*U + brusselator_f(x, y, t)
-du[i,j,2] = αdx*ΔV + A*U - U^2 * V
+U, V = u[i, j, 1], u[i, j, 2]
+ΔU = u[im1, j, 1] + u[ip1, j, 1] + u[i, jp1, 1] + u[i, jm1, 1] - 4.0f0 * U
+ΔV = u[im1, j, 2] + u[ip1, j, 2] + u[i, jp1, 2] + u[i, jm1, 2] - 4.0f0 * V
+du[i, j, 1] = αdx*ΔU + B + U^2 * V - (A+1.0f0)*U + brusselator_f(x, y, t)
+du[i, j, 2] = αdx*ΔV + A*U - U^2 * V
 end
 end
 
 p_tuple = (A, B, alpha, dx)
-@time sol_truth = ODE.solve(ODE.ODEProblem(pde_truth!, u0, tspan, p_tuple), ODE.FBDF(), saveat=t_points)
+@time sol_truth = ODE.solve(ODE.ODEProblem(pde_truth!, u0, tspan, p_tuple), ODE.FBDF(), saveat = t_points)
 u_true = Array(sol_truth)
 ```
 
 ## Visualizing Mean Concentration Over Time
 
 We can now use this code for training our UDE, and generating time-series plots of the concentrations of species of U and V using the code:
+
 ```@example bruss
 import Plots, Statistics
@@ -128,9 +128,10 @@ avg_U = [Statistics.mean(snapshot[:, :, 1]) for snapshot in sol_truth.u]
 avg_V = [Statistics.mean(snapshot[:, :, 2]) for snapshot in sol_truth.u]
 
 # Plot average concentrations over time
-Plots.plot(sol_truth.t, avg_U, label="Mean U", lw=2, xlabel="Time", ylabel="Concentration",
-title="Mean Concentration of U and V Over Time")
-Plots.plot!(sol_truth.t, avg_V, label="Mean V", lw=2, linestyle=:dash)
+Plots.plot(
+sol_truth.t, avg_U, label = "Mean U", lw = 2, xlabel = "Time", ylabel = "Concentration",
+title = "Mean Concentration of U and V Over Time")
+Plots.plot!(sol_truth.t, avg_V, label = "Mean V", lw = 2, linestyle = :dash)
 ```
 
 With the ground truth data generated and visualized, we are now ready to construct a Universal Differential Equation (UDE) by replacing the nonlinear term $U^2V$ with a neural network. The next section outlines how we define this hybrid model and train it to recover the reaction dynamics from data.
@@ -154,7 +155,8 @@ Here, $\mathcal{N}_\theta(U, V)$ is trained to approximate the true interaction
 First, we have to define and configure the neural network that has to be used for the training. The implementation for that is as follows:
 
 ```@example bruss
-import Lux, Random, Optimization as OPT, OptimizationOptimJL as OOJ, SciMLSensitivity as SMS, Zygote
+import Lux, Random, Optimization as OPT, OptimizationOptimJL as OOJ,
+SciMLSensitivity as SMS, Zygote
 
 model = Lux.Chain(Lux.Dense(2 => 16, tanh), Lux.Dense(16 => 1))
 rng = Random.default_rng()
@@ -166,14 +168,15 @@ We use a simple fully connected neural network with one hidden layer of 16 tanh-
 
 To ensure consistency between the ground truth simulation and the learned Universal Differential Equation (UDE) model, we preserve the same spatial discretization scheme used in the original ODEProblem. This includes:
 
-* the finite difference Laplacian,
-* periodic boundary conditions, and
-* the external forcing function.
+- the finite difference Laplacian,
+- periodic boundary conditions, and
+- the external forcing function.
 
-The only change lies in the replacement of the known nonlinear term $U^2V$ with a neural network approximation
+The only change lies in the replacement of the known nonlinear term $U^2V$ with a neural network approximation
 $\mathcal{N}_\theta(U, V)$. This design enables the UDE to learn complex or unknown dynamics from data while maintaining the underlying physical structure of the system.
 
 The function below implements this hybrid formulation:
+
 ```@example bruss
 function pde_ude!(du, u, ps_nn, t)
 αdx = alpha / dx^2
@@ -182,22 +185,24 @@ function pde_ude!(du, u, ps_nn, t)
 x, y = XYD[i], XYD[j]
 ip1, im1 = limit(i+1, N_GRID), limit(i-1, N_GRID)
 jp1, jm1 = limit(j+1, N_GRID), limit(j-1, N_GRID)
-U, V = u[i,j,1], u[i,j,2]
-ΔU = u[im1,j,1] + u[ip1,j,1] + u[i,jp1,1] + u[i,jm1,1] - 4f0 * U
-ΔV = u[im1,j,2] + u[ip1,j,2] + u[i,jp1,2] + u[i,jm1,2] - 4f0 * V
+U, V = u[i, j, 1], u[i, j, 2]
+ΔU = u[im1, j, 1] + u[ip1, j, 1] + u[i, jp1, 1] + u[i, jm1, 1] - 4.0f0 * U
+ΔV = u[im1, j, 2] + u[ip1, j, 2] + u[i, jp1, 2] + u[i, jm1, 2] - 4.0f0 * V
 nn_val, _ = model([U, V], ps_nn, st)
 val = nn_val[1]
-du[i,j,1] = αdx*ΔU + B + val - (A+1f0)*U + brusselator_f(x, y, t)
-du[i,j,2] = αdx*ΔV + A*U - val
+du[i, j, 1] = αdx*ΔU + B + val - (A+1.0f0)*U + brusselator_f(x, y, t)
+du[i, j, 2] = αdx*ΔV + A*U - val
 end
 end
 prob_ude_template = ODE.ODEProblem(pde_ude!, u0, tspan, ps_init)
 ```
+
 ## Loss Function and Optimization
-To train the neural network
+
+To train the neural network
 $\mathcal{N}_\theta(U, V)$ embedded in the UDE, we define a loss function that measures how closely the solution of the UDE matches the ground truth data generated earlier.
 
-The loss is computed as the sum of squared errors between the predicted solution from the UDE and the true solution at each saved time point. If the solver fails (e.g., due to numerical instability or incorrect parameters), we return an infinite loss to discard that configuration during optimization. We use ```FBDF()``` as the solver due to the stiff nature of the brusselators euqation. Other solvers like ```KenCarp47()``` could also be used.
+The loss is computed as the sum of squared errors between the predicted solution from the UDE and the true solution at each saved time point. If the solver fails (e.g., due to numerical instability or incorrect parameters), we return an infinite loss to discard that configuration during optimization. We use `FBDF()` as the solver due to the stiff nature of the brusselators euqation. Other solvers like `KenCarp47()` could also be used.
 
 To efficiently compute gradients of the loss with respect to the neural network parameters, we use an adjoint sensitivity method (`GaussAdjoint`), which performs high-accuracy quadrature-based integration of the adjoint equations. This approach enables scalable and memory-efficient training for stiff PDEs by avoiding full trajectory storage while maintaining accurate gradient estimates.
 
@@ -206,8 +211,8 @@ The loss function and initial evaluation are implemented as follows:
 ```@example bruss
 println("[Loss] Defining loss function...")
 function loss_fn(ps, _)
-prob = ODE.remake(prob_ude_template, p=ps)
-sol = ODE.solve(prob, ODE.FBDF(), saveat=t_points)
+prob = ODE.remake(prob_ude_template, p = ps)
+sol = ODE.solve(prob, ODE.FBDF(), saveat = t_points)
 # Failed solve
 if !SciMLBase.successful_retcode(sol)
 return Inf32
@@ -218,7 +223,7 @@ function loss_fn(ps, _)
 end
 ```
 
-Once the loss function is defined, we use the ADAM optimizer to train the neural network. The optimization problem is defined using SciML's ```Optimization.jl``` tools, and gradients are computed via automatic differentiation using ```AutoZygote()``` from ```SciMLSensitivity```:
+Once the loss function is defined, we use the ADAM optimizer to train the neural network. The optimization problem is defined using SciML's `Optimization.jl` tools, and gradients are computed via automatic differentiation using `AutoZygote()` from `SciMLSensitivity`:
 
 ```@example bruss
 println("[Training] Starting optimization...")
@@ -227,7 +232,6 @@ optf = OPT.OptimizationFunction(loss_fn, SMS.AutoZygote())
 optprob = OPT.OptimizationProblem(optf, ps_init)
 loss_history = Float32[]
 
-
 callback = (ps, l) -> begin
 push!(loss_history, l)
 println("Epoch $(length(loss_history)): Loss = $l")
@@ -238,7 +242,7 @@ end
 Finally to run everything:
 
 ```@example bruss
-res = OPT.solve(optprob, OPO.Optimisers.Adam(0.01), callback=callback, maxiters=100)
+res = OPT.solve(optprob, OPO.Optimisers.Adam(0.01), callback = callback, maxiters = 100)
 ```
 
 ```@example bruss
@@ -248,22 +252,22 @@ res.objective
 ```@example bruss
 println("[Plot] Final U/V comparison plots...")
 center = N_GRID ÷ 2
-sol_final = ODE.solve(ODE.remake(prob_ude_template, p=res.u), ODE.FBDF(), saveat=t_points)
+sol_final = ODE.solve(ODE.remake(prob_ude_template, p = res.u), ODE.FBDF(), saveat = t_points)
 pred = Array(sol_final)
 
-p1 = Plots.plot(t_points, u_true[center,center,1,:], lw=2, label="U True")
-Plots.plot!(p1, t_points, pred[center,center,1,:], lw=2, ls=:dash, label="U Pred")
+p1 = Plots.plot(t_points, u_true[center, center, 1, :], lw = 2, label = "U True")
+Plots.plot!(p1, t_points, pred[center, center, 1, :], lw = 2, ls = :dash, label = "U Pred")
 Plots.title!(p1, "Center U Concentration Over Time")
 
-p2 = Plots.plot(t_points, u_true[center,center,2,:], lw=2, label="V True")
-Plots.plot!(p2, t_points, pred[center,center,2,:], lw=2, ls=:dash, label="V Pred")
+p2 = Plots.plot(t_points, u_true[center, center, 2, :], lw = 2, label = "V True")
+Plots.plot!(p2, t_points, pred[center, center, 2, :], lw = 2, ls = :dash, label = "V Pred")
 Plots.title!(p2, "Center V Concentration Over Time")
 
-Plots.plot(p1, p2, layout=(1,2), size=(900,400))
+Plots.plot(p1, p2, layout = (1, 2), size = (900, 400))
 ```
 
 ## Results and Conclusion
 
 After training the Universal Differential Equation (UDE), we compared the predicted dynamics to the ground truth for both chemical species.
 
-The low training loss shows us that the neural network in the UDE was able to understand the underlying dynamics, and it was able to learn the $U^2V$ term in the partial differential equation.
+The low training loss shows us that the neural network in the UDE was able to understand the underlying dynamics, and it was able to learn the $U^2V$ term in the partial differential equation.
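Most of the brusselator.md hunks above only change spacing and literal spelling (for example `u[i,j,1]` to `u[i, j, 1]` and `4f0` to `4.0f0`). A quick REPL check, illustrative and not part of the commit, confirms the literal rewrite is behavior-preserving:

```julia
# Both spellings construct the exact same Float32 values, so the formatter's
# change to the numeric literals cannot alter any computed result.
4f0 === 4.0f0                                                     # true
22f0 * (0.5f0)^(3f0 / 2f0) === 22.0f0 * (0.5f0)^(3.0f0 / 2.0f0)   # true
```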

docs/src/examples/sde/SDE_control.md

Lines changed: 8 additions & 4 deletions
@@ -164,7 +164,8 @@ NG = DNP.NoiseGrid(myparameters.ts, W1)
 p_all = CA.ComponentArray(p_nn = p_nn,
 myparameters = [myparameters.Δ, myparameters.Ωmax, myparameters.κ])
 # define SDE problem
-prob = SDE.SDEProblem{true}(qubit_drift!, qubit_diffusion!, vec(u0[:, 1]), myparameters.tspan,
+prob = SDE.SDEProblem{true}(
+qubit_drift!, qubit_diffusion!, vec(u0[:, 1]), myparameters.tspan,
 p_all,
 callback = callback, noise = NG)
@@ -175,7 +176,8 @@ function g(u, p, t)
 cdR = @view u[2, :, :]
 ceI = @view u[3, :, :]
 cdI = @view u[4, :, :]
-p[1] * Statistics.mean((cdR .^ 2 + cdI .^ 2) ./ (ceR .^ 2 + cdR .^ 2 + ceI .^ 2 + cdI .^ 2))
+p[1] *
+Statistics.mean((cdR .^ 2 + cdI .^ 2) ./ (ceR .^ 2 + cdR .^ 2 + ceI .^ 2 + cdI .^ 2))
 end
 
 function loss(p_nn; alg = SDE.EM(), sensealg = SMS.BacksolveAdjoint(autojacvec = SMS.ReverseDiffVJP()))
@@ -502,7 +504,8 @@ NG = DNP.NoiseGrid(myparameters.ts, W1)
 p_all = CA.ComponentArray(p_nn = p_nn,
 myparameters = [myparameters.Δ; myparameters.Ωmax; myparameters.κ])
 # define SDE problem
-prob = SDE.SDEProblem{true}(qubit_drift!, qubit_diffusion!, vec(u0[:, 1]), myparameters.tspan,
+prob = SDE.SDEProblem{true}(
+qubit_drift!, qubit_diffusion!, vec(u0[:, 1]), myparameters.tspan,
 p_all,
 callback = callback, noise = NG)
 ```
@@ -526,7 +529,8 @@ function g(u, p, t)
 cdR = @view u[2, :, :]
 ceI = @view u[3, :, :]
 cdI = @view u[4, :, :]
-p[1] * Statistics.mean((cdR .^ 2 + cdI .^ 2) ./ (ceR .^ 2 + cdR .^ 2 + ceI .^ 2 + cdI .^ 2))
+p[1] *
+Statistics.mean((cdR .^ 2 + cdI .^ 2) ./ (ceR .^ 2 + cdR .^ 2 + ceI .^ 2 + cdI .^ 2))
 end
 
 function loss(p_nn; alg = SDE.EM(), sensealg = SMS.BacksolveAdjoint(autojacvec = SMS.ReverseDiffVJP()))

docs/src/manual/differential_equation_sensitivities.md

Lines changed: 4 additions & 3 deletions
@@ -55,7 +55,8 @@ by which the derivative is computed. For example:
 
 ```julia
 function loss(u0, p)
-sum(ODE.solve(prob, ODE.Tsit5(), u0 = u0, p = p, saveat = 0.1, sensealg = SMS.ForwardSensitivity()))
+sum(ODE.solve(prob, ODE.Tsit5(), u0 = u0, p = p, saveat = 0.1,
+sensealg = SMS.ForwardSensitivity()))
 end
 du0, dp = Zygote.gradient(loss, u0, p)
 ```
@@ -76,7 +77,7 @@ differentiation). Generally:
 
 - Continuous sensitivity analysis methods only support a subset of
 equations, which currently includes:
-
+
 + ODEProblem (with mass matrices for differential-algebraic equations (DAEs)
 + SDEProblem
 + SteadyStateProblem / NonlinearProblem
@@ -113,7 +114,7 @@ is:
 `TrackerAdjoint` with an out-of-place definition may currently be the best option.
 
 !!! note
-
+
 Compatibility with direct automatic differentiation algorithms (`ForwardDiffSensitivity`,
 `ReverseDiffAdjoint`, etc.) can be queried using the
 `SciMLBase.isautodifferentiable(::SciMLAlgorithm)` trait function.

docs/src/tutorials/adjoint_continuous_functional.md

Lines changed: 4 additions & 2 deletions
@@ -85,7 +85,8 @@ To get the adjoint sensitivities, we call:
 ```@example continuousadjoint
 prob = ODE.ODEProblem(f, [1.0; 1.0], (0.0, 10.0), p)
 sol = ODE.solve(prob, ODE.DP8())
-res = SMS.adjoint_sensitivities(sol, ODE.Vern9(), dgdu_continuous = dg, g = g, abstol = 1e-8,
+res = SMS.adjoint_sensitivities(
+sol, ODE.Vern9(), dgdu_continuous = dg, g = g, abstol = 1e-8,
 reltol = 1e-8)
 ```
 
@@ -99,7 +100,8 @@ import Calculus
 function G(p)
 tmp_prob = ODE.remake(prob, p = p)
 sol = ODE.solve(tmp_prob, ODE.Vern9(), abstol = 1e-14, reltol = 1e-14)
-res, err = QuadGK.quadgk((t) -> sum(sol(t) .^ 2) ./ 2, 0.0, 10.0, atol = 1e-14, rtol = 1e-10)
+res,
+err = QuadGK.quadgk((t) -> sum(sol(t) .^ 2) ./ 2, 0.0, 10.0, atol = 1e-14, rtol = 1e-10)
 res
 end
 res2 = FD.gradient(G, [1.5, 1.0, 3.0])

docs/src/tutorials/data_parallel.md

Lines changed: 6 additions & 3 deletions
@@ -170,13 +170,15 @@ solve this in serial with 100 trajectories. Note that `i` will thus run
 from `1:100`.
 
 ```@example dataparallel
-sim = ODE.solve(ensemble_prob, ODE.Tsit5(), ODE.EnsembleSerial(), saveat = 0.1, trajectories = 100)
+sim = ODE.solve(
+ensemble_prob, ODE.Tsit5(), ODE.EnsembleSerial(), saveat = 0.1, trajectories = 100)
 ```
 
 and thus running in multithreading would be:
 
 ```@example dataparallel
-sim = ODE.solve(ensemble_prob, ODE.Tsit5(), ODE.EnsembleThreads(), saveat = 0.1, trajectories = 100)
+sim = ODE.solve(
+ensemble_prob, ODE.Tsit5(), ODE.EnsembleThreads(), saveat = 0.1, trajectories = 100)
 ```
 
 This whole mechanism is differentiable, so we then put it in a training
@@ -193,7 +195,8 @@ Changing to distributed computing is very simple as well. The setup is
 all the same, except you utilize `EnsembleDistributed` as the ensembler:
 
 ```@example dataparallel
-sim = ODE.solve(ensemble_prob, ODE.Tsit5(), ODE.EnsembleDistributed(), saveat = 0.1, trajectories = 100)
+sim = ODE.solve(
+ensemble_prob, ODE.Tsit5(), ODE.EnsembleDistributed(), saveat = 0.1, trajectories = 100)
 ```
 
 Note that for this to work, you need to ensure that your processes are

docs/src/tutorials/direct_sensitivity.md

Lines changed: 2 additions & 1 deletion
@@ -113,7 +113,8 @@ sensitivities, call:
 
 ```@example directsense
 ts = 0:0.5:10
-res = SMS.adjoint_sensitivities(sol, ODE.Vern9(), t = ts, dgdu_discrete = dg, abstol = 1e-14,
+res = SMS.adjoint_sensitivities(
+sol, ODE.Vern9(), t = ts, dgdu_discrete = dg, abstol = 1e-14,
 reltol = 1e-14)
 ```

docs/src/tutorials/training_tips/divergence.md

Lines changed: 2 additions & 1 deletion
@@ -28,7 +28,8 @@ end
 A full example making use of this trick is:
 
 ```@example divergence
-import OrdinaryDiffEq as ODE, SciMLSensitivity as SMS, SciMLBase, Optimization as OPT, OptimizationOptimisers as OPO, Plots
+import OrdinaryDiffEq as ODE, SciMLSensitivity as SMS, SciMLBase, Optimization as OPT,
+OptimizationOptimisers as OPO, Plots
 
 function lotka_volterra!(du, u, p, t)
 rab, wol = u
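A cleanup commit like this is usually followed by a CI check so the formatting does not drift again. A minimal sketch of such a gate, assuming JuliaFormatter and the repository's own formatter configuration (this script is not part of the commit):

```julia
# Hypothetical CI-style formatting check: verify the tree without rewriting any files.
using JuliaFormatter

is_formatted = format("."; overwrite = false)  # returns true when no file would change
is_formatted || error("Formatting check failed: run JuliaFormatter locally and commit the result.")
```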
