Commit b37fe76

update from AI feedback
1 parent 28664de commit b37fe76

1 file changed (+13 −5 lines)


docs/src/symbolic_ude_tutorial.md

Lines changed: 13 additions & 5 deletions
@@ -29,17 +29,17 @@ sol_true = solve(oprob_true)
 plot(sol_true; lw = 6, idxs = [X, Y])
 ```
 
-Finally, we generate noisy measured samples from both `X` and `Y` (to which we will fir the UDE).
+Finally, we generate noisy measured samples from both `X` and `Y` (to which we will fit the UDE).
 ```@example symbolic_ude
 sample_t = range(0.0, tend; length = 20)
-sample_X = [(0.8 + 0.4rand()) * X for X in sol_true(sample_t; idxs = X)]
-sample_Y = [(0.8 + 0.4rand()) * Y for Y in sol_true(sample_t; idxs = Y)]
+sample_X = [(0.8 + 0.4rand()) * X_sample for X_sample in sol_true(sample_t; idxs = X)]
+sample_Y = [(0.8 + 0.4rand()) * Y_sample for Y_sample in sol_true(sample_t; idxs = Y)]
 plot!(sample_t, sample_X, seriestype = :scatter, label = "X (data)", color = 1, ms = 6, alpha = 0.7)
 plot!(sample_t, sample_Y, seriestype = :scatter, label = "Y (data)", color = 2, ms = 6, alpha = 0.7)
 ```
 
 ### UDE declaration and training
-First, we used Lux.jl to declare the neural network we wish to use for our UDE. For this case, we can use a fairly small network. We use `softplus` throughout the network we ensure that the fitted UDE function is positive (for our application this is the case, however, it might not always be true).
+First, we use Lux.jl to declare the neural network we wish to use for our UDE. For this case, we can use a fairly small network. We use `softplus` throughout the network to ensure that the fitted UDE function is positive (for our application this is the case; however, it might not always be true).
 ```@example symbolic_ude
 using Lux
 nn_arch = Lux.Chain(
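
The hunk ends mid-definition, so the tutorial's actual layers are not shown here. Purely for orientation, a minimal sketch of the kind of small, `softplus`-only Lux.jl chain the paragraph above describes; the layer widths are assumptions, not the tutorial's real architecture:

```julia
using Lux

# Hypothetical sketch: the layer sizes are assumed, since the diff truncates
# the real definition. Using `softplus` in every layer keeps the network's
# output positive, as the tutorial's paragraph explains.
nn_arch = Lux.Chain(
    Lux.Dense(2 => 5, softplus),
    Lux.Dense(5 => 5, softplus),
    Lux.Dense(5 => 1, softplus))
```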
@@ -67,7 +67,15 @@ eqs_ude = [
 
 We can now fit our UDE model (including the neural network and the parameter d) to the data. First, we define a loss function which compares the UDE's simulation to the data.
 ```@example symbolic_ude
-
+function loss(ps, (oprob_base, set_ps, sample_t, sample_X, sample_Y))
+    p = set_ps(oprob_base, ps)
+    new_oprob = remake(oprob_base; p)
+    new_osol = solve(new_oprob; saveat = sample_t, verbose = false, maxiters = 10000)
+    SciMLBase.successful_retcode(new_osol) || return Inf # Simulation failed -> Inf loss.
+    x_error = sum((x_sim - x_data)^2 for (x_sim, x_data) in zip(new_osol[X], sample_X))
+    y_error = sum((y_sim - y_data)^2 for (y_sim, y_data) in zip(new_osol[Y], sample_Y))
+    return x_error + y_error
+end
 ```
 
 Next, we use Optimization.jl to create an `OptimizationProblem`. This uses a similar syntax to normal parameter inference workflows, however, we need to add the entire neural network parameterisation to the optimization parameter vector.
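
The hunk ends at that trailing context line, so the tutorial's actual optimization code lies outside this diff. As a hedged sketch of the step it describes: the names `loss`, `oprob_base`, `set_ps`, `sample_t`, `sample_X`, and `sample_Y` come from the diff above, while the initial parameter vector `ps0` (network weights plus `d`), the AD backend, and the optimiser settings are assumptions:

```julia
using Optimization, OptimizationOptimisers

# Hypothetical sketch: `ps0` and the Adam settings are assumed; only `loss`
# and its tuple of problem data come from the diff above.
optf = OptimizationFunction(loss, Optimization.AutoForwardDiff())
opt_prob = OptimizationProblem(optf, ps0, (oprob_base, set_ps, sample_t, sample_X, sample_Y))
opt_sol = solve(opt_prob, Adam(0.01); maxiters = 1000)
```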
