docs/src/symbolic_ude_tutorial.md: 13 additions & 5 deletions
@@ -29,17 +29,17 @@ sol_true = solve(oprob_true)
 plot(sol_true; lw = 6, idxs = [X, Y])
 ```
 
-Finally, we generate noisy measured samples from both `X` and `Y` (to which we will fir the UDE).
+Finally, we generate noisy measured samples from both `X` and `Y` (to which we will fit the UDE).
 ```@example symbolic_ude
 sample_t = range(0.0, tend; length = 20)
-sample_X = [(0.8 + 0.4rand()) * X for X in sol_true(sample_t; idxs = X)]
-sample_Y = [(0.8 + 0.4rand()) * Y for Y in sol_true(sample_t; idxs = Y)]
+sample_X = [(0.8 + 0.4rand()) * X_sample for X_sample in sol_true(sample_t; idxs = X)]
+sample_Y = [(0.8 + 0.4rand()) * Y_sample for Y_sample in sol_true(sample_t; idxs = Y)]
 plot!(sample_t, sample_X, seriestype = :scatter, label = "X (data)", color = 1, ms = 6, alpha = 0.7)
 plot!(sample_t, sample_Y, seriestype = :scatter, label = "Y (data)", color = 2, ms = 6, alpha = 0.7)
 ```
 
 ### UDE declaration and training
-First, we used Lux.jl to declare the neural network we wish to use for our UDE. For this case, we can use a fairly small network. We use `softplus` throughout the network we ensure that the fitted UDE function is positive (for our application this is the case, however, it might not always be true).
+First, we use Lux.jl to declare the neural network we wish to use for our UDE. For this case, we can use a fairly small network. We use `softplus` throughout the network to ensure that the fitted UDE function is positive (this holds for our application, but it might not always be true).
 ```@example symbolic_ude
 using Lux
 nn_arch = Lux.Chain(
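For orientation, a fairly small Lux.jl chain of the kind described above might look like the following sketch; the layer widths and the input/output dimensions are illustrative assumptions, not the tutorial's actual architecture:

```julia
using Lux

# Illustrative sketch only: softplus on every layer keeps the network's
# output positive, as the paragraph above describes. All sizes are assumed.
nn_arch = Lux.Chain(
    Lux.Dense(1 => 5, softplus),
    Lux.Dense(5 => 5, softplus),
    Lux.Dense(5 => 1, softplus)
)
```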
@@ -67,7 +67,15 @@ eqs_ude = [
 
 We can now fit our UDE model (including the neural network and the parameter `d`) to the data. First, we define a loss function which compares the UDE's simulation to the data.
 ```@example symbolic_ude
-
+function loss(ps, (oprob_base, set_ps, sample_t, sample_X, sample_Y))
+    # Assumed reconstruction of the simulation step: build the new parameter
+    # set, remake the problem, and solve it at the sample time points.
+    new_oprob = remake(oprob_base; p = set_ps(oprob_base, ps))
+    new_osol = solve(new_oprob; saveat = sample_t, verbose = false)
+    # Sum of squared errors between the simulation and the measured samples.
+    x_error = sum((x_sim - x_data)^2 for (x_sim, x_data) in zip(new_osol[X], sample_X))
+    y_error = sum((y_sim - y_data)^2 for (y_sim, y_data) in zip(new_osol[Y], sample_Y))
+    return x_error + y_error
+end
 ```
 
 Next, we use Optimization.jl to create an `OptimizationProblem`. This uses a similar syntax to normal parameter inference workflows; however, we need to add the entire neural network parameterisation to the optimization parameter vector.
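To make that last step concrete, here is a rough sketch of what such a setup could look like. The optimiser, the AD backend, and the name `ps_init` (an initial guess bundling the network parameters with `d`) are all assumptions for illustration, not taken from the tutorial:

```julia
using Optimization, OptimizationOptimisers

# Hypothetical setup: ps_init bundles the neural network's parameters with the
# unknown model parameter d (e.g. in a ComponentArray); loss is defined above.
opt_fun = OptimizationFunction(loss, Optimization.AutoForwardDiff())
opt_prob = OptimizationProblem(opt_fun, ps_init,
    (oprob_base, set_ps, sample_t, sample_X, sample_Y))
opt_sol = solve(opt_prob, Adam(0.01); maxiters = 1000)
```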