To train the model, we'll sample 1000 points from the domain using [`NeuralPDE.QuasiRandomTraining`](https://docs.sciml.ai/NeuralPDE/stable/manual/training_strategies/#NeuralPDE.QuasiRandomTraining).
See the [NeuralPDE.jl docs](https://docs.sciml.ai/NeuralPDE/stable/) for more on how the `PDESystem` will be converted into an `OptimizationProblem`.
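As a rough sketch of that conversion (the network size, optimizer, and variable names here are illustrative assumptions, not the demo's actual settings), the NeuralPDE.jl workflow looks something like:

```julia
using NeuralPDE, Lux, Optimization, OptimizationOptimisers

# assumed: `pde_system` is the PDESystem produced earlier in the demo
chain = Chain(Dense(2, 16, tanh), Dense(16, 1))  # illustrative network

# discretize the PDESystem into an OptimizationProblem using the sampling strategy
discretization = PhysicsInformedNN(chain, QuasiRandomTraining(1000))
prob = discretize(pde_system, discretization)

# train the network by minimizing the physics-informed loss
res = Optimization.solve(prob, OptimizationOptimisers.Adam(0.01); maxiters = 300)
```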
```@example SHO
using NeuralPDE

# Sample 1000 points from the domain via quasirandom training
# (variable name `strategy` assumed for illustration)
strategy = QuasiRandomTraining(1000)
```
We now define our Lyapunov candidate structure along with the form of the Lyapunov conditions we'll be using. The candidate structure chosen here structurally enforces nonnegativity, but doesn't guarantee ``V([0, 0]) = 0``.
We therefore don't need a term in the loss function enforcing ``V(x) > 0 \, \forall x \ne 0``, but we do need something enforcing ``V([0, 0]) = 0``.
So, we use [`DontCheckNonnegativity(check_fixed_point = true)`](@ref).
To train for exponential stability we use [`ExponentialStability`](@ref), but we must specify the rate of exponential decrease, which we know in this case to be ``\zeta \omega_0``.
```@example SHO
using NeuralLyapunov

# Define neural Lyapunov structure and corresponding minimization condition
# (constructor name and argument assumed for illustration; the structure
# enforces nonnegativity, so we only check the fixed point)
structure = NonnegativeStructure(2)
minimization_condition = DontCheckNonnegativity(check_fixed_point = true)

# Define Lyapunov decrease condition
# Damped SHO has exponential stability at a rate of k = ζ * ω_0, so we train to certify that rate
# (ζ and ω_0 assumed defined earlier in the demo)
decrease_condition = ExponentialStability(ζ * ω_0)
```
`docs/src/demos/policy_search.md`
In this example, we'll use the [`Pendulum`](@ref) model in [NeuralLyapunovProblemLibrary.jl](../lib.md).
Since the angle ``\theta`` is periodic with period ``2\pi``, our box domain will be one period in ``\theta`` and an interval in ``\frac{d\theta}{dt}``.
```@example policy_search
using ModelingToolkit, NeuralLyapunovProblemLibrary
```
```@example policy_search
upright_equilibrium = [π, 0.0]
θ, ω = unknowns(pendulum)
bounds = [
θ ∈ (0, 2π),
ω ∈ (-2.0, 2.0)
]
```
We'll use an architecture that's ``2\pi``-periodic in ``\theta`` so that we can train on just one period of ``\theta`` and don't need to add any periodic boundary conditions.
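One way to build such an architecture (a sketch assuming Lux's `PeriodicEmbedding` layer and illustrative layer sizes; the demo's actual network may differ) is to embed ``\theta`` as ``(\cos\theta, \sin\theta)`` before the dense layers:

```julia
using Lux

# Periodically embed the first input (θ) with period 2π, then apply a small
# dense network; layer sizes here are illustrative
chain = Chain(
    PeriodicEmbedding([1], [2π]),  # (θ, ω) -> (cos θ, sin θ, ω), 2π-periodic in θ
    Dense(3, 16, tanh),
    Dense(16, 1)
)
```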
The Lyapunov candidate structure we use here structurally enforces positive definiteness.
We therefore use [`DontCheckNonnegativity()`](@ref).
We only require asymptotic stability in this example, but we use [`make_RoA_aware`](@ref) to only penalize positive values of ``\dot{V}(x)`` when ``V(x) \le 1``.
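A hedged sketch of how those two choices might be combined (the keyword name `ρ` for the region-of-attraction level is an assumption):

```julia
using NeuralLyapunov

# Positive definiteness is enforced structurally, so skip the nonnegativity check
minimization_condition = DontCheckNonnegativity()

# Only penalize positive V̇(x) inside the sublevel set {x : V(x) ≤ 1}
decrease_condition = make_RoA_aware(AsymptoticStability(); ρ = 1.0)
```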