src/lqg.jl (45 additions, 13 deletions)
@@ -68,6 +68,11 @@ Several functions are defined for instances of `LQGProblem`
 - [`observer_controller`](@ref)
 
 A video tutorial on how to use the LQG interface is available [here](https://youtu.be/NuAxN1mGCPs)
+
+## Introduction of references
+
+The most principled way of introducing references is to add references as measured inputs to the extended statespace model, and to let the performance output `z` be the differences between the references and the outputs for which references are provided.
+
+A less cumbersome way is to not consider references when constructing the `LQGProblem`, and instead pass the `z` keyword argument to [`extended_controller`](@ref) in order to obtain a closed-loop system from state references to controlled outputs, and use some form of inverse of the DC gain of this system (or one of its subsystems) to pre-compensate the reference input.
 """
 struct LQGProblem
     sys::ExtendedStateSpace
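
To make the second approach concrete, here is a minimal sketch. The plant, the weight matrices, and the destructuring of a second return value from `extended_controller` are illustrative assumptions, not part of this diff:

```julia
using RobustAndOptimalControl, ControlSystemsBase, LinearAlgebra

# Hypothetical double-integrator plant; all names and weights are illustrative
A  = [0 1; 0 0]
B  = [0; 1;;]          # 2×1 control-input matrix (B2)
B1 = [1.0 0; 0 1]      # process noise drives both states
C  = [1 0]             # measured output = performance output z

Pe = ExtendedStateSpace(A, B1, B, C, C)
l  = LQGProblem(Pe, I(1), 0.1I(1), I(2), 0.01I(1)) # Q1, Q2, R1, R2 (assumed dims)

# Closed loop from state references xᵣ to output 1, via the new `z` keyword
Ce, cl = extended_controller(l, z = [1])

# Pre-compensate references with a (pseudo-)inverse of the closed-loop DC gain
Lr = pinv(dcgain(cl)) # maps an output reference r to xᵣ = Lr * r
```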
@@ -266,15 +271,15 @@ function extended_controller(K::AbstractStateSpace)
 end
 
 """
-    extended_controller(l::LQGProblem, L = lqr(l), K = kalman(l))
+    extended_controller(l::LQGProblem, L = lqr(l), K = kalman(l); z = nothing)
 
-Returns an expression for the controller that is obtained when state-feedback `u = -L(xᵣ-x̂)` is combined with a Kalman filter with gain `K` that produces state estimates x̂. The controller is an instance of `ExtendedStateSpace` where `C2 = -L, D21 = L` and `B2 = K`.
+Returns a statespace system representing the controller that is obtained when state-feedback `u = L(xᵣ-x̂)` is combined with a Kalman filter with gain `K` that produces state estimates x̂. The controller is an instance of `ExtendedStateSpace` where `C2 = -L, D21 = L` and `B2 = K`.
 
-The returned system has *inputs* `[xᵣ; y]` and outputs the control signal `u`. If a reference model `R` is used to generate state references `xᵣ`, the controller from `e = ry - y -> u` is given by
+The returned system has *inputs* `[xᵣ; y]` and outputs the control signal `u`. If a reference model `R` is used to generate state references `xᵣ`, the controller from `(ry, y) -> u` where `ry - y = e` is given by
 ```julia
 Ce = extended_controller(l)
-Ce = named_ss(Ce; x = :xC, y = :u, u = [R.y; :y^l.ny]) # Name the inputs of Ce the same as the outputs of `R`.
 Since the negative part of the feedback is built into the returned system, we have
@@ -283,20 +288,31 @@ C = observer_controller(l)
 Ce = extended_controller(l)
 system_mapping(Ce) == -C
 ```
+
+Please note, without the reference pre-filter, the DC gain from references to controlled outputs may not be identity. If a vector of output indices is provided through the keyword argument `z`, the closed-loop system from state reference `xᵣ` to outputs `z` is returned as a second return argument. The inverse of the DC-gain of this closed-loop system may be useful to compensate for the DC-gain of the controller.
 """
-function extended_controller(l::LQGProblem, L = lqr(l), K = kalman(l))
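
A short usage sketch of the docstring above, reusing the hypothetical `LQGProblem` `l` from the first sketch; the signal name `:xr` and the `l.nx` field access are assumptions, made by analogy with the documented `l.ny`:

```julia
using RobustAndOptimalControl, ControlSystemsBase

# `l` is the LQGProblem constructed in the first sketch above
C  = observer_controller(l)
Ce = extended_controller(l)
@assert system_mapping(Ce) == -C # the y → u channel has the feedback sign built in

# Name the controller's signals to prepare it for `connect`-style wiring;
# inputs are [xᵣ; y], so we generate l.nx reference names and l.ny output names
Cen = named_ss(ss(Ce); x = :xC, y = :u, u = [:xr^l.nx; :y^l.ny])
```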
@@ -339,23 +355,39 @@ end
 Return the feedforward controller ``C_{ff}`` that maps references to plant inputs:
 ``u = C_{fb}y + C_{ff}r``
 
+The following should hold
+```
+Cff = RobustAndOptimalControl.ff_controller(l)
+Cfb = observer_controller(l)
+Gcl = feedback(system_mapping(l), Cfb) * Cff # Note the comma in feedback, P/(I + PC) * Cff
+dcgain(Gcl) ≈ I # Or some I-like non-square matrix
+```
+
+Note, if [`extended_controller`](@ref) is used, the DC-gain compensation above cannot be used. The [`extended_controller`](@ref) assumes that the references enter like `u = L(xᵣ - x̂)`.
+
 See also [`observer_controller`](@ref).
 """
-function ff_controller(l::LQGProblem, L = lqr(l), K = kalman(l))
+function ff_controller(l::LQGProblem, L = lqr(l), K = kalman(l); comp_dc = true)
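
A usage sketch for the new `comp_dc` keyword. That `comp_dc = true` (the default) bakes the static-gain compensation into ``C_{ff}`` is an assumption inferred from the surrounding text, not confirmed by this diff:

```julia
using RobustAndOptimalControl, ControlSystemsBase, LinearAlgebra

# `l` is the LQGProblem constructed in the first sketch above
Cff  = RobustAndOptimalControl.ff_controller(l)                 # comp_dc = true (default)
Cff0 = RobustAndOptimalControl.ff_controller(l; comp_dc = false)

Cfb = observer_controller(l)
Gcl = feedback(system_mapping(l), Cfb) * Cff # P/(I + P*Cfb) * Cff
dcgain(Gcl) ≈ I                              # documented to hold (with compensation)
```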
     closedloop(l::LQGProblem, L = lqr(l), K = kalman(l))
 
 Closed-loop system as defined in Glad and Ljung eq. 8.28. Note, this definition of closed loop is not the same as lft(P, K), which has B1 instead of B2 as input matrix. Use `lft(l)` to get the system from disturbances to controlled variables `w -> z`.
 
-The return value will be the closed loop from reference only, other disturbance signals (B1) are ignored. See [`feedback`](@ref) for a more advanced option.
+The return value will be the closed loop from the filtered reference only; other disturbance signals (B1) are ignored. See [`feedback`](@ref) for a more advanced option. This function assumes that the control signal is computed as `u = r̃ - Lx̂` (not `u = L(xᵣ - x̂)`), i.e., the feedforward signal `r̃` is added directly to the plant input. `r̃` must thus be produced by an inverse-like model that takes state references and outputs the feedforward signal.
 
 Use `static_gain_compensation` to adjust the gain from references acting on the input B2, `dcgain(closedloop(l))*static_gain_compensation(l) ≈ I`
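
Finally, a sketch of the documented `static_gain_compensation` identity, again on the hypothetical `l` from the first sketch:

```julia
using RobustAndOptimalControl, LinearAlgebra

# `l` is the LQGProblem constructed in the first sketch above
Gcl = closedloop(l)                              # closed loop from the filtered reference r̃
dcgain(Gcl) * static_gain_compensation(l) ≈ I    # the documented identity, up to numerics
```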