
Commit bf00432

committed
add support for computing closed loop in extended_controller
1 parent 07dd5e9 commit bf00432

File tree

3 files changed: +164 −8 lines changed


src/lqg.jl

Lines changed: 29 additions & 6 deletions
@@ -68,6 +68,11 @@ Several functions are defined for instances of `LQGProblem`
- [`observer_controller`](@ref)

A video tutorial on how to use the LQG interface is available [here](https://youtu.be/NuAxN1mGCPs)
+
+## Introduction of references
+The most principled way of introducing references is to add the references as measured inputs to the extended statespace model, and to let the performance output `z` be the difference between the references and the outputs for which references are provided.
+
+A less cumbersome way is to not consider references when constructing the `LQGProblem`, and to instead pass the `z` keyword argument to [`extended_controller`](@ref) in order to obtain a closed-loop system from state references to controlled outputs, and use some form of inverse of the DC gain of this system (or one of its subsystems) to pre-compensate the reference input.
"""
struct LQGProblem
    sys::ExtendedStateSpace
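The pre-compensation idea mentioned in the added docstring text can be sketched with plain linear algebra, independent of the package. The 2-state system and all numbers below are made up for illustration; the only fact used is that the DC gain of a stable continuous-time system `ẋ = Ax + Br, z = Cx + Dr` is `D - C*inv(A)*B`.

```julia
using LinearAlgebra

# Hypothetical stable closed-loop system from reference r to controlled output z
A = [-1.0 0.0; 1.0 -2.0]
B = [1.0; 0.0;;]
C = [0.0 1.0]
D = [0.0;;]

# DC gain of ẋ = Ax + Br, z = Cx + Dr is D - C*inv(A)*B (requires A Hurwitz)
G0 = D - C * (A \ B)

# Pre-compensate the reference with the inverse DC gain, r = inv(G0) * r̄,
# so that the steady-state gain from the scaled reference r̄ to z is unity
Lr = inv(G0)
@assert (G0 * Lr)[] ≈ 1.0
```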
@@ -266,9 +271,9 @@ function extended_controller(K::AbstractStateSpace)
end

"""
-    extended_controller(l::LQGProblem, L = lqr(l), K = kalman(l))
+    extended_controller(l::LQGProblem, L = lqr(l), K = kalman(l); z = nothing)

-Returns an expression for the controller that is obtained when state-feedback `u = L(xᵣ-x̂)` is combined with a Kalman filter with gain `K` that produces state estimates x̂. The controller is an instance of `ExtendedStateSpace` where `C2 = -L, D21 = L` and `B2 = K`.
+Returns a statespace system representing the controller that is obtained when state-feedback `u = L(xᵣ-x̂)` is combined with a Kalman filter with gain `K` that produces state estimates x̂. The controller is an instance of `ExtendedStateSpace` where `C2 = -L, D21 = L` and `B2 = K`.

The returned system has *inputs* `[xᵣ; y]` and outputs the control signal `u`. If a reference model `R` is used to generate state references `xᵣ`, the controller from `(ry, y) -> u` where `ry - y = e` is given by
```julia
@@ -284,22 +289,30 @@ Ce = extended_controller(l)
system_mapping(Ce) == -C
```
-Please note, without the reference pre-filter, the DC gain from references to controlled outputs may not be identity.
+Please note, without the reference pre-filter, the DC gain from references to controlled outputs may not be identity. If a vector of output indices is provided through the keyword argument `z`, the closed-loop system from state reference `xᵣ` to the outputs `z` is returned as a second return value. The inverse of the DC gain of this closed-loop system may be useful to compensate for the DC gain of the controller.
"""
-function extended_controller(l::LQGProblem, L = lqr(l), K = kalman(l))
+function extended_controller(l::LQGProblem, L::AbstractMatrix = lqr(l), K::AbstractMatrix = kalman(l); z::Union{Nothing, AbstractVector} = nothing)
    P = system_mapping(l)
    A,B,C,D = ssdata(P)
    Ac = A - B*L - K*C + K*D*L # 8.26b
-    nx = l.nx
+    (; nx, nu, ny) = P
    B1 = zeros(nx, nx) # dynamics not affected by r
    # l.D21 does not appear here, see comment in kalman
    B2 = K # input y
    D21 = L # L*xᵣ # should be D21?
    C2 = -L # - L*x̂
    C1 = zeros(0, nx)
-    ss(Ac, B1, B2, C1, C2; D21, Ts = l.timeevol)
+    Ce0 = ss(Ac, B1, B2, C1, C2; D21, Ts = l.timeevol)
+    if z === nothing
+        return Ce0
+    end
+    r = 1:nx
+    Ce = ss(Ce0)
+    cl = feedback(P, Ce, Z1 = z, Z2 = [], U2 = (1:ny) .+ nx, Y1 = :, W2 = r, W1 = [])
+    Ce0, cl
end
+
"""
    observer_controller(l::LQGProblem, L = lqr(l), K = kalman(l))
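The matrix `Ac = A - B*L - K*C + K*D*L` used above follows from substituting the control law `u = L(xᵣ - x̂)` into a standard Luenberger observer. A quick numeric sanity check of that algebra, using a made-up 2-state model and hypothetical gains `L` and `K` (nothing here comes from the package):

```julia
using LinearAlgebra

# Substituting u = L(xᵣ - x̂) into the observer x̂̇ = A x̂ + B u + K(y - C x̂ - D u)
# gives x̂̇ = (A - B L - K C + K D L) x̂ + (B - K D) L xᵣ + K y.
A = [0.0 1.0; -2.0 -3.0]
B = [0.0; 1.0;;]
C = [1.0 0.0]
D = [0.0;;]
L = [1.0 2.0]    # hypothetical state-feedback gain
K = [1.0; 1.0;;] # hypothetical observer gain

x̂ = [0.3, -0.1]; xᵣ = [1.0, 0.0]; y = [0.7]

u = L * (xᵣ - x̂)
rhs_direct = A*x̂ + B*u + K*(y - C*x̂ - D*u)

Ac = A - B*L - K*C + K*D*L
rhs_packed = Ac*x̂ + (B - K*D)*L*xᵣ + K*y

@assert rhs_direct ≈ rhs_packed
```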
@@ -342,6 +355,16 @@ end
Return the feedforward controller ``C_{ff}`` that maps references to plant inputs:
``u = C_{fb}y + C_{ff}r``
+
+The following should hold:
+```
+Cff = RobustAndOptimalControl.ff_controller(l)
+Cfb = observer_controller(l)
+Gcl = feedback(system_mapping(l), Cfb) * Cff # Note the comma in feedback, P/(I + PC) * Cff
+dcgain(Gcl) ≈ I # Or some I-like non-square matrix
+```
+
+Note, if [`extended_controller`](@ref) is used, the DC-gain compensation above cannot be used. [`extended_controller`](@ref) assumes that the references enter as `u = L(xᵣ - x̂)`.

See also [`observer_controller`](@ref).
"""
function ff_controller(l::LQGProblem, L = lqr(l), K = kalman(l); comp_dc = true)
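The "note the comma" remark in the docstring above distinguishes `feedback(P, C)`, which forms `P/(I + PC)`, from `feedback(P*C)`, which forms `PC/(I + PC)`. At DC this can be checked with scalar gains; `p` and `c` below are arbitrary illustration values, not taken from the package:

```julia
# Scalar DC-gain sanity check: p = plant DC gain, c = feedback-controller DC gain
p, c = 3.0, 0.5
gcl = p / (1 + p * c)  # DC gain of feedback(P, Cfb), i.e. P/(I + PC)
cff = (1 + p * c) / p  # feedforward gain restoring unit DC gain
@assert gcl * cff ≈ 1.0
```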

test/test_extendedstatespace.jl

Lines changed: 1 addition & 0 deletions
@@ -1,6 +1,7 @@
using RobustAndOptimalControl
using ControlSystemsBase
using ControlSystemsBase: balance_statespace
+using Test

G = ssrand(3,4,5)

test/test_lqg.jl

Lines changed: 134 additions & 2 deletions
@@ -1,4 +1,4 @@
-using RobustAndOptimalControl, ControlSystemsBase, LinearAlgebra
+using RobustAndOptimalControl, ControlSystemsBase, LinearAlgebra, Test
# NOTE: Glad, Ljung chap 9 contains several numerical examples that can be used as test cases


@@ -402,4 +402,136 @@ predicted_costf = dot([x0; zeros(nu)], QNf, [x0; zeros(nu)])
@test predicted_costf ≈ predicted_cost

# @show (actual_cost - predicted_cost) / actual_cost
-@test actual_cost ≈ predicted_cost rtol=1e-10
+@test actual_cost ≈ predicted_cost rtol=1e-10
+
+## Double mass model with output penalty
+
+# One input, two z outputs
+P = DemoSystems.double_mass_model(outputs = 1:4)
+# Pe = partition(P, y = 1:2, u = 1)
+Pe = ExtendedStateSpace(P, C1 = P.C[3:4, :], C2 = P.C[1:1, :], B1 = P.C[[2,4], :]')
+
+Q1 = 1.0I(Pe.nz)
+Q2 = 0.1I(Pe.nu)
+R1 = 1.0I(Pe.nw)
+R2 = 0.1I(Pe.ny)
+
+G = LQGProblem(Pe, Q1, Q2, R1, R2)
+Cfb = observer_controller(G)
+Ce = extended_controller(G)
+
+@test Cfb.nu == Pe.ny
+@test Cfb.ny == Pe.nu
+
+Cff = RobustAndOptimalControl.ff_controller(G) # TODO: need to add an argument that determines whether to use state references or references of z
+@test Cff.nu == Pe.nz
+@test Cff.ny == Pe.nu
+
+Gcl2 = feedback(system_mapping(Pe), Cfb) * Cff
+@test size(Gcl2) == (1, 2)
+@test isstable(Gcl2)
+@test dcgain(Gcl2) ≈ [1.0 0] atol=1e-6
+
+
+## One input, one z output
+P = DemoSystems.double_mass_model(outputs = 1:4)
+# Pe = partition(P, y = 1:2, u = 1)
+Pe = ExtendedStateSpace(P, C1 = P.C[3:3, :], C2 = P.C[1:1, :], B1 = P.C[[2,4], :]')
+
+Q1 = 1.0I(Pe.nz)
+Q2 = 0.1I(Pe.nu)
+R1 = 1.0I(Pe.nw)
+R2 = 0.1I(Pe.ny)
+
+G = LQGProblem(Pe, Q1, Q2, R1, R2)
+Cfb = observer_controller(G)
+Ce = extended_controller(G)
+
+@test Cfb.nu == Pe.ny
+@test Cfb.ny == Pe.nu
+
+Cff = RobustAndOptimalControl.ff_controller(G)
+@test Cff.nu == Pe.nz
+@test Cff.ny == Pe.nu
+
+Gcl = feedback(system_mapping(Pe) * Cfb)
+@test size(Gcl) == (1, 1)
+@test isstable(Gcl)
+@test dcgain(Gcl)[] ≈ 1.0
+
+Gcl2 = feedback(system_mapping(Pe), Cfb) * Cff
+@test size(Gcl2) == (1, 1)
+@test isstable(Gcl2)
+@test dcgain(Gcl2)[] ≈ 1.0
+
+## Multibody cartpole tests
+
+lsys = let
+    lsysA = [0.0 0.0 0.0 1.0; 0.0 0.0 1.0 0.0; 4.474102070258828 0.0 0.0 0.0; 39.683507165892394 0.0 0.0 0.0]
+    lsysB = [0.0; 0.0; 0.8415814526118872; 2.3385955754360115;;]
+    lsysC = [0.0 1.0 0.0 0.0; 1.0 0.0 0.0 0.0; 0.0 0.0 1.0 0.0; 0.0 0.0 0.0 1.0]
+    lsysD = [0.0; 0.0; 0.0; 0.0;;]
+    named_ss(ss(lsysA, lsysB, lsysC, lsysD), x=[:revolute₊phi, :prismatic₊s, :prismatic₊v, :revolute₊w], u=[Symbol("u(t)")], y=[Symbol("x(t)"), Symbol("phi(t)"), Symbol("v(t)"), Symbol("w(t)")])
+end
+
+C = lsys.C
+Q1 = Diagonal([10, 10, 10, 1])
+Q2 = Diagonal([0.1])
+
+R1 = lsys.B*Diagonal([1])*lsys.B'
+R2 = Diagonal([0.01, 0.01])
+Pn = lsys[[:x, :phi], :]
+P = ss(Pn)
+
+lqg = LQGProblem(P, Q1, Q2, R1, R2)
+
+# Below we test that the closed-loop DC gain from references to cart position is 1
+Ce = extended_controller(lqg)
+
+# Method 1: compute the DC-gain compensation using ff_controller; this is used as the reference for the other methods. I feel uneasy about the (undocumented) comp_dc = false argument here; ideally, a more generally applicable ff_controller would be implemented that can handle both state and output references etc.
+dc_gain_compensation = dcgain(RobustAndOptimalControl.ff_controller(lqg, comp_dc = false))[]
+
+# Method 2: using the observer controller and static_gain_compensation
+Cfb = observer_controller(lqg)
+@test inv(dcgain((feedback(P, Cfb)*static_gain_compensation(lqg))[1,2]))[] ≈ dc_gain_compensation rtol=1e-8
+
+# Method 3: using the observer controller and the ff_controller
+cl = feedback(system_mapping(lqg), observer_controller(lqg))*RobustAndOptimalControl.ff_controller(lqg, comp_dc = true)
+@test dcgain(cl)[1,2] ≈ 1 rtol=1e-8
+
+# Method 4: build the compensation into R and compute the closed-loop DC gain, should be 1
+R = named_ss(ss(dc_gain_compensation*I(4)), "R") # Reference filter
+Ce = named_ss(ss(Ce); x = :xC, y = :u, u = [R.y; :y^lqg.ny])
+
+Cry = RobustAndOptimalControl.connect([R, Ce]; u1 = R.y, y1 = R.y, w1 = [R.u; :y^lqg.ny], z1=[:u])
+
+connections = [
+    :u => :u
+    [:x, :phi] .=> [:y1, :y2]
+]
+cl = RobustAndOptimalControl.connect([lsys, Cry], connections; w1 = R.u, z1 = [:x, :phi])
+@test inv(dcgain(cl)[1,2]) ≈ 1 rtol=1e-8
+
+# Method 5: close the loop manually with the reference as input and the position as output
+R = named_ss(ss(I(4)), "R") # Reference filter, used for signal names only
+Ce = named_ss(ss(extended_controller(lqg)); x = :xC, y = :u, u = [:Ry^4; :y^lqg.ny])
+cl = feedback(lsys, Ce, z1 = [:x], z2=[], u2=:y^2, y1 = [:x, :phi], w2=[:Ry2], w1=[])
+@test inv(dcgain(cl)[]) ≈ dc_gain_compensation rtol=1e-8
+
+cl = feedback(lsys, Ce, z1 = [:x, :phi], z2=[], u2=:y^2, y1 = [:x, :phi], w2=[:Ry2], w1=[])
+@test pinv(dcgain(cl)) ≈ [dc_gain_compensation 0] atol=1e-8
+
+# Method 6: use the z argument to extended_controller to compute the closed-loop transfer function
+Ce, cl = extended_controller(lqg, z=[1, 2])
+@test pinv(dcgain(cl)[1,2]) ≈ dc_gain_compensation atol=1e-8
+
+Ce, cl = extended_controller(lqg, z=[1])
+@test pinv(dcgain(cl)[1,2]) ≈ dc_gain_compensation atol=1e-8
