Commit c669c04

Merge pull request #63 from jwestrenen3/add-class07-materials
Add all Class07 lecture materials
2 parents 6247a79 + 98f6c60 commit c669c04

14 files changed: +10899 −0 lines

class07/Kalman_Filtering.jl

Lines changed: 81 additions & 0 deletions
@@ -0,0 +1,81 @@
## Kalman Filtering
# Goal: Estimate the true state of a system when measurements are noisy.
#
# In many practical systems, we cannot directly observe the full state x_t.
# Instead, we have noisy measurements y_t. The Kalman filter provides the
# optimal linear estimate of the hidden state, assuming linear dynamics
# and Gaussian noise.

using LinearAlgebra

# The Kalman filter operates in two steps: prediction and update.

## Prediction Step:
# Forecast the next state based on the previous estimate and the system model.
#   x_pred = A * x_prev + B * u_prev
#   - uses the known dynamics of the system (A, B) and the previous control input u_prev
#   P_pred = A * P_prev * A' + Q
#   - the uncertainty (covariance) is propagated forward
#   - Q represents process noise: uncertainty in how the system evolves
#   - a larger P_pred means we are less confident in the prediction

## Update Step:
# Correct the prediction using the latest measurement y.
#   K = P_pred * C' * inv(C * P_pred * C' + R)
#   - the Kalman gain K determines how much we trust the measurement vs. the prediction
#   x_hat = x_pred + K * (y - C * x_pred)
#   - the innovation (y - C * x_pred) adjusts our estimate based on new information
#   P = (I - K * C) * P_pred
#   - the covariance decreases after incorporating the measurement, increasing confidence
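Before the general vector implementation, the two steps can be sketched on a scalar system (all numerical values here are assumed toy values, not from the lecture):

```julia
# Scalar toy system (assumed values): x_{k+1} = a * x_k, y = x + noise
a, q, r = 1.0, 0.01, 0.25
x_prev, P_prev = 10.0, 1.0

# Prediction step
x_pred = a * x_prev              # forecast the state from the model
P_pred = a * P_prev * a + q      # propagate the uncertainty forward

# Update step with a measurement y = 10.5
y = 10.5
K = P_pred / (P_pred + r)        # Kalman gain; between 0 and 1 in the scalar case
x_hat = x_pred + K * (y - x_pred)
P = (1 - K) * P_pred             # uncertainty shrinks after the update

println((K, x_hat, P))           # x_hat lies between prediction 10.0 and measurement 10.5
```

With a small r (trustworthy sensor) the gain approaches 1 and the estimate follows the measurement; with a large r it stays near the prediction.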
function kalman_filter(A, B, C, Q, R, y, u=nothing)
    n = size(A, 1)
    N = length(y)
    x_hat = zeros(n, N)    # estimated states
    P = zeros(n, n)        # covariance

    # Handle optional control input
    u = isnothing(u) ? zeros(size(B, 2), N) : u

    for k in 1:N
        # Prediction
        if k == 1
            x_pred = zeros(n)    # initial guess for first step
            P_pred = P + Q
        else
            x_pred = A * x_hat[:, k-1] + B * u[:, k-1]
            P_pred = A * P * A' + Q
        end

        # Update (scalar measurement: C is 1×n, R is a scalar variance)
        S = (C * P_pred * C')[1] + R       # innovation covariance
        K = P_pred * C' / S                # Kalman gain (n×1)
        x_hat[:, k] = x_pred + K * (y[k] .- C * x_pred)
        # I is the UniformScaling identity from LinearAlgebra; do not shadow it locally
        P = (I - K * C) * P_pred
    end

    return x_hat
end

## Example usage:
# A simple demonstration with a small system. This shows how the filter
# reconstructs the true state from noisy measurements.

if abspath(PROGRAM_FILE) == @__FILE__
    # Define system matrices (example: 2-state thermal system)
    A = [1.0 1.0; 0.0 1.0]            # state transition
    B = reshape([0.0, 1.0], 2, 1)     # control input (2×1 matrix so B * u is defined)
    C = [1.0 0.0]                     # measurement matrix (1×2)
    Q = 0.01 * I(2)                   # process noise covariance
    R = 0.1                           # measurement noise variance (scalar)

    # Simulated noisy measurements
    true_states = [20.0, 21.0, 23.0, 22.0]
    y = true_states .+ randn(length(true_states)) * sqrt(R)

    # Run the Kalman filter
    x_hat = kalman_filter(A, B, C, Q, R, y)

    println("Noisy measurements: ", y)
    println("Estimated states:   ", x_hat)
end

class07/LQG_Control.jl

Lines changed: 84 additions & 0 deletions
@@ -0,0 +1,84 @@
# Linear Quadratic Gaussian (LQG) Control
#
# LQG control is about designing an optimal controller for linear systems
# when there is uncertainty in both the system dynamics and the measurements.
#
# System setup:
#   x_{t+1} = A * x_t + B * u_t + w_t
#   y_t     = C * x_t + v_t
# Here, w_t and v_t are Gaussian process and measurement noise.
#
# The goal is to minimize a quadratic cost function:
#   J = E[ sum(x_t' * Q * x_t + u_t' * R * u_t) ]
# which balances performance (state tracking) and control effort.

using LinearAlgebra

# Linear Quadratic Regulator (LQR)
#
# The LQR computes the optimal control law for a deterministic linear system
# when the full state is known. For continuous-time systems:
#   dx/dt = A*x + B*u
# the quadratic cost is:
#   J = ∫ (x'Qx + u'Ru) dt
# and the optimal control law is:
#   u = -K*x
# where K is obtained by solving the continuous-time Riccati equation:
#   A'*P + P*A - P*B*inv(R)*B'*P + Q = 0
#
# Here, we implement a discrete-time version.
31+
function dlqr(A, B, Q, R)
32+
P = copy(Q)
33+
maxiter = 1000
34+
tol = 1e-8
35+
for i in 1:maxiter
36+
P_new = A' * P * A - A' * P * B * inv(R + B' * P * B) * B' * P * A + Q
37+
if norm(P_new - P) < tol
38+
P = P_new
39+
break
40+
end
41+
P = P_new
42+
end
43+
K = inv(R + B' * P * B) * B' * P * A
44+
return K, P
45+
end
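A quick sanity check of `dlqr`: the closed-loop matrix A - B*K should have all eigenvalues inside the unit circle. The definition is repeated in the fence so the sketch runs standalone, and the test system values are assumed:

```julia
using LinearAlgebra

# dlqr as defined above, repeated here so this sketch is self-contained
function dlqr(A, B, Q, R)
    P = copy(Q)
    for i in 1:1000
        P_new = A' * P * A - A' * P * B * inv(R + B' * P * B) * B' * P * A + Q
        if norm(P_new - P) < 1e-8
            P = P_new
            break
        end
        P = P_new
    end
    K = inv(R + B' * P * B) * B' * P * A
    return K, P
end

# Assumed test system: discretized double integrator, dt = 0.1
A = [1.0 0.1; 0.0 1.0]
B = reshape([0.005, 0.1], 2, 1)
Q = Matrix(1.0I, 2, 2)            # state cost
R = fill(0.1, 1, 1)               # control cost as a 1×1 matrix so inv(R + B'*P*B) is defined

K, P = dlqr(A, B, Q, R)

# The closed loop x_{t+1} = (A - B*K) * x_t should be stable:
rho = maximum(abs.(eigvals(A - B * K)))
println("spectral radius of A - B*K: ", rho)   # expect < 1
```

Note that `R` is passed as a 1×1 matrix, not a scalar: `inv` and the sum `R + B' * P * B` both expect matrix operands.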

# Kalman Filter (State Estimation)
#
# In LQG, the controller does not know the true state x_t.
# Instead, it receives noisy measurements y_t and uses a Kalman filter
# to produce an optimal estimate x_hat_t.
#
# The filter updates the estimate according to:
#   x_hat_{t+1} = A*x_hat_t + B*u_t + L*(y_t - C*x_hat_t)
# where L is the Kalman gain chosen to minimize the estimation error variance.
57+
function kalman_filter(A, B, C, Q, R, y)
58+
n = size(A, 1)
59+
x_hat = zeros(n, length(y))
60+
P = zeros(n, n)
61+
for t in 1:length(y)
62+
# Prediction step
63+
if t == 1
64+
x_pred = zeros(n)
65+
P_pred = P + Q
66+
else
67+
x_pred = A * x_hat[:, t-1]
68+
P_pred = A * P * A' + Q
69+
end
70+
# Update step
71+
K = P_pred * C' * inv(C * P_pred * C' + R)
72+
x_hat[:, t] = x_pred + K * (y[t] - C * x_pred)
73+
P = (I - K * C) * P_pred
74+
end
75+
return x_hat
76+
end

# Separation Principle
#
# A key property of LQG control is the separation principle:
# the design of the controller (LQR) and the estimator (Kalman filter)
# can be done independently. Once K and L are computed, the control input is:
#   u_t = -K * x_hat_t
# This allows the system to act as if the true state were known, using only the estimated state.
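The separation principle can be exercised end to end. A hedged sketch: the system matrices, noise levels, and iteration counts are all assumed, the LQR gain is computed by iterating the control Riccati equation exactly as in `dlqr`, and the steady-state Kalman gain L comes from the analogous filter Riccati iteration:

```julia
using LinearAlgebra, Random

function lqg_demo()
    Random.seed!(0)
    A = [1.0 0.1; 0.0 1.0]               # assumed double-integrator dynamics
    B = reshape([0.005, 0.1], 2, 1)
    C = [1.0 0.0]
    Qc = Matrix(1.0I, 2, 2)              # LQR state cost
    Rc = fill(0.1, 1, 1)                 # LQR control cost (1×1)
    Qn = 1e-4 * Matrix(1.0I, 2, 2)       # process noise covariance
    Rn = 0.01                            # measurement noise variance (scalar)

    # LQR gain K: iterate the control Riccati equation (as in dlqr)
    P = copy(Qc)
    for _ in 1:2000
        P = A' * P * A - A' * P * B * inv(Rc + B' * P * B) * B' * P * A + Qc
    end
    K = inv(Rc + B' * P * B) * B' * P * A

    # Steady-state Kalman gain L: iterate the filter Riccati equation
    S = copy(Qn)
    L = zeros(2, 1)
    for _ in 1:2000
        L = A * S * C' / ((C * S * C')[1] + Rn)
        S = A * S * A' - L * C * S * A' + Qn
    end

    # Closed loop: the controller only ever sees the estimate x_hat
    x = [1.0, 0.0]
    x_hat = zeros(2)
    for t in 1:300
        u = -K * x_hat                               # certainty-equivalent control
        y = (C * x)[1] + sqrt(Rn) * randn()          # noisy measurement
        x_hat = A * x_hat + B * u + vec(L) * (y - (C * x_hat)[1])
        x = A * x + B * u + 1e-2 * randn(2)
    end
    return x, x_hat
end

x, x_hat = lqg_demo()
println("final state: ", x)    # regulated toward the origin despite noise
```

Despite never reading the true state, the certainty-equivalent controller drives the system toward the origin, which is the separation principle in action.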
