Commit b8a4a60
Update to optimal growth lecture (#145)
* In progress on change to type-safety + named tuples
* Commented out old one
* Fixed exercise solution code
* Fixed typo
1 parent 64ce8f7 commit b8a4a60
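
The change replaces positional model arguments with a Parameters.jl named tuple of defaults, unpacked inside each function with `@unpack`. A minimal sketch of that pattern; `Model` here is an illustrative stand-in for the `OptimalGrowthModel` defined in the diff below:

```julia
using Parameters

# a named-tuple factory with default field values, in the spirit of OptimalGrowthModel
Model = @with_kw (α = 0.4, β = 0.96)

p = Model()             # all defaults: (α = 0.4, β = 0.96)
p2 = Model(β = 0.9)     # override a single default, as in OptimalGrowthModel(s = 0.05)

@unpack α, β = p2       # pull fields into local variables
@assert α * β ≈ 0.36
```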

1 file changed: lectures/dynamic_programming/optgrowth.md (+79 / -155 lines)
@@ -464,8 +464,8 @@ The next figure illustrates piecewise linear interpolation of an arbitrary funct
 tags: [hide-output]
 ---
 using LinearAlgebra, Statistics
-using Plots, QuantEcon, Interpolations, NLsolve, Optim, Random
-
+using Plots, QuantEcon, Interpolations, NLsolve, Optim, Random, Parameters
+using Optim: maximum, maximizer
 ```

 ```{code-cell} julia
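
Both the old and the new Bellman operator build the continuation value with Interpolations.jl's `LinearInterpolation`, which returns a callable interpolant that can be broadcast over candidate points. A minimal sketch of that usage; the grid and function here are illustrative, not from the lecture:

```julia
using Interpolations

xs = 0.0:0.5:2.0
ys = xs.^2
itp = LinearInterpolation(xs, ys)   # piecewise linear interpolant over the grid

itp(0.75)          # ≈ 0.625, the linear interpolation between 0.25 and 1.0
itp.([0.1, 1.9])   # broadcasts like an ordinary function, as w_func.(...) does in T
```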
@@ -498,21 +498,19 @@ Another advantage of piecewise linear interpolation is that it preserves useful
 Here's a function that implements the Bellman operator using linear interpolation

 ```{code-cell} julia
-using Optim
-
-function T(w, grid, β, u, f, shocks; compute_policy = false)
-    w_func = LinearInterpolation(grid, w)
-    # objective for each grid point
-    objectives = (c -> u(c) + β * mean(w_func.(f(y - c) .* shocks)) for y in grid)
-    results = maximize.(objectives, 1e-10, grid) # solver result for each grid point
-
-    Tw = Optim.maximum.(results)
-    if compute_policy
-        σ = Optim.maximizer.(results)
-        return Tw, σ
+function T(w;p, tol = 1e-10)
+    @unpack β, u, f, ξ, y = p # unpack parameters
+    w_func = LinearInterpolation(y, w)
+
+    Tw = similar(w)
+    σ = similar(w)
+    for (i, y_val) in enumerate(y)
+        # solve maximization for each point in y, using y itself as initial condition.
+        results = maximize(c -> u(c;p) + β * mean(w_func.(f(y_val - c;p) .* ξ)), tol, y_val)
+        Tw[i] = maximum(results)
+        σ[i] = maximizer(results)
     end
-
-    return Tw
+    return (;w = Tw, σ) # returns named tuple of results
 end
 ```

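The rewritten operator leans on Optim.jl's univariate `maximize` together with the `maximum` and `maximizer` accessors imported in the first hunk. A minimal sketch of that calling pattern, with a throwaway concave objective rather than the lecture's Bellman objective:

```julia
using Optim: maximize, maximum, maximizer

# maximize a concave objective on a bracket, as T does at each grid point
res = maximize(c -> log(c) - 0.5 * c, 1e-10, 10.0)

maximizer(res)   # argmax; ≈ 2.0, since d/dc (log(c) - c/2) = 1/c - 1/2
maximum(res)     # objective value at the argmax; ≈ log(2) - 1
```
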
@@ -561,76 +559,34 @@ $$

 Let's code this up now so we can test against it below

-```{code-cell} julia
-α = 0.4
-β = 0.96
-μ = 0
-s = 0.1
-
-c1 = log(1 - α * β) / (1 - β)
-c2 = (μ + α * log(α * β)) / (1 - α)
-c3 = 1 / (1 - β)
-c4 = 1 / (1 - α * β)
-
-# Utility
-u(c) = log(c)
-
-∂u∂c(c) = 1 / c
-
-# Deterministic part of production function
-f(k) = k^α
-
-f′(k) = α * k^(α - 1)
-
-# True optimal policy
-c_star(y) = (1 - α * β) * y
-
-# True value function
-v_star(y) = c1 + c2 * (c3 - c4) + c4 * log(y)
-```
+In addition to the model parameters, we need a grid and some shock draws for Monte Carlo integration.

 ```{code-cell} julia
----
-tags: [remove-cell]
----
-@testset "Primitives Tests" begin
-    @test [c1, c2, c3, c4] ≈ [-12.112707886215421, -0.6380751509296068, 24.99999999999998,
-                              1.6233766233766234]
-    @test ∂u∂c(c1) ≈ -0.08255792258789846
-    @test v_star(3) ≈ -25.245288867900843
+Random.seed!(42) # for reproducible results
+u(c;p) = log(c) # utility
+f(k;p) = k^p.α # deterministic part of production function
+OptimalGrowthModel = @with_kw (α = 0.4, β = 0.96, μ = 0.0, s = 0.1,
+                               u = u, f = f, # defaults defined above
+                               y = range(1e-5, 4.0, length = 200), # grid on y
+                               ξ = exp.(μ .+ s * randn(250)) # monte carlo shocks
+                               ) # named tuples defaults
+
+# True value and policy function
+function v_star(y;p)
+    @unpack α, μ, β = p
+    c1 = log(1 - α * β) / (1 - β)
+    c2 = (μ + α * log(α * β)) / (1 - α)
+    c3 = 1 / (1 - β)
+    c4 = 1 / (1 - α * β)
+    return c1 + c2 * (c3 - c4) + c4 * log(y)
 end
+c_star(y;p) = (1 - p.α * p.β) * y
 ```

 ### A First Test

 To test our code, we want to see if we can replicate the analytical solution numerically, using fitted value function iteration.

-We need a grid and some shock draws for Monte Carlo integration.
-
-```{code-cell} julia
----
-tags: [hide-output]
----
-using Random
-Random.seed!(42) # For reproducible results.
-
-grid_max = 4 # Largest grid point
-grid_size = 200 # Number of grid points
-shock_size = 250 # Number of shock draws in Monte Carlo integral
-
-grid_y = range(1e-5, grid_max, length = grid_size)
-shocks = exp.(μ .+ s * randn(shock_size))
-```
-
-```{code-cell} julia
----
-tags: [remove-cell]
----
-@testset "Shock Invariance Tests" begin
-    #test shocks[4] ≈ 0.9704956010607036 && length(shocks) == 250
-end
-```
-
 Now let's do some tests.

 As one preliminary test, let's see what happens when we apply our Bellman operator to the exact solution $v^*$.
@@ -640,11 +596,14 @@ In theory, the resulting function should again be $v^*$.
 In practice we expect some small numerical error.

 ```{code-cell} julia
-w = T(v_star.(grid_y), grid_y, β, log, k -> k^α, shocks)
+p = OptimalGrowthModel() # use all default parameters from named tuple
+w_star = v_star.(p.y; p) # evaluate closed form value along grid
+
+w = T(w_star; p).w # evaluate operator, access Tw results

 plt = plot(ylim = (-35,-24))
-plot!(plt, grid_y, w, linewidth = 2, alpha = 0.6, label = "T(v_star)")
-plot!(plt, v_star, grid_y, linewidth = 2, alpha=0.6, label = "v_star")
+plot!(plt, p.y, w, linewidth = 2, alpha = 0.6, label = "T(v_star)")
+plot!(plt, p.y, w_star, linewidth = 2, alpha=0.6, label = "v_star")
 plot!(plt, legend = :bottomright)
 ```

@@ -656,20 +615,20 @@ from an arbitrary initial condition.
 The initial condition we'll start with is $w(y) = 5 \ln (y)$

 ```{code-cell} julia
-w = 5 * log.(grid_y) # An initial condition -- fairly arbitrary
+w = 5 * log.(p.y) # An initial condition -- fairly arbitrary
 n = 35

-plot(xlim = (extrema(grid_y)), ylim = (-50, 10))
+plot(xlim = (extrema(p.y)), ylim = (-50, 10))
 lb = "initial condition"
-plt = plot(grid_y, w, color = :black, linewidth = 2, alpha = 0.8, label = lb)
+plt = plot(p.y, w, color = :black, linewidth = 2, alpha = 0.8, label = lb)
 for i in 1:n
-    w = T(w, grid_y, β, log, k -> k^α, shocks)
-    plot!(grid_y, w, color = RGBA(i/n, 0, 1 - i/n, 0.8), linewidth = 2, alpha = 0.6,
+    w = T(w; p).w
+    plot!(p.y, w, color = RGBA(i/n, 0, 1 - i/n, 0.8), linewidth = 2, alpha = 0.6,
           label = "")
 end

 lb = "true value function"
-plot!(plt, v_star, grid_y, color = :black, linewidth = 2, alpha = 0.8, label = lb)
+plot!(plt, p.y, v_star.(p.y; p), color = :black, linewidth = 2, alpha = 0.8, label = lb)
 plot!(plt, legend = :bottomright)
 ```

@@ -678,7 +637,7 @@ plot!(plt, legend = :bottomright)
 tags: [remove-cell]
 ---
 @testset begin
-    #test v_star(grid_y[2]) ≈ -33.370496456772266
+    #test v_star(grid_y[2];OptimalGrowthModel()) ≈ -33.370496456772266
 end
 ```

@@ -694,25 +653,32 @@ We are clearly getting closer.
 We can write a function that computes the exact fixed point

 ```{code-cell} julia
-function solve_optgrowth(initial_w; tol = 1e-6, max_iter = 500)
-    fixedpoint(w -> T(w, grid_y, β, u, f, shocks), initial_w).zero # gets returned
+function solve_optgrowth(initial_w; p, iterations = 500, m = 3, show_trace = false)
+    results = fixedpoint(w -> T(w;p).w, initial_w; iterations, m, show_trace) # Anderson iteration
+    v_star = results.zero
+    σ = T(results.zero;p).σ
+    return (;v_star, σ, results)
 end
 ```

 We can check our result by plotting it against the true value

 ```{code-cell} julia
-initial_w = 5 * log.(grid_y)
-v_star_approx = solve_optgrowth(initial_w)
+initial_w = 5 * log.(p.y)
+sol = solve_optgrowth(initial_w;p)
+v_star_approx = sol.v_star
+println("Converged in $(sol.results.iterations) to an ||residuals||_∞ of $(sol.results.residual_norm)")

 plt = plot(ylim = (-35, -24))
-plot!(plt, grid_y, v_star_approx, linewidth = 2, alpha = 0.6,
+plot!(plt, p.y, v_star_approx, linewidth = 2, alpha = 0.6,
       label = "approximate value function")
-plot!(plt, v_star, grid_y, linewidth = 2, alpha = 0.6, label = "true value function")
+plot!(plt, p.y, v_star.(p.y;p), linewidth = 2, alpha = 0.6, label = "true value function")
 plot!(plt, legend = :bottomright)
 ```

-The figure shows that we are pretty much on the money.
+The figure shows that we are pretty much on the money
+
+Note that this converges in fewer than the 36 iterations printed above because it is using Anderson iteration - where the $m=0$ parameter is naive fixed point iteration

 ### The Policy Function

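The convergence note above refers to NLsolve's `fixedpoint`, which `solve_optgrowth` calls with Anderson memory `m = 3`. A small sketch comparing Anderson acceleration with naive iteration (`m = 0`) on a toy contraction, assuming the same keyword interface and result fields used in the hunk above:

```julia
using NLsolve

b = [1.0, 2.0]
G(w) = 0.5 .* w .+ b                           # a contraction with fixed point 2b

res_naive = fixedpoint(G, zeros(2); m = 0)     # m = 0: plain fixed-point iteration
res_anderson = fixedpoint(G, zeros(2); m = 3)  # m = 3: Anderson acceleration

println("naive: $(res_naive.iterations) iterations, ",
        "Anderson: $(res_anderson.iterations) iterations")
@assert res_anderson.zero ≈ 2 .* b
```
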
@@ -726,12 +692,8 @@ The next figure compares the result to the exact solution, which, as mentioned
 above, is $\sigma(y) = (1 - \alpha \beta) y$.

 ```{code-cell} julia
-Tw, σ = T(v_star_approx, grid_y, β, log, k -> k^α, shocks;
-          compute_policy = true)
-cstar = (1 - α * β) * grid_y
-
-plt = plot(grid_y, σ, lw=2, alpha=0.6, label = "approximate policy function")
-plot!(plt, grid_y, cstar, lw = 2, alpha = 0.6, label = "true policy function")
+plt = plot(p.y, T(v_star_approx; p).σ, lw=2, alpha=0.6, label = "approximate policy function")
+plot!(plt, p.y, c_star.(p.y; p), lw = 2, alpha = 0.6, label = "true policy function")
 plot!(plt, legend = :bottomright)
 ```

@@ -740,7 +702,7 @@ plot!(plt, legend = :bottomright)
 tags: [remove-cell]
 ---
 @testset begin
-    #test cstar[102] ≈ 1.2505758978894472
+    #test c_star.(p.y; p)[102] ≈ 1.2505758978894472
 end
 ```

@@ -776,17 +738,8 @@ Random.seed!(42);
 ---
 tags: [hide-output]
 ---
-s = 0.05
-shocks = exp.(μ .+ s * randn(shock_size))
-```
-
-```{code-cell} julia
----
-tags: [remove-cell]
----
-@testset begin
-    #test shocks[25] ≈ 0.8281356207080574
-end
+p = OptimalGrowthModel(s = 0.05)
+p.ξ
 ```

 Otherwise, the parameters and primitives are the same as the log linear model discussed earlier in the lecture.
@@ -802,57 +755,28 @@ Replicate the figure modulo randomness.
 Here's one solution (assuming as usual that you've executed everything above)

 ```{code-cell} julia
-function simulate_og(σ, y0 = 0.1, ts_length=100)
+function simulate_og(σ, p, y0, ts_length)
     y = zeros(ts_length)
-    ξ = randn(ts_length-1)
     y[1] = y0
     for t in 1:(ts_length-1)
-        y[t+1] = (y[t] - σ(y[t]))^α * exp(μ + s * ξ[t])
+        y[t+1] = (y[t] - σ(y[t]))^p.α * exp(p.μ + p.s * randn())
     end
     return y
 end

+β_vals = [0.9 0.94 0.98]
+ts_length = 100
+y0 = 0.1
 plt = plot()

-for β in (0.9, 0.94, 0.98)
-    Tw = similar(grid_y)
-    initial_w = 5 * log.(grid_y)
-
-    v_star_approx = fixedpoint(w -> T(w, grid_y, β, u, f, shocks),
-                               initial_w).zero
-    Tw, σ = T(v_star_approx, grid_y, β, log, k -> k^α, shocks,
-              compute_policy = true)
-    σ_func = LinearInterpolation(grid_y, σ)
-    y = simulate_og(σ_func)
+for β in β_vals
+    p = OptimalGrowthModel(;β) # change β from default
+    initial_w = 5 * log.(p.y)
+    sol = solve_optgrowth(initial_w;p)
+    σ_func = LinearInterpolation(p.y, sol.σ)
+    y = simulate_og(σ_func, p,y0, ts_length)

-    plot!(plt, y, lw = 2, alpha = 0.6, label = label = "beta = $β")
+    plot!(plt, 0:(ts_length-1), y, lw = 2, alpha = 0.6, label = "beta = $β")
 end
-
-plot!(plt, legend = :bottomright)
-```
-
-```{code-cell} julia
----
-tags: [remove-cell]
----
-using Random
-Random.seed!(42)
-β = 0.94
-
-Tw = similar(grid_y)
-initial_w = 5 * log.(grid_y)
-v_star_approx = fixedpoint(w -> T(w, grid_y, β, u, f, shocks),
-                           initial_w).zero
-Tw, σ = T(v_star_approx, grid_y, β, log, k -> k^α, shocks,
-          compute_policy = true)
-σ_func = LinearInterpolation(grid_y, σ)
-y = simulate_og(σ_func);
-
-@testset begin
-    ##test y[5] ≈ 0.4896390574930345
-    ##test σ[3] ≈ 0.025084788719798193 atol = 1e-6
-    ##test Tw[4] ≈ -22.24681487036426
-    ##test v_star_approx[50] ≈ -17.76952387641302
-end
-```
-
+plt
+```
