lectures/dynamic_programming/smoothing.md (+2, -2)
@@ -62,7 +62,7 @@ But for each version of the consumption-smoothing model there is a natural tax-s
 * relabeling consumption as tax collections and nonfinancial income as government expenditures
 * relabeling the consumer's debt as the government's *assets*

-For elaborations on this theme, please see {doc}`Optimal Savings II: LQ Techniques <dynamic_programming/perm_income_cons>` and later parts of this lecture.
+For elaborations on this theme, please see {doc}`Optimal Savings II: LQ Techniques <perm_income_cons>` and later parts of this lecture.

 We'll consider two closely related alternative assumptions about the consumer's
 exogenous nonfinancial income process (or in the tax-smoothing
@@ -77,7 +77,7 @@ We'll spend most of this lecture studying the finite-state Markov specification,
 ### Relationship to Other Lectures

-This lecture can be viewed as a followup to {doc}`Optimal Savings II: LQ Techniques <dynamic_programming/perm_income_cons>` and a warm up for a model of tax smoothing described in {doc}`opt_tax_recur <../dynamic_programming_squared/opt_tax_recur>`.
+This lecture can be viewed as a followup to {doc}`Optimal Savings II: LQ Techniques <perm_income_cons>` and a warm up for a model of tax smoothing described in {doc}`opt_tax_recur <../dynamic_programming_squared/opt_tax_recur>`.

 Linear-quadratic versions of the Lucas-Stokey tax-smoothing model are described in {doc}`lqramsey <../dynamic_programming_squared/lqramsey>`.
lectures/time_series_models/classical_filtering.md (+2, -2)
@@ -389,7 +389,7 @@ Thus, we have
 \, X_t
 ```

-This formula is useful in solving stochastic versions of problem 1 of lecture {doc}`Classical Control with Linear Algebra <time_series_models/lu_tricks>` in which the randomness emerges because $\{a_t\}$ is a stochastic
+This formula is useful in solving stochastic versions of problem 1 of lecture {doc}`Classical Control with Linear Algebra <lu_tricks>` in which the randomness emerges because $\{a_t\}$ is a stochastic
 process.

 The problem is to maximize
@@ -575,7 +575,7 @@ component not in this space.
 ### Implementation

-Code that computes solutions to LQ control and filtering problems using the methods described here and in {doc}`Classical Control with Linear Algebra <time_series_models/lu_tricks>` can be found in the file [control_and_filter.jl](https://github.com/QuantEcon/QuantEcon.lectures.code/blob/master/lu_tricks/control_and_filter.jl).
+Code that computes solutions to LQ control and filtering problems using the methods described here and in {doc}`Classical Control with Linear Algebra <lu_tricks>` can be found in the file [control_and_filter.jl](https://github.com/QuantEcon/QuantEcon.lectures.code/blob/master/lu_tricks/control_and_filter.jl).
lectures/tools_and_techniques/numerical_linear_algebra.md (+4, -4)
@@ -40,7 +40,7 @@ The methods in this section are called direct methods, and they are qualitativel
40
40
The list of specialized packages for these tasks is enormous and growing, but some of the important organizations to
41
41
look at are [JuliaMatrices](https://github.com/JuliaMatrices) , [JuliaSparse](https://github.com/JuliaSparse), and [JuliaMath](https://github.com/JuliaMath)
42
42
43
-
*NOTE*: As this section uses advanced Julia techniques, you may wish to review multiple-dispatch and generic programming in {doc}`introduction to types <../getting_starting_julia/introduction_to_types>`, and consider further study on {doc}`generic programming <../more_julia/generic_programming>`.
43
+
*NOTE*: As this section uses advanced Julia techniques, you may wish to review multiple-dispatch and generic programming in {doc}`introduction to types <../getting_started_julia/introduction_to_types>`, and consider further study on {doc}`generic programming <../more_julia/generic_programming>`.
44
44
45
45
The theme of this lecture, and numerical linear algebra in general, comes down to three principles:
46
46
@@ -455,7 +455,7 @@ benchmark_solve(A_dense, b)
 ### QR Decomposition

-{ref}`Previously <qr_decomposition>`, we learned about applications of the QR decomposition to solving the linear least squares.
+Previously, we learned about applications of the QR decomposition to solving the linear least squares.

 While in principle the solution to the least-squares problem
@@ -468,7 +468,7 @@ is $x = (A'A)^{-1}A'b$, in practice note that $A'A$ becomes dense and calculatin
 The QR decomposition is a decomposition $A = Q R$ where $Q$ is an orthogonal matrix (i.e., $Q'Q = Q Q' = I$) and $R$ is
 an upper triangular matrix.

-Given the {ref}`previous derivation <qr_decomposition>`, we showed that we can write the least-squares problem as
+Given the previous derivation, we showed that we can write the least-squares problem as
 the solution to

 $$
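The QR route to least squares that this hunk discusses can be sketched as follows. The lecture's own code is in Julia; this is a minimal NumPy illustration with made-up data, not the lecture's implementation:

```python
import numpy as np

# Overdetermined system with illustrative data (50 equations, 3 unknowns)
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 3))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true

# Normal-equations solution x = (A'A)^{-1} A'b; forming A'A can be
# dense and ill-conditioned, which is the drawback the text notes
x_normal = np.linalg.solve(A.T @ A, A.T @ b)

# QR route: A = QR with Q orthogonal and R upper triangular, so the
# least-squares problem reduces to the triangular system R x = Q'b
Q, R = np.linalg.qr(A)
x_qr = np.linalg.solve(R, Q.T @ b)

print(np.allclose(x_qr, x_true))    # noiseless data, so x is recovered
print(np.allclose(x_normal, x_qr))  # both routes agree here
```

With noiseless data both routes agree; the QR factorization avoids squaring the condition number of $A$, which is why it is preferred in practice.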
@@ -567,7 +567,7 @@ Keep in mind that a real matrix may have complex eigenvalues and eigenvectors, s
 ## Continuous-Time Markov Chains (CTMCs)

-In the previous lecture on {doc}`discrete-time Markov chains <mc>`, we saw that the transition probability
+In the previous lecture on {doc}`discrete-time Markov chains <finite_markov>`, we saw that the transition probability
 between state $x$ and state $y$ was summarized by the matrix $P(x, y) := \mathbb P \{ X_{t+1} = y \,|\, X_t = x \}$.

 As a brief introduction to continuous time processes, consider the same state space as in the discrete
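The transition matrix $P(x, y)$ in the closing context lines can be illustrated with a small sketch. The two-state matrix below is hypothetical, not from the lecture:

```python
import numpy as np

# P[x, y] = Prob{X_{t+1} = y | X_t = x}; each row is a conditional
# distribution over next-period states, so each row sums to one
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
assert np.allclose(P.sum(axis=1), 1.0)

# A distribution over states evolves by psi_{t+1} = psi_t P
psi = np.array([1.0, 0.0])  # start with all mass in state 0
for _ in range(1000):
    psi = psi @ P

print(psi)  # converges to the stationary distribution [2/3, 1/3]
```

Iterating the map $\psi \mapsto \psi P$ drives any initial distribution toward the chain's stationary distribution, the discrete-time analogue of the long-run behavior the CTMC section goes on to study.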
0 commit comments