Commit 8fcc20d

Merge branch 'main' into claude/test-theme-feature-01CNYzUjCAYt4SXAupAkiDx9
2 parents: 29ad9cf + a1cd90c

21 files changed (+112 / -112 lines)

lectures/ak2.md

Lines changed: 1 addition & 1 deletion
@@ -209,7 +209,7 @@ Units of the rental rates are:
 * for $r_t$, output at time $t$ per unit of capital at time $t$


-We take output at time $t$ as *numeraire*, so the price of output at time $t$ is one.
+We take output at time $t$ as **numeraire**, so the price of output at time $t$ is one.

 The firm's profits at time $t$ are

lectures/cake_eating_stochastic.md

Lines changed: 3 additions & 3 deletions
@@ -164,13 +164,13 @@ In summary, the agent's aim is to select a path $c_0, c_1, c_2, \ldots$ for cons
 1. nonnegative,
 1. feasible in the sense of {eq}`outcsdp0`,
 1. optimal, in the sense that it maximizes {eq}`texs0_og2` relative to all other feasible consumption sequences, and
-1. *adapted*, in the sense that the action $c_t$ depends only on
+1. **adapted**, in the sense that the action $c_t$ depends only on
 observable outcomes, not on future outcomes such as $\xi_{t+1}$.

 In the present context

-* $x_t$ is called the *state* variable --- it summarizes the "state of the world" at the start of each period.
-* $c_t$ is called the *control* variable --- a value chosen by the agent each period after observing the state.
+* $x_t$ is called the **state** variable --- it summarizes the "state of the world" at the start of each period.
+* $c_t$ is called the **control** variable --- a value chosen by the agent each period after observing the state.

 ### The Policy Function Approach

lectures/cake_eating_time_iter.md

Lines changed: 1 addition & 1 deletion
@@ -237,7 +237,7 @@ whenever $\sigma \in \mathscr P$.
 It is possible to prove that there is a tight relationship between iterates of
 $K$ and iterates of the Bellman operator.

-Mathematically, the two operators are *topologically conjugate*.
+Mathematically, the two operators are **topologically conjugate**.

 Loosely speaking, this means that if iterates of one operator converge then
 so do iterates of the other, and vice versa.

lectures/career.md

Lines changed: 2 additions & 2 deletions
@@ -66,8 +66,8 @@ from matplotlib import cm
 In what follows we distinguish between a career and a job, where

-* a *career* is understood to be a general field encompassing many possible jobs, and
-* a *job* is understood to be a position with a particular firm
+* a **career** is understood to be a general field encompassing many possible jobs, and
+* a **job** is understood to be a position with a particular firm

 For workers, wages can be decomposed into the contribution of job and career

lectures/cass_fiscal.md

Lines changed: 2 additions & 2 deletions
@@ -147,8 +147,8 @@ $$ (eq:gov_budget)
 Given a budget-feasible government policy $\{g_t\}_{t=0}^\infty$ and $\{\tau_{ct}, \tau_{kt}, \tau_{nt}, \tau_{ht}\}_{t=0}^\infty$ subject to {eq}`eq:gov_budget`,

-- *Household* chooses $\{c_t\}_{t=0}^\infty$, $\{n_t\}_{t=0}^\infty$, and $\{k_{t+1}\}_{t=0}^\infty$ to maximize utility{eq}`eq:utility` subject to budget constraint{eq}`eq:house_budget`, and
-- *Frim* chooses sequences of capital $\{k_t\}_{t=0}^\infty$ and $\{n_t\}_{t=0}^\infty$ to maximize profits
+- **Household** chooses $\{c_t\}_{t=0}^\infty$, $\{n_t\}_{t=0}^\infty$, and $\{k_{t+1}\}_{t=0}^\infty$ to maximize utility{eq}`eq:utility` subject to budget constraint{eq}`eq:house_budget`, and
+- **Firm** chooses sequences of capital $\{k_t\}_{t=0}^\infty$ and $\{n_t\}_{t=0}^\infty$ to maximize profits

 $$
 \sum_{t=0}^\infty q_t [F(k_t, n_t) - \eta_t k_t - w_t n_t]

lectures/kalman.md

Lines changed: 5 additions & 5 deletions
@@ -85,7 +85,7 @@ One way to summarize our knowledge is a point prediction $\hat x$
 * Then it is better to summarize our initial beliefs with a bivariate probability density $p$
 * $\int_E p(x)dx$ indicates the probability that we attach to the missile being in region $E$.

-The density $p$ is called our *prior* for the random variable $x$.
+The density $p$ is called our **prior** for the random variable $x$.

 To keep things tractable in our example, we assume that our prior is Gaussian.
@@ -317,7 +317,7 @@ We have obtained probabilities for the current location of the state (missile) g
 This is called "filtering" rather than forecasting because we are filtering
 out noise rather than looking into the future.

-* $p(x \,|\, y) = N(\hat x^F, \Sigma^F)$ is called the *filtering distribution*
+* $p(x \,|\, y) = N(\hat x^F, \Sigma^F)$ is called the **filtering distribution**

 But now let's suppose that we are given another task: to predict the location of the missile after one unit of time (whatever that may be) has elapsed.
@@ -331,7 +331,7 @@ Let's suppose that we have one, and that it's linear and Gaussian. In particular
 x_{t+1} = A x_t + w_{t+1}, \quad \text{where} \quad w_t \sim N(0, Q)
 ```

-Our aim is to combine this law of motion and our current distribution $p(x \,|\, y) = N(\hat x^F, \Sigma^F)$ to come up with a new *predictive* distribution for the location in one unit of time.
+Our aim is to combine this law of motion and our current distribution $p(x \,|\, y) = N(\hat x^F, \Sigma^F)$ to come up with a new **predictive** distribution for the location in one unit of time.

 In view of {eq}`kl_xdynam`, all we have to do is introduce a random vector $x^F \sim N(\hat x^F, \Sigma^F)$ and work out the distribution of $A x^F + w$ where $w$ is independent of $x^F$ and has distribution $N(0, Q)$.
@@ -356,7 +356,7 @@ $$
 $$

 The matrix $A \Sigma G' (G \Sigma G' + R)^{-1}$ is often written as
-$K_{\Sigma}$ and called the *Kalman gain*.
+$K_{\Sigma}$ and called the **Kalman gain**.

 * The subscript $\Sigma$ has been added to remind us that $K_{\Sigma}$ depends on $\Sigma$, but not $y$ or $\hat x$.
@@ -373,7 +373,7 @@ Our updated prediction is the density $N(\hat x_{new}, \Sigma_{new})$ where
 \end{aligned}
 ```

-* The density $p_{new}(x) = N(\hat x_{new}, \Sigma_{new})$ is called the *predictive distribution*
+* The density $p_{new}(x) = N(\hat x_{new}, \Sigma_{new})$ is called the **predictive distribution**

 The predictive distribution is the new density shown in the following figure, where
 the update has used parameters.
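
As an aside, the filtering-and-prediction step described in these kalman.md passages fits in a few lines of NumPy. The hunks above show only the Kalman gain $K_{\Sigma} = A \Sigma G' (G \Sigma G' + R)^{-1}$, so the update formulas for $\hat x_{new}$ and $\Sigma_{new}$ used below are the standard ones and are assumed here; all inputs are illustrative placeholders, not taken from the lecture.

```python
import numpy as np

def predict(x_hat, Sigma, y, A, G, Q, R):
    """One filtering + prediction step, assuming the standard Kalman recursions.

    x_hat, Sigma : mean and covariance of the prior N(x_hat, Sigma)
    y            : current observation, y = G x + v with v ~ N(0, R)
    A, Q         : law of motion x_{t+1} = A x_t + w_{t+1}, w ~ N(0, Q)
    """
    # Kalman gain K_Sigma = A Sigma G' (G Sigma G' + R)^{-1}
    K = A @ Sigma @ G.T @ np.linalg.inv(G @ Sigma @ G.T + R)
    # Mean and covariance of the predictive distribution N(x_new, Sigma_new)
    x_new = A @ x_hat + K @ (y - G @ x_hat)
    Sigma_new = A @ Sigma @ A.T - K @ G @ Sigma @ A.T + Q
    return x_new, Sigma_new
```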

lectures/likelihood_bayes.md

Lines changed: 2 additions & 2 deletions
@@ -129,8 +129,8 @@ $$
 where we use the conventions
 that $f(w^t) = f(w_1) f(w_2) \ldots f(w_t)$ and $g(w^t) = g(w_1) g(w_2) \ldots g(w_t)$.

-Notice that the likelihood process satisfies the *recursion* or
-*multiplicative decomposition*
+Notice that the likelihood process satisfies the **recursion** or
+**multiplicative decomposition**

 $$
 L(w^t) = \ell (w_t) L (w^{t-1}) .
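
For what it's worth, this multiplicative decomposition is easy to check numerically: cumulative products of the one-period ratios $\ell(w_t) = f(w_t)/g(w_t)$ reproduce $L(w^t)$. A minimal sketch, with $f$ and $g$ standing in as placeholder densities (any two densities with common support would do):

```python
import numpy as np
from scipy.stats import beta

f = beta(1, 1).pdf      # placeholder density f
g = beta(3, 1.2).pdf    # placeholder density g

def likelihood_ratio_process(w):
    """L(w^t) for t = 1, ..., T via the recursion L(w^t) = l(w_t) L(w^{t-1})."""
    w = np.asarray(w)            # observations, here assumed to lie in (0, 1)
    ell = f(w) / g(w)            # one-period likelihood ratios l(w_t)
    return np.cumprod(ell)       # L(w^1), L(w^2), ..., L(w^T)
```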

lectures/linear_algebra.md

Lines changed: 32 additions & 32 deletions
@@ -85,7 +85,7 @@ from scipy.linalg import inv, solve, det, eig
 ```{index} single: Linear Algebra; Vectors
 ```

-A *vector* of length $n$ is just a sequence (or array, or tuple) of $n$ numbers, which we write as $x = (x_1, \ldots, x_n)$ or $x = [x_1, \ldots, x_n]$.
+A **vector** of length $n$ is just a sequence (or array, or tuple) of $n$ numbers, which we write as $x = (x_1, \ldots, x_n)$ or $x = [x_1, \ldots, x_n]$.

 We will write these sequences either horizontally or vertically as we please.
@@ -225,15 +225,15 @@ x + y
 ```{index} single: Vectors; Norm
 ```

-The *inner product* of vectors $x,y \in \mathbb R ^n$ is defined as
+The **inner product** of vectors $x,y \in \mathbb R ^n$ is defined as

 $$
 x' y := \sum_{i=1}^n x_i y_i
 $$

-Two vectors are called *orthogonal* if their inner product is zero.
+Two vectors are called **orthogonal** if their inner product is zero.

-The *norm* of a vector $x$ represents its "length" (i.e., its distance from the zero vector) and is defined as
+The **norm** of a vector $x$ represents its "length" (i.e., its distance from the zero vector) and is defined as

 $$
 \| x \| := \sqrt{x' x} := \left( \sum_{i=1}^n x_i^2 \right)^{1/2}
@@ -273,7 +273,7 @@ np.linalg.norm(x) # Norm of x, take three
 Given a set of vectors $A := \{a_1, \ldots, a_k\}$ in $\mathbb R ^n$, it's natural to think about the new vectors we can create by performing linear operations.

-New vectors created in this manner are called *linear combinations* of $A$.
+New vectors created in this manner are called **linear combinations** of $A$.

 In particular, $y \in \mathbb R ^n$ is a linear combination of $A := \{a_1, \ldots, a_k\}$ if
@@ -282,9 +282,9 @@ y = \beta_1 a_1 + \cdots + \beta_k a_k
 \text{ for some scalars } \beta_1, \ldots, \beta_k
 $$

-In this context, the values $\beta_1, \ldots, \beta_k$ are called the *coefficients* of the linear combination.
+In this context, the values $\beta_1, \ldots, \beta_k$ are called the **coefficients** of the linear combination.

-The set of linear combinations of $A$ is called the *span* of $A$.
+The set of linear combinations of $A$ is called the **span** of $A$.

 The next figure shows the span of $A = \{a_1, a_2\}$ in $\mathbb R ^3$.
@@ -349,7 +349,7 @@ plt.show()
 If $A$ contains only one vector $a_1 \in \mathbb R ^2$, then its
 span is just the scalar multiples of $a_1$, which is the unique line passing through both $a_1$ and the origin.

-If $A = \{e_1, e_2, e_3\}$ consists of the *canonical basis vectors* of $\mathbb R ^3$, that is
+If $A = \{e_1, e_2, e_3\}$ consists of the **canonical basis vectors** of $\mathbb R ^3$, that is

 $$
 e_1 :=
@@ -399,8 +399,8 @@ The condition we need for a set of vectors to have a large span is what's called
 In particular, a collection of vectors $A := \{a_1, \ldots, a_k\}$ in $\mathbb R ^n$ is said to be

-* *linearly dependent* if some strict subset of $A$ has the same span as $A$.
-* *linearly independent* if it is not linearly dependent.
+* **linearly dependent** if some strict subset of $A$ has the same span as $A$.
+* **linearly independent** if it is not linearly dependent.

 Put differently, a set of vectors is linearly independent if no vector is redundant to the span and linearly dependent otherwise.
@@ -469,19 +469,19 @@ Often, the numbers in the matrix represent coefficients in a system of linear eq
 For obvious reasons, the matrix $A$ is also called a vector if either $n = 1$ or $k = 1$.

-In the former case, $A$ is called a *row vector*, while in the latter it is called a *column vector*.
+In the former case, $A$ is called a **row vector**, while in the latter it is called a **column vector**.

-If $n = k$, then $A$ is called *square*.
+If $n = k$, then $A$ is called **square**.

-The matrix formed by replacing $a_{ij}$ by $a_{ji}$ for every $i$ and $j$ is called the *transpose* of $A$ and denoted $A'$ or $A^{\top}$.
+The matrix formed by replacing $a_{ij}$ by $a_{ji}$ for every $i$ and $j$ is called the **transpose** of $A$ and denoted $A'$ or $A^{\top}$.

-If $A = A'$, then $A$ is called *symmetric*.
+If $A = A'$, then $A$ is called **symmetric**.

-For a square matrix $A$, the $i$ elements of the form $a_{ii}$ for $i=1,\ldots,n$ are called the *principal diagonal*.
+For a square matrix $A$, the $i$ elements of the form $a_{ii}$ for $i=1,\ldots,n$ are called the **principal diagonal**.

-$A$ is called *diagonal* if the only nonzero entries are on the principal diagonal.
+$A$ is called **diagonal** if the only nonzero entries are on the principal diagonal.

-If, in addition to being diagonal, each element along the principal diagonal is equal to 1, then $A$ is called the *identity matrix* and denoted by $I$.
+If, in addition to being diagonal, each element along the principal diagonal is equal to 1, then $A$ is called the **identity matrix** and denoted by $I$.

 ### Matrix Operations
@@ -641,9 +641,9 @@ See [here](https://python-programming.quantecon.org/numpy.html#matrix-multiplica
 Each $n \times k$ matrix $A$ can be identified with a function $f(x) = Ax$ that maps $x \in \mathbb R ^k$ into $y = Ax \in \mathbb R ^n$.

-These kinds of functions have a special property: they are *linear*.
+These kinds of functions have a special property: they are **linear**.

-A function $f \colon \mathbb R ^k \to \mathbb R ^n$ is called *linear* if, for all $x, y \in \mathbb R ^k$ and all scalars $\alpha, \beta$, we have
+A function $f \colon \mathbb R ^k \to \mathbb R ^n$ is called **linear** if, for all $x, y \in \mathbb R ^k$ and all scalars $\alpha, \beta$, we have

 $$
 f(\alpha x + \beta y) = \alpha f(x) + \beta f(y)
@@ -773,7 +773,7 @@ In particular, the following are equivalent
 1. The columns of $A$ are linearly independent.
 1. For any $y \in \mathbb R ^n$, the equation $y = Ax$ has a unique solution.

-The property of having linearly independent columns is sometimes expressed as having *full column rank*.
+The property of having linearly independent columns is sometimes expressed as having **full column rank**.

 #### Inverse Matrices
@@ -788,7 +788,7 @@ solution is $x = A^{-1} y$.
 A similar expression is available in the matrix case.

 In particular, if square matrix $A$ has full column rank, then it possesses a multiplicative
-*inverse matrix* $A^{-1}$, with the property that $A A^{-1} = A^{-1} A = I$.
+**inverse matrix** $A^{-1}$, with the property that $A A^{-1} = A^{-1} A = I$.

 As a consequence, if we pre-multiply both sides of $y = Ax$ by $A^{-1}$, we get $x = A^{-1} y$.
@@ -800,11 +800,11 @@ This is the solution that we're looking for.
 ```

 Another quick comment about square matrices is that to every such matrix we
-assign a unique number called the *determinant* of the matrix --- you can find
+assign a unique number called the **determinant** of the matrix --- you can find
 the expression for it [here](https://en.wikipedia.org/wiki/Determinant).

 If the determinant of $A$ is not zero, then we say that $A$ is
-*nonsingular*.
+**nonsingular**.

 Perhaps the most important fact about determinants is that $A$ is nonsingular if and only if $A$ is of full column rank.
@@ -929,8 +929,8 @@ $$
 A v = \lambda v
 $$

-then we say that $\lambda$ is an *eigenvalue* of $A$, and
-$v$ is an *eigenvector*.
+then we say that $\lambda$ is an **eigenvalue** of $A$, and
+$v$ is an **eigenvector**.

 Thus, an eigenvector of $A$ is a vector such that when the map $f(x) = Ax$ is applied, $v$ is merely scaled.
@@ -1034,7 +1034,7 @@ to one.
 ### Generalized Eigenvalues

-It is sometimes useful to consider the *generalized eigenvalue problem*, which, for given
+It is sometimes useful to consider the **generalized eigenvalue problem**, which, for given
 matrices $A$ and $B$, seeks generalized eigenvalues
 $\lambda$ and eigenvectors $v$ such that
@@ -1076,10 +1076,10 @@ $$
 $$

 The norms on the right-hand side are ordinary vector norms, while the norm on
-the left-hand side is a *matrix norm* --- in this case, the so-called
-*spectral norm*.
+the left-hand side is a **matrix norm** --- in this case, the so-called
+**spectral norm**.

-For example, for a square matrix $S$, the condition $\| S \| < 1$ means that $S$ is *contractive*, in the sense that it pulls all vectors towards the origin [^cfn].
+For example, for a square matrix $S$, the condition $\| S \| < 1$ means that $S$ is **contractive**, in the sense that it pulls all vectors towards the origin [^cfn].

 (la_neumann)=
 #### {index}`Neumann's Theorem <single: Neumann's Theorem>`
@@ -1112,7 +1112,7 @@ $$
 \rho(A) = \lim_{k \to \infty} \| A^k \|^{1/k}
 $$

-Here $\rho(A)$ is the *spectral radius*, defined as $\max_i |\lambda_i|$, where $\{\lambda_i\}_i$ is the set of eigenvalues of $A$.
+Here $\rho(A)$ is the **spectral radius**, defined as $\max_i |\lambda_i|$, where $\{\lambda_i\}_i$ is the set of eigenvalues of $A$.

 As a consequence of Gelfand's formula, if all eigenvalues are strictly less than one in modulus,
 there exists a $k$ with $\| A^k \| < 1$.
@@ -1128,8 +1128,8 @@ Let $A$ be a symmetric $n \times n$ matrix.
 We say that $A$ is

-1. *positive definite* if $x' A x > 0$ for every $x \in \mathbb R ^n \setminus \{0\}$
-1. *positive semi-definite* or *nonnegative definite* if $x' A x \geq 0$ for every $x \in \mathbb R ^n$
+1. **positive definite** if $x' A x > 0$ for every $x \in \mathbb R ^n \setminus \{0\}$
+1. **positive semi-definite** or **nonnegative definite** if $x' A x \geq 0$ for every $x \in \mathbb R ^n$

 Analogous definitions exist for negative definite and negative semi-definite matrices.
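
For reference, the objects named in the linear_algebra.md hunks above (inner products, norms, determinants, inverses, eigenvalues, the spectral radius, definiteness) map directly onto NumPy/SciPy calls; the lecture itself imports `inv`, `solve`, `det`, and `eig` from `scipy.linalg`. A small illustrative sketch with made-up inputs:

```python
import numpy as np
from scipy.linalg import inv, det, eig

x = np.array([1.0, 2.0])
y = np.array([3.0, -1.0])

x @ y                        # inner product x'y
np.linalg.norm(x)            # norm ||x|| = sqrt(x'x)

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])   # symmetric 2 x 2 matrix
det(A)                       # nonzero, so A is nonsingular
inv(A)                       # inverse matrix A^{-1}

lam, v = eig(A)              # eigenvalues and eigenvectors, A v = lambda v
rho = np.max(np.abs(lam))    # spectral radius rho(A) = max_i |lambda_i|
np.all(lam.real > 0)         # symmetric with positive eigenvalues => positive definite
```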