diff --git a/environment.yml b/environment.yml
index fbc5d16..5955384 100644
--- a/environment.yml
+++ b/environment.yml
@@ -7,11 +7,11 @@ dependencies:
   - pip
   - pip:
     - jupyter-book==1.0.4post1
-    - quantecon-book-theme==0.8.3
+    - quantecon-book-theme==0.10.0
     - sphinx-tojupyter==0.4.0
-    - sphinx-proof==0.2.1
+    - sphinx-proof==0.3.0
     - sphinxext-rediraffe==0.2.7
-    - sphinx-exercise==1.0.1
+    - sphinx-exercise==1.2.1
     - sphinxcontrib-youtube==1.4.1
     - sphinx-togglebutton==0.3.2
diff --git a/lectures/ergodicity.md b/lectures/ergodicity.md
index 83e45e9..047f392 100644
--- a/lectures/ergodicity.md
+++ b/lectures/ergodicity.md
@@ -638,6 +638,12 @@ Let $(P_t)$ be a Markov semigroup.
 True or false: for this semigroup, every state $x$ is accessible from itself.
 ```
 
+```{solution} ergodicity-ex-1
+:class: dropdown
+
+The statement is true. With $t=0$ we have $P_t(x,x) = I(x,x) = 1 > 0$.
+```
+
 ```{exercise}
 :label: ergodicity-ex-2
 Let $(\lambda_k)$ be a bounded non-increasing sequence in $(0, \infty)$.
@@ -658,41 +664,30 @@ Show that $(P_t)$, the corresponding Markov semigroup, has no stationary
 distribution.
 ```
 
-
-```{exercise}
-:label: ergodicity-ex-3
-Confirm that {prf:ref}`sdrift` implies {prf:ref}`sfinite`.
-```
-
-## Solutions
-
-```{solution} ergodicity-ex-1
-The statement is true. With $t=0$ we have $P_t(x,x) = I(x,x) = 1 > 0$.
-```
-
-
 ```{solution} ergodicity-ex-2
+:class: dropdown
+
 Suppose to the contrary that $\phi \in \dD$ and $\phi Q = 0$.
 
 Then, for any $j \geq 1$,
 
 $$
-    (\phi Q)(j)
-    = \sum_{i \geq 0} \phi(i) Q(i, j)
-    = - \lambda_j \phi(j) + \lambda_{j-1} \phi(j-1)
-    = 0
+(\phi Q)(j)
+= \sum_{i \geq 0} \phi(i) Q(i, j)
+= - \lambda_j \phi(j) + \lambda_{j-1} \phi(j-1)
+= 0
 $$
 
 Since $(\lambda_k)$ is non-increasing, it follows that
 
 $$
-    \frac{\phi(j)}{\phi(j-1)} = \frac{\lambda_{j-1}}{\lambda_j} \geq 1
+\frac{\phi(j)}{\phi(j-1)} = \frac{\lambda_{j-1}}{\lambda_j} \geq 1
 $$
 
 Therefore, for any $j\geq 1$, it must be:
 
 $$
-    \phi(j) \geq \phi(j-1)
+\phi(j) \geq \phi(j-1)
 $$
 
 It follows that $\phi$ is non-decreasing on $\ZZ_+$.
@@ -704,7 +699,14 @@ Contradiction.
 ```
 
+```{exercise}
+:label: ergodicity-ex-3
+Confirm that {prf:ref}`sdrift` implies {prf:ref}`sfinite`.
+```
+
 ```{solution} ergodicity-ex-3
+:class: dropdown
+
 Let $(P_t)$ be an irreducible UC Markov semigroup and let $S$ be finite.
 
 Pick any positive constants $M, \epsilon$ and set $v = M$ and $F = S$.
 
@@ -712,9 +714,9 @@ We then have
 
 $$
-    \sum_y Q(x, y) v(y)
-    = M \sum_y Q(x, y)
-    = 0
+\sum_y Q(x, y) v(y)
+= M \sum_y Q(x, y)
+= 0
 $$
 
 Hence the drift condition in {prf:ref}`sdrift` holds and $(P_t)$ is
diff --git a/lectures/generators.md b/lectures/generators.md
index 278129c..91a8506 100644
--- a/lectures/generators.md
+++ b/lectures/generators.md
@@ -432,6 +432,25 @@ example, Chapter 7 of {cite}`bobrowski2005functional`.
 Prove that {eq}`expdiffer` holds for all $A \in \linop`.
 ```
 
+```{solution} generators-ex-1
+:class: dropdown
+
+To show the first equality, fix $t \in \RR_+$, take $h > 0$ and observe that
+
+$$
+e^{(t+h)A} - e^{tA} - e^{tA} A h
+= e^{tA} (e^{hA} - I - Ah)
+$$
+
+Since the norm on $\linop$ is submultiplicative, it suffices to show that
+$\| e^{hA} - I - Ah \| = o(h)$ as $h \to 0$.
+
+Using the definition of the exponential, this is easily verified,
+completing the proof of the first equality in {eq}`expdiffer`.
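+
+For instance, using the series definition of the exponential along with
+the triangle inequality and submultiplicativity, one possible bound is
+
+$$
+\| e^{hA} - I - Ah \|
+= \Big\| \sum_{k \geq 2} \frac{h^k A^k}{k!} \Big\|
+\leq h^2 \| A \|^2 \sum_{k \geq 0} \frac{h^k \| A \|^k}{k!}
+= h^2 \| A \|^2 e^{h \| A \|}
+$$
+
+which is indeed $o(h)$ as $h \to 0$.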
+
+The proof of the second equality is similar.
+```
+
 ```{exercise}
 :label: generators-ex-2
 
@@ -439,7 +458,7 @@ In many texts, a $C_0$ semigroup is defined as an evolution semigroup
 $(U_t)$ such that
 
 $$
-    U_t g \to g \text{ as } t \to 0 \text{ for any } g \in \BB
+U_t g \to g \text{ as } t \to 0 \text{ for any } g \in \BB
 $$ (czsg2)
 
 Our aim is to show that {eq}`czsg2` implies continuity at every point $t$, as
@@ -448,54 +467,16 @@ in the definition we used above.
 
 The
 [Banach--Steinhaus Theorem](https://en.wikipedia.org/wiki/Uniform_boundedness_principle)
 can be used to show that, for an evolution semigroup $(U_t)$ satisfying
 {eq}`czsg2`, there exist finite constants $\omega$ and $M$ such that
 
 $$
-    \| U_t \| \leq e^{t\omega} M
-    \quad \text{for all } \; t \geq 0
+\| U_t \| \leq e^{t\omega} M
+\quad \text{for all } \; t \geq 0
 $$ (sgbound)
 
 Using this and {eq}`czsg2`, show that, for any $g \in \BB$, the map
 $t \mapsto U_t g$ is continuous at all $t$.
 ```
 
-```{exercise}
-:label: generators-ex-3
-
-Following on from the previous exercise,
-a UC semigroup is often defined as an evolution semigroup $(U_t)$
-such that
-
-$$
-    \| U_t - I \| \to 0 \text{ as } t \to 0
-$$ (czsg3)
-
-Show that {eq}`czsg3` implies norm continuity at every point $t$, as
-in the definition we used above.
-
-In particular, show that, for any $t_n \to t$, we have
-$\| U_{t_n} - U_t \| \to 0$ as $n \to \infty$.
-```
-
-
-## Solutions
-
-```{solution} ergodicity-ex-1
-
-To show the first equality, fix $t \in \RR_+$, take $h > 0$ and observe that
-
-$$
-    e^{(t+h)A} - e^{tA} - e^{tA} A
-    = e^{tA} (e^{hA} - I - A)
-$$
-
-Since the norm on $\linop$ is submultiplicative, it suffices to show that
-$\| e^{hA} - I - A \| = o(h)$ as $h \to 0$.
-
-Using the definition of the exponential, this is easily verified,
-completing the proof of the first equality in {eq}`expdiffer`.
-
-The proof of the second equality is similar.
-```
-
-```{solution} ergodicity-ex-2
+```{solution} generators-ex-2
+:class: dropdown
 
 Let $(U_t)$ be an evolution semigroup satisfying {eq}`czsg2` and let $\omega$
 and $M$ be as in {eq}`sgbound`.
 
@@ -507,16 +488,36 @@ On one hand, $U_{t+ h_n} g = U_{h_n} U_t g \to U_t g$ by {eq}`czsg2`.
 
 On the other hand, from {eq}`sgbound` and the definition of the operator
 norm,
 
 $$
-    \| U_{t-h_n} g - U_t g\|
-    = \| U_{t-h_n} ( g - U_{h_n} g) \|
-    \leq e^{(t-h_n)\omega} M \| g - U_{h_n} g\|
-    \to 0
+\| U_{t-h_n} g - U_t g\|
+= \| U_{t-h_n} ( g - U_{h_n} g) \|
+\leq e^{(t-h_n)\omega} M \| g - U_{h_n} g\|
+\to 0
 $$
 
 as $n \to \infty$.
 
 This completes the proof.
 ```
 
+
+```{exercise}
+:label: generators-ex-3
+
+Following on from the previous exercise,
+a UC semigroup is often defined as an evolution semigroup $(U_t)$
+such that
+
+$$
+\| U_t - I \| \to 0 \text{ as } t \to 0
+$$ (czsg3)
+
+Show that {eq}`czsg3` implies norm continuity at every point $t$, as
+in the definition we used above.
+
+In particular, show that, for any $t_n \to t$, we have
+$\| U_{t_n} - U_t \| \to 0$ as $n \to \infty$.
+```
+
-```{solution} ergodicity-ex-3
+```{solution} generators-ex-3
+:class: dropdown
 
 The solution is similar to that of the previous exercise.
 
 On the other hand, from the submultiplicative property of the operator
 norm and {eq}`sgbound`,
 
 $$
-    \| U_{t-h_n} - U_t \|
-    = \| U_{t-h_n} ( I - U_{h_n}) \|
-    \leq e^{(t-h_n)\omega} M \| I - U_{h_n} \|
+\| U_{t-h_n} - U_t \|
+= \| U_{t-h_n} ( I - U_{h_n}) \|
+\leq e^{(t-h_n)\omega} M \| I - U_{h_n} \|
 $$
 
 This converges to 0 as $n \to \infty$, completing our proof.
diff --git a/lectures/kolmogorov_bwd.md b/lectures/kolmogorov_bwd.md
index b632aa8..23a66cf 100644
--- a/lectures/kolmogorov_bwd.md
+++ b/lectures/kolmogorov_bwd.md
@@ -522,34 +522,11 @@ Try to generate the same figure using {eq}`psolq` instead, modifying code
 from {doc}`our lecture <markov_prop>` on the Markov property.
 ````
 
-```{exercise}
-:label: kolmogorov-bwd-2
-
-Prove that differentiating {eq}`kbinteg` at each $(x, y)$ yields {eq}`kolbackeq`.
-```
-
-```{exercise}
-:label: kolmogorov-bwd-3
-
-We claimed above that the solution $P_t = e^{t Q}$ is the unique
-Markov semigroup satisfying the backward equation $P'_t = Q P_t$.
-
-Try to supply a proof.
-
-(This is not an easy exercise but worth thinking about in any case.)
+```{solution-start} kolmogorov-bwd-1
+:class: dropdown
 ```
 
-## Solutions
-
-```{note}
-code is currently not supported in `sphinx-exercise`
-so code-cell solutions are immediately after this
-solution block.
-```
-
-```{solution} kolmogorov-bwd-1
 Here is one solution:
-```
 
 ```{code-cell} ipython3
 α = 0.6
@@ -594,55 +571,75 @@ ax.set_xlabel("inventory", fontsize=14)
 
 plt.show()
 ```
 
+```{solution-end}
+```
+
+```{exercise}
+:label: kolmogorov-bwd-2
+
+Prove that differentiating {eq}`kbinteg` at each $(x, y)$ yields {eq}`kolbackeq`.
+```
 
 ```{solution} kolmogorov-bwd-2
+:class: dropdown
 
 One can easily verify that, when $f$ is a differentiable function and
 $\alpha > 0$, we have
 
 $$
-    g(t) = e^{- t \alpha} f(t)
-    \quad \implies \quad
-    g'(t) = e^{- t \alpha} f'(t) - \alpha g(t)
+g(t) = e^{- t \alpha} f(t)
+\quad \implies \quad
+g'(t) = e^{- t \alpha} f'(t) - \alpha g(t)
 $$ (gdiff)
 
 Note also that, with the change of variable $s = t - \tau$, we can rewrite
 {eq}`kbinteg` as
 
 $$
-    P_t(x, y) =
-    e^{-t \lambda(x)}
-    \left\{
-        I(x, y)
-        + \lambda(x)
-        \int_0^t (K P_s)(x, y) e^{s \lambda(x)} d s
-    \right\}
+P_t(x, y) =
+e^{-t \lambda(x)}
+\left\{
+    I(x, y)
+    + \lambda(x)
+    \int_0^t (K P_s)(x, y) e^{s \lambda(x)} d s
+\right\}
 $$ (kbinteg2)
 
 Applying {eq}`gdiff` yields
 
 $$
-    P'_t(x, y)
-    = e^{-t \lambda(x)}
-    \left\{
-        \lambda(x)
-        (K P_t)(x, y) e^{t \lambda(x)}
-    \right\}
-    - \lambda(x) P_t(x, y)
+P'_t(x, y)
+= e^{-t \lambda(x)}
+  \left\{
+  \lambda(x)
+  (K P_t)(x, y) e^{t \lambda(x)}
+  \right\}
+  - \lambda(x) P_t(x, y)
 $$
 
 After minor rearrangements this becomes
 
 $$
-    P'_t(x, y)
-    = \lambda(x) [ (K - I) P_t](x, y)
+P'_t(x, y)
+= \lambda(x) [ (K - I) P_t](x, y)
 $$
 
 which is identical to {eq}`kolbackeq`.
 ```
 
+```{exercise}
+:label: kolmogorov-bwd-3
+
+We claimed above that the solution $P_t = e^{t Q}$ is the unique
+Markov semigroup satisfying the backward equation $P'_t = Q P_t$.
+
+Try to supply a proof.
+
+(This is not an easy exercise but worth thinking about in any case.)
+```
 
 ```{solution} kolmogorov-bwd-3
+:class: dropdown
 
 Here is one proof of uniqueness.
 
@@ -657,15 +654,14 @@ Note that $V_0 = \hat P_t$ and $V_t = P_t$.
 
 Note also that $s \mapsto V_s$ is differentiable, with derivative
 
 $$
-    V'_s
-    = P'_s \hat P_{t-s} - P_s \hat P'_{t-s}
-    = P_s Q \hat P_{t-s} - P_s Q \hat P_{t-s}
-    = 0
+V'_s
+= P'_s \hat P_{t-s} - P_s \hat P'_{t-s}
+= P_s Q \hat P_{t-s} - P_s Q \hat P_{t-s}
+= 0
 $$
 
 where, in the second last equality, we used {eq}`expoderiv`.
 
-
 Hence $V_s$ is constant, so our previous observations $V_0 = \hat P_t$ and
 $V_t = P_t$ now yield $\hat P_t = P_t$.
diff --git a/lectures/kolmogorov_fwd.md b/lectures/kolmogorov_fwd.md
index 54c249b..0cb1fd0 100644
--- a/lectures/kolmogorov_fwd.md
+++ b/lectures/kolmogorov_fwd.md
@@ -497,52 +497,22 @@ differentiable at all $t \geq 0$ and $(x, y) \in S \times S$.
 
 Define (pointwise, at each $(x,y)$),
 
 $$
-    Q := P'_0 = \lim_{h \downarrow 0} \frac{P_h - I}{h}
+Q := P'_0 = \lim_{h \downarrow 0} \frac{P_h - I}{h}
 $$ (genfl)
 
 Assuming that this limit exists, and hence $Q$ is well-defined, show that
 
 $$
-    P'_t = P_t Q
-    \quad \text{and} \quad
-    P'_t = Q P_t
+P'_t = P_t Q
+\quad \text{and} \quad
+P'_t = Q P_t
 $$
 
 both hold.
 
 (These are the Kolmogorov forward and backward equations.)
 ```
 
-```{exercise}
-:label: kolmogorov-fwd-2
-
-Recall {ref}`our model ` of jump chains with state-dependent jump
-intensities given by rate function $x \mapsto \lambda(x)$.
-
-After a wait time with exponential rate $\lambda(x) \in (0, \infty)$, the
-state transitions from $x$ to $y$ with probability $K(x,y)$.
-
-We found that the associated semigroup $(P_t)$ satisfies the Kolmogorov
-backward equation $P'_t = Q P_t$ with
-
-$$
-    Q(x, y) := \lambda(x) (K(x, y) - I(x, y))
-$$ (qeqagain)
-
-Show that $Q$ is an intensity matrix and that {eq}`genfl` holds.
-```
-
-```{exercise}
-:label: kolmogorov-fwd-3
-
-Prove {prf:ref}`intvsmk` by adapting the arguments in {prf:ref}`jctosg`.
-(This is nontrivial but worth at least trying.)
-
-Hint: The constant $m$ in the proof can be set to $\max_x |Q(x, x)|$.
-```
-
-
-## Solutions
-
 ```{solution} kolmogorov-fwd-1
+:class: dropdown
 
 Let $(P_t)$ be a Markov semigroup and let $Q$ be as defined in the statement
 of the exercise.
 
@@ -552,9 +522,9 @@ Fix $t \geq 0$ and $h > 0$.
 
 Combining the semigroup property and linearity with the restriction $P_0 = I$,
 we get
 
 $$
-    \frac{P_{t+h} - P_t}{h}
-    = \frac{P_t P_h - P_t}{h}
-    = \frac{P_t (P_h - I)}{h}
+\frac{P_{t+h} - P_t}{h}
+= \frac{P_t P_h - P_t}{h}
+= \frac{P_t (P_h - I)}{h}
 $$
 
 Taking $h \downarrow 0$ and using the definition of $Q$ give $P_t' = P_t Q$,
@@ -563,16 +533,35 @@ which is the Kolmogorov forward equation.
 
 For the backward equation we observe that
 
 $$
-    \frac{P_{t+h} - P_t}{h}
-    = \frac{P_h P_t - P_t}{h}
-    = \frac{(P_h - I) P_t}{h}
+\frac{P_{t+h} - P_t}{h}
+= \frac{P_h P_t - P_t}{h}
+= \frac{(P_h - I) P_t}{h}
 $$
 
 also holds.
 
 Taking $h \downarrow 0$ gives the Kolmogorov backward equation.
 ```
 
+```{exercise}
+:label: kolmogorov-fwd-2
+
+Recall {ref}`our model ` of jump chains with state-dependent jump
+intensities given by rate function $x \mapsto \lambda(x)$.
+
+After a wait time with exponential rate $\lambda(x) \in (0, \infty)$, the
+state transitions from $x$ to $y$ with probability $K(x,y)$.
+
+We found that the associated semigroup $(P_t)$ satisfies the Kolmogorov
+backward equation $P'_t = Q P_t$ with
+
+$$
+Q(x, y) := \lambda(x) (K(x, y) - I(x, y))
+$$ (qeqagain)
+
+Show that $Q$ is an intensity matrix and that {eq}`genfl` holds.
+```
 
 ```{solution} kolmogorov-fwd-2
+:class: dropdown
 
 Let $Q$ be as defined in {eq}`qeqagain`.
 
@@ -585,15 +574,24 @@ For the second, we use the fact that $K$ is a Markov matrix, so that, with
 $1$ as a column vector of ones,
 
 $$
-    Q 1
-    = \lambda (K 1 - 1)
-    = \lambda (1 - 1)
-    = 0
+Q 1
+= \lambda (K 1 - 1)
+= \lambda (1 - 1)
+= 0
 $$
 ```
 
+```{exercise}
+:label: kolmogorov-fwd-3
+
+Prove {prf:ref}`intvsmk` by adapting the arguments in {prf:ref}`jctosg`.
+(This is nontrivial but worth at least trying.)
+
+Hint: The constant $m$ in the proof can be set to $\max_x |Q(x, x)|$.
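+
+(One possible route, sketched here: with this choice of $m$, the matrix
+$\hat K := I + m^{-1} Q$ has nonnegative entries and unit row sums, and
+$Q = m (\hat K - I)$, so the construction in {prf:ref}`jctosg` can be
+repeated with constant rate $m$ and jump matrix $\hat K$.)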
+``` ```{solution} kolmogorov-fwd-3 +:class: dropdown Suppose that $Q$ is an intensity matrix, fix $t \geq 0$ and set $P_t = e^{tQ}$. @@ -618,12 +616,12 @@ Because $P_t$ has unit row sums and differentiation is linear, we can employ the Kolmogorov backward equation to obtain $$ - Q 1 - = Q P_t 1 - = \left( \frac{d}{d t} P_t \right) 1 - = \frac{d}{d t} (P_t 1) - = \frac{d}{d t} 1 - = 0 +Q 1 + = Q P_t 1 + = \left( \frac{d}{d t} P_t \right) 1 + = \frac{d}{d t} (P_t 1) + = \frac{d}{d t} 1 + = 0 $$ Hence $Q$ has zero row sums. @@ -632,7 +630,7 @@ We can use the definition of the matrix exponential to obtain, for any $x, y$ and $t \geq 0$, $$ - P_t(x, y) = \mathbb 1\{x = y\} + t Q(x, y) + o(t) +P_t(x, y) = \mathbb 1\{x = y\} + t Q(x, y) + o(t) $$ (otp) From this equality and the assumption that $P_t$ is a Markov matrix for all diff --git a/lectures/markov_prop.md b/lectures/markov_prop.md index 0d96371..72de0d4 100644 --- a/lectures/markov_prop.md +++ b/lectures/markov_prop.md @@ -913,6 +913,18 @@ probability $0.5$. Construct two different random variables with this distribution. ``` +```{solution} markov-prop-1 +:class: dropdown + +One example is to take $U$ to be uniform on $(0, 1)$ and set $X=0$ if $U < +0.5$ and $1$ otherwise. + +Then $X$ has the desired distribution. + +Alternatively, we could take $Z$ to be standard normal and set $X=0$ if $Z < +0$ and $1$ otherwise. +``` + ```{exercise} :label: markov-prop-2 @@ -926,57 +938,9 @@ Hints * Consider using the [binomial formula](https://en.wikipedia.org/wiki/Binomial_theorem). ``` - -```{exercise} -:label: markov-prop-3 - -Consider the distribution over $S^{n+1}$ previously shown in {eq}`mathjointd`, which is - -$$ - \mathbf P_\psi^n(x_0, x_1, \ldots, x_n) - = \psi(x_0) - P(x_0, x_1) - \times \cdots \times - P(x_{n-1}, x_n) -$$ - -Show that, for any Markov chain $(X_t)$ satisfying {eq}`markovpropd` -and $X_0 \sim \psi$, the restriction $(X_0, \ldots, X_n)$ has joint -distribution $\mathbf P_\psi^n$. -``` - - -```{exercise} -:label: markov-prop-4 - -Try to produce your own version of the figure {ref}`flow_fig` - -The initial condition is ``ψ_0 = binom.pmf(states, n, 0.25)`` where ``n = b + 1``. -``` - -## Solutions - -```{note} -code is currently not supported in `sphinx-exercise` -so code-cell solutions are immediately after this -solution block. -``` - -```{solution} markov-prop-1 - -This is easy. - -One example is to take $U$ to be uniform on $(0, 1)$ and set $X=0$ if $U < -0.5$ and $1$ otherwise. - -Then $X$ has the desired distribution. - -Alternatively, we could take $Z$ to be standard normal and set $X=0$ if $Z < -0$ and $1$ otherwise. -``` - - ```{solution} markov-prop-2 +:class: dropdown + Fixing $s, t \in \RR_+$ and $j \leq k$, we have $$ @@ -1015,8 +979,27 @@ $$ Hence {eq}`chapkol_ct2` holds, and the semigroup property is satisfied. ``` +```{exercise} +:label: markov-prop-3 + +Consider the distribution over $S^{n+1}$ previously shown in {eq}`mathjointd`, which is + +$$ +\mathbf P_\psi^n(x_0, x_1, \ldots, x_n) + = \psi(x_0) + P(x_0, x_1) + \times \cdots \times + P(x_{n-1}, x_n) +$$ + +Show that, for any Markov chain $(X_t)$ satisfying {eq}`markovpropd` +and $X_0 \sim \psi$, the restriction $(X_0, \ldots, X_n)$ has joint +distribution $\mathbf P_\psi^n$. +``` ```{solution} markov-prop-3 +:class: dropdown + Let $(X_t)$ be a Markov chain satisfying {eq}`markovpropd` and $X_0 \sim \psi$. When $n=0$, we have $\mathbf P_\psi^n = \mathbf P_\psi^0 = \psi$, and this @@ -1029,35 +1012,45 @@ defined above. 
Then $$ - \PP \{X_0 = x_0, \ldots, X_n = x_n\} - = \PP \{X_n = x_n \,|\, X_0 = x_0, \ldots, X_{n-1} = x_{n-1} \} - \\ - \times \PP \{X_0 = x_0, \ldots, X_{n-1} = x_{n-1}\} +\PP \{X_0 = x_0, \ldots, X_n = x_n\} += \PP \{X_n = x_n \,|\, X_0 = x_0, \ldots, X_{n-1} = x_{n-1} \} +\\ + \times \PP \{X_0 = x_0, \ldots, X_{n-1} = x_{n-1}\} $$ From the Markov property and the induction hypothesis, the right hand side is $$ +P (x_{n-1}, x_n ) +\mathbf P_\psi^{n-1}(x_0, x_1, \ldots, x_{n-1}) += P (x_{n-1}, x_n ) - \mathbf P_\psi^{n-1}(x_0, x_1, \ldots, x_{n-1}) - = - P (x_{n-1}, x_n ) - \psi(x_0) - P(x_0, x_1) - \times \cdots \times - P(x_{n-2}, x_{n-1}) + \psi(x_0) + P(x_0, x_1) + \times \cdots \times + P(x_{n-2}, x_{n-1}) $$ The last expression equals $\mathbf P_\psi^n$, which concludes the proof. ``` -```{solution} markov-prop-4 +```{exercise} +:label: markov-prop-4 + +Try to produce your own version of the figure {ref}`flow_fig` + +The initial condition is ``ψ_0 = binom.pmf(states, n, 0.25)`` where ``n = b + 1``. +``` + +```{solution-start} markov-prop-4 +:class: dropdown +``` + Here is one approach. (The statements involving ``glue`` are specific to this book and can be deleted by most readers. They store the output so it can be displayed elsewhere.) -``` ```{code-cell} ipython3 α = 0.6 @@ -1106,3 +1099,6 @@ plt.savefig("_static/lecture_specific/markov_prop/flow_fig.png") plt.show() ``` + +```{solution-end} +``` \ No newline at end of file diff --git a/lectures/memoryless.md b/lectures/memoryless.md index f92532f..249a9b6 100644 --- a/lectures/memoryless.md +++ b/lectures/memoryless.md @@ -413,30 +413,8 @@ In particular, consider the random variable $X$ defined as follows: Show that $X \sim \Exp(\lambda)$. ``` -```{exercise} -:label: memoryless-ex-2 - -Fix $\lambda = 0.5$ and $s=1.0$. - -Simulate 1,000 draws of $X$ using the algorithm above. - -Plot the fraction of the sample exceeding $t$ for each $t \geq 0$ (on a grid) -and compare to $t \mapsto e^{-\lambda t}$. - -Is the fit good? How about if the number of draws is increased? - -Are the results in line with those of the previous exercise? -``` - -## Solutions - -```{note} -code is currently not supported in `sphinx-exercise` -so code-cell solutions are immediately after this -solution block. -``` - ```{solution} memoryless-ex-1 +:class: dropdown Let $X$ be constructed as in the statement of the exercise and fix $t > 0$. @@ -445,27 +423,43 @@ Notice that $X > s + t$ if and only if $Y > s$ and $Z > t$. As a result of this fact and independence, $$ - \PP\{X > s + t\} - = \PP\{Y > s \} \PP\{Z > t\} - = e^{-\lambda(s + t)} +\PP\{X > s + t\} += \PP\{Y > s \} \PP\{Z > t\} += e^{-\lambda(s + t)} $$ At the same time, $X > s-t$ if and only if $Y > s-t$, so $$ - \PP\{X > s - t\} - = \PP\{Y > s - t \} - = e^{-\lambda(s - t)} +\PP\{X > s - t\} += \PP\{Y > s - t \} += e^{-\lambda(s - t)} $$ Either way, we have $X \sim \Exp(\lambda)$, as was to be shown. ``` +```{exercise} +:label: memoryless-ex-2 -```{solution} memoryless-ex-2 -Here's one solution, starting with 1,000 draws. +Fix $\lambda = 0.5$ and $s=1.0$. + +Simulate 1,000 draws of $X$ using the algorithm above. + +Plot the fraction of the sample exceeding $t$ for each $t \geq 0$ (on a grid) +and compare to $t \mapsto e^{-\lambda t}$. + +Is the fit good? How about if the number of draws is increased? + +Are the results in line with those of the previous exercise? ``` +```{solution-start} memoryless-ex-2 +:class: dropdown +``` + +Here's one solution, starting with 1,000 draws. 
+ ```{code-cell} ipython3 λ = 0.5 np.random.seed(1234) @@ -494,24 +488,19 @@ ax.legend() plt.show() ``` -```{solution} memoryless-ex-2 -**Solution Continued:** - -The fit is already very close, which matches with the theory in Exercise 1. +The fit is already very close, which matches with the theory in [](memoryless-ex-1). The two lines become indistinguishable as $n$ is increased further. -``` ```{code-cell} ipython3 - fig, ax = plt.subplots() draws = draw_X(n=10_000) empirical_exceedance = [np.mean(draws > t) for t in t_grid] ax.plot(t_grid, np.exp(- λ * t_grid), label='exponential exceedance') ax.plot(t_grid, empirical_exceedance, label='empirical exceedance') ax.legend() - plt.show() ``` - +```{solution-end} +``` diff --git a/lectures/poisson.md b/lectures/poisson.md index db4aa0a..5ae0dce 100644 --- a/lectures/poisson.md +++ b/lectures/poisson.md @@ -433,32 +433,15 @@ distribution with rate $T \lambda$. Try first with $\lambda = 0.5$ and $T=10$. ``` - -```{exercise} -:label: poisson-ex-2 - -In the lecture we used the fact that $\Binomial(n, \theta) \approx \Poisson(n \theta)$ when $n$ is large and $\theta$ is small. - -Investigate this relationship by plotting the distributions side by side. - -Experiment with different values of $n$ and $\theta$. -``` - -## Solutions - -```{note} -code is currently not supported in `sphinx-exercise` -so code-cell solutions are immediately after this -solution block. +```{solution-start} poisson-ex-1 +:class: dropdown ``` -```{solution} poisson-ex-1 Here is one solution. The figure shows that the fit is already good with a modest sample size. Increasing the sample size will further improve the fit. -``` ```{code-cell} ipython3 λ = 0.5 @@ -503,11 +486,26 @@ ax.legend(fontsize=12) plt.show() ``` +```{solution-end} +``` + + +```{exercise} +:label: poisson-ex-2 + +In the lecture we used the fact that $\Binomial(n, \theta) \approx \Poisson(n \theta)$ when $n$ is large and $\theta$ is small. + +Investigate this relationship by plotting the distributions side by side. + +Experiment with different values of $n$ and $\theta$. +``` + +```{solution-start} poisson-ex-2 +:class: dropdown +``` -```{solution} poisson-ex-2 Here is one solution. It shows that the approximation is good when $n$ is large and $\theta$ is small. -``` ```{code-cell} ipython3 def binomial(k, n, p): @@ -534,4 +532,6 @@ fig.tight_layout() plt.show() ``` +```{solution-end} +``` diff --git a/lectures/uc_mc_semigroups.md b/lectures/uc_mc_semigroups.md index b3a0f51..2fcc02f 100644 --- a/lectures/uc_mc_semigroups.md +++ b/lectures/uc_mc_semigroups.md @@ -19,8 +19,6 @@ kernelspec: In our previous lecture we covered some of the general theory of operator semigroups. - - Next we translate these results into the setting of Markov semigroups. The Markov semigroups are defined on a countable set $S$. @@ -506,52 +504,19 @@ Let $P$ be a Markov matrix on $S$ and identify it with the linear operator in {eq}`mmismo`. Verify the claims in {eq}`propp`. ``` -```{exercise} -:label: uc-mc-semigroups-ex-2 - -Prove the claim in {prf:ref}`scintcon`. -``` - -```{exercise} -:label: uc-mc-semigroups-ex-3 - -Confirm that $Q$ defined in {eq}`poissonq` induces a bounded linear operator on -$\ell_1$ via {eq}`imislo`. -``` - -```{exercise} -:label: uc-mc-semigroups-ex-4 - -Let $K$ be defined on $\ZZ_+ \times \ZZ_+$ by $K(i, j) = \mathbb 1\{j = i + 1\}$. - -Show that, with $K^m$ representing the $m$-th matrix product of $K$ with itself, -we have $K^m(i, j) = \mathbb 1\{j = i + m\}$ for any $i, j \in \ZZ_+$. 
-``` - -```{exercise} -:label: uc-mc-semigroups-ex-5 - -Let $Q$ be any intensity matrix on $S$. - -Prove that the jump chain decomposition of $Q$ is in fact a jump chain pair. - -Prove that, in addition, this decomposition $(\lambda, K)$ satisfies {eq}`jcinmat`. -``` - - -## Solutions - ```{solution} uc-mc-semigroups-ex-1 +:class: dropdown + To determine the norm of $P$, we use the definition in {eq}`norml`. If $f \in \ell_1$ and $\| f \| \leq 1$, then $$ - \| f P \| - \leq \sum_y \sum_x |f(x)| P(x, y) - = \sum_x |f(x)| \sum_y P(x, y) - = \sum_x |f(x)| - = \| f \| +\| f P \| +\leq \sum_y \sum_x |f(x)| P(x, y) += \sum_x |f(x)| \sum_y P(x, y) += \sum_x |f(x)| += \| f \| $$ Hence $\| P \| \leq 1$. @@ -564,17 +529,24 @@ Now pick any $\phi \in \dD$. Clearly $\phi P \geq 0$, and $$ - \sum_y (\phi P)(y) - =\sum_y \sum_x \phi (x) P(x, y) - =\sum_x \phi (x) \sum_y P(x, y) - = 1 +\sum_y (\phi P)(y) +=\sum_y \sum_x \phi (x) P(x, y) +=\sum_x \phi (x) \sum_y P(x, y) += 1 $$ Hence $\phi P \in \dD$ as claimed. ``` +```{exercise} +:label: uc-mc-semigroups-ex-2 + +Prove the claim in {prf:ref}`scintcon`. +``` ```{solution} uc-mc-semigroups-ex-2 +:class: dropdown + Here is one solution. Let $Q$ be an intensity matrix on $S$. @@ -603,27 +575,34 @@ Let $f \in \ell_1$ be defined by $f(z) = \mathbb 1\{z = x\}$. Since $\|f\| = 1$, we have $$ - \| Q \| - \geq \| f Q \| - = \sum_y \left| \sum_z f(z) Q(z, y) \right| - = \sum_y | Q(x, y) | - \geq | Q(x, x) | +\| Q \| +\geq \| f Q \| += \sum_y \left| \sum_z f(z) Q(z, y) \right| += \sum_y | Q(x, y) | +\geq | Q(x, x) | $$ Contradiction. ``` +```{exercise} +:label: uc-mc-semigroups-ex-3 + +Confirm that $Q$ defined in {eq}`poissonq` induces a bounded linear operator on +$\ell_1$ via {eq}`imislo`. +``` ```{solution} uc-mc-semigroups-ex-3 +:class: dropdown Linearity is obvious so we focus on boundedness. For any $f \in \ell_1$ and this choice of $Q$, we have $$ - \sum_y |(fQ)(y)| - \leq \sum_y \sum_x |f(x) Q(x, y)| - \leq \lambda \sum_y \sum_x |f(y) - f(y+1)| +\sum_y |(fQ)(y)| +\leq \sum_y \sum_x |f(x) Q(x, y)| +\leq \lambda \sum_y \sum_x |f(y) - f(y+1)| $$ Applying the triangle inequality, we see that the right hand side is dominated @@ -634,7 +613,17 @@ as required. ``` +```{exercise} +:label: uc-mc-semigroups-ex-4 + +Let $K$ be defined on $\ZZ_+ \times \ZZ_+$ by $K(i, j) = \mathbb 1\{j = i + 1\}$. + +Show that, with $K^m$ representing the $m$-th matrix product of $K$ with itself, +we have $K^m(i, j) = \mathbb 1\{j = i + m\}$ for any $i, j \in \ZZ_+$. +``` + ```{solution} uc-mc-semigroups-ex-4 +:class: dropdown The statement $K^m(i, j) = \mathbb 1\{j = i + m\}$ holds by definition when $m=1$. @@ -653,8 +642,18 @@ $$ Applying the definition $K(i, j) = \mathbb 1\{j = i + 1\}$ completes verification of the claim. ``` +```{exercise} +:label: uc-mc-semigroups-ex-5 + +Let $Q$ be any intensity matrix on $S$. + +Prove that the jump chain decomposition of $Q$ is in fact a jump chain pair. + +Prove that, in addition, this decomposition $(\lambda, K)$ satisfies {eq}`jcinmat`. +``` ```{solution} uc-mc-semigroups-ex-5 +:class: dropdown Let $Q$ be an intensity matrix and let $(\lambda, K)$ be the jump chain decomposition of $Q$. @@ -668,10 +667,10 @@ $\lambda(x) > 0$. Then $$ - \sum_y K(x, y) - = \sum_{y \not= x} K(x,y) - = \sum_{y \not= x} \frac{Q(x,y)}{\lambda(x)} - = 1 +\sum_y K(x, y) += \sum_{y \not= x} K(x,y) += \sum_{y \not= x} \frac{Q(x,y)}{\lambda(x)} += 1 $$ If, on the other hand, $\lambda(x) = 0$, then