
Commit 96337ab

Post-lecture update
1 parent cc0b245 commit 96337ab

1 file changed

src/08_Eigenvalue_problems.jl

Lines changed: 33 additions & 26 deletions
@@ -209,9 +209,12 @@ If $z_n \neq 0$ then
     \cdots
     + \left|\frac{λ_{n-1}}{λ_n}\right|^k \, |z_{n-1}| \, \|\textbf v_{n-1}\|
 ```
-Now, each of the terms $\left|\frac{λ_j}{λ_n}\right|^k$
-for $j = 1, \ldots, {n-1}$ goes to zero for $k \to \infty$ due to the
-ascending eigenvalue ordering of Equation (1).
+A consequence of the ascending eigenvalue ordering of equation (1) is that
+```math
+\left|\frac{λ_{j}}{λ_n}\right| < 1 \qquad \text{for all $j = 1, \ldots, n-1$}
+```
+and therefore that each of the terms $\left|\frac{λ_j}{λ_n}\right|^k$
+for $j = 1, \ldots, {n-1}$ goes to zero for $k \to \infty$.
 Therefore overall
 ```math
 \left\| \frac{\mathbf{A}^k \mathbf{x}^{(1)}}{λ_n^k} - z_n \textbf v_n \right\|
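
As a quick numerical check of this convergence statement (not part of the commit; the small symmetric matrix and starting vector are arbitrary illustrative choices), the following sketch confirms that $\mathbf{A}^k \mathbf{x}^{(1)} / λ_n^k$ approaches $z_n \mathbf v_n$. For a symmetric matrix, `eigen` returns the eigenvalues in ascending order, matching the ordering of equation (1).

```julia
using LinearAlgebra

A = [2.0 1.0; 1.0 3.0]     # small symmetric test matrix (arbitrary choice)
λ, V = eigen(A)            # eigenvalues in ascending order, λ[end] is dominant
x = [1.0, 1.0]             # initial guess with z_n ≠ 0
z = V \ x                  # coefficients of x in the eigenvector basis

for k in (1, 5, 10, 20)
    err = norm(A^k * x / λ[end]^k - z[end] * V[:, end])
    println("k = $k:  error = $err")   # decays like |λ_{n-1}/λ_n|^k
end
```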
@@ -322,39 +325,43 @@ md"We also note in passing that $\|x\|_\infty$ seems to converge to the dominant

 # ╔═╡ b69a8d6c-364a-4951-afd9-24588ac10b64
 md"""
-Based on this idea we formulate the algorithm
+Note that $\|x\|_\infty = \max_{i=1,\ldots n} |x_i|$
+implies that there exists an index $m \in \{1,\ldots n\}$,
+such that $\|x\|_\infty = |x_m|$.
+
+Keeping this in mind we formulate the algorithm

 !!! info "Algorithm 1: Power iterations"
     Given a diagonalisable matrix $\mathbf A \in \mathbb R^{n\times n}$
     and an initial guess $\mathbf x^{(1)} \in \mathbb R^n$ we iterate
     for $k = 1, 2, \ldots$:
     1. Set $\mathbf y^{(k)} = \mathbf A \, \mathbf x^{(k)}$
-    2. Find the index $m$ such that $\left|y_m^{(k)}\right|$ is the largest (by magnitude) element of $\textbf{y}^{(k)}$, i.e.
-       ```math
-       m = \textrm{argmax}_i \left|y_i^{(k)}\right|
-       ```
-    3. Set $α^{(k)} = \frac{1}{y^{(k)}_m}$ and $β^{(k)} = \frac{y^{(k)}_m}{x^{(k)}_m}$.
-    4. Set $\mathbf x^{(k+1)} = α^{(k)} \mathbf y^{(k)}$
+    2. Find the index $m$ such that $\left|y_m^{(k)}\right| = \left\|y^{(k)}\right\|_\infty$
+    3. Compute $α^{(k)} = \frac{1}{y^{(k)}_m}$ and set $β^{(k)} = \frac{y^{(k)}_m}{x^{(k)}_m}$ *(see below why)*
+    4. Set $\mathbf x^{(k+1)} = α^{(k)} \mathbf y^{(k)}$ *(Normalisation)*

+    We obtain $β^{(k)}$ as the estimate of the dominant eigenvalue
+    and $\mathbf x^{(k)}$ as the estimate of the corresponding eigenvector.
 """

 # ╔═╡ 4ebfc860-e179-4c76-8fc5-8c1089301078
 md"""
-Note that in this algorithm
-$y_m^{(k)} = \| y^{(k)} \|_\infty$
-and $α^{(k)} = 1 / \| y^{(k)} \|_\infty$.
+In this algorithm $α^{(k)} = \frac{1}{y^{(k)}_m} = \frac{1}{\| y^{(k)} \|_\infty}$.
 Step 4 is thus performing the normalisation we developed above.
-Furthermore instead of employing $\| x \|_\infty$
-as the eigenvalue estimate it employs
-$β^{(k)}$, which is just a scaled version of $\| x \|_\infty$.
-To see why this should be expected to be an estimate of the
-dominant eigenvalue $λ_n$ assume that
-$\textbf{x}^{(k)}$ is already close to the associated eigenvector $\mathbf v_n$.
-Then $\mathbf A\, \mathbf{x}^{(k)}$ is almost $\lambda_n \mathbf{x}^{(k)}$,
-such that the ratio $β^{(k)} = \frac{y^{(k)}_m}{x^{(k)}_m}$
-becomes close to $λ_n$ itself.
-
-An implementation of this power method algorithm in:
+
+Furthermore $β^{(k)}$ is now computed as the eigenvalue estimate
+instead of $\| x \|_\infty$. The idea is
+that if $\textbf{x}^{(k)}$ is already close to the eigenvector $\mathbf v_n$
+associated to the dominant eigenvalue $λ_n$, then
+```math
+\mathbf{y}^{(k)} = \mathbf A\, \mathbf{x}^{(k)} \approx \mathbf A\, \mathbf{v}_n = λ_n \mathbf{v}_n \approx λ_n \mathbf{x}^{(k)}
+\quad \Rightarrow \quad
+y^{(k)}_m \approx λ_n x^{(k)}_m
+\quad \Rightarrow \quad
+λ_n \approx \frac{y^{(k)}_m}{x^{(k)}_m} = β^{(k)}
+```
+
+An implementation of this power method algorithm is:
 """

 # ╔═╡ 01186225-602f-4637-99f2-0a6dd569a703
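
The implementation cell referenced above is not included in this hunk. For reference, here is a minimal sketch of Algorithm 1 in Julia; the function name `power_method`, its keyword arguments, and the stopping test are illustrative choices, not the notebook's own code.

```julia
using LinearAlgebra

function power_method(A; x=randn(size(A, 2)), maxiter=100, tol=1e-10)
    β = zero(eltype(x))
    for k in 1:maxiter
        y = A * x                  # Step 1
        m = argmax(abs.(y))        # Step 2: |y_m| = ‖y‖_∞
        α = 1 / y[m]               # Step 3
        β = y[m] / x[m]            #         eigenvalue estimate
        x_new = α * y              # Step 4: normalisation, ‖x_new‖_∞ = 1
        norm(x_new - x) < tol && return (λ=β, v=x_new)
        x = x_new
    end
    (λ=β, v=x)
end

A = [4.0 1.0; 2.0 3.0]    # eigenvalues 2 and 5 (illustrative matrix)
power_method(A)            # λ ≈ 5.0, v ≈ dominant eigenvector with ‖v‖_∞ = 1
```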
@@ -752,7 +759,7 @@ enables us to find the **eigenvalue of $\mathbf A$ closest to $σ$**.
 md"""
 A naive application of Algorithm 1 would first compute $\mathbf{P} = (\mathbf A - σ \mathbf I)^{-1}$ and then apply
 ```math
-\mathbf y^{(k)} = (\mathbf A - σ \mathbf I)^{-1} \mathbf x^{(k)} \mathbf{P} \mathbf x^{(k)}
+\mathbf y^{(k)} = \mathbf{P} \mathbf x^{(k)}
 ```
 in each step of the power iteration.
 However, for many problems the explicit computation of the inverse
@@ -779,7 +786,7 @@ md"""
 we iterate for $k = 1, 2, \ldots$:
 1. $\textcolor{brown}{\text{Solve }(\mathbf A - σ \mathbf I) \mathbf y^{(k)} = \mathbf x^{(k)}
    \text{ for }\mathbf y^{(k)}}$.
-2. Find the index $m$ such that $y^{(k)}_m = \|\mathbf y^{(k)}\|$
+2. Find the index $m$ such that $|y^{(k)}_m| = \|\mathbf y^{(k)}\|_\infty$
 3. Set $α^{(k)} = \frac{1}{y^{(k)}_m}$ and $\textcolor{brown}{β^{(k)} = σ + \frac{x^{(k)}_m}{y^{(k)}_m}}$.
 4. Set $\mathbf x^{(k+1)} = α^{(k)} \mathbf y^{(k)}$
 """
