@@ -209,9 +209,12 @@ If $z_n \neq 0$ then
 \cdots
 + \left|\frac{λ_{n-1}}{λ_n}\right|^k \, |z_{n-1}| \, \|\textbf v_{n-1}\|
 ```
-Now, each of the terms $\left|\frac{λ_j}{λ_n}\right|^k$
-for $j = 1, \ldots, {n-1}$ goes to zero for $k \to \infty$ due to the
-ascending eigenvalue ordering of Equation (1).
+A consequence of the ascending eigenvalue ordering of Equation (1) is that
+```math
+\left|\frac{λ_{j}}{λ_n}\right| < 1 \qquad \text{for all $j = 1, \ldots, n-1$}
+```
+and therefore each of the terms $\left|\frac{λ_j}{λ_n}\right|^k$
+for $j = 1, \ldots, n-1$ goes to zero as $k \to \infty$.
 Therefore overall
 ```math
 \left\| \frac{\mathbf{A}^k \mathbf{x}^{(1)}}{λ_n^k} - z_n \textbf v_n \right\|
@@ -322,39 +325,43 @@ md"We also note in passing that $\|x\|_\infty$ seems to converge to the dominant

 # ╔═╡ b69a8d6c-364a-4951-afd9-24588ac10b64
 md"""
-Based on this idea we formulate the algorithm
+Note that $\|x\|_\infty = \max_{i=1,\ldots n} |x_i|$
+implies that there exists an index $m \in \{1,\ldots n\}$
+such that $\|x\|_\infty = |x_m|$.
+
+Keeping this in mind we formulate the algorithm

 !!! info "Algorithm 1: Power iterations"
     Given a diagonalisable matrix $\mathbf A \in \mathbb R^{n\times n}$
     and an initial guess $\mathbf x^{(1)} \in \mathbb R^n$ we iterate
     for $k = 1, 2, \ldots$:
     1. Set $\mathbf y^{(k)} = \mathbf A \, \mathbf x^{(k)}$
-    2. Find the index $m$ such that $\left|y_m^{(k)}\right|$ is the largest (by magnitude) element of $\textbf{y}^{(k)}$, i.e.
-       ```math
-       m = \textrm{argmax}_i \left|y_i^{(k)}\right|
-       ```
-    3. Set $α^{(k)} = \frac{1}{y^{(k)}_m}$ and $β^{(k)} = \frac{y^{(k)}_m}{x^{(k)}_m}$.
-    4. Set $\mathbf x^{(k+1)} = α^{(k)} \mathbf y^{(k)}$
+    2. Find the index $m$ such that $\left|y_m^{(k)}\right| = \left\|\mathbf y^{(k)}\right\|_\infty$
+    3. Compute $α^{(k)} = \frac{1}{y^{(k)}_m}$ and set $β^{(k)} = \frac{y^{(k)}_m}{x^{(k)}_m}$ *(see below for why)*
+    4. Set $\mathbf x^{(k+1)} = α^{(k)} \mathbf y^{(k)}$ *(Normalisation)*

+We obtain $β^{(k)}$ as the estimate of the dominant eigenvalue
+and $\mathbf x^{(k)}$ as the estimate of the corresponding eigenvector.
 """

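To make the four steps concrete, here is a minimal Julia sketch of Algorithm 1. The function name `power_method_sketch` and the fixed iteration count `maxiter` are illustrative assumptions; the notebook's own implementation follows further below.

```julia
using LinearAlgebra

# Minimal sketch of Algorithm 1: power iteration with ∞-norm style normalisation.
# Assumes A is diagonalisable with a single dominant eigenvalue λ_n,
# and that x[m] ≠ 0 (which holds after the first normalisation).
function power_method_sketch(A, x; maxiter=100)
    β = zero(float(eltype(A)))
    for k in 1:maxiter
        y = A * x              # Step 1: y⁽ᵏ⁾ = A x⁽ᵏ⁾
        m = argmax(abs.(y))    # Step 2: index with |y_m| = ‖y‖_∞
        α = 1 / y[m]           # Step 3: normalisation factor ...
        β = y[m] / x[m]        #         ... and eigenvalue estimate β⁽ᵏ⁾
        x = α * y              # Step 4: rescale such that x[m] == 1
    end
    (λ = β, x = x)
end
```

Called as `power_method_sketch(A, randn(size(A, 2)))`, the returned `λ` and `x` should approach the dominant eigenpair under the stated assumptions.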
 # ╔═╡ 4ebfc860-e179-4c76-8fc5-8c1089301078
 md"""
-Note that in this algorithm
-$y_m^{(k)} = \|y^{(k)}\|_\infty$
-and $α^{(k)} = 1 / \|y^{(k)}\|_\infty$.
+In this algorithm $α^{(k)} = \frac{1}{y^{(k)}_m} = \frac{1}{\|y^{(k)}\|_\infty}$.
 Step 4 is thus performing the normalisation we developed above.
-Furthermore instead of employing $\|x\|_\infty$
-as the eigenvalue estimate it employs
-$β^{(k)}$, which is just a scaled version of $\|x\|_\infty$.
-To see why this should be expected to be an estimate of the
-dominant eigenvalue $λ_n$ assume that
-$\textbf{x}^{(k)}$ is already close to the associated eigenvector $\mathbf v_n$.
-Then $\mathbf A\, \mathbf{x}^{(k)}$ is almost $\lambda_n \mathbf{x}^{(k)}$,
-such that the ratio $β^{(k)} = \frac{y^{(k)}_m}{x^{(k)}_m}$
-becomes close to $λ_n$ itself.
-
-An implementation of this power method algorithm in:
+
+Furthermore $β^{(k)}$ is now computed as the eigenvalue estimate
+instead of $\|x\|_\infty$. The idea is
+that if $\textbf{x}^{(k)}$ is already close to the eigenvector $\mathbf v_n$
+associated with the dominant eigenvalue $λ_n$, then
+```math
+\mathbf{y}^{(k)} = \mathbf A\, \mathbf{x}^{(k)} \approx \mathbf A\, \mathbf{v}_n = λ_n \mathbf{v}_n \approx λ_n \mathbf{x}^{(k)}
+\quad \Rightarrow \quad
+y^{(k)}_m \approx λ_n x^{(k)}_m
+\quad \Rightarrow \quad
+λ_n \approx \frac{y^{(k)}_m}{x^{(k)}_m} = β^{(k)}
+```
+
+An implementation of this power method algorithm is:
 """

 # ╔═╡ 01186225-602f-4637-99f2-0a6dd569a703
@@ -752,7 +759,7 @@ enables us to find the **eigenvalue of $\mathbf A$ closest to $σ$**.
 md"""
 A naive application of Algorithm 1 would first compute $\mathbf{P} = (\mathbf A - σ \mathbf I)^{-1}$ and then apply
 ```math
-\mathbf y^{(k)} = ( \mathbf A - σ \mathbf I)^{-1} \mathbf x^{(k)} \mathbf{P} \mathbf x^{(k)}
+\mathbf y^{(k)} = \mathbf{P} \mathbf x^{(k)}
 ```
 in each step of the power iteration.
 However, for many problems the explicit computation of the inverse
@@ -779,7 +786,7 @@ md"""
     we iterate for $k = 1, 2, \ldots$:
     1. $\textcolor{brown}{\text{Solve }(\mathbf A - σ \mathbf I) \mathbf y^{(k)} = \mathbf x^{(k)}
        \text{ for }\mathbf y^{(k)}}$.
-    2. Find the index $m$ such that $y^{(k)}_m = \|\mathbf y^{(k)}\|$
+    2. Find the index $m$ such that $|y^{(k)}_m| = \|\mathbf y^{(k)}\|_\infty$
     3. Set $α^{(k)} = \frac{1}{y^{(k)}_m}$ and $\textcolor{brown}{β^{(k)} = σ + \frac{x^{(k)}_m}{y^{(k)}_m}}$.
     4. Set $\mathbf x^{(k+1)} = α^{(k)} \mathbf y^{(k)}$
 """
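Here is a minimal Julia sketch of this shifted inverse iteration, in the spirit of the remark above that the inverse should not be formed explicitly. The name `inverse_iteration_sketch`, the fixed iteration count and the choice of an LU factorisation are illustrative assumptions:

```julia
using LinearAlgebra

# Sketch of shifted inverse iteration: instead of forming (A - σI)⁻¹,
# factorise A - σI once and solve a linear system in every step.
function inverse_iteration_sketch(A, σ, x; maxiter=100)
    fact = lu(A - σ * I)       # one O(n³) factorisation, reused below
    β = σ
    for k in 1:maxiter
        y = fact \ x           # Step 1: solve (A - σI) y⁽ᵏ⁾ = x⁽ᵏ⁾, O(n²) per step
        m = argmax(abs.(y))    # Step 2: index with |y_m| = ‖y‖_∞
        α = 1 / y[m]           # Step 3: normalisation factor ...
        β = σ + x[m] / y[m]    #         ... and eigenvalue estimate β⁽ᵏ⁾
        x = α * y              # Step 4: rescale such that x[m] == 1
    end
    (λ = β, x = x)
end
```

Factorising `A - σ*I` once and reusing the factors in every solve keeps the per-iteration cost at roughly $O(n^2)$ for dense matrices, which is the point of replacing the explicit inverse by a linear solve.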