|
40 | 40 | md""" |
41 | 41 | This lecture will provide a selective overview of standard diagonalisation algorithms, contrasting their respective scopes of applicability and main ingredients. Our main angle will be to understand questions of numerical stability in the key ingredients of these algorithms. Instead of taking a proof-guided approach, we will mostly follow a computational approach using the increased-precision and interval techniques discussed in previous lectures. |
42 | 42 |
|
43 | | -A more comprehensive treatment of the topic can be found in the book [Numerical Methods for Large Eigenvalue Problems](https://epubs.siam.org/doi/book/10.1137/1.9781611970739) by Youssef Saad as well as the [Lecture notes on Large Scale Eigenvalue Problems](https://people.inf.ethz.ch/arbenz/ewp/Lnotes/lsevp.pdf) by Peter Arbenz. |
| 43 | +For a more gentle introduction see the [chapter on eigenvalue problems in the Numerical Analysis lecture](https://teaching.matmat.org/numerical-analysis/11_Eigenvalue_problems.html). |
| 44 | +For a more comprehensive treatment of the topic see the book [Numerical Methods for Large Eigenvalue Problems](https://epubs.siam.org/doi/book/10.1137/1.9781611970739) by Yousef Saad as well as the [Lecture notes on Large Scale Eigenvalue Problems](https://people.inf.ethz.ch/arbenz/ewp/Lnotes/lsevp.pdf) by Peter Arbenz. |
44 | 45 |
|
45 | 46 | In our discussion we will always take $A \in \mathbb{C}^{N\times N}$ to be a Hermitian matrix and we will seek approximations to the eigenpairs $(\lambda_i, x_i) \in \mathbb{R} \times \mathbb{C}^N$, i.e. |
46 | 47 | ```math |
@@ -467,6 +468,10 @@ Suppose now that $\{v_1, v_2, \ldots, v_m\}$ is an orthonormal basis for $\mathc |
467 | 468 | ```math |
468 | 469 | \tilde{x}_i = V \tilde{y}_i |
469 | 470 | ``` |
| 471 | +or, written componentwise, |
| 472 | +```math |
| 473 | +(\tilde{x}_i)_α = \sum_k V_{αk} (\tilde{y}_i)_k |
| 474 | +``` |
470 | 475 | equation $(\ast)$ becomes |
471 | 476 | ```math |
472 | 477 | \left\langle v_j, A V \tilde{y}_i - \tilde{λ}_i V \tilde{y}_i \right\rangle = 0 \qquad \forall j = 1, \ldots, m. |
|
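The projection step above (assemble the small matrix with entries $\langle v_j, A v_k \rangle$, diagonalise it, and lift the eigenvectors back via $\tilde{x}_i = V \tilde{y}_i$) can be sketched in a few lines of Julia. This is a hypothetical illustration: the matrix `A` and the orthonormal basis `V` below are randomly generated assumptions, not taken from the notebook.

```julia
using LinearAlgebra

# Minimal Rayleigh-Ritz sketch (illustrative data, made up for this example)
n, m = 100, 5
M = randn(n, n)
A = Hermitian(M + M')              # example Hermitian matrix
V = Matrix(qr(randn(n, m)).Q)      # orthonormal basis of an m-dimensional subspace

AV = Hermitian(V' * A * V)         # projected m×m matrix with entries ⟨v_j, A v_k⟩
λ̃, Ỹ = eigen(AV)                  # Ritz values λ̃_i and coefficient vectors ỹ_i
X̃ = V * Ỹ                         # Ritz vectors x̃_i = V ỹ_i as columns
```

Since the columns of `V` are orthonormal, the resulting Ritz pairs satisfy the Galerkin condition $V^H (A \tilde{x}_i - \tilde{λ}_i \tilde{x}_i) = 0$ up to round-off.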
682 | 687 |
|
683 | 688 | # ╔═╡ e1fd4f96-a465-46d8-8c78-cde7c5325e6f |
684 | 689 | md""" |
685 | | -### Optional: Forming a good subspace |
| 690 | +### Forming a good subspace |
686 | 691 |
|
687 | 692 | Projection methods and the Rayleigh-Ritz procedure are a key ingredient of essentially all iterative diagonalisation approaches employed nowadays. A full discussion of the standard techniques employed to build the reduced subspace $\mathcal{S}$ is out of the scope of this lecture. An incomplete list of techniques worth mentioning is: |
688 | 693 | - If a Krylov basis is constructed and employed for $\mathcal{S}$ one obtains diagonalisation methods such as *Lanczos* or *Arnoldi*. |
@@ -882,6 +887,9 @@ What preconditioner should be chosen is often highly problem dependent. However, |
882 | 887 |
|
883 | 888 | """ |
884 | 889 |
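As an aside to the remark that the choice of preconditioner is highly problem dependent: one of the simplest generic choices is a diagonal (Jacobi-style) preconditioner. The sketch below is a hypothetical illustration with made-up data; the matrix `A`, the iterate `x`, and the names `P` and `p` are assumptions for this example only.

```julia
using LinearAlgebra

# Hypothetical sketch of a diagonal (Jacobi-style) preconditioner
n = 100
A = Symmetric(randn(n, n) / sqrt(n) + Diagonal(1.0:n))  # example matrix (made up)
x = normalize(randn(n))        # current approximate eigenvector
λ = dot(x, A * x)              # Rayleigh quotient of the iterate
r = A * x - λ * x              # residual of the approximate eigenpair

# Invert only the (shifted) diagonal part of A: cheap to apply,
# but can behave poorly when entries of diag(A) - λ become tiny.
P = Diagonal(diag(A) .- λ)
p = P \ r                      # preconditioned search direction
```

Applying `P` costs only $O(N)$ per iteration, which is why diagonal preconditioners remain a common first attempt before problem-specific alternatives are tried.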
|
| 890 | +# ╔═╡ 652c5513-8263-4211-833f-aa8180711d68 |
| 891 | +TODO("mention Nyström approximation") |
| 892 | + |
885 | 893 | # ╔═╡ 4b0cd6fc-50ec-4e12-82e0-30fe2ceb98ce |
886 | 894 | md""" |
887 | 895 | ### Step sizes and LOBPCG |
@@ -3368,6 +3376,7 @@ version = "1.9.2+0" |
3368 | 3376 | # ╠═68fff309-a3ec-4ebc-bcfe-bafeac6ef3f5 |
3369 | 3377 | # ╟─f4bb1d69-1f06-4769-ac6e-081ddaa437d7 |
3370 | 3378 | # ╟─8481465b-171b-4374-8ec9-d7c19bd23d81 |
| 3379 | +# ╠═652c5513-8263-4211-833f-aa8180711d68 |
3371 | 3380 | # ╟─4b0cd6fc-50ec-4e12-82e0-30fe2ceb98ce |
3372 | 3381 | # ╟─9fdc92f6-158f-4e62-82e0-b6c5ce96d9a8 |
3373 | 3382 | # ╠═94e6808b-c89c-4431-90d8-5e252e38d834 |
|