Commit 56ea239

Post-lecture update
1 parent 247b319 commit 56ea239

File tree

2 files changed

+90
-40
lines changed


src/04_Nonlinear_equations.jl

Lines changed: 15 additions & 4 deletions
Original file line number | Diff line number | Diff line change
@@ -723,9 +723,6 @@ md"""
723723
while fixed-point iterations for $g_D$ diverge.
724724
"""
725725

726-
# ╔═╡ b7ee0316-5e6b-4615-a934-8a4716b00b2d
727-
# TODO(md"Table contrast scalar versus multi-dimensional case in a table.")
728-
729726
# ╔═╡ 5d7b3d35-3456-48df-ad22-0ffccaa2f529
730727
md"""
731728
### Stopping criteria and residual
@@ -1096,6 +1093,20 @@ Clearly in both cases these ratios become approximately constant as $k$ gets lar
10961093
(and debug implementation bugs)!
10971094
"""
10981095

1096+
# ╔═╡ 312693df-616f-4e87-ba6c-0426ac606f61
1097+
md"""
1098+
!!! info "One-dimensional vs. multi-dimensional settings for fixed-point methods"
1099+
We consider a fixed-point problem $g(x_\ast) = x_\ast$, which we want to solve
1100+
with fixed-point iterations $x^{(k+1)} = g(x^{(k)})$ giving rise to a residual $r^{(k)} = x^{(k+1)} - x^{(k)}$.
1101+
1102+
| | Convergence | Linear convergence rate $C$ | $\|r^{(k)}\| < ε$ a good stopping criterion |
1103+
| ------------------------- | ------------------ | ------------ | ----- |
1104+
| one-dimensional $g:\mathbb{R}\to\mathbb{R}$ | $\vert g'(x_\ast) \vert < 1$ | $C=\vert g'(x_\ast) \vert$ | $\vert g'(x_\ast)\vert \neq 1$ |
1105+
| multi-dimensional $g:\mathbb{R}^n\to\mathbb{R}^n$ | $\Vert J_g(\mathbf{x}_\ast) \Vert < 1$ | $C = \Vert J_g(\mathbf{x}_\ast) \Vert$ | $\Vert J_g(\mathbf{x}_\ast) \Vert \neq 1$ |
1106+
1107+
where the last column gives the condition under which the simple residual-norm check $\|r^{(k)}\| < ε$ is a good stopping criterion.
1108+
"""
1109+
10991110
# ╔═╡ 68f63b13-1326-4cb6-8db1-3b043b2ad78e
11001111
md"""
11011112
## Optional: Bisection method
@@ -3137,7 +3148,6 @@ version = "1.13.0+0"
31373148
# ╠═50639d02-55d5-4fcb-8335-13fd7f6b7624
31383149
# ╠═da60beec-74e7-4b3f-aa09-27b806054896
31393150
# ╟─ec8b8273-3724-4ea9-91d3-259390abc55d
3140-
# ╠═b7ee0316-5e6b-4615-a934-8a4716b00b2d
31413151
# ╟─5d7b3d35-3456-48df-ad22-0ffccaa2f529
31423152
# ╟─284f3fa4-ce24-4b99-8bb7-6f74a4589550
31433153
# ╠═899698d1-9dee-4f82-9171-f1b49aefcabe
@@ -3169,6 +3179,7 @@ version = "1.13.0+0"
31693179
# ╟─dc136c3d-4534-4d66-86ca-36114bb825bb
31703180
# ╟─d22a7de4-67ef-4664-951b-8fb46116a7bc
31713181
# ╟─314f733c-c3c2-4679-bb7b-c94b96b54961
3182+
# ╟─312693df-616f-4e87-ba6c-0426ac606f61
31723183
# ╟─68f63b13-1326-4cb6-8db1-3b043b2ad78e
31733184
# ╟─b96fd4b2-13d1-4e42-ac6c-074d595f4750
31743185
# ╠═7839caac-64e9-443d-bd49-03930fbe7aba

src/05_Direct_methods.jl

Lines changed: 75 additions & 36 deletions
Original file line number | Diff line number | Diff line change
@@ -39,16 +39,16 @@ TableOfContents()
3939
md"""
4040
# Direct methods for linear systems
4141
42-
In the previous chapter on polynomial interpolation we were already
43-
confronted with the need to solve linear systems, that is a system
44-
of equations of the form
42+
In the previous chapter, in our discussion of multi-dimensional
43+
fixed-point problems we were already
44+
confronted with the need to solve linear systems
4545
```math
4646
\tag{1}
47-
\mathbf{A} \mathbf{x} = \mathbf{b},
47+
\mathbf{A} \mathbf{x} = \mathbf{b}.
4848
```
49-
where we are given a matrix $\mathbf{A} \in \mathbb{R}^{n\times n}$
50-
as well as a right-hand side $\mathbf{b} \in \mathbb{R}^n$.
51-
As the solution we seek the unknown $\mathbf{x} \in \mathbb{R}^n$.
49+
Here, we are given a matrix $\mathbf{A} \in \mathbb{R}^{n\times n}$
50+
as well as a right-hand side $\mathbf{b} \in \mathbb{R}^n$ and seek
51+
the unknown $\mathbf{x} \in \mathbb{R}^n$.
5252
"""
5353

5454
# ╔═╡ 419d11bf-2561-49ca-a6e7-40c8d8b88b24
@@ -284,6 +284,15 @@ function forward_substitution(L, b)
284284
x
285285
end
286286
```
287+
```julia
288+
x[1] = b[1] / L[1, 1]
289+
```
290+
```julia
291+
row_sum += L[i, j] * x[j]
292+
```
293+
```julia
294+
x[i] = 1 / L[i, i] * (b[i] - row_sum)
295+
```
287296
""")
288297

289298
# ╔═╡ 6e6fce09-d14c-459d-9dd6-5f8fd1ee8248
@@ -783,7 +792,7 @@ md"... and exactly achieves the task of swapping the last two rows, but leaving
783792

784793
# ╔═╡ c42c3c63-b96c-4af0-a276-72ebd96fe23c
785794
md"""
786-
We notice that even though $\mathbf D$ cannot be permuted it is possible to obtain an LU factorisation $\mathbf{L} \mathbf{U} = \mathbf{P} \mathbf{D}$
795+
We notice that even though $\mathbf D$ cannot be factorised directly, it is possible to obtain an LU factorisation $\mathbf{L} \mathbf{U} = \mathbf{P} \mathbf{D}$
787796
if we additionally allow the freedom to cleverly permute the rows of $\mathbf D$.
788797
789798
This is in fact a general result:
@@ -1175,13 +1184,59 @@ Full matrices are the "standard case" and for them memory constraints usually se
11751184
linear problems with more than around $n \simeq 60000$ unknowns cannot be treated.
11761185
"""
11771186

1178-
# ╔═╡ 946fb6e6-e7e5-4aee-b566-b7c33ead7789
1187+
# ╔═╡ 30620d0c-b73a-4de3-95a1-6fa199199289
1188+
md"""
1189+
!!! warning "Examples of full matrices"
1190+
- **Generic matrices are full.** If we can say nothing specific about a matrix, all its entries $A_{ij}$ may be non-zero. Thus we have $n^2 = O(n^2)$ non-zero entries and the matrix is full.
1191+
- **Upper-triangular matrices are full.** For an upper-triangular matrix all entries $A_{ij}$ with $i>j$ are zero. The first row thus has all $n$ entries, the second $n-1$ and the last just $1$. We obtain $\sum_{i=1}^n (n-i+1) = \sum_{i=1}^n i = \frac{n (n+1)}{2} = O(n^2)$ stored entries. The same holds for lower-triangular matrices.
1192+
- **Symmetric matrices are full.** If a matrix is symmetric, we can get away with storing only the upper triangle. As above, the storage cost is $O(n^2)$.
1193+
"""
1194+
1195+
# ╔═╡ b4d56685-21e5-4bc8-88b8-14e0478f30e3
11791196
md"""
1197+
In contrast to these full matrices stand the sparse matrices:
1198+
11801199
!!! info "Definition: Sparse matrix"
11811200
A matrix $A \in \mathbb{R}^{n\times n}$ is called **sparse** if the number
11821201
of non-zero elements is of the order of $n$, i.e. scales as $O(n)$.
11831202
1184-
If we know which elements are zero, we can thus save memory by only storing those elements, which are non-zero. For this purpose the [`SparseArrays` Julia package](https://docs.julialang.org/en/v1/stdlib/SparseArrays/) implements many primitives
1203+
If we know which elements are zero, we can save memory by storing only those elements which are non-zero.
1204+
1205+
A particularly important type of sparse matrix is the following:
1206+
"""
1207+
1208+
# ╔═╡ 53b44779-c8a6-457d-b13e-52d4d7cc3a47
1209+
md"""
1210+
!!! info "Definition: Band matrix"
1211+
A matrix $A \in \mathbb{R}^{n\times n}$ is called a **band matrix**
1212+
with bandwidth $d$ if $A_{ij} = 0$ when $|j - i| > d$.
1213+
Every row of the matrix contains at most $2d + 1$ non-zero elements
1214+
and the number of non-zeros thus scales as $O(nd)$.
1215+
The idea is that $d \ll n$, i.e. that $d$ is much smaller than $n$.
1216+
1217+
An example of a banded matrix with bandwidth $5$ is:
1218+
"""
1219+
1220+
# ╔═╡ 5dc919c8-01e1-4b23-b406-492562c9e338
1221+
band = spdiagm(-5 => -ones(100),
1222+
-3 => 3ones(102),
1223+
-2 => ones(103),
1224+
0 => -10ones(105),
1225+
1 => -ones(104),
1226+
3 => 3ones(102),
1227+
4 => ones(101),
1228+
5 => -ones(100))
1229+
1230+
# ╔═╡ e38aa377-38f8-408d-8133-58b8e99edc7b
1231+
md"""
1232+
!!! warning "Examples of sparse matrices"
1233+
- **Band matrices.** By definition a band matrix with bandwidth $d$ has at most $2d + 1$ non-zero entries per row, so the number of non-zeros is at most $(2d+1)\,n = O(nd)$, which is $O(n)$ for fixed $d \ll n$.
1234+
- **Diagonal matrix.** A diagonal matrix has exactly one non-zero entry per row, meaning that its storage cost is $n = O(n)$. This is not surprising given that a diagonal matrix is just a band matrix with bandwidth $d = 0$.
1235+
"""
1236+
1237+
# ╔═╡ 4eed2dc5-a7e0-4e77-aeea-bdb9480840ae
1238+
md"""
1239+
The [`SparseArrays` Julia package](https://docs.julialang.org/en/v1/stdlib/SparseArrays/) implements many primitives
11851240
for working with sparse arrays.
11861241
11871242
This includes generating random sparse arrays:
@@ -1202,31 +1257,11 @@ M = [ 1 0 0 5;
12021257
# ╔═╡ d6b61d9a-817e-4c09-9a83-fde7ca591d23
12031258
sparse(M)
12041259

1205-
# ╔═╡ fb87aa13-1fd6-4c13-8ba4-68caef5f775b
1260+
# ╔═╡ 4f94086e-19be-46c6-a674-da5193b3498c
12061261
md"""
12071262
Using the `SparseArray` data structure from `SparseArrays` consistently allows one to fully exploit sparsity when solving a problem. As a result the **storage costs scale only as $O(n)$**. With our laptop of 32 GiB memory we can thus tackle problems with around $n\simeq 4 \cdot 10^9$ unknowns --- much better than the $60000$ we found when using full matrices.
1208-
1209-
Finally we introduce a special kind of sparse matrix:
1210-
1211-
!!! info "Definition: Band matrix"
1212-
A matrix $A \in \mathbb{R}^{n\times n}$ is called a **band matrix**
1213-
with bandwidth $d$ if $A_{ij} = 0$ when $|j - i| > d$.
1214-
Every line of the matrix contains at most $2d + 1$ non-zero elements
1215-
and the number of non-zeros thus scales as $O(nd)$.
1216-
1217-
An example for a banded matrix with bandwidth $5$ is:
12181263
"""
12191264

1220-
# ╔═╡ 5dc919c8-01e1-4b23-b406-492562c9e338
1221-
band = spdiagm(-5 => -ones(100),
1222-
-3 => 3ones(102),
1223-
-2 => ones(103),
1224-
0 => -10ones(105),
1225-
1 => -ones(104),
1226-
3 => 3ones(102),
1227-
4 => ones(101),
1228-
5 => -ones(100))
1229-
12301265
# ╔═╡ 74709cb2-0be8-4b24-aa9d-926ef059fe2d
12311266
md"""
12321267
### Memory: LU factorisation of full matrices
@@ -1297,7 +1332,7 @@ md"""
12971332
| type of $n\times n$ matrix | memory usage | comment |
12981333
| ------------------------- | ------------ | ------------ |
12991334
| full matrix | $O(n^2)$ |
1300-
| general sparse matrix | $O(n)$ | fill-in
1335+
| general sparse matrix | $O(n^2)$ | fill-in
13011336
| banded matrix, band width $d$ | $O(n\,d)$ | stays banded
13021337
13031338
"""
@@ -1482,7 +1517,7 @@ md"""
14821517
| type of $n\times n$ matrix | computational cost | memory usage |
14831518
| ------------------------- | ------------------ | ------------ |
14841519
| full matrix | $O(n^3)$ | $O(n^2)$ |
1485-
| general sparse matrix
1520+
| general sparse matrix | $O(n^3)$ | $O(n^2)$ |
14861521
| banded matrix, band width $d$ | $O(n\,d^2)$ | $O(n\,d)$ |
14871522
14881523
"""
@@ -3351,13 +3386,17 @@ version = "1.13.0+0"
33513386
# ╠═1f34c503-6815-4639-a932-9fe2d7e1b4c2
33523387
# ╟─ef60d36e-02c7-44e1-8910-241f99a89a38
33533388
# ╟─3ab0ccfe-df7c-4aed-b83f-f0113001610b
3354-
# ╟─946fb6e6-e7e5-4aee-b566-b7c33ead7789
3389+
# ╟─30620d0c-b73a-4de3-95a1-6fa199199289
3390+
# ╟─b4d56685-21e5-4bc8-88b8-14e0478f30e3
3391+
# ╟─53b44779-c8a6-457d-b13e-52d4d7cc3a47
3392+
# ╟─5dc919c8-01e1-4b23-b406-492562c9e338
3393+
# ╟─e38aa377-38f8-408d-8133-58b8e99edc7b
3394+
# ╟─4eed2dc5-a7e0-4e77-aeea-bdb9480840ae
33553395
# ╠═4a0dc42d-31a8-49bc-8db0-5f77474d8785
33563396
# ╟─d0a852cb-0d68-4807-861f-5302a5aeecd4
33573397
# ╠═8353d1fd-75cc-41e0-866c-5234634219d5
33583398
# ╠═d6b61d9a-817e-4c09-9a83-fde7ca591d23
3359-
# ╟─fb87aa13-1fd6-4c13-8ba4-68caef5f775b
3360-
# ╟─5dc919c8-01e1-4b23-b406-492562c9e338
3399+
# ╟─4f94086e-19be-46c6-a674-da5193b3498c
33613400
# ╟─74709cb2-0be8-4b24-aa9d-926ef059fe2d
33623401
# ╠═68396ec1-444b-4c56-966b-5c70a2208d34
33633402
# ╠═8b51fcc1-6287-43cd-a44c-26c7423e8d7a
