src/05_Direct_methods.jl
75 additions & 36 deletions
@@ -39,16 +39,16 @@ TableOfContents()
 md"""
 # Direct methods for linear systems
 
-In the previous chapter on polynomial interpolation we were already
-confronted with the need to solve linear systems, that is a system
-of equations of the form
+In the previous chapter in our discussion on multi-dimensional
+fixed-point problems we were already
+confronted with the need to solve linear systems
 ```math
 \tag{1}
-\mathbf{A} \mathbf{x} = \mathbf{b},
+\mathbf{A} \mathbf{x} = \mathbf{b}.
 ```
-where we are given a matrix $\mathbf{A} \in \mathbb{R}^{n\times n}$
-as well as a right-hand side $\mathbf{b} \in \mathbb{R}^n$.
-As the solution we seek the unknown $\mathbf{x} \in \mathbb{R}^n$.
+Here, we are given a matrix $\mathbf{A} \in \mathbb{R}^{n\times n}$
+as well as a right-hand side $\mathbf{b} \in \mathbb{R}^n$ and seek
+the unknown $\mathbf{x} \in \mathbb{R}^n$.
 """
 
 # ╔═╡ 419d11bf-2561-49ca-a6e7-40c8d8b88b24
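The system $\mathbf{A} \mathbf{x} = \mathbf{b}$ introduced in this hunk can be solved in Julia with the backslash operator; a minimal sketch, using an illustrative $2\times 2$ matrix of our own choosing:

```julia
using LinearAlgebra

# Solve A x = b with Julia's backslash, which dispatches to a direct
# method (LU factorisation for a generic square matrix).
A = [3.0 1.0;
     1.0 2.0]
b = [9.0, 8.0]
x = A \ b        # [2.0, 3.0]
A * x ≈ b        # true
```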
@@ -284,6 +284,15 @@ function forward_substitution(L, b)
     x
 end
 ```
+```julia
+x[1] = b[1] / L[1, 1]
+```
+```julia
+row_sum += L[i, j] * x[j]
+```
+```julia
+x[i] = 1 / L[i, i] * (b[i] - row_sum)
+```
 """)
 
 # ╔═╡ 6e6fce09-d14c-459d-9dd6-5f8fd1ee8248
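The three snippets added in this hunk are the key update steps of `forward_substitution`; pieced together, assuming the usual row-wise loop structure, the full algorithm reads:

```julia
# Solve L x = b for a lower-triangular matrix L by forward substitution:
# each x[i] follows from the already-computed x[1], ..., x[i-1].
function forward_substitution(L, b)
    n = length(b)
    x = zeros(n)
    x[1] = b[1] / L[1, 1]
    for i in 2:n
        row_sum = 0.0
        for j in 1:i-1
            row_sum += L[i, j] * x[j]
        end
        x[i] = 1 / L[i, i] * (b[i] - row_sum)
    end
    x
end

# Illustrative example: the exact solution is [1, 2, 3].
L = [2.0 0.0 0.0;
     1.0 3.0 0.0;
     4.0 5.0 6.0]
b = [2.0, 7.0, 32.0]
x = forward_substitution(L, b)   # [1.0, 2.0, 3.0]
```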
@@ -783,7 +792,7 @@ md"... and exactly achieves the task of swapping the last two rows, but leaving
 
 # ╔═╡ c42c3c63-b96c-4af0-a276-72ebd96fe23c
 md"""
-We notice that even though $\mathbf D$ cannot be permuted it is possible to obtain an LU factorisation $\mathbf{L} \mathbf{U} = \mathbf{P} \mathbf{D}$
+We notice that even though $\mathbf D$ cannot be factorised it is possible to obtain an LU factorisation $\mathbf{L} \mathbf{U} = \mathbf{P} \mathbf{D}$
 if we additionally allow the freedom to cleverly permute the rows of $\mathbf D$.
 
 This is in fact a general result:
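This claim can be checked numerically: Julia's `lu` pivots rows by default, so even a matrix with a zero $(1,1)$ pivot factorises. A small sketch with an illustrative matrix:

```julia
using LinearAlgebra

# D has a zero pivot in position (1,1), so LU without row exchanges fails,
# but the row-permuted matrix D[p, :] factorises as L * U.
D = [0.0 1.0;
     1.0 1.0]
F = lu(D)               # pivoted LU: F.L * F.U == D[F.p, :]
F.L * F.U ≈ D[F.p, :]   # true
```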
@@ -1175,13 +1184,59 @@ Full matrices are the "standard case" and for them memory constraints usually se
 linear problems with more than around $n \simeq 60000$ unknowns cannot be treated.
 """
 
-# ╔═╡ 946fb6e6-e7e5-4aee-b566-b7c33ead7789
+# ╔═╡ 30620d0c-b73a-4de3-95a1-6fa199199289
+md"""
+!!! warning "Examples of full matrices"
+    - **Generic matrices are full.** If we can say nothing specific about a matrix, all its entries $A_{ij}$ may be non-zero. Thus we have $n^2 = O(n^2)$ non-zero entries and the matrix is full.
+    - **Upper-triangular matrices are full.** For an upper-triangular matrix all entries $A_{ij}$ with $i > j$ are zero. The first row thus has all $n$ entries, the second $n-1$ and the last just $1$. We obtain $\sum_{i=1}^n (n-i+1) = n(n+1) - \sum_{i=1}^n i = \frac{n(n+1)}{2} = O(n^2)$. The same holds for lower-triangular matrices.
+    - **Symmetric matrices are full.** If a matrix is symmetric, then we can get away with storing only the upper triangle. As above, the storage cost is still $O(n^2)$.
+"""
+
+# ╔═╡ b4d56685-21e5-4bc8-88b8-14e0478f30e3
 md"""
+In contrast to these stand the sparse matrices:
+
 !!! info "Definition: Sparse matrix"
     A matrix $A \in \mathbb{R}^{n\times n}$ is called **sparse** if the number
     of non-zero elements is of the order of $n$.
 
-If we know which elements are zero, we can thus save memory by only storing those elements, which are non-zero. For this purpose the [`SparseArrays` Julia package](https://docs.julialang.org/en/v1/stdlib/SparseArrays/) implements many primitives
+If we know which elements are zero, we can thus save memory by only storing those elements which are non-zero.
+
+A particularly important type of sparse matrix is the following:
+"""
+
+# ╔═╡ 53b44779-c8a6-457d-b13e-52d4d7cc3a47
+md"""
+!!! info "Definition: Band matrix"
+    A matrix $A \in \mathbb{R}^{n\times n}$ is called a **band matrix**
+    with bandwidth $d$ if $A_{ij} = 0$ when $|j - i| > d$.
+    Every row of the matrix contains at most $2d + 1$ non-zero elements
+    and the number of non-zeros thus scales as $O(nd)$.
+    The idea is that $d \ll n$, i.e. that $d$ is much smaller than $n$.
+
+An example for a band matrix with bandwidth $5$ is:
+"""
+
+# ╔═╡ 5dc919c8-01e1-4b23-b406-492562c9e338
+band = spdiagm(-5 => -ones(100),
+               -3 => 3ones(102),
+               -2 => ones(103),
+                0 => -10ones(105),
+                1 => -ones(104),
+                3 => 3ones(102),
+                4 => ones(101),
+                5 => -ones(100))
+
+# ╔═╡ e38aa377-38f8-408d-8133-58b8e99edc7b
+md"""
+!!! warning "Examples of sparse matrices"
+    - **Band matrices.** By definition a band matrix has at most $2d + 1$ non-zero entries per row. So for fixed $d$ the number of non-zeros is at most $(2d+1)\,n = O(n)$.
+    - **Diagonal matrices.** A diagonal matrix has exactly one non-zero entry per row, meaning that its storage cost is $n = O(n)$. This is not surprising given that a diagonal matrix is just a band matrix with bandwidth $d = 0$.
+"""
+
+# ╔═╡ 4eed2dc5-a7e0-4e77-aeea-bdb9480840ae
+md"""
+For working with sparse matrices the [`SparseArrays` Julia package](https://docs.julialang.org/en/v1/stdlib/SparseArrays/) implements many primitives
 for working with sparse arrays.
 
 This includes generating random sparse arrays:
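The $O(nd)$ scaling for band matrices added in this hunk is easy to verify with `SparseArrays`; a small sketch using an illustrative tridiagonal (bandwidth $d = 1$) matrix:

```julia
using SparseArrays

# A tridiagonal matrix (bandwidth d = 1): only 3n - 2 entries are stored,
# instead of the n^2 entries a dense matrix would need.
n = 1000
T = spdiagm(-1 => ones(n - 1),
             0 => -2ones(n),
             1 => ones(n - 1))
nnz(T)          # 2998 = 3n - 2 stored non-zeros, versus n^2 = 1_000_000
```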
@@ -1202,31 +1257,11 @@ M = [ 1 0 0 5;
 # ╔═╡ d6b61d9a-817e-4c09-9a83-fde7ca591d23
 sparse(M)
 
-# ╔═╡ fb87aa13-1fd6-4c13-8ba4-68caef5f775b
+# ╔═╡ 4f94086e-19be-46c6-a674-da5193b3498c
 md"""
 Using the `SparseArray` data structure from `SparseArrays` consistently allows us to fully exploit sparsity when solving a problem. As a result the **storage costs scale only as $O(n)$**. With our laptop of 32 GiB memory we can thus tackle problems with around $n \simeq 4 \cdot 10^9$ unknowns --- much better than the $60000$ we found when using full matrices.
-
-Finally we introduce a special kind of sparse matrix:
-
-!!! info "Definition: Band matrix"
-    A matrix $A \in \mathbb{R}^{n\times n}$ is called a **band matrix**
-    with bandwidth $d$ if $A_{ij} = 0$ when $|j - i| > d$.
-    Every row of the matrix contains at most $2d + 1$ non-zero elements
-    and the number of non-zeros thus scales as $O(nd)$.
-
-An example for a band matrix with bandwidth $5$ is:
 """
 
-# ╔═╡ 5dc919c8-01e1-4b23-b406-492562c9e338
-band = spdiagm(-5 => -ones(100),
-               -3 => 3ones(102),
-               -2 => ones(103),
-                0 => -10ones(105),
-                1 => -ones(104),
-                3 => 3ones(102),
-                4 => ones(101),
-                5 => -ones(100))
-
 # ╔═╡ 74709cb2-0be8-4b24-aa9d-926ef059fe2d
 md"""
 ### Memory: LU factorisation of full matrices
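The `sparse(M)` conversion in this hunk's context makes the storage saving concrete; a sketch where only the first row of `M` matches the hunk header and the remaining rows are illustrative:

```julia
using SparseArrays

# Converting a dense matrix to CSC sparse storage: only the non-zero
# entries (plus their index arrays) are kept.
M = [1 0 0 5;
     0 3 0 0;
     0 0 4 0;
     2 0 0 1]
S = sparse(M)
nnz(S)          # 6 non-zeros stored instead of 16 dense entries
```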
@@ -1297,7 +1332,7 @@ md"""
 | type of $n\times n$ matrix | memory usage | comment |