Commit 42ef960

Update ddasp_exercise_slides.tex
^H -> ^T improved wording for special matrices
1 parent f08df17 commit 42ef960

slides/ddasp_exercise_slides.tex

Lines changed: 51 additions & 51 deletions
@@ -320,7 +320,7 @@ \subsection{Exercise 02}
 
 \begin{frame}{Matrix Factorization from Eigenwert Problem for Symmetric Matrix}
 
-for \underline{symmetric} matrix $\bm{A}_{M \times M} = \bm{A}_{M \times M}^H$ we can have a special case of diagonalization
+for \underline{Hermitian} matrix $\bm{A}_{M \times M} = \bm{A}_{M \times M}^H$ we can have a special case of diagonalization
 
 $$\bm{A} = \bm{Q} \bm{\Lambda} \bm{Q}^{-1} = \bm{Q} \bm{\Lambda} \bm{Q}^{H}$$
 
@@ -359,12 +359,12 @@ \subsection{Exercise 02}
 
 \begin{frame}[t]{Matrix Factorization from Eigenwert Problem for Symmetric Matrix}
 
-for a normal matrix $\bm{A}$ (such as symmetric, i.e. $\bm{A}^H \bm{A} = \bm{A} \bm{A}^H$ )
+for a \underline{normal} matrix $\bm{A}$ (i.e. it holds $\bm{A}^H \bm{A} = \bm{A} \bm{A}^H$ )
 
 there is the fundamental spectral theorem
 $$\bm{A} = \bm{Q} \bm{\Lambda} \bm{Q}^{H}$$
 
-i.e. diagonalization in terms of eigenvectors in full rank matrix $\bm{Q}$ and eigenvalues in $\bm{\Lambda}\in\mathbb{R}$
+i.e. diagonalization in terms of eigenvectors in unitary matrix $\bm{Q}$ and eigenvalues in $\bm{\Lambda}\in\mathbb{C}$
 
 What does $\bm{A}$ with an eigenvector $\bm{q}$?
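The spectral theorem in this hunk is easy to verify numerically. A minimal numpy sketch (not part of the slides; it uses a made-up real symmetric matrix, so Q is orthogonal and the eigenvalues are real):

import numpy as np

# made-up Hermitian (here: real symmetric) test matrix
A = np.array([[2., 1.],
              [1., 3.]])

# eigh is the eigensolver for Hermitian matrices: real eigenvalues,
# orthonormal (unitary) eigenvector matrix Q
lam, Q = np.linalg.eigh(A)

# A maps an eigenvector q onto lambda * q
assert np.allclose(A @ Q[:, 0], lam[0] * Q[:, 0])

# Q is unitary and the spectral theorem A = Q Lambda Q^H holds
assert np.allclose(Q.conj().T @ Q, np.eye(2))
assert np.allclose(Q @ np.diag(lam) @ Q.conj().T, A)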

@@ -754,7 +754,7 @@ \subsection{Exercise 03}
 $$
 matrix factorization in terms of SVD
 $$
-\bm{A} = \bm{U} \bm{\Sigma} \bm{V}^H
+\bm{A} = \bm{U} \bm{\Sigma} \bm{V}^T
 =
 \begin{bmatrix}
 0 & 1 & 0 \\
@@ -773,7 +773,7 @@ \subsection{Exercise 03}
 0 & 1\\
 1 & 0
 \end{bmatrix}
-\right)^H
+\right)^T
 $$
 
 What is the rank of $\bm{A}$?
@@ -833,7 +833,7 @@ \subsection{Exercise 03}
 
 Due to rank $R=1$, we expect only one non-zero singular value $\sigma_1$, therefore the dimension
 of row space (which is always equal to the dimension of column space) is $R=1$, i.e. we have $R$
-independent vectors that span the row space and $r$ independent vectors that span the column space, so these spaces are lines in both 2D spaces in our example.
+independent vectors that span the row space and $R$ independent vectors that span the column space, so these spaces are lines in both 2D spaces in our example.
 
 The $\bm{U}$ space has vectors in $\mathbb{R}^{M=2}$, the $\bm{V}$ space has vectors in $\mathbb{R}^{N=2}$.
 
@@ -868,7 +868,7 @@ \subsection{Exercise 03}
 1\\3
 \end{bmatrix},
 $$
-i.e. the transposed row found in the outer product. So, all $\bm{X}^\mathrm{T} \bm{y}$,
+i.e. the transposed row found in the above outer product. So, all $\bm{X}^\mathrm{T} \bm{y}$,
 except those solutions that produce $\bm{X}^\mathrm{T} \bm{y} = \bm{0}$ (these $\bm{y}$
 belong to the left null space), are multiples of $[1, 3]^\mathrm{T}$.
 
@@ -1468,7 +1468,7 @@ \subsection{Exercise 03}
 \drawmatrix[bbox style={fill=C1}, bbox height=\N, bbox width=\N, fill=C2, height=\N, width=\rank\N]{V}_\mathtt{N \times N}^H
 $
 \end{center}
-$\cdot$ Flat / fat matrix $\bm{A}$, \quad $M$ rows $<$ $N$ columns, \quad full row rank ($r=M$), \quad right inverse $\bm{A}^{\dagger_r} = \bm{A}^H (\bm{A} \bm{A}^H )^{-1}$
+$\cdot$ Flat / fat matrix $\bm{A}$, \quad $M$ rows $<$ $N$ columns, \quad full row rank ($r=M$), \quad a right inverse $\bm{A}^{\dagger_r} = \bm{A}^H (\bm{A} \bm{A}^H )^{-1}$
 such that $\bm{A} \bm{A}^{\dagger_r} = \bm{I}$ (i.e. projection to row space)
 \begin{center}
 $
@@ -1481,7 +1481,7 @@ \subsection{Exercise 03}
 \drawmatrix[bbox style={fill=C1}, bbox height=\N, bbox width=\N, fill=C2, height=\N, width=\rank\M]{V}_\mathtt{N \times N}^H
 $
 \end{center}
-$\cdot$ Tall / thin matrix $\bm{A}$, \quad $M$ rows $>$ $N$ columns, \quad full column rank ($r=N$), \quad left inverse $\bm{A}^{\dagger_l} = (\bm{A}^H \bm{A})^{-1} \bm{A}^H$ such that $\bm{A}^{\dagger_l} \bm{A} = \bm{I}$ (i.e. projection to row space)
+$\cdot$ Tall / thin matrix $\bm{A}$, \quad $M$ rows $>$ $N$ columns, \quad full column rank ($r=N$), \quad a left inverse $\bm{A}^{\dagger_l} = (\bm{A}^H \bm{A})^{-1} \bm{A}^H$ such that $\bm{A}^{\dagger_l} \bm{A} = \bm{I}$ (i.e. projection to row space)
 \begin{center}
 $
 \def\M{1.4}
@@ -1502,7 +1502,7 @@ \subsection{Exercise 03}
 $\cdot$ Sum of rank-1 matrices\qquad
 $\bm{A} = \bm{U} \bm{\Sigma} \bm{V}^H = \sum\limits_{r=1}^{R} \sigma_r \quad \textcolor{C0}{\bm{u}}_r \quad \textcolor{C2}{\bm{v}}^H_r$
 
-$\cdot$ not full-rank cases need (general) pseudo-inverse $\bm{A}^\dagger = \bm{V} \Sigma^\dagger \bm{U}^H$
+$\cdot$ not full-rank cases need (a general) pseudo-inverse $\bm{A}^\dagger = \bm{V} \Sigma^\dagger \bm{U}^H$
 
 \hspace{4.25cm}
 \textcolor{C0}{column space} $\perp$ \textcolor{C4}{left null space}
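For the not-full-rank case mentioned here, the general pseudo-inverse $\bm{A}^\dagger = \bm{V} \Sigma^\dagger \bm{U}^H$ can be sketched in numpy as follows (a made-up rank-1 example, not taken from the exercise):

import numpy as np

# made-up rank-1 matrix, so neither a left nor a right inverse exists
A = np.array([[1., 2.],
              [3., 6.]])

U, s, Vh = np.linalg.svd(A)

# invert only the non-zero singular values for Sigma^dagger
tol = 1e-12
s_dagger = np.zeros_like(s)
s_dagger[s > tol] = 1. / s[s > tol]
A_dagger = Vh.conj().T @ np.diag(s_dagger) @ U.conj().T

# agrees with numpy's built-in pseudo-inverse
assert np.allclose(A_dagger, np.linalg.pinv(A))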
@@ -1620,8 +1620,8 @@ \subsection{Exercise 04}
 0 & 1\\
 1 & 0
 \end{bmatrix}
-\right)^H=
-\bm{U} \bm{\Sigma} \bm{V}^H
+\right)^T=
+\bm{U} \bm{\Sigma} \bm{V}^T
 $$
 
 Can we solve for the model parameter vector $\bm{\theta}$ given the feature matrix $\bm{X}$ and the output data vector $\bm{y}$?
@@ -1662,14 +1662,14 @@ \subsection{Exercise 04}
 optimization problem in least squares sense: $\min_{\text{wrt }\bm{\theta}} \lVert\bm{e}\rVert_2^2 = \min_{\text{wrt }\bm{\theta}} \lVert\bm{y} - \bm{X} \bm{\theta}\rVert_2^2$
 %
 
-recall that $\lVert\bm{y} - \bm{X} \bm{\theta}\rVert_2^2 = (\bm{y} - \bm{X} \bm{\theta})^H (\bm{y} - \bm{X} \bm{\theta})$
+recall that $\lVert\bm{y} - \bm{X} \bm{\theta}\rVert_2^2 = (\bm{y} - \bm{X} \bm{\theta})^T (\bm{y} - \bm{X} \bm{\theta})$
 \begin{align*}
 \lVert \bm{y} - \bm{X} \bm{\theta}\rVert_2 &= \sqrt{(-3 - 3\theta_1)^2 + (4-8\theta_2)^2 + (0-2)^2}\\
 J(\theta_1, \theta_2) = \lVert\bm{y} - \bm{X} \bm{\theta}\rVert_2^2 &= (-3 - 3\theta_1)^2 + (4-8\theta_2)^2 + (0-2)^2
 \end{align*}
 %
 $$
-\nabla J(\theta_1, \theta_2) =
+\text{grad} J(\theta_1, \theta_2) =
 \begin{bmatrix}
 \frac{\partial J}{\partial \theta_1}\\
 \frac{\partial J}{\partial \theta_2}
@@ -1688,7 +1688,7 @@ \subsection{Exercise 04}
 \end{bmatrix}
 $$
 %
-minimum at $\nabla J(\theta_1, \theta_2) = \bm{0}$, hence
+minimum at $\text{grad} J(\theta_1, \theta_2) = \bm{0}$, hence
 %
 $$
 \hat{\bm{\theta}}
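The stationarity condition $\text{grad} J = \bm{0}$ for the cost function displayed above can be checked symbolically; a short sympy sketch (not part of the slides):

import sympy as sp

theta1, theta2 = sp.symbols('theta1 theta2', real=True)

# cost function as displayed in the hunk above
J = (-3 - 3*theta1)**2 + (4 - 8*theta2)**2 + (0 - 2)**2

# gradient and its zero (the least squares minimum)
grad = [sp.diff(J, theta1), sp.diff(J, theta2)]
print(sp.solve(grad, [theta1, theta2]))   # {theta1: -1, theta2: 1/2}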
@@ -1761,8 +1761,8 @@ \subsection{Exercise 04}
 0 & 1\\
 1 & 0
 \end{bmatrix}
-\right)^H=
-\bm{U} \bm{\Sigma} \bm{V}^H
+\right)^T=
+\bm{U} \bm{\Sigma} \bm{V}^T
 $$
 
 \begin{center}
@@ -1789,13 +1789,13 @@ \subsection{Exercise 04}
 %
 shortest path of $\bm{e}$ to column space means that $\bm{e}$ is orthogonal to column space
 
-hence, $\bm{e}$ must live purely in left null space, i.e. $\bm{X}^H \bm{e} = \bm{0}$ holds, this yields
+hence, $\bm{e}$ must live purely in left null space, i.e. $\bm{X}^T \bm{e} = \bm{0}$ holds, this yields
 %
-$$\bm{X}^H (\bm{y} - \bm{X} \hat{\bm{\theta}}) = \bm{0} \quad \rightarrow \quad \bm{X}^H \bm{y} = \bm{X}^H \bm{X} \hat{\bm{\theta}} \quad \rightarrow \quad
-(\bm{X}^H \bm{X})^{-1} \bm{X}^H \bm{y} = \hat{\bm{\theta}}
+$$\bm{X}^T (\bm{y} - \bm{X} \hat{\bm{\theta}}) = \bm{0} \quad \rightarrow \quad \bm{X}^T \bm{y} = \bm{X}^T \bm{X} \hat{\bm{\theta}} \quad \rightarrow \quad
+(\bm{X}^T \bm{X})^{-1} \bm{X}^T \bm{y} = \hat{\bm{\theta}}
 $$
 
-middle equation: normal equations, right equation: least squares error solution using left inverse $\bm{X}^{\dagger_l} = (\bm{X}^H \bm{X})^{-1} \bm{X}^H$ such that
+middle equation: normal equations, right equation: least squares error solution using left inverse $\bm{X}^{\dagger_l} = (\bm{X}^T \bm{X})^{-1} \bm{X}^T$ such that
 $\bm{X}^{\dagger_l} \bm{X} = \bm{I}$
 
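A minimal numpy sketch of the normal equations and the left-inverse solution derived in this hunk (the tall feature matrix and data vector below are made up for illustration, not the ones from the exercise):

import numpy as np

# made-up tall feature matrix (full column rank) and data vector
X = np.array([[3., 0.],
              [0., 8.],
              [0., 0.]])
y = np.array([-3., 4., 2.])

# normal equations: X^T X theta = X^T y
theta_normal = np.linalg.solve(X.T @ X, X.T @ y)

# same result via the left inverse X^dagger_l = (X^T X)^{-1} X^T
X_left = np.linalg.inv(X.T @ X) @ X.T
theta_left = X_left @ y

# both agree with numpy's least squares solver
theta_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)
assert np.allclose(theta_normal, theta_left)
assert np.allclose(theta_left, theta_lstsq)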

@@ -1836,9 +1836,9 @@ \subsection{Exercise 04}
 
 we meanwhile know that for left inverse characteristics $$\bm{X}^{\dagger_l} \bm{X} = \bm{I}$$
 
-this is a projection matrix into the row space of $\bm{X}$, because $\bm{X}^{\dagger_l} \bm{X} \bm{\theta} = \bm{I} \bm{\theta}$
+this operation is a projection matrix into the row space of $\bm{X}$, because $\bm{X}^{\dagger_l} \bm{X} \bm{\theta} = \bm{I} \bm{\theta}$
 
-factor this with SVD
+we can factorise this with SVD
 
 $$\bm{V} \,\,\bm{?}\,\, \bm{U}^H \bm{U} \bm{\Sigma} \bm{V}^H = \bm{I}$$
 
@@ -1867,9 +1867,9 @@ \subsection{Exercise 04}
 
 we meanwhile know that for left inverse characteristics $$\bm{X}^{\dagger_l} \bm{X} = \bm{I}$$
 
-this is a projection matrix into the row space of $\bm{X}$, because $\bm{X}^{\dagger_l} \bm{X} \bm{\theta} = \bm{I} \bm{\theta}$
+this operation is a projection matrix into the row space of $\bm{X}$, because $\bm{X}^{\dagger_l} \bm{X} \bm{\theta} = \bm{I} \bm{\theta}$
 
-factor this with SVD
+we can factorise this with SVD
 
 $$\bm{V} \,\,\bm{?}\,\, \bm{\Sigma} \bm{V}^H = \bm{I}$$
 
@@ -1895,9 +1895,9 @@ \subsection{Exercise 04}
 
 we meanwhile know that for left inverse characteristics $$\bm{X}^{\dagger_l} \bm{X} = \bm{I}$$
 
-this is a projection matrix into the row space of $\bm{X}$, because $\bm{X}^{\dagger_l} \bm{X} \bm{\theta} = \bm{I} \bm{\theta}$
+this operation is a projection matrix into the row space of $\bm{X}$, because $\bm{X}^{\dagger_l} \bm{X} \bm{\theta} = \bm{I} \bm{\theta}$
 
-factor this with SVD
+we can factorise this with SVD
 
 $$\bm{?}\,\, \bm{\Sigma} = \bm{I}$$
 
@@ -1921,9 +1921,9 @@ \subsection{Exercise 04}
 
 we meanwhile know that for left inverse characteristics $$\bm{X}^{\dagger_l} \bm{X} = \bm{I}$$
 
-this is a projection matrix into the row space of $\bm{X}$, because $\bm{X}^{\dagger_l} \bm{X} \bm{\theta} = \bm{I} \bm{\theta}$
+this operation is a projection matrix into the row space of $\bm{X}$, because $\bm{X}^{\dagger_l} \bm{X} \bm{\theta} = \bm{I} \bm{\theta}$
 
-factor this with SVD
+we can factorise this with SVD
 
 $$\bm{\Sigma}^{\dagger_l} \bm{\Sigma} = \bm{I}$$
 
@@ -1963,7 +1963,7 @@ \subsection{Exercise 04}
 \def\M{1.8}
 \def\N{1}
 \def\rank{0.999999}
-\drawmatrix[bbox style={fill=gray!50}, bbox height=\N, bbox width=\M, fill=white, height=\rank\N, width=\rank\N]{\Sigma^H}_\mathtt{N \times M}
+\drawmatrix[bbox style={fill=gray!50}, bbox height=\N, bbox width=\M, fill=white, height=\rank\N, width=\rank\N]{\Sigma^T}_\mathtt{N \times M}
 \drawmatrix[bbox style={fill=gray!50}, bbox height=\M, bbox width=\N, fill=white, height=\rank\N, width=\rank\N]\Sigma_\mathtt{M \times N}
 =
 \drawmatrix[fill=none, height=\N, width=\N]?_\mathtt{N \times N}
@@ -1993,7 +1993,7 @@ \subsection{Exercise 04}
 \def\M{1.8}
 \def\N{1}
 \def\rank{0.999999}
-\drawmatrix[bbox style={fill=gray!50}, bbox height=\N, bbox width=\M, fill=white, height=\rank\N, width=\rank\N]{\Sigma^H}_\mathtt{N \times M}
+\drawmatrix[bbox style={fill=gray!50}, bbox height=\N, bbox width=\M, fill=white, height=\rank\N, width=\rank\N]{\Sigma^T}_\mathtt{N \times M}
 \drawmatrix[bbox style={fill=gray!50}, bbox height=\M, bbox width=\N, fill=white, height=\rank\N, width=\rank\N]\Sigma_\mathtt{M \times N}
 =
 \drawmatrix[diag]{\sigma^2}_\mathtt{N \times N}
@@ -2006,15 +2006,15 @@ \subsection{Exercise 04}
 \def\N{1}
 \def\rank{0.999999}
 \drawmatrix[diag]{1/\sigma^2}_\mathtt{N \times N}
-\drawmatrix[bbox style={fill=gray!50}, bbox height=\N, bbox width=\M, fill=white, height=\rank\N, width=\rank\N]{\Sigma^H}_\mathtt{N \times M} =
+\drawmatrix[bbox style={fill=gray!50}, bbox height=\N, bbox width=\M, fill=white, height=\rank\N, width=\rank\N]{\Sigma^T}_\mathtt{N \times M} =
 \drawmatrix[bbox style={fill=gray!50}, bbox height=\N, bbox width=\M, fill=white, height=\rank\N, width=\rank\N]{\Sigma^{\dagger_l}}_\mathtt{N \times M}
 $
 \end{center}
 
-$$\bm{\Sigma}^{\dagger_l} = (\bm{\Sigma}^H \bm{\Sigma})^{-1} \bm{\Sigma}^H$$
+$$\bm{\Sigma}^{\dagger_l} = (\bm{\Sigma}^T \bm{\Sigma})^{-1} \bm{\Sigma}^T$$
 
 $$\bm{X}^{\dagger_l} = \bm{V} \bm{\Sigma}^{\dagger_l} \bm{U}^H =
-\bm{V} \left[(\bm{\Sigma}^H \bm{\Sigma})^{-1} \bm{\Sigma}^H\right] \bm{U}^H$$
+\bm{V} \left[(\bm{\Sigma}^T \bm{\Sigma})^{-1} \bm{\Sigma}^T\right] \bm{U}^H$$
 
 
 \end{frame}
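The factorization $\bm{X}^{\dagger_l} = \bm{V} \left[(\bm{\Sigma}^T \bm{\Sigma})^{-1} \bm{\Sigma}^T\right] \bm{U}^H$ from this hunk can be verified with a short numpy sketch (made-up full-column-rank matrix, assumed real so that ^H and ^T coincide):

import numpy as np

# made-up tall matrix with full column rank (M = 3 rows, N = 2 columns)
X = np.array([[1., 0.],
              [0., 2.],
              [1., 1.]])
M, N = X.shape

U, s, Vh = np.linalg.svd(X)   # full SVD: U is M x M, Vh is N x N
Sigma = np.zeros((M, N))
Sigma[:N, :N] = np.diag(s)

# Sigma^dagger_l = (Sigma^T Sigma)^{-1} Sigma^T, an N x M matrix
Sigma_left = np.linalg.inv(Sigma.T @ Sigma) @ Sigma.T

# X^dagger_l = V Sigma^dagger_l U^H satisfies the left inverse property
X_left = Vh.conj().T @ Sigma_left @ U.conj().T
assert np.allclose(X_left @ X, np.eye(N))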
@@ -2112,8 +2112,8 @@ \subsection{Exercise 04}
 0 & 1\\
 1 & 0
 \end{bmatrix}
-\right)^H=
-\bm{U} \bm{\Sigma} \bm{V}^H
+\right)^T=
+\bm{U} \bm{\Sigma} \bm{V}^T
 $$
 Find left-inverse $\bm{X}^{\dagger_l}$ of $\bm{X}$ such that $\bm{X}^{\dagger_l} \bm{X} = \bm{I}_{2 \times 2}$
 %
@@ -2140,10 +2140,10 @@ \subsection{Exercise 04}
 1 & 0 & 0 \\
 0 & 0 & 1
 \end{bmatrix}
-\right)^H}_{\text{this is not the SVD of } \bm{X}^{\dagger_l} \text{, why?, check the SVD of } \bm{X}^{\dagger_l}}
+\right)^T}_{\text{this is not the SVD of } \bm{X}^{\dagger_l} \text{, why?, check the SVD of } \bm{X}^{\dagger_l}}
 =
-\bm{V} \bm{\Sigma}^{\dagger_l} \bm{U}^H =
-\bm{V} \left[(\bm{\Sigma}^H \bm{\Sigma})^{-1} \bm{\Sigma}^H\right] \bm{U}^H
+\bm{V} \bm{\Sigma}^{\dagger_l} \bm{U}^T =
+\bm{V} \left[(\bm{\Sigma}^T \bm{\Sigma})^{-1} \bm{\Sigma}^T\right] \bm{U}^T
 $$
 %
 We can solve for optimum $\hat{\bm{\theta}}$ in sense of least squares error, i.e. $\lVert \bm{e} \rVert_2^2 = \lVert \bm{y} - \bm{X} \hat{\bm{\theta}} \rVert_2^2\rightarrow \text{min}$:
@@ -2253,7 +2253,7 @@ \subsection{Exercise 05}
 +\frac{\sqrt{2}}{100} & -\frac{\sqrt{2}}{100}
 \end{bmatrix}
 =
-\bm{U} \bm{\Sigma} \bm{V}^H
+\bm{U} \bm{\Sigma} \bm{V}^T
 =
 \begin{bmatrix}
 1 & 0\\
@@ -2268,12 +2268,12 @@ \subsection{Exercise 05}
 \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \\
 \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}}
 \end{bmatrix}
-\right)^H
+\right)^T
 $$
 \pause
 %
 Left-inverse / here actually the Exact-inverse requires
-$$\hat{\bm{\theta}} = \frac{\bm{u}_1^H \bm{y}}{\textcolor{C0}{\sigma_1}}\bm{v}_1 + \frac{\bm{u}_2^H \bm{y}}{\textcolor{C1}{\sigma_2}}\bm{v}_2$$
+$$\hat{\bm{\theta}} = \frac{\bm{u}_1^T \bm{y}}{\textcolor{C0}{\sigma_1}}\bm{v}_1 + \frac{\bm{u}_2^T \bm{y}}{\textcolor{C1}{\sigma_2}}\bm{v}_2$$
 \pause
 %
 for $\bm{y}=[1,1]^T$ we get
@@ -2334,7 +2334,7 @@ \subsection{Exercise 05}
 =
 \bm{V}_{N \times N}\quad
 \bm{\Sigma}^{\dagger_l}_{N \times M}\quad
-(\bm{U}_{M \times M})^\mathrm{H}
+(\bm{U}_{M \times M})^\mathrm{T}
 =
 \bm{V}
 \begin{bmatrix}
@@ -2343,16 +2343,16 @@ \subsection{Exercise 05}
 0 & 0 & \frac{\sigma_i}{\sigma_i^2} & 0 & 0 & 0\\
 0 & 0 & 0 & \frac{\sigma_R}{\sigma_R^2} & 0 & 0
 \end{bmatrix}
-\bm{U}^\mathrm{H}
+\bm{U}^\mathrm{T}
 $$
 
 $\cdot$ if condition number $\kappa(\bm{X}) = \frac{\sigma_\text{max}}{\sigma_\text{min}}$ is very large, regularization yields more robust solutions
 
-$\cdot$ \textcolor{C0}{Tikhonov} regularization aka \textcolor{C0}{ridge regression} applies following modification
+$\cdot$ \textcolor{C0}{Tikhonov} regularization aka \textcolor{C0}{ridge regression} applies following modification with $\lambda > 0$
 
 $$
-\bm{\Sigma}^{\dagger_l} = (\bm{\Sigma}^H \bm{\Sigma})^{-1} \bm{\Sigma}^H \longrightarrow
-\bm{\Sigma}^{\dagger_\text{ridge}} = (\bm{\Sigma}^H \bm{\Sigma} + \textcolor{C0}{\lambda \bm{I}})^{-1} \bm{\Sigma}^H
+\bm{\Sigma}^{\dagger_l} = (\bm{\Sigma}^T \bm{\Sigma})^{-1} \bm{\Sigma}^T \longrightarrow
+\bm{\Sigma}^{\dagger_\text{ridge}} = (\bm{\Sigma}^T \bm{\Sigma} + \textcolor{C0}{\lambda \bm{I}})^{-1} \bm{\Sigma}^T
 $$
 
 $$
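The ridge-regularized pseudo-inverse introduced here can be sketched numerically as well; a minimal example with a made-up ill-conditioned matrix and an assumed regularization parameter lambda:

import numpy as np

# made-up ill-conditioned matrix (large condition number sigma_max / sigma_min)
X = np.array([[1., 0.],
              [0., 1e-4]])
y = np.array([1., 1.])
lam = 1e-3   # assumed regularization parameter

U, s, Vh = np.linalg.svd(X)
Sigma = np.diag(s)

# Sigma^dagger_ridge = (Sigma^T Sigma + lambda I)^{-1} Sigma^T
Sigma_ridge = np.linalg.inv(Sigma.T @ Sigma + lam * np.eye(len(s))) @ Sigma.T
theta_ridge = Vh.conj().T @ Sigma_ridge @ U.conj().T @ y

# identical to the direct form (X^H X + lambda I)^{-1} X^H y
theta_direct = np.linalg.solve(X.conj().T @ X + lam * np.eye(2), X.conj().T @ y)
assert np.allclose(theta_ridge, theta_direct)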
@@ -2402,7 +2402,7 @@ \subsection{Exercise 05}
 \begin{frame}[t]{L-Curve to Find Optimum Regularization Parameter $\lambda$}
 $$
 \hat{\bm{\theta}}(\textcolor{C0}{\lambda}) \quad=\quad
-\left[\bm{V} (\bm{\Sigma}^\mathrm{H} \bm{\Sigma} + \textcolor{C0}{\lambda}\bm{I})^{-1} \bm{\Sigma}^\mathrm{H} \bm{U}^\mathrm{H}\right] \bm{y} \quad=\quad
+\left[\bm{V} (\bm{\Sigma}^\mathrm{T} \bm{\Sigma} + \textcolor{C0}{\lambda}\bm{I})^{-1} \bm{\Sigma}^\mathrm{T} \bm{U}^\mathrm{H}\right] \bm{y} \quad=\quad
 \left[(\bm{X}^\mathrm{H}\bm{X} + \textcolor{C0}{\lambda}\bm{I})^{-1} \bm{X}^\mathrm{H}\right] \bm{y}
 $$
 \begin{center}
@@ -2459,7 +2459,7 @@ \subsection{Exercise 05}
 \lVert\bm{y} - \bm{X} \bm{\theta}\rVert_2^2 + \textcolor{C0}{\lambda} \lVert \bm{\theta} \rVert_2^2
 $$
 
-$\cdot$ and the plain Least Squares Error Problem (i.e. for $\textcolor{C0}{\lambda}=0$)
+$\cdot$ and the plain Least Squares Error Problem (special case for $\textcolor{C0}{\lambda}=0$)
 $$
 \min_{\text{wrt }\bm{\theta}} J(\bm{\theta}) \quad\text{with cost function}\quad
 J(\bm{\theta}) = \lVert\bm{y} - \bm{X} \bm{\theta}\rVert_2^2
@@ -2468,7 +2468,7 @@ \subsection{Exercise 05}
 have the closed form solution using the (regularized) left inverse of $\bm{X} = \bm{U}\bm{\Sigma}\bm{V}^H$:
 $$
 \hat{\bm{\theta}} \quad=\quad
-\left[\bm{V} (\bm{\Sigma}^\mathrm{H} \bm{\Sigma} + \textcolor{C0}{\lambda \bm{I})}^{-1} \bm{\Sigma}^\mathrm{H} \bm{U}^\mathrm{H}\right] \bm{y} \quad=\quad
+\left[\bm{V} (\bm{\Sigma}^\mathrm{T} \bm{\Sigma} + \textcolor{C0}{\lambda \bm{I})}^{-1} \bm{\Sigma}^\mathrm{T} \bm{U}^\mathrm{H}\right] \bm{y} \quad=\quad
 \left[(\bm{X}^\mathrm{H}\bm{X} + \textcolor{C0}{\lambda \bm{I}})^{-1} \bm{X}^\mathrm{H}\right] \bm{y}
 $$
 
@@ -2524,7 +2524,7 @@ \subsection{Exercise 06}
 \item $\bm{y}_{N \times 1}$ audio signal with $N$ samples as a result of the linear model's linear combination plus noise
 \end{itemize}
 %
-Let us assume that a) we know $\bm{X}$ (i.e. the individual audio tracks) and $\bm{y}$ (i.e. the noise-corrupted final mixdown), b) that we do not know the noise $\bm{n}$ and c) that we want to estimate the 'real world' mixing gains $\bm{\theta}$
+Let us assume that a) we know $\bm{X}$ (i.e. the individual audio tracks) and $\bm{y}$ (i.e. the noise-corrupted final mixdown), b) that we do not know the noise $\bm{\nu}$ and c) that we want to estimate the 'real world' mixing gains $\bm{\theta}$
 \end{frame}
 
 
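A minimal numpy sketch of the estimation task described in the last hunk (all data below is randomly generated for illustration; the actual exercise uses real audio tracks):

import numpy as np

rng = np.random.default_rng(0)

N, F = 1000, 3                            # N samples, F hypothetical audio tracks
X = rng.standard_normal((N, F))           # stand-in for the individual audio tracks
theta_true = np.array([0.5, -1.0, 2.0])   # 'real world' mixing gains
nu = 0.1 * rng.standard_normal(N)         # unknown noise
y = X @ theta_true + nu                   # noise-corrupted final mixdown

# least squares estimate of the mixing gains via the normal equations
theta_hat = np.linalg.solve(X.T @ X, X.T @ y)
print(theta_hat)   # close to theta_true for small noise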