\begin{frame}[t]{Matrix Factorization from Eigenvalue Problem for Symmetric Matrix}
-for a normal matrix $\bm{A}$ (such as symmetric, i.e. $\bm{A}^H \bm{A} = \bm{A} \bm{A}^H$ )
+for a \underline{normal} matrix $\bm{A}$ (i.e. it holds $\bm{A}^H \bm{A} = \bm{A} \bm{A}^H$)
there is the fundamental spectral theorem
$$\bm{A} = \bm{Q} \bm{\Lambda} \bm{Q}^{H}$$
-i.e. diagonalization in terms of eigenvectors in full rank matrix $\bm{Q}$ and eigenvalues in $\bm{\Lambda}\in\mathbb{R}$
+i.e. diagonalization in terms of eigenvectors in unitary matrix $\bm{Q}$ and eigenvalues in $\bm{\Lambda}\in\mathbb{C}$
What does $\bm{A}$ do with an eigenvector $\bm{q}$?
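To see both claims in action, here is a minimal NumPy sketch (the matrix values are invented for illustration, not taken from the slides): a real symmetric matrix is normal, its eigendecomposition has an orthonormal $\bm{Q}$, and applying $\bm{A}$ to an eigenvector only scales it.

```python
import numpy as np

# a real symmetric (hence normal) matrix, values made up for illustration
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

# eigh is for Hermitian/symmetric matrices: Q has orthonormal columns,
# lam holds the (here real) eigenvalues
lam, Q = np.linalg.eigh(A)

# spectral theorem: A = Q Lambda Q^H
assert np.allclose(A, Q @ np.diag(lam) @ Q.conj().T)

# A applied to an eigenvector q merely scales it by its eigenvalue
q = Q[:, 0]
assert np.allclose(A @ q, lam[0] * q)
```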
@@ -754,7 +754,7 @@ \subsection{Exercise 03}
$$
matrix factorization in terms of SVD
$$
-\bm{A} = \bm{U} \bm{\Sigma} \bm{V}^H
+\bm{A} = \bm{U} \bm{\Sigma} \bm{V}^T
=
\begin{bmatrix}
0 & 1 & 0\\
@@ -773,7 +773,7 @@ \subsection{Exercise 03}
0 & 1\\
1 & 0
\end{bmatrix}
-\right)^H
+\right)^T
$$
What is the rank of $\bm{A}$?
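Numerically, the rank can be read off the singular values. A minimal sketch (the matrix below is a stand-in, since the exercise's full matrix is only partially visible in this excerpt):

```python
import numpy as np

# stand-in matrix; any A is handled the same way
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0]])

U, s, Vh = np.linalg.svd(A)    # A = U Sigma V^T, with Vh = V^T
rank = int(np.sum(s > 1e-12))  # rank = number of non-zero singular values
print(s, rank)                 # -> [1. 0.] 1
```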
@@ -833,7 +833,7 @@ \subsection{Exercise 03}
Due to rank $R=1$, we expect only one non-zero singular value $\sigma_1$, therefore the dimension
of row space (which is always equal to the dimension of column space) is $R=1$, i.e. we have $R$
-independent vectors that span the row space and $r$ independent vectors that span the column space, so these spaces are lines in both 2D spaces in our example.
+independent vectors that span the row space and $R$ independent vectors that span the column space, so these spaces are lines in both 2D spaces in our example.
The $\bm{U}$ space has vectors in $\mathbb{R}^{M=2}$, the $\bm{V}$ space has vectors in $\mathbb{R}^{N=2}$.
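This rank-1 structure is easy to verify numerically: build $\bm{A}$ as an outer product and inspect its singular values (the two vectors below are assumed for illustration):

```python
import numpy as np

u = np.array([1.0, 2.0])  # spans the column space, in R^{M=2}
v = np.array([1.0, 3.0])  # spans the row space,    in R^{N=2}
A = np.outer(u, v)        # rank-1 by construction

s = np.linalg.svd(A, compute_uv=False)
print(s)  # exactly one non-zero singular value, so R = 1
```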
@@ -868,7 +868,7 @@ \subsection{Exercise 03}
1\\3
\end{bmatrix},
$$
-i.e. the transposed row found in the outer product. So, all $\bm{X}^\mathrm{T} \bm{y}$,
+i.e. the transposed row found in the above outer product. So, all $\bm{X}^\mathrm{T} \bm{y}$,
except those solutions that produce $\bm{X}^\mathrm{T} \bm{y} = \bm{0}$ (these $\bm{y}$
belong to the left null space), are multiples of $[1, 3]^\mathrm{T}$.
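A quick numerical check of this claim, assuming a rank-1 $\bm{X}$ with row direction $[1, 3]^\mathrm{T}$ as in the outer-product example (the column vector and the random $\bm{y}$ are arbitrary):

```python
import numpy as np

X = np.outer(np.array([1.0, 2.0]), np.array([1.0, 3.0]))  # rank-1

rng = np.random.default_rng(0)
for _ in range(3):
    y = rng.normal(size=2)  # generic y, i.e. not in the left null space
    z = X.T @ y             # lands in the row space of X
    print(z / z[0])         # -> [1. 3.] for every such y
```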
-middle equation: normal equations, right equation: least squares error solution using left inverse $\bm{X}^{\dagger_l} = (\bm{X}^H\bm{X})^{-1} \bm{X}^H$ such that
+middle equation: normal equations, right equation: least squares error solution using left inverse $\bm{X}^{\dagger_l} = (\bm{X}^T\bm{X})^{-1} \bm{X}^T$ such that
$\bm{X}^{\dagger_l} \bm{X} = \bm{I}$
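A minimal numerical check of this identity (the tall, full-column-rank $\bm{X}$ below is invented for illustration):

```python
import numpy as np

X = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])  # full column rank, so X^T X is invertible

X_left = np.linalg.inv(X.T @ X) @ X.T  # left inverse (X^T X)^{-1} X^T
assert np.allclose(X_left @ X, np.eye(2))
```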
@@ -1836,9 +1836,9 @@ \subsection{Exercise 04}
we meanwhile know that the left inverse has the characteristic $$\bm{X}^{\dagger_l} \bm{X} = \bm{I}$$
-this is a projection matrix into the row space of $\bm{X}$, because $\bm{X}^{\dagger_l} \bm{X} \bm{\theta} = \bm{I} \bm{\theta}$
+this operation is a projection matrix into the row space of $\bm{X}$, because $\bm{X}^{\dagger_l} \bm{X} \bm{\theta} = \bm{I} \bm{\theta}$
We can solve for the optimum $\hat{\bm{\theta}}$ in the sense of least squares error, i.e. $\lVert\bm{e} \rVert_2^2 = \lVert\bm{y} - \bm{X} \hat{\bm{\theta}} \rVert_2^2\rightarrow\text{min}$:
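In code, the closed-form solution via the left inverse agrees with NumPy's least squares solver, and the residual $\bm{e}$ is orthogonal to the column space of $\bm{X}$ (same illustrative $\bm{X}$ as above, made-up $\bm{y}$):

```python
import numpy as np

X = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
y = np.array([0.0, 1.0, 3.0])

theta_hat = np.linalg.inv(X.T @ X) @ X.T @ y      # left-inverse solution
theta_ls, *_ = np.linalg.lstsq(X, y, rcond=None)  # numerically preferred
assert np.allclose(theta_hat, theta_ls)

e = y - X @ theta_hat             # least squares error
assert np.allclose(X.T @ e, 0.0)  # e is orthogonal to col(X)
```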
@@ -2253,7 +2253,7 @@ \subsection{Exercise 05}
+\frac{\sqrt{2}}{100} & -\frac{\sqrt{2}}{100}
\end{bmatrix}
=
-\bm{U} \bm{\Sigma} \bm{V}^H
+\bm{U} \bm{\Sigma} \bm{V}^T
=
\begin{bmatrix}
1 & 0\\
@@ -2268,12 +2268,12 @@ \subsection{Exercise 05}
\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \\
\frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}}
\end{bmatrix}
-\right)^H
+\right)^T
$$
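For a square, full-rank matrix the SVD yields the exact inverse directly as $\bm{V} \bm{\Sigma}^{-1} \bm{U}^T$. A sketch with an assumed $2 \times 2$ matrix at the same $\sqrt{2}/100$ scale (not necessarily the exercise's exact matrix):

```python
import numpy as np

A = (np.sqrt(2) / 100) * np.array([[1.0,  1.0],
                                   [1.0, -1.0]])  # assumed example

U, s, Vh = np.linalg.svd(A)
A_inv = Vh.T @ np.diag(1.0 / s) @ U.T  # valid since all sigma > 0

assert np.allclose(A_inv @ A, np.eye(2))  # exact inverse
assert np.allclose(A_inv, np.linalg.inv(A))
```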
\pause
%
Left-inverse (here actually the exact inverse) requires
\item $\bm{y}_{N \times 1}$ audio signal with $N$ samples as a result of the linear model's linear combination plus noise
\end{itemize}
%
-Let us assume that a) we know $\bm{X}$ (i.e. the individual audio tracks) and $\bm{y}$ (i.e. the noise-corrupted final mixdown), b) that we do not know the noise $\bm{n}$ and c) that we want to estimate the 'real world' mixing gains $\bm{\theta}$
+Let us assume that a) we know $\bm{X}$ (i.e. the individual audio tracks) and $\bm{y}$ (i.e. the noise-corrupted final mixdown), b) that we do not know the noise $\bm{\nu}$ and c) that we want to estimate the 'real world' mixing gains $\bm{\theta}$
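A toy simulation of this estimation task (track contents, gains, and noise level are all invented; only the model structure $\bm{y} = \bm{X}\bm{\theta} + \bm{\nu}$ is taken from the slides):

```python
import numpy as np

rng = np.random.default_rng(42)
N, F = 1000, 3                     # N samples, F tracks
X = rng.normal(size=(N, F))        # individual audio tracks
theta = np.array([0.8, 0.5, 0.3])  # 'real world' mixing gains
nu = 0.01 * rng.normal(size=N)     # noise, unknown to the estimator
y = X @ theta + nu                 # noise-corrupted final mixdown

# least squares estimate of the gains from X and y alone
theta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(theta_hat)                   # close to [0.8, 0.5, 0.3]
```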