Commit 3f38f1c

Corrected incorrect capitalisation + removed extra brackets (#29)
Corrected incorrect capitalisation of one sigma and one z (second could lead to confusion) + removed extra brackets in multiple repetitions of \mathbb{E}[\log(f_{X|Z}(x|Y))]
1 parent df2b9d7 commit 3f38f1c

File tree

1 file changed: +9 -9 lines changed


Lectures/diffusion.jl

Lines changed: 9 additions & 9 deletions
@@ -291,13 +291,13 @@ frametitle("Monte-Carlo sampling")
 md"""
 This can be approximated using Monte-Carlo given ``L`` samples ``\epsilon_1, \ldots \epsilon_L`` from the distribution ``\mathcal{N}(0, I)`` as
 ```math
-\mathbb{E}[\log(f_{X|Z}(x|Y))]] \approx \frac{1}{L} \sum_{i=1}^L \log(f_{X|Z}(x|E_\mu(x) + \epsilon_i \odot E_\sigma(x))).
+\mathbb{E}[\log(f_{X|Z}(x|Y))] \approx \frac{1}{L} \sum_{i=1}^L \log(f_{X|Z}(x|E_\mu(x) + \epsilon_i \odot E_\sigma(x))).
 ```
 In the simpler case where ``D_\sigma(z) = \mathbf{1}``, we recognize the classical L2 norm:
 ```math
 \begin{align}
-\mathbb{E}[\log(f_{X|Z}(x|Y))]]
-& \approx -\frac{\log(2\pi)}{2}+\frac{1}{L}\sum_{i=1}^L\|x - D_\mu(E_\mu(x) + \epsilon_i\|_2^2.
+\mathbb{E}[\log(f_{X|Z}(x|Y))]
+& \approx -\frac{\log(2\pi)}{2}+\frac{1}{L}\sum_{i=1}^L\|x - D_\mu(E_\mu(x) + \epsilon_i \odot E_\sigma(x))\|_2^2.
 \end{align}
 ```
 """
@@ -308,8 +308,8 @@ frametitle("Variational AutoEncoders (VAEs)")
 # ╔═╡ 23f3de75-0617-4232-bb71-bd9f3e355a1e
 md"""
 * We want to learn the distribution of our data represented by the random variable ``X``.
-* The encoder maps a data point ``x`` to a Gaussian distribution ``Y \sim \mathcal{N}(E_\mu(x), E_{\Sigma}(x))`` over the latent space
-* The decoder maps a latent variable ``z \sim Z`` to a the Gaussian distribution ``\mathcal{N}(D_\mu(z), D_\sigma(Z))``
+* The encoder maps a data point ``x`` to a Gaussian distribution ``Y \sim \mathcal{N}(E_\mu(x), E_{\sigma}(x))`` over the latent space
+* The decoder maps a latent variable ``z \sim Z`` to a the Gaussian distribution ``\mathcal{N}(D_\mu(z), D_\sigma(z))``
 
 The Maximum Likelihood Estimator (MLE) maximizes the following sum over our datapoints ``x`` with its ELBO:
 ```math
@@ -425,10 +425,10 @@ We have (see $(cite("kingma2013AutoEncoding", "Appendix B")) for a proof):
 For the second part of the ELBO, we have
 ```math
 \begin{align}
-& \mathbb{E}[\log(f_{X|Z}(x|Y))]]\\
-& = \mathbb{E}[\log(f_{X|Z}(x|E_\mu(x) + \mathcal{E}_2 \odot E_\sigma(x)))]]\\
-& = \mathbb{E}[\log(f_{\mathcal{E}_1}(\text{Diag}(D_\sigma(E_\mu(x) + \mathcal{E}_2 \odot E_\sigma(x)))^{-1} (x - D_\mu(E_\mu(x) + \mathcal{E}_2 \odot E_\sigma(x)))))]]\\
-& = -\frac{\log(2\pi)}{2}+\mathbb{E}[\|\text{Diag}(D_\sigma(E_\mu(x) + \mathcal{E}_2 \odot E_\sigma(x)))^{-1} (x - D_\mu(E_\mu(x) + \mathcal{E}_2 \odot E_\sigma(x)))\|_2^2]].
+& \mathbb{E}[\log(f_{X|Z}(x|Y))]\\
+& = \mathbb{E}[\log(f_{X|Z}(x|E_\mu(x) + \mathcal{E}_2 \odot E_\sigma(x)))]\\
+& = \mathbb{E}[\log(f_{\mathcal{E}_1}(\text{Diag}(D_\sigma(E_\mu(x) + \mathcal{E}_2 \odot E_\sigma(x)))^{-1} (x - D_\mu(E_\mu(x) + \mathcal{E}_2 \odot E_\sigma(x)))))]\\
+& = -\frac{\log(2\pi)}{2}+\mathbb{E}[\|\text{Diag}(D_\sigma(E_\mu(x) + \mathcal{E}_2 \odot E_\sigma(x)))^{-1} (x - D_\mu(E_\mu(x) + \mathcal{E}_2 \odot E_\sigma(x)))\|_2^2].
 \end{align}
 ```
 """
