Commit 7ee136c

latex debug
1 parent c0eb80c commit 7ee136c

File tree

1 file changed: +15 −19 lines


lab2/Part2_Debiasing.ipynb

Lines changed: 15 additions & 19 deletions
@@ -478,32 +478,28 @@
 "In practice, how can we train a VAE? In learning the latent space, we constrain the means and standard deviations to approximately follow a unit Gaussian. Recall that these are learned parameters, and therefore must factor into the loss computation, and that the decoder portion of the VAE is using these parameters to output a reconstruction that should closely match the input image, which also must factor into the loss. What this means is that we'll have two terms in our VAE loss function:\n",
 "\n",
 "1. **Latent loss ($L_{KL}$)**: measures how closely the learned latent variables match a unit Gaussian and is defined by the Kullback-Leibler (KL) divergence.\n",
-"2. **Reconstruction loss ($L_{x}{(x,\hat{x})}$)**: measures how accurately the reconstructed outputs match the input and is given by the $L^1$ norm of the input image and its reconstructed output. \n",
-"\n",
-"The equations for both of these losses are provided below:\n",
-"\n",
-"$$L_{KL}(\mu, \sigma) = \frac{1}{2}\sum\limits_{j=0}^{k-1}\small{(\sigma_j + \mu_j^2 - 1 - \log{\sigma_j})}$$\n",
-"\n",
-"$$L_{x}{(x,\hat{x})} = ||x-\hat{x}||_1$$\n",
-"\n",
-"Thus for the VAE loss we have: \n",
-"\n",
-"$$L_{VAE} = c\cdot L_{KL} + L_{x}{(x,\hat{x})}$$\n",
-"\n",
-"where $c$ is a weighting coefficient used for regularization. \n",
-"\n",
-"Now we're ready to define our VAE loss function:"
+"2. **Reconstruction loss ($L_{x}{(x,\hat{x})}$)**: measures how accurately the reconstructed outputs match the input and is given by the $L^1$ norm of the input image and its reconstructed output."
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {
-"id": "3UG8Ms5svZMX"
+"id": "qWxOCPgvv1lf"
 },
 "source": [
-"$$\r\n",
-"1 + 2\r\n",
-"$$"
+"The equations for both of these losses are provided below:\r\n",
+"\r\n",
+"$$L_{KL}(\mu, \sigma) = \frac{1}{2}\sum\limits_{j=0}^{k-1}\small{(\sigma_j + \mu_j^2 - 1 - \log{\sigma_j})}$$\r\n",
+"\r\n",
+"$$L_{x}{(x,\hat{x})} = ||x-\hat{x}||_1$$\r\n",
+"\r\n",
+"Thus for the VAE loss we have: \r\n",
+"\r\n",
+"$$L_{VAE} = c\cdot L_{KL} + L_{x}{(x,\hat{x})}$$\r\n",
+"\r\n",
+"where $c$ is a weighting coefficient used for regularization. \r\n",
+"\r\n",
+"Now we're ready to define our VAE loss function:"
 ]
 },
 {
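For reference, the VAE loss described in the changed markdown cells can be sketched in plain NumPy. This is only an illustration of the three equations (latent loss, reconstruction loss, weighted total); the function name and the value of `c` are hypothetical, and the notebook's actual implementation is not shown in this diff:

```python
import numpy as np

def vae_loss(x, x_hat, mu, sigma, c=0.0005):
    """Illustrative VAE loss following the notebook's equations.

    mu, sigma: learned latent parameters, as they appear in the L_KL formula.
    c: weighting coefficient for regularization (this default is assumed).
    """
    # Latent loss: KL divergence of the learned latents from a unit Gaussian,
    # L_KL = 1/2 * sum_j (sigma_j + mu_j^2 - 1 - log(sigma_j))
    latent_loss = 0.5 * np.sum(sigma + mu**2 - 1.0 - np.log(sigma))
    # Reconstruction loss: L1 norm of input vs. reconstruction,
    # L_x = ||x - x_hat||_1
    reconstruction_loss = np.sum(np.abs(x - x_hat))
    # Total loss: L_VAE = c * L_KL + L_x
    return c * latent_loss + reconstruction_loss
```

Note that with $\mu = 0$ and $\sigma = 1$ the latent term vanishes, and with a perfect reconstruction the total loss is zero, matching the intuition behind both terms.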
