Commit ce0153a

clear outputs and fix schematic

1 parent d092e26

File tree

1 file changed: 1 addition, 1 deletion

1 file changed

+1
-1
lines changed

lab2/solutions/Part2_Debiasing_Solution.ipynb

Lines changed: 1 addition & 1 deletion
@@ -459,7 +459,7 @@
     "\n",
     "Our goal is to train a *debiased* version of this classifier -- one that accounts for potential disparities in feature representation within the training data. Specifically, to build a debiased facial classifier, we'll train a model that **learns a representation of the underlying latent space** to the face training data. The model then uses this information to mitigate unwanted biases by sampling faces with rare features, like dark skin or hats, *more frequently* during training. The key design requirement for our model is that it can learn an *encoding* of the latent features in the face data in an entirely *unsupervised* way. To achieve this, we'll turn to variational autoencoders (VAEs).\n",
     "\n",
-    "![The concept of a VAE](http://kvfrans.com/content/images/2016/08/vae.jpg)\n",
+    "![The concept of a VAE](https://i.ibb.co/3s4S6Gc/vae.jpg)\n",
     "\n",
     "As shown in the schematic above and in Lecture 4, VAEs rely on an encoder-decoder structure to learn a latent representation of the input data. In the context of computer vision, the encoder network takes in input images, encodes them into a series of variables defined by a mean and standard deviation, and then draws from the distributions defined by these parameters to generate a set of sampled latent variables. The decoder network then \"decodes\" these variables to generate a reconstruction of the original image, which is used during training to help the model identify which latent variables are important to learn. \n",
     "\n",
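The sampling step described in the changed cell -- encoding each input into a mean and standard deviation, then drawing latent variables from those distributions -- is the VAE reparameterization trick. A minimal NumPy sketch of that step (function and variable names here are illustrative, not taken from the lab code):

```python
import numpy as np

def sample_latent(mu, log_sigma, rng):
    """Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, I).

    mu, log_sigma: per-sample latent means and log standard deviations,
    as an encoder network would produce them.
    """
    eps = rng.standard_normal(mu.shape)   # noise drawn from a standard normal
    return mu + np.exp(log_sigma) * eps   # shift/scale into N(mu, sigma^2)

rng = np.random.default_rng(0)
mu = np.zeros((4, 16))                    # batch of 4 images, 16 latent dims
log_sigma = np.full((4, 16), -0.5)
z = sample_latent(mu, log_sigma, rng)     # sampled latent variables, shape (4, 16)
```

Sampling via `mu + sigma * eps` (rather than drawing from `N(mu, sigma^2)` directly) keeps the draw differentiable with respect to `mu` and `log_sigma`, which is what lets the encoder be trained end-to-end.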
