" <img src=\"https://i.ibb.co/xfJbPmL/github.png\" height=\"70px\" style=\"padding-bottom:5px;\" />View Source on GitHub</a></td>\n",
17
18
"</table>\n",
18
19
"\n",
@@ -156,8 +157,8 @@
 "cell_type": "code",
 "execution_count": null,
 "metadata": {
-"id": "Jg17jzwtbxDA",
-"cellView": "form"
+"cellView": "form",
+"id": "Jg17jzwtbxDA"
 },
 "outputs": [],
 "source": [
@@ -499,12 +500,12 @@
 },
 {
 "cell_type": "markdown",
-"source": [
-"Great! Now that we have a more concrete sense of how VAEs work, let's explore how we can leverage this network structure to diagnose hidden biases in facial detection classifiers."
-],
 "metadata": {
 "id": "bcpznUHHuR6I"
-}
+},
+"source": [
+"Great! Now that we have a more concrete sense of how VAEs work, let's explore how we can leverage this network structure to diagnose hidden biases in facial detection classifiers."
+]
 },
 {
 "cell_type": "markdown",
@@ -519,6 +520,9 @@
 },
 {
 "cell_type": "markdown",
+"metadata": {
+"id": "A3IOB3d61WSN"
+},
 "source": [
 "### Semi-supervised VAE architecture\n",
 "\n",
@@ -531,10 +535,7 @@
 "We will apply our SS-VAE to a *supervised classification* problem -- the facial detection task. Importantly, note how the encoder portion in the SS-VAE architecture also outputs a single supervised variable, $z_o$, corresponding to the class prediction -- face or not face. Usually, VAEs are not trained to output any supervised variables (such as a class prediction)! This is the key distinction between the SS-VAE and a traditional VAE. \n",
 "\n",
 "Keep in mind that we only want to learn the latent representation of *faces*, as that is where we are interested in uncovering potential biases, even though we are training a model on a binary classification problem. So, we will need to ensure that, **for faces**, our SS-VAE model both learns a representation of the unsupervised latent variables, captured by the distribution $q_\\phi(z|x)$, and outputs a supervised class prediction $z_o$, but that, **for negative examples**, it only outputs a class prediction $z_o$."
-],
-"metadata": {
-"id": "A3IOB3d61WSN"
-}
+]
 },
 {
 "cell_type": "markdown",
@@ -839,6 +840,9 @@
 },
 {
 "cell_type": "markdown",
+"metadata": {
+"id": "QfVngr5J6sj3"
+},
 "source": [
 "### Linking model performance to uncertainty and bias\n",
 "\n",
@@ -851,10 +855,7 @@
 "1. What, if any, trends do you observe comparing the samples with the highest and lowest reconstruction loss?\n",
 "2. Based on these observations, which features seemed harder to learn for the VAE?\n",
 "3. How does reconstruction loss relate to uncertainty? Think back to our lecture on Robust & Trustworthy Deep Learning! What can you say about examples on which the model may be more or less uncertain?"
 "Hopefully this lab has shed some light on a few concepts, from vision based tasks, to VAEs, to algorithmic bias. We like to think it has, but we're biased ;).\n",