Commit 89be069: "fixed link"

1 parent: cccd26c
1 file changed: +40 additions, -39 deletions


lab2/Part2_FaceDetection.ipynb

Lines changed: 40 additions & 39 deletions
@@ -1,6 +1,7 @@
 {
  "cells": [
   {
+   "attachments": {},
    "cell_type": "markdown",
    "metadata": {
     "id": "Ag_e7xtTzT1W"
@@ -12,7 +13,7 @@
     " Visit MIT Deep Learning</a></td>\n",
     " <td align=\"center\"><a target=\"_blank\" href=\"https://colab.research.google.com/github/aamini/introtodeeplearning/blob/2023/lab2/solutions/Part2_FaceDetection_Solution.ipynb\">\n",
     " <img src=\"https://i.ibb.co/2P3SLwK/colab.png\" style=\"padding-bottom:5px;\" />Run in Google Colab</a></td>\n",
-    " <td align=\"center\"><a target=\"_blank\" href=\"https://github.com/aamini/introtodeeplearning/blob/2023/lab2/solutions/Part2_FaceDetection_Solution.ipynb\">\n",
+    " <td align=\"center\"><a target=\"_blank\" href=\"https://github.com/aamini/introtodeeplearning/blob/2023/lab2/Part2_FaceDetection_Solution.ipynb\">\n",
     " <img src=\"https://i.ibb.co/xfJbPmL/github.png\" height=\"70px\" style=\"padding-bottom:5px;\" />View Source on GitHub</a></td>\n",
     "</table>\n",
     "\n",
@@ -156,8 +157,8 @@
    "cell_type": "code",
    "execution_count": null,
    "metadata": {
-    "id": "Jg17jzwtbxDA",
-    "cellView": "form"
+    "cellView": "form",
+    "id": "Jg17jzwtbxDA"
    },
    "outputs": [],
    "source": [
@@ -499,12 +500,12 @@
   },
   {
    "cell_type": "markdown",
-   "source": [
-    "Great! Now that we have a more concrete sense of how VAEs work, let's explore how we can leverage this network structure to diagnoses hidden biases in facial detection classifiers."
-   ],
    "metadata": {
     "id": "bcpznUHHuR6I"
-   }
+   },
+   "source": [
+    "Great! Now that we have a more concrete sense of how VAEs work, let's explore how we can leverage this network structure to diagnoses hidden biases in facial detection classifiers."
+   ]
   },
   {
    "cell_type": "markdown",
@@ -519,6 +520,9 @@
   },
   {
    "cell_type": "markdown",
+   "metadata": {
+    "id": "A3IOB3d61WSN"
+   },
    "source": [
     "### Semi-supervised VAE architecture\n",
     "\n",
@@ -531,10 +535,7 @@
     "We will apply our SS-VAE to a *supervised classification* problem -- the facial detection task. Importantly, note how the encoder portion in the SS-VAE architecture also outputs a single supervised variable, $z_o$, corresponding to the class prediction -- face or not face. Usually, VAEs are not trained to output any supervised variables (such as a class prediction)! This is the key distinction between the SS-VAE and a traditional VAE. \n",
     "\n",
     "Keep in mind that we only want to learn the latent representation of *faces*, as that is where we are interested in uncovering potential biases, even though we are training a model on a binary classification problem. So, we will need to ensure that, **for faces**, our SS-VAE model both learns a representation of the unsupervised latent variables, captured by the distribution $q_\\phi(z|x)$, and outputs a supervised class prediction $z_o$, but that, **for negative examples**, it only outputs a class prediction $z_o$."
-   ],
-   "metadata": {
-    "id": "A3IOB3d61WSN"
-   }
+   ]
   },
   {
    "cell_type": "markdown",
@@ -839,6 +840,9 @@
   },
   {
    "cell_type": "markdown",
+   "metadata": {
+    "id": "QfVngr5J6sj3"
+   },
    "source": [
     "### Linking model performance to uncertainty and bias\n",
     "\n",
@@ -851,10 +855,7 @@
     "1. What, if any, trends do you observe comparing the samples with the highest and lowest reconstruction loss?\n",
     "2. Based on these observations, which features seemed harder to learn for the VAE?\n",
     "3. How does reconstruction loss relate to uncertainty? Think back to our lecture on Robust & Trustworthy Deep Learning! What can you say about examples on which the model may be more or less uncertain?"
-   ],
-   "metadata": {
-    "id": "QfVngr5J6sj3"
-   }
+   ]
   },
   {
    "cell_type": "code",
@@ -912,6 +913,12 @@
   },
   {
    "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "cellView": "form",
+    "id": "8qcR9uvfCJku"
+   },
+   "outputs": [],
    "source": [
     "### Inspect different latent features\n",
     "\n",
@@ -944,26 +951,25 @@
     "ax[1].imshow(mdl.util.create_grid_of_images(recons, (1, num_steps)))\n",
     "ax[1].set_xlabel(\"Latent step\")\n",
     "ax[1].set_ylabel(\"Visualization\");\n"
-   ],
-   "metadata": {
-    "id": "8qcR9uvfCJku",
-    "cellView": "form"
-   },
-   "execution_count": null,
-   "outputs": []
+   ]
   },
   {
    "cell_type": "markdown",
+   "metadata": {
+    "id": "3ExRRPO2z27z"
+   },
    "source": [
     "\n",
     "### Inspect how the accuracy changes as a function of density in the latent space\n"
-   ],
-   "metadata": {
-    "id": "3ExRRPO2z27z"
-   }
+   ]
   },
   {
    "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "id": "PnmPXmkGLBVU"
+   },
+   "outputs": [],
    "source": [
     "### Accuracy vs. density in latent space\n",
     "\n",
@@ -994,12 +1000,7 @@
     "plt.plot(np.linspace(np.min(z_mean), np.max(z_mean), num_steps+1), accuracy_per_latent,'-o')\n",
     "plt.xlabel(\"Latent step\")\n",
     "plt.ylabel(\"Relative accuracy\")"
-   ],
-   "metadata": {
-    "id": "PnmPXmkGLBVU"
-   },
-   "execution_count": null,
-   "outputs": []
+   ]
   },
   {
    "cell_type": "markdown",
@@ -1029,6 +1030,9 @@
   },
   {
    "cell_type": "markdown",
+   "metadata": {
+    "id": "mPRZReq4p68k"
+   },
    "source": [
     "## 2.9 Thinking ahead\n",
     "\n",
@@ -1041,10 +1045,7 @@
     "Hopefully this lab has shed some light on a few concepts, from vision based tasks, to VAEs, to algorithmic bias. We like to think it has, but we're biased ;).\n",
     "\n",
     "<img src=\"https://i.ibb.co/BjLSRMM/ezgif-2-253dfd3f9097.gif\" />"
-   ],
-   "metadata": {
-    "id": "mPRZReq4p68k"
-   }
+   ]
   }
  ],
  "metadata": {
@@ -1056,6 +1057,7 @@
    ],
    "provenance": []
   },
+  "gpuClass": "standard",
   "kernelspec": {
    "display_name": "Python 3",
    "language": "python",
@@ -1069,9 +1071,8 @@
    "interpreter": {
     "hash": "7812ea015bdcee6f23a998adcdd2ef97c151c0c241b7b7070987d9313e41299d"
    }
-  },
-  "gpuClass": "standard"
+  }
  },
  "nbformat": 4,
  "nbformat_minor": 0
-}
+}
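Note that most of the +40/-39 churn above is not the link fix itself: the only substantive change is the corrected GitHub URL in hunk `@@ -12,7 +13,7 @@`. The rest is metadata keys being reordered alphabetically ("cellView" before "id", "metadata" before "source") and relocated blocks like "gpuClass". This is the characteristic signature of re-saving a notebook through a JSON serializer with sorted keys, as nbformat's JSON writer does. A minimal sketch of the effect, using only the standard library (the cell dict below is illustrative, not copied from the notebook):

```python
import json

# A cell metadata dict as it appeared before the commit: "id" precedes "cellView".
cell = {"id": "Jg17jzwtbxDA", "cellView": "form"}

# Serializing with sort_keys=True reorders keys alphabetically, producing
# exactly the kind of semantically neutral churn seen in this diff.
normalized = json.dumps(cell, sort_keys=True, indent=1)
print(normalized)
```

Because the reordering is purely cosmetic, `json.loads(normalized)` compares equal to the original dict; only the diff gets noisy.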

0 commit comments