Commit c611f4f

Update proj5.html

1 parent ebdebb3 commit c611f4f

1 file changed: +98 −0 lines changed

project-5/proj5.html

Lines changed: 98 additions & 0 deletions
@@ -918,5 +918,103 @@ <h2>Part 1.9 – Hybrid Images</h2>
</div>
</section>

<!-- ========================================================= -->
<!-- Part 2: Implementing the UNet from Scratch               -->
<!-- ========================================================= -->
<section id="part-2-1">
<h2>Part 2 – Implementing the UNet from Scratch</h2>

Now that we have seen how a UNet can generate images as part of a denoising model, we will implement one from scratch. Specifically, we will attempt to generate digits similar to those in the MNIST dataset from pure noise, using a denoising UNet of our own.

<h3>Training an Unconditioned UNet</h3>

The most basic denoiser is a one-step denoiser. Formally, given a noisy image <code>z</code>, we aim to train a denoiser <code>D<sub>&theta;</sub>(z)</code> that maps it to a clean image <code>x</code>. To do this, we minimize the L<sup>2</sup> loss E<sub>z,x</sub>||D<sub>&theta;</sub>(z) - x||<sup>2</sup> during training.<br>
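<br>As a minimal sketch of this objective (assuming a PyTorch module <code>denoiser</code> and tensors <code>z</code>, <code>x</code> holding a noisy/clean image batch; the names are illustrative):

<pre><code>import torch.nn.functional as F

# One-step denoising objective: D_theta should map the noisy
# batch z straight back to the clean batch x.
pred = denoiser(z)               # D_theta(z)
loss = F.mse_loss(pred, x)       # E ||D_theta(z) - x||^2
loss.backward()                  # gradients for the optimizer step
</code></pre>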
<br>To create a noisy image, we use the process z = x + &sigma;&epsilon;, where &sigma; &isin; [0, 1] and &epsilon; ~ &Nscr;(0, I) is standard Gaussian noise. To visualize the kind of images this process produces, below is an example of an MNIST digit with progressively more noise as &sigma; increases from 0 to 1 (a code sketch of the process follows the figure):
<div class="image-row">
<figure>
<img src="images/unet/00.png" alt="00.png" />
<figcaption>&sigma; = 0.0</figcaption>
</figure>
<figure>
<img src="images/unet/02.png" alt="02.png" />
<figcaption>&sigma; = 0.2</figcaption>
</figure>
<figure>
<img src="images/unet/04.png" alt="04.png" />
<figcaption>&sigma; = 0.4</figcaption>
</figure>
<figure>
<img src="images/unet/05.png" alt="05.png" />
<figcaption>&sigma; = 0.5</figcaption>
</figure>
<figure>
<img src="images/unet/06.png" alt="06.png" />
<figcaption>&sigma; = 0.6</figcaption>
</figure>
<figure>
<img src="images/unet/08.png" alt="08.png" />
<figcaption>&sigma; = 0.8</figcaption>
</figure>
<figure>
<img src="images/unet/10.png" alt="10.png" />
<figcaption>&sigma; = 1.0</figcaption>
</figure>
</div>

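As a rough sketch, the noising process used to produce this sweep (assuming <code>x</code> is a clean MNIST image tensor scaled to [0, 1]):

<pre><code>import torch

def add_noise(x, sigma):
    # z = x + sigma * eps, with eps ~ N(0, I) per pixel
    return x + sigma * torch.randn_like(x)

# Reproduce the sweep shown above
noisy = [add_noise(x, s) for s in (0.0, 0.2, 0.4, 0.5, 0.6, 0.8, 1.0)]
</code></pre>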
To start building the model, we will use the following architecture:
<div align="center">
<figure>
<img src="images/unet/unconditioned_arch.png" alt="unconditioned_arch.png" />
<figcaption>Source: <a href="https://cal-cs180.github.io/fa25/hw/proj5/partb.html">CS180</a></figcaption>
</figure>
</div>

where <code>D</code> is the number of hidden dimensions.

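As a heavily simplified skeleton of such a UNet (the diagram above has more levels and specific operator blocks; this sketch only shows the encoder–decoder shape with one skip connection, and all names are illustrative):

<pre><code>import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    # Conv -> BatchNorm -> GELU, preserving spatial size.
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.GELU(),
        )

    def forward(self, x):
        return self.net(x)

class UnconditionedUNet(nn.Module):
    # Minimal encoder-decoder with a single skip connection.
    def __init__(self, in_ch=1, D=128):
        super().__init__()
        self.enc = ConvBlock(in_ch, D)
        self.down = nn.Sequential(nn.MaxPool2d(2), ConvBlock(D, 2 * D))
        self.up = nn.Sequential(
            nn.ConvTranspose2d(2 * D, D, 2, stride=2),
            nn.BatchNorm2d(D),
            nn.GELU(),
        )
        self.dec = ConvBlock(2 * D, D)        # consumes the skip concat
        self.out = nn.Conv2d(D, in_ch, 3, padding=1)

    def forward(self, x):
        skip = self.enc(x)                    # (B, D, 28, 28)
        hidden = self.down(skip)              # (B, 2D, 14, 14)
        up = self.up(hidden)                  # (B, D, 28, 28)
        return self.out(self.dec(torch.cat([up, skip], dim=1)))
</code></pre>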
<h4>Training hyperparameters</h4>
For the hyperparameters, we use a batch size of 256, a hidden dimension of <code>D</code> = 128, the Adam optimizer with a learning rate of 1e-4, and a training time of 5 epochs. A fixed noise level of &sigma; = 0.5 is used to noise the training images.

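Put together, a minimal training loop under these hyperparameters might look like the following (assuming the <code>UnconditionedUNet</code> sketch above and a standard torchvision MNIST loader; an illustration, not the exact training code):

<pre><code>import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

train_set = datasets.MNIST("data", train=True, download=True,
                           transform=transforms.ToTensor())
loader = DataLoader(train_set, batch_size=256, shuffle=True)

model = UnconditionedUNet(D=128)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
sigma = 0.5                                  # fixed training noise level

for epoch in range(5):
    for x, _ in loader:                      # labels unused (unconditioned)
        z = x + sigma * torch.randn_like(x)  # make the noisy/clean pair
        loss = F.mse_loss(model(z), x)
        opt.zero_grad()
        loss.backward()
        opt.step()
</code></pre>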
<h4>Evaluation results</h4>
After the model is trained, below is the training loss curve, with the loss plotted for every batch processed:
<div align="center">
<figure>
<img src="images/unet/121_training_curve.png" alt="121_training_curve.png" />
</figure>
</div>

The following shows the performance of the model after the 1st and 5th epochs on sample test images, all noised with &sigma; = 0.5:
<div align="center">
<figure>
<img src="images/unet/121_visualization.png" alt="121_visualization.png" />
</figure>
</div>

We can see that the model performs decently well. To illustrate its effectiveness at other noise levels, below is the model after the 5th epoch denoising the same image for &sigma; &isin; {0.0, 0.2, 0.4, 0.5, 0.6, 0.8, 1.0}:
<div align="center">
<figure>
<img src="images/unet/122_visualization.png" alt="122_visualization.png" />
</figure>
</div>

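A rough sketch of how such a sweep can be generated with the trained model (names illustrative; <code>x</code> is a clean test image batch):

<pre><code>import torch

model.eval()
with torch.no_grad():
    for sigma in (0.0, 0.2, 0.4, 0.5, 0.6, 0.8, 1.0):
        z = x + sigma * torch.randn_like(x)  # noise the same clean image
        denoised = model(z)                  # one-step denoise at this sigma
</code></pre>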
<h4>Limitations on pure noise</h4>
Although the model is decent at removing noise from images, our goal is to generate digits from pure noise. This proves to be an issue: under MSE loss, the optimal prediction for an input that carries no information about the target is the mean of the targets, so when pure noise is fed in, the model learns to output roughly the average of all digits in the training set. This is illustrated in the following inputs and outputs of the model after the 1st and 5th epochs:
<div align="center">
<figure>
<img src="images/unet/123_visualization.png" alt="123_visualization.png" />
</figure>
</div>

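To make the averaging effect concrete, a minimal check (names illustrative): the MSE-optimal constant prediction is the per-pixel mean of the training digits, which can be computed directly:

<pre><code>import torch
from torchvision import datasets, transforms

train_set = datasets.MNIST("data", train=True, download=True,
                           transform=transforms.ToTensor())
# The blurry "average digit" that an MSE-optimal model outputs
# when its input is uninformative pure noise.
all_digits = torch.stack([img for img, _ in train_set])
mean_digit = all_digits.mean(dim=0)
</code></pre>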
The training loss curve also tells a similar story, as the loss plateaus rather than continuing to decrease:
<div align="center">
<figure>
<img src="images/unet/123_training_curve.png" alt="123_training_curve.png" />
</figure>
</div>

To generate plausible-looking digits, we need a different approach than one-step denoising.
</section>
</body>
</html>
