Commit 8d090ed

Revise image captions and add new images
Updated image captions and added new images for clarity.
1 parent db130dc commit 8d090ed

File tree

1 file changed: 21 additions & 144 deletions

project-3/proj3.html

Lines changed: 21 additions & 144 deletions
@@ -47,164 +47,41 @@ <h3>Part 2</h3>
 <div style="display:flex; flex-wrap:wrap; justify-content:center; text-align:center;">
 
 <figure style="flex: 1 0 20%; margin:12px;">
-<figcaption style="margin-bottom:6px;">Original</figcaption>
-<img src="images/img_right.jpg" alt="img_right.jpg" width="50%">
+<figcaption style="margin-bottom:6px;">Center (destination) image</figcaption>
+<img src="images/img_center.jpg" alt="img_center.jpg" width="50%">
 </figure>
 
 <figure style="flex: 1 0 20%; margin:12px;">
-<figcaption style="margin-bottom:6px;">Transformed</figcaption>
-<img src="images/churchCrop.jpg" alt="churchCrop.jpg" width="50%">
+<figcaption style="margin-bottom:6px;">Right (source) image</figcaption>
+<img src="images/img_right.jpg" alt="img_right.jpg" width="50%">
 </figure>
 </div>
 
-<h3>Part 3</h3>
 <p>
-Given the following image:
+Using the points
+</p>
 
+<pre><code>
+xy1, xy2, xy3, xy4, uv1, uv2, uv3, uv4 = (
+(303.4064, 446.3877),
+(449.93048, 419.11496),
+(447.79144, 660.29144),
+(322.123, 702.5374),
+(689.78, 411.4477),
+(851.77454, 410.35315),
+(802.5195, 661.00684),
+(665.69977, 661.00684)
+)
+</code></pre>
 
-</article>
-<hr>
+we can obtain the system of equations
 
-<!-- 1.3 -->
-<article id="part1-3">
-<h3>Part 1.3: Derivative of Gaussian (DoG) Filter</h3>
-<p>
-To further improve edge visibility, we can first smooth out the noise by convolving the original image with a Gaussian filter. To generate one with dimensions <i>n &times; n</i>, we can take the outer product of 2 length <i>n</i> arrays. Below is the result of blurring the original image using a 5 &times; 5 Gaussian filter with &sigma; = 1:
-</p>
-<div align="center">
-<img src="images/orggaussblur.png" alt="orggauss.png" width="50%">
-</div>
-<p>
-Applying the same edge detection process above, we get:
-</p>
-<div align="center">
-<img src="images/blurclip.png" alt="blurclip.png" width="50%">
-</div>
-<p>
-To illustrate the improvements, below is a side-by-side comparison of the edge magnitudes for each of the 3 methods, in row-major order from least to most clarity:
-</p>
-<div align="center">
-<img src="images/gradmag.png" alt="gradmag.png" width="50%">
-</div>
-<p>
-Instead of applying 2 convolutions to the image, we can take advantage of the fact that convolution is commutative, and first convolve the Gaussian with <i>D<sub>x</sub></i> and <i>D<sub>y</sub></i>, then convolve the image with the resulting kernel. This optimizes the computation by only convolving with the image once. Below are the respective results:
-</p>
-<div align="center">
-<img src="images/twovsone.png" alt="twovsone.png" width="50%">
-</div>
-</article>
-</section>
-<hr>
-
-<!-- ======================== -->
-<!-- Part 2: Applications -->
-<!-- ======================== -->
-<section id="part2">
-<h2>Part 2: Applications</h2>
-
-<!-- 2.1 -->
-<article id="part2-1">
-<h3>Part 2.1: Image "Sharpening"</h3>
-<p>
-Using convolution, we can also sharpen a blurry image with a similar technique. Since a Gaussian filter removes the highest frequencies in a signal, subtracting the filtered image from the original would leave all of the highest frequencies from the base image. We can then add this difference to the original to highlight the highest frequencies of the image, then clip it to [0, 255] to preserve brightness. Using the following Taj Mahal image:
-</p>
-<div align="center">
-<img src="images/taj.jpg" alt="taj.jpg" width="50%">
-</div>
-<p>
-We can obtain the blurred and high-pass filtered versions of this image as follows:
-</p>
-<div align="center">
-<img src="images/lpvshp.png" alt="lpvshp.png" width="50%">
-</div>
-<p>
-Now, we can add the second image to the original to get a sharpened version:
-<div align="center">
-<img src="images/orgsharp.png" alt="orgsharp.png" width="50%">
-</div>
-<p>
-In general, we can change the sharpening amount by multiplying the high-pass filtered image by a constant. Below is a demonstration of various sharpening amounts from 1 to 100:
-</p>
-<div align="center">
-<img src="onetotensquared.png" alt="onetotensquared.png" width="50%">
-</div>
-<p>
-From the visualization above, increasing the sharpening amount highlights the high-frequency signals from the original image. Below is an example of sharpening a blurred image, using the same selfie from 1.1 as the original image, and the box-filtered version as the starting image to sharpen:
-</p>
-<div align="center">
-<img src="images/blurthensharp.png" alt="blurthensharp.png" width="50%">
-</div>
-</article>
-<hr>
 
-<!-- 2.2 -->
-<article id="part2-2">
-<h3>Part 2.2: Hybrid Images</h3>
-<p>
-A <strong>hybrid image</strong> is when 2 images, one under a low-pass filter and the other a high-pass filter, are blended to create an illusion where one sees mostly the high-frequency image at a close distance, but only the low-frequency image at a longer distance. This occurs because our vision has a limited spatial frequency resolution, so higher frequencies fall outside of the frequencies visible at a sufficiently far distance. Below is an example of 2 images that we can align and create a hybrid effect:
-</p>
-<div align="center">
-<img src="images/lowhigh.png" alt="lowhigh.png" width="50%">
-</div>
-<p>
-To find the optimal &sigma; for each image to create the Gaussian & impulse filter with, a good starting point through experimentation is 1.8-3% times the shorter of the width and height of the LPF image, and 0.6-1.5% for the HPF image. In the example above, a cutoff percentage of 3% and 1% was used:
-</p>
-<div align="center">
-<img src="images/lpfhpf.png" alt="lpfhpf.png" width="50%">
-</div>
-<p>
-Below is a visualization displaying the log magnitude of the Fourier Transform of the starting and the filtered images:
-</p>
-<div align="center">
-<img src="images/fourier.png" alt="fourier.png" width="50%">
-</div>
-<p>
-After aligning and combining the LPF and HPF images, we get the final result below:
-</p>
-<div align="center">
-<img src="images/hybrid1.png" alt="hybrid1.png" width="50%">
-<figcaption style="margin-bottom:6px;">Cutoffs used: &sigma;<sub>1, 2</sub> = (21.96, 10.56) [732 &times; 1024, 1408 &times; 1056]</figcaption>
-</div>
+<h3>Part 3</h3>
 <p>
-This effect can also be used on any 2 images that are aligned, like the following examples, which use a frequency cutoff of 2% and 0.8% times the shorter side:
+Given the following image:
 </p>
-<div align="center">
-<img src="images/lowhigh2.png" alt="lowhigh2.png" width="50%">
-<figcaption style="margin-bottom:6px;">Source: <a href="https://mobile-legends.fandom.com/wiki/Odette?file=Odette_%28Wisdom_of_the_Stars%29.jpg"><sup>[1]</sup></a> | <a href="https://mobile-legends.fandom.com/wiki/Floryn?file=Floryn_%28Melody_of_Light%29.jpg"><sup>[2]</sup></a></figcaption>
-</div>
-<div align="center">
-<img src="images/hybrid2.png" alt="hybrid2.png" width="50%">
-<figcaption style="margin-bottom:6px;">Cutoffs used: &sigma;<sub>1, 2</sub> = (7.44, 2.976) [372 &times; 372]</figcaption>
-</div>
-<div align="center">
-<img src="images/lowhigh3.png" alt="lowhigh3.png" width="50%">
-<figcaption style="margin-bottom:6px;">Source: <a href="https://mobile-legends.fandom.com/wiki/Lesley?file=Lesley_%28Angelic_Agent%29.jpg"><sup>[3]</sup></a></figcaption>
-</div>
-<div align="center">
-<img src="images/hybrid3.png" alt="hybrid3.png" width="50%">
-<figcaption style="margin-bottom:6px;">Cutoffs used: &sigma;<sub>1, 2</sub> = (9.6, 3.84) [480 &times; 720]</figcaption>
-</div>
-</article>
-<hr>
 
-<!-- 2.3 + 2.4 -->
-<article id="part2-3">
-<h3>Part 2.3 &amp; 2.4: Multiresolution Blending</h3>
-<p>
-To blend 2 images, we can first start by creating a Gaussian/Laplacian stack. This is similar to a Gaussian stack, except that at each level, we don't need to downsample. The easiest way to achieve this is to have a function that takes in an array of images, and for each level, append one image to a list and recurse on the last (latest added) element of the list. In the end, a 3D (4D if using RGB images) array will be returned that is a collection of all the images in the stack:
-</p>
-<div align="center">
-<img src="images/appleorangestack.png" alt="appleorangestack.png" width="50%">
-<figcaption style="margin-bottom:6px;">(T-B): Gaussian stack for apple, Laplacian stack for apple, Gaussian stack for orange, Laplacian stack for orange.</figcaption>
-</div>
-</article>
-</section>
 </main>
-<hr>
-
-<div align="center">
-<a href="https://cjxthecoder.github.io">cjxthecoder</a> | <a href="https://github.com/cjxthecoder">GitHub</a> | <a href="https://www.linkedin.com/in/daniel-cheng-71b475279">LinkedIn</a>
-</div>
-
 </body>
 </html>
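The commit's new Part 3 content lists four point correspondences and says a system of equations follows; the equations themselves are outside this hunk. As a hedged illustration only, here is a minimal NumPy sketch of the standard 8-unknown inhomogeneous system a projective transform (homography) through those four correspondences would produce. The split of the tuple into `xy` (one image) and `uv` (the other), the mapping direction, and the `solve_homography` helper are assumptions for this sketch, not taken from the commit.

```python
import numpy as np

# The four correspondences from the commit's new Part 3 snippet,
# split (assumed) into points in one image (xy) and the other (uv).
xy = np.array([
    (303.4064, 446.3877),
    (449.93048, 419.11496),
    (447.79144, 660.29144),
    (322.123, 702.5374),
])
uv = np.array([
    (689.78, 411.4477),
    (851.77454, 410.35315),
    (802.5195, 661.00684),
    (665.69977, 661.00684),
])

def solve_homography(xy, uv):
    """Build and solve the 8x8 system A h = b, where each correspondence
    (x, y) -> (u, v) contributes two rows derived from
        u = (a x + b y + c) / (g x + h y + 1)
        v = (d x + e y + f) / (g x + h y + 1)
    after multiplying through by the denominator."""
    rows, rhs = [], []
    for (x, y), (u, v) in zip(xy, uv):
        rows.append([x, y, 1, 0, 0, 0, -x * u, -y * u])
        rows.append([0, 0, 0, x, y, 1, -x * v, -y * v])
        rhs.extend([u, v])
    h = np.linalg.solve(np.array(rows), np.array(rhs))
    # Append the fixed scale i = 1 and reshape into the 3x3 matrix.
    return np.append(h, 1.0).reshape(3, 3)

H = solve_homography(xy, uv)

# Sanity check: H applied to (x, y, 1) should land on (u, v) after
# dividing out the homogeneous coordinate.
p = H @ np.array([*xy[0], 1.0])
print(p[:2] / p[2])  # close to uv[0]
```

With exactly four correspondences the system is square, so the solve is exact (up to floating point) rather than least-squares; with more points one would switch to `np.linalg.lstsq` on the same row structure.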
