Commit f39a2d2

Update proj1.html
1 parent 24716b0 commit f39a2d2

File tree: 1 file changed (+8, -8 lines)

project-1/proj1.html

Lines changed: 8 additions & 8 deletions
@@ -4,44 +4,44 @@
 <!-- Section 1 -->
 <section class="section card" aria-labelledby="s2-title">
 <div class="heading">
-<h2 id="s2-title" class="title">Project 1: Introduction</h2>
+<h2 id="s2-title" class="title">Project 1: Colorizing the Prokudin-Gorskii photo collection</h2>
 </div>
 
-<!-- Text BEFORE the comparison images -->
+<!-- Body text -->
 <p class="lead" contenteditable="false"
 data-placeholder="Describe what you’re comparing (this text appears before both images)">
 Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.
 </p>
 </section>
 
 
-<!-- Section 2 (Side-by-Side Comparison) -->
+<!-- Section 2 -->
 <section class="section card" aria-labelledby="s2-title">
 <div class="heading">
 <h2 id="s2-title" class="title">NCC & Naive alignment</h2>
 </div>
 
-<!-- Text BEFORE the comparison images -->
+<!-- Body text -->
 <p class="lead" contenteditable="false"
 data-placeholder="Describe what you’re comparing (this text appears before both images)">
 The first strategy is to perform a naive search on what translations align the different image plates the best. To compute the similarity, we can use the normalized cross-correlation (NCC) formula for 2 images of the same size: [#formula]. Since we want to only compare the top (blue) & bottom (red) plates with the center (green) plate, we can start by moving the image down by H/3 units. For any given displacements (x1, y1) and (x2, y2) where y1 >= 0 and y2 <= 0, the resulting intersection of all 3 images is given by the following:
 <br>
 <figure>
-<img class="media" src="images/crop3.png" alt="Describe the animation" loading="lazy"/>
+<img class="media" src="images/proj1.png" alt="Describe the animation" loading="lazy"/>
 </figure>
 <br>
 Therefore, if we only want to compare the blue and green plates, we can add a second displacement (0, -H/3) to limit the comparison area to the center plate. Likewise, when comparing the red and green plates, we can have the second displacement be (0, H/3). Once the best displacements for the blue & red plates are computed, we can use the formula above to compute the final composite of the 3 plates.
 </p>
 </section>
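The exhaustive NCC search described in the section above can be sketched in a few lines. This is a hypothetical illustration, not the commit's actual code: it assumes NumPy, uses circular `np.roll` shifts instead of the cropped-intersection comparison the text derives, and the names `ncc` and `naive_align` are invented for this sketch.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two same-sized images."""
    a = a - a.mean()
    b = b - b.mean()
    return (a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b))

def naive_align(plate, ref, radius=15):
    """Try every displacement in [-radius, radius]^2 and return the
    (dy, dx) whose shifted plate best matches the reference by NCC."""
    best, best_score = (0, 0), -np.inf
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            score = ncc(np.roll(plate, (dy, dx), axis=(0, 1)), ref)
            if score > best_score:
                best, best_score = (dy, dx), score
    return best
```

In the project's setting, this search would run twice, once for the blue plate and once for the red plate, each compared against the green plate after the initial H/3 vertical split.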
 
 
-<!-- Section 3 (GIF) -->
+<!-- Section 3 -->
 <section class="section card" aria-labelledby="s2-title">
 <div class="heading">
-<h2 id="s2-title" class="title">Naive search</h2>
+<h2 id="s2-title" class="title">Pyramid search</h2>
 </div>
 
-<!-- Text BEFORE the comparison images -->
+<!-- Body text -->
 <p class="lead" contenteditable="false"
 data-placeholder="Describe what you’re comparing (this text appears before both images)">
 In the previous section, we saw that while naive search works well on small images, it becomes computationally inefficient on larger ones. Is there a way to extend what works on smaller images so that it will still work on larger ones?
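The question above points at the "Pyramid search" heading: a coarse-to-fine image pyramid estimates the displacement on a heavily downsampled copy, then doubles it and refines within a small fixed window at each finer scale. A minimal sketch under stated assumptions (NumPy, stride-2 subsampling standing in for a proper blur-and-downsample, circular `np.roll` shifts, and invented names `search` and `pyramid_align`):

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two same-sized images."""
    a = a - a.mean()
    b = b - b.mean()
    return (a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b))

def search(plate, ref, center, radius):
    """Exhaustive NCC search in a (2*radius+1)^2 window around `center`."""
    best, best_score = center, -np.inf
    for dy in range(center[0] - radius, center[0] + radius + 1):
        for dx in range(center[1] - radius, center[1] + radius + 1):
            score = ncc(np.roll(plate, (dy, dx), axis=(0, 1)), ref)
            if score > best_score:
                best, best_score = (dy, dx), score
    return best

def pyramid_align(plate, ref, levels=4, radius=3):
    """Coarse-to-fine alignment: recurse on half-resolution copies,
    then refine the doubled estimate at the current resolution."""
    if levels == 0:
        return search(plate, ref, (0, 0), radius)
    # Estimate at half resolution; one pixel there is two pixels here.
    dy, dx = pyramid_align(plate[::2, ::2], ref[::2, ::2], levels - 1, radius)
    return search(plate, ref, (2 * dy, 2 * dx), radius)
```

The window size stays fixed at every level, so the cost of the search no longer grows with the displacement range, which is what makes the naive strategy viable on the full-resolution scans.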

0 commit comments
