<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8" />
  <title>Project 1</title>
</head>
<body>

  <h1>Project 1: Colorizing the Prokudin-Gorskii photo collection</h1>

  <!-- Section 1 -->
  <h2>Introduction</h2>
  <p>
    The Prokudin-Gorskii photo collection consists of digitized glass plate images, each containing three grayscale exposures taken through blue, green, and red filters (ordered from top to bottom). To obtain a colorized version of the original scene, we can align the three plates and use the pixel brightness (normalized to [0, 255]) of each plate as the value of its respective color channel. This project aims to perform the alignment and compositing automatically, given any image containing the 3 glass plates in BGR order.
  </p>
  <div align="center">
    <img src="intro.png" alt="Introduction">
  </div>
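  <p>
    The split-and-composite step described above can be sketched as follows (a minimal sketch assuming <code>numpy</code>; the names <code>split_plates</code> and <code>composite</code> are illustrative, and the alignment step is omitted here):
  </p>

```python
import numpy as np

def split_plates(scan):
    """Split a vertically stacked B/G/R scan into three equal plates."""
    h = scan.shape[0] // 3
    return scan[:h], scan[h:2 * h], scan[2 * h:3 * h]

def composite(b, g, r):
    """Stack the (already aligned) plates into an H x W x 3 RGB image."""
    return np.dstack([r, g, b])

# Tiny synthetic example: three 2x2 plates stacked vertically.
scan = np.arange(12).reshape(6, 2)
b, g, r = split_plates(scan)
rgb = composite(b, g, r)
print(rgb.shape)  # (2, 2, 3)
```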
  <hr>

  <!-- Section 2 -->
  <h2>NCC &amp; Preprocessing</h2>
  <p>
    To begin the alignment process, we need a function that computes a similarity score between 2 images. Motivated by the fact that, given <i>a + b = c</i>, the product <i>ab</i> attains its maximum at <i>a = b = c/2</i>, we can use the normalized cross-correlation formula
  </p>
  <div align="center">
    <img src="equation.png" alt="Normalized cross-correlation formula">
  </div>
  <p>
    for 2 vectors <strong>x</strong> and <strong>y</strong>. Since grayscale images are represented as 2D arrays, we can flatten the two images we want to compare and use the results as the input vectors. A caveat of this method is that both images must have the same size. However, we can approximate the crop dimensions for just the blue and red plates and find the best displacements with respect to the green plate. The final step requires finding the intersection of 3 rectangles, which is given by the following formula:
  </p>
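  <p>
    Assuming the pictured formula is the dot product of the two unit-normalized flattened vectors, the score can be sketched as (the helper name <code>ncc</code> is illustrative):
  </p>

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two same-size grayscale images."""
    x = a.ravel().astype(float)  # flatten the 2D arrays into vectors
    y = b.ravel().astype(float)
    return float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y)))

img = np.array([[1.0, 2.0], [3.0, 4.0]])
print(round(ncc(img, img), 6))  # 1.0 -- an image correlates perfectly with itself
```

Some variants subtract each vector's mean before normalizing (zero-mean NCC), which makes the score invariant to brightness offsets as well as scaling.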
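  <p>
    The intersection formula itself is not reproduced here, but for axis-aligned rectangles it amounts to taking the maximum of the top/left edges and the minimum of the bottom/right edges; a sketch, assuming a <code>(top, left, bottom, right)</code> coordinate convention:
  </p>

```python
def intersect(*rects):
    """Intersection of axis-aligned rectangles, each (top, left, bottom, right)."""
    top = max(r[0] for r in rects)
    left = max(r[1] for r in rects)
    bottom = min(r[2] for r in rects)
    right = min(r[3] for r in rects)
    if top >= bottom or left >= right:
        return None  # rectangles do not all overlap
    return (top, left, bottom, right)

# Three overlapping rectangles.
print(intersect((0, 0, 10, 10), (2, 1, 12, 9), (1, 3, 8, 11)))  # (2, 3, 8, 9)
```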
  <hr>

  <!-- Section 3 -->
  <h2>Edge detection</h2>
  <p>
    We can improve the NCC step by using edge detection techniques to find the crop of each image. The easiest way is to use the Sobel kernel
  </p>
  <div align="center">
    <img src="image3.jpg" alt="Sobel kernel" width="400">
  </div>
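  <p>
    A minimal numpy-only sketch of Sobel gradient magnitude (in practice a library routine such as <code>scipy.ndimage.sobel</code> would be used; the loop-based convolution here is for clarity only):
  </p>

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T  # responds to horizontal edges

def convolve2d(img, kernel):
    """Valid-mode 2D convolution (kernel flipped, as in true convolution)."""
    k = np.flipud(np.fliplr(kernel))
    kh, kw = k.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

def sobel_edges(img):
    """Gradient magnitude from the two Sobel responses."""
    gx = convolve2d(img, SOBEL_X)
    gy = convolve2d(img, SOBEL_Y)
    return np.hypot(gx, gy)

# A vertical step edge produces a strong response along the step.
img = np.zeros((5, 5))
img[:, 3:] = 1.0
print(sobel_edges(img).max())  # 4.0
```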
  <hr>

  <!-- Section 4 -->
  <h2>Subheading 4</h2>
  <p>
    This is some body text for subheading 4. Summarize outcomes or provide references.
  </p>
  <div align="center">
    <img src="image4.jpg" alt="Description of image 4" width="400">
  </div>

</body>
</html>