<h2>Part 1: Filters and Edges</h2>
<h3>Part 1.1: Convolutions from Scratch!</h3>
<p>
Using the definition of convolution:
</p>
<pre><code>
for result_i in range(result.shape[0]):
    for result_j in range(result.shape[1]):
        for ker_i in range(ffker.shape[0]):
            for ker_j in range(ffker.shape[1]):
                result[result_i, result_j] += img[result_i + ker_i, result_j + ker_j] * ffker[ker_i, ker_j]
return result
</code></pre>
<p>
where <code>ffker</code> is the <i>xy</i>-flipped kernel and <code>result</code> is the output 2D array, we can slightly speed up the computation by replacing the 2 inner for-loops with <code>result[result_i, result_j] = np.dot(img_flat, ffker.flatten())</code>, where <code>img_flat</code> is the flattened 1D array of the current patch of <code>img</code> based on the position of the kernel. Although this optimization produces a noticeable speed-up compared to using 4 for-loops, the fastest way is still to use <code>scipy.signal.convolve2d</code>. Below is a timing comparison of the 3 methods above using a 5x5 kernel:
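The dot-product optimization described above can be sketched as follows. This is a minimal illustration, not the report's exact code: the helper name <code>conv2d_dot</code> is hypothetical, and it assumes a "valid" output size (no padding), so it is checked against <code>scipy.signal.convolve2d</code> in that mode.

```python
import numpy as np
from scipy.signal import convolve2d

def conv2d_dot(img, ker):
    """2D convolution via slide-and-dot (hypothetical helper, 'valid' output).

    Flipping the kernel in both x and y turns convolution into a
    sliding-window correlation, so each output pixel is a single np.dot.
    """
    ffker = ker[::-1, ::-1]                      # xy-flipped kernel
    kh, kw = ffker.shape
    out_h = img.shape[0] - kh + 1
    out_w = img.shape[1] - kw + 1
    result = np.zeros((out_h, out_w))
    ffker_flat = ffker.flatten()
    for result_i in range(out_h):
        for result_j in range(out_w):
            # Flattened patch of img at the current kernel position
            img_flat = img[result_i:result_i + kh,
                           result_j:result_j + kw].flatten()
            result[result_i, result_j] = np.dot(img_flat, ffker_flat)
    return result
```

This replaces only the two inner kernel loops; the two outer loops over output pixels remain, which is why <code>scipy.signal.convolve2d</code> (implemented in compiled code) is still faster.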
</p>
<p>
To demonstrate an application of convolution, we can convolve an image with the <strong>box filter</strong>, a kernel whose entries are all equal and sum to 1. We can also convolve with the difference operators <img src="images/diff_op.png" alt="diff_op.png" width="50%"> for edge detection. Each kernel computes the difference in the <i>x</i>- or <i>y</i>-direction, so edges where the brightness of the pixel changes significantly will show up as white and black pixels in the output. Using <code>box_9x9</code> = <i>J<sub>9</sub> / 9<sup>2</sup></i> as the box filter, we get the following results:
</p>
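As a sketch of the box-filter and difference-operator convolutions described above: <code>box_9x9</code> follows the <i>J<sub>9</sub> / 9<sup>2</sup></i> definition in the text, while the exact entries of <code>D_x</code> and <code>D_y</code> are assumptions (the true operators are shown in <code>diff_op.png</code>); standard forward-difference kernels are used here.

```python
import numpy as np
from scipy.signal import convolve2d

# Box filter J_9 / 9^2: all 81 entries equal, summing to 1
box_9x9 = np.ones((9, 9)) / 81.0

# Assumed finite difference operators (see diff_op.png for the real ones)
D_x = np.array([[1, -1]])      # difference in the x-direction
D_y = np.array([[1], [-1]])    # difference in the y-direction

img = np.random.rand(64, 64)
blurred = convolve2d(img, box_9x9, mode="same")  # smoothed image
edges_x = convolve2d(img, D_x, mode="same")      # large |value| at vertical edges
edges_y = convolve2d(img, D_y, mode="same")      # large |value| at horizontal edges
```

Because the box filter's entries sum to 1, the blurred image preserves the overall brightness of the input, while the difference outputs are near zero in flat regions and large in magnitude at edges.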
<h3>Part 1.2: Finite Difference Operator</h3>
<p>
Given the following image:
</p>