<h3>Part 1.3: Derivative of Gaussian (DoG) Filter</h3>
<p>
To further improve edge visibility, we can first smooth out the noise by convolving the original image with a Gaussian filter. To generate one with dimensions <i>n × n</i>, we can take the outer product of two 1D Gaussians of length <i>n</i>. Below is the result of blurring the original image using a 5 × 5 Gaussian filter with σ = 1:
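<p>
For reference, here is a minimal sketch of this construction; the filename is a placeholder, while the 5 × 5 size and σ = 1 match the values above:
</p>
<pre><code>
import cv2
import numpy as np

# Build an n x n Gaussian kernel from the outer product of two 1D Gaussians.
def gaussian_2d(n, sigma):
    g1d = cv2.getGaussianKernel(n, sigma)   # column vector of shape (n, 1)
    return g1d @ g1d.T                      # outer product gives an (n, n) kernel

# "selfie.jpg" is a placeholder filename.
image = cv2.imread("selfie.jpg").astype(np.float64)
blurred = cv2.filter2D(image, -1, gaussian_2d(5, 1.0))
blurred = np.clip(blurred, 0, 255).astype(np.uint8)
</code></pre>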
To illustrate the improvements, below is a side-by-side comparison of the edge magnitudes for each of the three methods, shown in row-major order from least to greatest clarity:
Instead of applying two convolutions to the image, we can take advantage of the fact that convolution is associative: first convolve the Gaussian with <i>D<sub>x</sub></i> and <i>D<sub>y</sub></i>, then convolve the image with the resulting DoG kernels. This saves computation, since the image, by far the largest array involved, is only convolved once per direction. Below are the respective results:
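<p>
A minimal sketch of this optimization is shown below. The finite-difference operators are assumed to be the simple [1, −1] filters from Part 1.1, and the filename, kernel size, and σ are placeholders:
</p>
<pre><code>
import cv2
import numpy as np
from scipy.signal import convolve2d

# Finite-difference operators; assumed to match the D_x, D_y used in Part 1.1.
Dx = np.array([[1.0, -1.0]])
Dy = np.array([[1.0], [-1.0]])

# 2D Gaussian from the outer product of two 1D Gaussians (placeholder size/sigma).
g1d = cv2.getGaussianKernel(5, 1.0)
G = g1d @ g1d.T

# Convolve the Gaussian with the difference operators once, ahead of time.
DoG_x = convolve2d(G, Dx)
DoG_y = convolve2d(G, Dy)

# Then only one convolution with the (grayscale) image is needed per direction.
image = cv2.imread("cameraman.png", cv2.IMREAD_GRAYSCALE).astype(np.float64)
grad_x = convolve2d(image, DoG_x, mode="same", boundary="symm")
grad_y = convolve2d(image, DoG_y, mode="same", boundary="symm")
edge_magnitude = np.sqrt(grad_x**2 + grad_y**2)
</code></pre>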
Using convolution, we can also sharpen a blurry image with a similar technique. Since a Gaussian filter removes the highest frequencies in a signal, subtracting the filtered image from the original leaves only the high frequencies of the base image. We can then add this difference back to the original to emphasize its high frequencies, and clip the result to [0, 255] to keep valid pixel values. Using the following Taj Mahal image:
In general, we can control the sharpening amount by scaling the high-pass image by a constant α before adding it back, i.e. <i>sharpened = original + α · (original − blurred)</i>. Below is a demonstration of sharpening amounts ranging from 1 to 100:
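<p>
Here is a minimal unsharp-masking sketch of that formula; the filename, kernel size, σ, and the sharpening amount α are placeholder choices:
</p>
<pre><code>
import cv2
import numpy as np

image = cv2.imread("taj.jpg").astype(np.float64)   # placeholder filename

# Low-pass filter the image with a Gaussian (placeholder size and sigma).
g1d = cv2.getGaussianKernel(9, 2.0)
low_pass = cv2.filter2D(image, -1, g1d @ g1d.T)

# The high-pass image is exactly what the Gaussian removed.
high_pass = image - low_pass

alpha = 2.0                                        # sharpening amount
sharpened = np.clip(image + alpha * high_pass, 0, 255).astype(np.uint8)
</code></pre>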
From the visualization above, increasing the sharpening amount highlights the high-frequency signals from the original image. Below is an example of sharpening a blurred image, using the same selfie from 1.1 as the original image, and the box-filtered version as the starting image to sharpen:
A <strong>hybrid image</strong> is created by blending two images, one passed through a low-pass filter and the other through a high-pass filter, producing an illusion in which the high-frequency image dominates up close while only the low-frequency image remains visible from farther away. This works because human vision has limited spatial frequency resolution: at a sufficiently far distance, the high frequencies fall outside the range we can resolve. Below is an example of two images that we can align and combine into a hybrid:
To find a good σ for the Gaussian (low-pass) and impulse-minus-Gaussian (high-pass) filters, a useful starting point found through experimentation is 1.8–3% of the shorter image dimension (the minimum of width and height) for the low-pass image, and 0.6–1.5% for the high-pass image. In the example above, cutoff percentages of 3% and 1% were used:
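<p>
A minimal sketch of the hybrid construction is shown below, assuming the two inputs are already aligned and the same size; the filenames are placeholders, while the 3% and 1% cutoffs match the example above:
</p>
<pre><code>
import cv2
import numpy as np

def low_pass(im, cutoff_pct):
    # sigma is the cutoff percentage times the shorter side of the image.
    sigma = cutoff_pct * min(im.shape[0], im.shape[1])
    ksize = int(6 * sigma) | 1                  # odd kernel size covering +/- 3 sigma
    g1d = cv2.getGaussianKernel(ksize, sigma)
    return cv2.filter2D(im, -1, g1d @ g1d.T)

# Placeholder filenames; both images are assumed aligned and equally sized.
im_far = cv2.imread("far.jpg").astype(np.float64)    # seen from far away (low-pass)
im_near = cv2.imread("near.jpg").astype(np.float64)  # seen up close (high-pass)

low = low_pass(im_far, 0.03)                     # 3% cutoff
high = im_near - low_pass(im_near, 0.01)         # impulse minus Gaussian, 1% cutoff

hybrid = np.clip(low + high, 0, 255).astype(np.uint8)
</code></pre>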
This effect can be applied to any two aligned images, as in the following examples, which use frequency cutoffs of 2% and 0.8% of the shorter side:
<h3>Part 2.3 & 2.4: Multiresolution Blending</h3>
<p>
To blend two images, we first build a Gaussian and a Laplacian stack for each. A stack is like the corresponding pyramid, except that the image is never downsampled between levels. A simple way to build one is a function that takes in a list of images and, at each level, appends a new image derived from the last (most recently added) element of the list, then recurses. The result is a 3D array (4D for RGB images) holding every level of the stack:
<figcaption style="margin-bottom:6px;">(T-B): Gaussian stack for apple, Laplacian stack for apple, Gaussian stack for orange, Laplacian stack for orange.</figcaption>
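<p>
For reference, here is a minimal sketch of the stack construction described above; the filename, number of levels, and σ are placeholder choices:
</p>
<pre><code>
import cv2
import numpy as np

def gaussian_stack(stack, levels, sigma=2.0):
    # Recursively blur the most recently added level until the stack is full;
    # unlike a pyramid, nothing is downsampled between levels.
    if len(stack) == levels:
        return np.array(stack)
    g1d = cv2.getGaussianKernel(int(6 * sigma) | 1, sigma)
    stack.append(cv2.filter2D(stack[-1], -1, g1d @ g1d.T))
    return gaussian_stack(stack, levels, sigma)

def laplacian_stack(im, levels, sigma=2.0):
    g = gaussian_stack([im.astype(np.float64)], levels, sigma)
    # Each level is the frequency band removed by the next blur; the last level
    # keeps the coarsest Gaussian so the stack sums back to the original image.
    bands = [g[i] - g[i + 1] for i in range(levels - 1)]
    bands.append(g[-1])
    return np.array(bands)

# "apple.jpg" is a placeholder filename; 5 levels is a placeholder depth.
apple = cv2.imread("apple.jpg").astype(np.float64)
apple_gaussian = gaussian_stack([apple], levels=5)   # shape: (5, H, W, 3)
apple_laplacian = laplacian_stack(apple, levels=5)
</code></pre>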