docs/Features/Filters/Blur.md (2 additions, 2 deletions)
@@ -8,7 +8,7 @@ This method only works with images.
Blur, also known as average blur or box blur, is a simple image processing technique used to reduce noise and smooth out images. It involves replacing the color value of a pixel with the average color value of its neighboring pixels within a specified window or kernel. This process effectively blurs the image and reduces high-frequency noise.

- Box blur is particularly effective in reducing [salt-and-pepper](https://en.wikipedia.org/wiki/Salt-and-pepper_noise 'wikipedia link on salt and pepper noise') noise (random black and white pixels) and minor imperfections in an image. However, it also leads to loss of finer details, so the choice of [kernel](../../../Glossary.md#kernel) size is important.
+ Box blur is particularly effective in reducing [salt-and-pepper](https://en.wikipedia.org/wiki/Salt-and-pepper_noise 'wikipedia link on salt and pepper noise') noise (random black and white pixels) and minor imperfections in an image. However, it also leads to loss of finer details, so the choice of [kernel](../../Glossary.md#kernel) size is important.

<BlurDemo />
@@ -37,7 +37,7 @@ Here's how blur filter is implemented in ImageJS:
_Select a Kernel Size_: The first step is to choose the size of the kernel or window that will be used for the blurring operation. The kernel is typically a square matrix with odd dimensions, such as 3x3, 5x5, 7x7, etc. The larger the kernel, the more intense the blurring effect.

- _Iterate through Pixels_: For each pixel in the image, the algorithm applies [convolution](../../../Glossary.md#convolution).
+ _Iterate through Pixels_: For each pixel in the image, the algorithm applies [convolution](../../Glossary.md#convolution).

_Calculate Average Color_: The algorithm calculates the average color value of all the pixels within the kernel.
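The steps quoted above map onto a short sketch. The following TypeScript is an illustration only, not the ImageJS source: it assumes a single-channel image stored as a flat `Uint8Array`, a square kernel of radius `radius`, and clamped borders (ImageJS exposes dedicated border options instead).

```ts
// Minimal box blur sketch for a single-channel image stored as a flat array.
// The kernel is a (2*radius + 1) square window; borders are handled by clamping
// coordinates, which is only one of the strategies a real library may offer.
function boxBlur(
  pixels: Uint8Array,
  width: number,
  height: number,
  radius: number,
): Uint8Array {
  const out = new Uint8Array(pixels.length);
  const side = 2 * radius + 1;
  const area = side * side;
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      let sum = 0;
      // Average every pixel inside the kernel window centered on (x, y).
      for (let dy = -radius; dy <= radius; dy++) {
        for (let dx = -radius; dx <= radius; dx++) {
          const sx = Math.min(width - 1, Math.max(0, x + dx));
          const sy = Math.min(height - 1, Math.max(0, y + dy));
          sum += pixels[sy * width + sx];
        }
      }
      out[y * width + x] = Math.round(sum / area);
    }
  }
  return out;
}
```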
docs/Features/Filters/Derivative.md (1 addition, 1 deletion)
@@ -67,7 +67,7 @@ $KernelY = \begin{bmatrix}
\end{bmatrix}$

:::info

- As was mentioned, derivative filter is a type of gradient filter. Therefore using the same kernels with gradient filter will provide the same image output. Derivative filter simplifies some kernel's application.
+ As was mentioned, derivative filter is a type of gradient filter. Therefore using the same kernels with gradient filter will provide the same image output. Derivative filter simplifies some kernel's application.
docs/Features/Filters/Gradient.md (9 additions, 9 deletions)
@@ -10,10 +10,16 @@ import GradientDemo from './gradient.demo.tsx'
This method only works with images.
:::

- Gradient filter or specifically [a gradient-based edge detection filter](https://en.wikipedia.org/wiki/Graduated_neutral-density_filter 'Wikipedia link on gradient filter'), is an image processing technique used to highlight edges and boundaries within an image by emphasizing areas of rapid intensity change. The gradient filter operates by calculating the rate of change of pixel intensities across the image. When there's a rapid transition from one intensity level to another, [the convolution operation](../../../Glossary.md#convolution 'glossary link on convolution') captures this change as a high gradient magnitude value, indicating the presence of an edge. It's a fundamental step in various computer vision and image analysis tasks, such as edge detection, object recognition, and image segmentation.
+ Gradient filter or specifically [a gradient-based edge detection filter](https://en.wikipedia.org/wiki/Graduated_neutral-density_filter 'Wikipedia link on gradient filter'), is an image processing technique used to highlight edges and boundaries within an image by emphasizing areas of rapid intensity change. The gradient filter operates by calculating the rate of change of pixel intensities across the image. When there's a rapid transition from one intensity level to another, [the convolution operation](../../Glossary.md#convolution 'glossary link on convolution') captures this change as a high gradient magnitude value, indicating the presence of an edge. It's a fundamental step in various computer vision and image analysis tasks, such as edge detection, object recognition, and image segmentation.

<GradientDemo />

+ The gradient filter enhances edges by detecting abrupt changes in pixel intensities.
+
+ :::caution
+ Keep in mind that gradient filters can be sensitive to noise and might result in false edges or emphasize noise. Smoothing the image (e.g., using Gaussian blur) before applying the gradient filter can help mitigate this issue.
+ :::
+

### Parameters and default values

-`options`
@@ -30,22 +36,16 @@ Gradient filter or specifically[ a gradient-based edge detection filter](https:/
**\*** - if the filter only needs to be applied in one direction, the user can pass one kernel instead of two. However, if neither kernel is passed, the function will throw an error.

- The gradient filter enhances edges by detecting abrupt changes in pixel intensities.
-
- :::caution
- Keep in mind that gradient filters can be sensitive to noise and might result in false edges or emphasize noise. Smoothing the image (e.g., using Gaussian blur) before applying the gradient filter can help mitigate this issue.
- :::
-

<details>
<summary><b>Implementation</b></summary>

Here's how gradient filter is implemented in ImageJS:

_Grayscale Conversion_: Before applying a gradient filter, the color image is converted into [grayscale](grayscale.md 'link to grayscale filter'). This simplifies the processing by reducing the image to a single channel representing pixel intensities.

- _Kernel Operators_: Gradient filter consists of small convolution [kernels](../../../Glossary.md#kernel 'glossary link on kernel'). Normally, one for detecting horizontal changes and another for vertical changes, however user might indicate only one kernel to check only one of directions. These kernels are usually 3x3 matrices of numerical weights.
+ _Kernel Operators_: The gradient filter consists of small convolution [kernels](../../Glossary.md#kernel 'glossary link on kernel'): normally one for detecting horizontal changes and another for vertical changes, although the user may pass a single kernel to filter in only one direction. These kernels are usually 3x3 matrices of numerical weights.

- _Convolution Operation_: The gradient filter is applied through a [convolution](../../../Glossary.md#convolution 'glossary link on convolution') operation, where the filter kernel slides over the grayscale image. At each position, the convolution operation involves element-wise multiplication of the filter kernel with the corresponding pixels in the image, followed by summing up the results. This sum represents the rate of intensity change (gradient) at that location in the image.
+ _Convolution Operation_: The gradient filter is applied through a [convolution](../../Glossary.md#convolution 'glossary link on convolution') operation, where the filter kernel slides over the grayscale image. At each position, the convolution operation involves element-wise multiplication of the filter kernel with the corresponding pixels in the image, followed by summing up the results. This sum represents the rate of intensity change (gradient) at that location in the image.

_Gradient Magnitude and Direction_: For each pixel, the gradient magnitude is calculated by combining the results of the horizontal and vertical convolutions: the corresponding values from each convolution are squared and summed, and the square root of that sum gives the magnitude.
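A rough TypeScript sketch of the magnitude computation described above. The Sobel pair is used only as an example kernel set and the clamped border handling is an assumption; ImageJS lets the caller supply `kernelX`/`kernelY`, and this is not its actual implementation.

```ts
// Gradient magnitude sketch on a grayscale image stored as a flat array.
// The Sobel kernels below are placeholders for whatever kernelX / kernelY
// the caller chooses to pass.
const kernelX = [
  [-1, 0, 1],
  [-2, 0, 2],
  [-1, 0, 1],
];
const kernelY = [
  [-1, -2, -1],
  [0, 0, 0],
  [1, 2, 1],
];

// Element-wise multiply the 3x3 kernel with the neighborhood of (x, y) and sum.
function convolveAt(
  gray: Float64Array,
  width: number,
  height: number,
  x: number,
  y: number,
  kernel: number[][],
): number {
  let sum = 0;
  for (let ky = -1; ky <= 1; ky++) {
    for (let kx = -1; kx <= 1; kx++) {
      const sx = Math.min(width - 1, Math.max(0, x + kx));
      const sy = Math.min(height - 1, Math.max(0, y + ky));
      sum += gray[sy * width + sx] * kernel[ky + 1][kx + 1];
    }
  }
  return sum;
}

function gradientMagnitude(
  gray: Float64Array,
  width: number,
  height: number,
): Float64Array {
  const out = new Float64Array(gray.length);
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      const gx = convolveAt(gray, width, height, x, y, kernelX);
      const gy = convolveAt(gray, width, height, x, y, kernelY);
      // Square both convolution results, sum them, then take the square root.
      out[y * width + x] = Math.hypot(gx, gy);
    }
  }
  return out;
}
```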
docs/Features/Filters/Invert.md (3 additions, 3 deletions)
@@ -22,14 +22,14 @@ import InvertDemo from './invert.demo.tsx'
Here's how invert filter is implemented in ImageJS:

- _Pixel Transformation_: For each pixel in the image, the inversion filter transforms its color [intensity](../../../Glossary.md#intensity 'glossary link on intensity') value. The new intensity value is calculated using the formula:
+ _Pixel Transformation_: For each pixel in the image, the inversion filter transforms its color [intensity](../../Glossary.md#intensity 'glossary link on intensity') value. The new intensity value is calculated using the formula:

$$New Intensity = Max Intensity - Original Intensity$$

- Where "_Max Intensity_" is the maximum possible intensity value for the color channel.
+ Where $$Max Intensity$$ is the maximum possible intensity value for the color channel.

:::warning

- ImageJS uses components to calculate each pixel value and leaves alpha channel unchanged. For more information about channels and components visit [this link](../../../Tutorials%20and%20concepts/Concepts/Channel%20vs%20component.md).
+ ImageJS uses components to calculate each pixel value and leaves alpha channel unchanged. For more information about channels and components visit [this link](../../Tutorials%20and%20concepts/Concepts/Channel%20vs%20component.md).
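The pixel transformation above amounts to a one-line loop. A minimal sketch, assuming 8-bit components in a flat array and leaving alpha handling aside (consistent with the warning that ImageJS keeps the alpha channel unchanged):

```ts
// Invert sketch for 8-bit components: newValue = maxValue - originalValue.
// Alpha components are assumed to be excluded from `components` here.
function invert(components: Uint8Array, maxValue = 255): Uint8Array {
  const out = new Uint8Array(components.length);
  for (let i = 0; i < components.length; i++) {
    out[i] = maxValue - components[i];
  }
  return out;
}
```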
docs/Features/Filters/Median.md (5 additions, 5 deletions)
@@ -10,6 +10,10 @@ This method only works with images.
<MedianDemo />

+ The key advantage of using the median filter, especially for noise reduction, is that it is less sensitive to extreme values or outliers compared to other filters like the [mean filter](https://en.wikipedia.org/wiki/Geometric_mean_filter 'wikipedia link on mean filter'). Since noise often appears as isolated bright or dark pixels that deviate significantly from their neighbors, the median filter effectively ignores these outliers and replaces them with more representative values from the local neighborhood.
+
+ However, the median filter also has limitations. It can blur sharp edges and thin lines in the image, as it doesn't consider the spatial relationship between pixels beyond their intensity values. This means that while it's great for removing noise, it might not be suitable for all types of image enhancement tasks.
+

### Parameters and default values

-`options`
@@ -22,16 +26,12 @@ This method only works with images.
|[`borderType`](https://image-js.github.io/image-js-typescript/interfaces/MedianFilterOptions.html#borderType)| no |`reflect101`|
|[`borderValue`](https://image-js.github.io/image-js-typescript/interfaces/MedianFilterOptions.html#borderValue)| no |`0`|

- The key advantage of using the median filter, especially for noise reduction, is that it is less sensitive to extreme values or outliers compared to other filters like the [mean filter](https://en.wikipedia.org/wiki/Geometric_mean_filter 'wikipedia link on mean filter'). Since noise often appears as isolated bright or dark pixels that deviate significantly from their neighbors, the median filter effectively ignores these outliers and replaces them with more representative values from the local neighborhood.
-
- However, the median filter also has limitations. It can blur sharp edges and thin lines in the image, as it doesn't consider the spatial relationship between pixels beyond their intensity values. This means that while it's great for removing noise, it might not be suitable for all types of image enhancement tasks.
-

<details>
<summary><b>Implementation</b></summary>

Here's how median filter is implemented in ImageJS:

- _Window or Kernel Selection_: The first step is to choose a small window or [kernel](../../../Glossary.md#kernel 'glossary link to kernel'). This window will move over the entire image, pixel by pixel.
+ _Window or Kernel Selection_: The first step is to choose a small window or [kernel](../../Glossary.md#kernel 'glossary link to kernel'). This window will move over the entire image, pixel by pixel.

_Pixel Neighborhood_: As the window moves over the image, for each pixel location, the filter collects the pixel values within the window's neighborhood. The neighborhood consists of the pixels that are currently covered by the window/kernel.
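A compact sketch of the window and median steps described in this file, assuming a single-channel image, a square window of radius `radius`, and clamped borders rather than the `borderType`/`borderValue` options ImageJS provides:

```ts
// Median filter sketch for a single-channel image: each pixel becomes the median
// of the values inside a (2*radius + 1) square window. Borders are clamped here;
// a real implementation would honor border options such as reflect101.
function medianFilter(
  pixels: Uint8Array,
  width: number,
  height: number,
  radius = 1,
): Uint8Array {
  const out = new Uint8Array(pixels.length);
  const neighborhood: number[] = [];
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      neighborhood.length = 0;
      // Collect every value covered by the window centered on (x, y).
      for (let dy = -radius; dy <= radius; dy++) {
        for (let dx = -radius; dx <= radius; dx++) {
          const sx = Math.min(width - 1, Math.max(0, x + dx));
          const sy = Math.min(height - 1, Math.max(0, y + dy));
          neighborhood.push(pixels[sy * width + sx]);
        }
      }
      // Sort and take the middle value; isolated outliers never become the median.
      neighborhood.sort((a, b) => a - b);
      out[y * width + x] = neighborhood[(neighborhood.length - 1) >> 1];
    }
  }
  return out;
}
```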
docs/Features/Filters/gaussianBlur.md (3 additions, 3 deletions)
@@ -12,6 +12,8 @@ This method only works with images.
[Gaussian blur](https://en.wikipedia.org/wiki/Gaussian_blur 'Wikipedia link on gaussian blur') is a widely used image processing technique that smooths an image by reducing high-frequency noise and fine details while preserving the overall structure and larger features. It's named after the [Gaussian function](https://en.wikipedia.org/wiki/Gaussian_function 'wikipedia link on Gaussian function'), which is a mathematical function that represents a bell-shaped curve. Gaussian blur is often applied to images before other processing steps like edge detection to improve their quality and reliability.

+ The key idea behind Gaussian blur is that it simulates a diffusion process, where each pixel's value is influenced by the values of its neighbors. Because the weights are determined by the Gaussian function, pixels that are closer to the central pixel have a larger impact on the smoothed value, while pixels that are farther away contribute less.
+

<GaussianBlurDemo />

### Parameters and default values
@@ -42,8 +44,6 @@ With Gaussian blur there are two ways of passing options: through sigma and thro
|[`sizeX`](https://image-js.github.io/image-js-typescript/interfaces/GaussianBlurXYOptions.html#sizeX)| no |`2 * Math.ceil(2 * sigmaX) + 1`|
|[`sizeY`](https://image-js.github.io/image-js-typescript/interfaces/GaussianBlurXYOptions.html#sizeY)| no |`2 * Math.ceil(2 * sigmaY) + 1`|

- The key idea behind Gaussian blur is that it simulates a diffusion process, where each pixel's value is influenced by the values of its neighbors. Because the weights are determined by the Gaussian function, pixels that are closer to the central pixel have a larger impact on the smoothed value, while pixels that are farther away contribute less.
-

The size of the Gaussian kernel and the standard deviation parameter (which controls the spread of the Gaussian curve) influence the degree of smoothing. A larger kernel or a higher standard deviation will produce more pronounced smoothing, but might also result in a loss of fine details.

<details>
@@ -53,7 +53,7 @@ The size of the Gaussian kernel and the standard deviation parameter (which cont
Here's how Gaussian blur is implemented in ImageJS:

- _Kernel Definition_: The core concept of Gaussian blur involves [convolving](../../../Glossary.md#convolution 'glossary link on convolution') the image with a Gaussian [kernel](../../../Glossary.md#kernel 'glossary link on kernel'), also known as a Gaussian filter or mask. This kernel's values are arranged in a way that creates a symmetric, bell-shaped pattern around the center of the kernel to approximate Gaussian function.
+ _Kernel Definition_: The core concept of Gaussian blur involves [convolving](../../Glossary.md#convolution 'glossary link on convolution') the image with a Gaussian [kernel](../../Glossary.md#kernel 'glossary link on kernel'), also known as a Gaussian filter or mask. This kernel's values are arranged in a way that creates a symmetric, bell-shaped pattern around the center of the kernel to approximate the Gaussian function.

_Convolution Operation_: The Gaussian kernel is applied to the image using a convolution operation. This involves placing the kernel's center over each pixel in the image and performing element-wise multiplication of the kernel's values with the corresponding pixel values in the neighborhood. The results of these multiplications are summed up to compute the new value for the central pixel.
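A small sketch of how a Gaussian kernel could be built from sigma, using the same size formula as the defaults listed above. This is illustrative only, not the ImageJS code; a real implementation would then convolve the image with this kernel.

```ts
// Build a normalized 1D Gaussian kernel from sigma, reusing the default size
// formula listed in the options table: 2 * Math.ceil(2 * sigma) + 1.
function gaussianKernel(sigma: number): number[] {
  const size = 2 * Math.ceil(2 * sigma) + 1;
  const half = Math.floor(size / 2);
  const weights: number[] = [];
  let sum = 0;
  for (let i = -half; i <= half; i++) {
    // Gaussian function evaluated at offset i from the kernel center.
    const w = Math.exp(-(i * i) / (2 * sigma * sigma));
    weights.push(w);
    sum += w;
  }
  // Normalize so the weights sum to 1 and the overall brightness is preserved.
  return weights.map((w) => w / sum);
}
```

Because the 2D Gaussian is separable, the blur can be performed as a horizontal pass followed by a vertical pass with this 1D kernel, which is cheaper than a full 2D convolution.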
docs/Features/Filters/hypotenuse.md (5 additions, 5 deletions)
@@ -10,11 +10,7 @@ $$
NewValue = \sqrt{Value1^2+Value2^2}
$$

- Where $$Value1$$ is a value of the pixel in the first image and $$Value2$$ is the value in the second one. The goal is to identify which points in one image correspond to points in another image, which is essential for various computer vision and image processing applications. Calculating hypotenuse value between two pixels is necessary for image aligning, feature matching.
-
- :::caution
- Images must be compatible by size, bit depth, number of channels and number of alpha channels. However, for the resulting image the bit depth and number of channels depends on the input options.
- :::
+ Where $$Value1$$ is a value of the pixel in the first image and $$Value2$$ is the value in the second one. The goal is to identify which points in one image correspond to points in another image, which is essential for various computer vision and image processing applications. Calculating hypotenuse value between two pixels is also necessary for image aligning and feature matching.

### Parameters and default values
@@ -28,3 +24,7 @@ Images must be compatible by size, bit depth, number of channels and number of a
|[`bitDepth`](https://image-js.github.io/image-js-typescript/interfaces/HypotenuseOptions.html#bitDepth)| no |`image.bitDepth`|
|[`channels`](https://image-js.github.io/image-js-typescript/interfaces/HypotenuseOptions.html#channels)| no | - |
+
+ :::caution
+ Images must be compatible by size, bit depth, number of channels and number of alpha channels. However, for the resulting image the bit depth and number of channels depends on the input options.
+ :::
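A minimal sketch of the hypotenuse formula applied to two compatible single-channel, 8-bit images. The clamping to 255 is a simplifying assumption here, whereas ImageJS derives the output bit depth and channels from the options above.

```ts
// Hypotenuse sketch for two same-sized, single-channel 8-bit images:
// newValue = sqrt(value1^2 + value2^2), clamped to 255 for simplicity.
function hypotenuse(a: Uint8Array, b: Uint8Array): Uint8Array {
  if (a.length !== b.length) {
    throw new RangeError('images must have the same size');
  }
  const out = new Uint8Array(a.length);
  for (let i = 0; i < a.length; i++) {
    out[i] = Math.min(255, Math.round(Math.hypot(a[i], b[i])));
  }
  return out;
}
```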
docs/Features/Filters/level.md (4 additions, 4 deletions)
@@ -33,17 +33,17 @@ This process can make details in both dark and bright regions of the image more
Here's how level filter is implemented in ImageJS:

- _Input border values selection_: The first step is to choose the range of values where the filter must be applied.
+ _Input border values selection_: The first step is to choose the range of values that the filter must redistribute.

_Output border values selection_: Then the range of output values must be chosen. It determines the output limits within which the pixels that belong to the input range should lie.

- _Calculation of the values_: After getting input and output values each pixel's gets compared with it and a ratio is calculated by using formula:
+ _Calculation of the values_: After getting the input and output ranges, each pixel is compared with the input values and a ratio is calculated using the formula:

$$
- (value - inputMin)/(inputMax - inputMin)
+ \dfrac{value - inputMin}{inputMax - inputMin}
$$

- where $$value$$ is a value of a pixel which is within the input borders. Otherwise it is equal to maximum input value.
+ where $$value$$ is the value of a pixel that lies within the input borders. If the value is outside the input limits, it is set to the maximum input value.

From there the ratio is applied in reverse to the output range to compute the new output value.
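A sketch of the mapping described above for a single value of one channel, assuming clamping to the input range before the ratio is computed (a simplification of the behaviour described in the text):

```ts
// Level sketch: map [inputMin, inputMax] linearly onto [outputMin, outputMax].
// Out-of-range values are clamped to the input range first.
function level(
  value: number,
  inputMin: number,
  inputMax: number,
  outputMin: number,
  outputMax: number,
): number {
  const clamped = Math.min(inputMax, Math.max(inputMin, value));
  const ratio = (clamped - inputMin) / (inputMax - inputMin);
  return Math.round(outputMin + ratio * (outputMax - outputMin));
}

// Example: stretching the mid-range [50, 200] of an 8-bit channel to [0, 255].
// level(50, 50, 200, 0, 255)  -> 0
// level(125, 50, 200, 0, 255) -> 128
// level(200, 50, 200, 0, 255) -> 255
```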