Commit a5160d0

EscapedGibbons authored and tropitek committed
docs: minor fixes and changes
1 parent 63c39f7 commit a5160d0

File tree

19 files changed: +53 / -165 lines changed


docs/Features/Comparison/subtract.md

Lines changed: 0 additions & 23 deletions
This file was deleted.

docs/Features/Filters/Blur.md

Lines changed: 2 additions & 2 deletions
@@ -8,7 +8,7 @@ This method only works with images.

 Blur, also known as average blur or box blur, is a simple image processing technique used to reduce noise and smooth out images. It involves replacing the color value of a pixel with the average color value of its neighboring pixels within a specified window or kernel. This process effectively blurs the image and reduces high-frequency noise.

-Box blur is particularly effective in reducing [salt-and-pepper](https://en.wikipedia.org/wiki/Salt-and-pepper_noise 'wikipedia link on salt and pepper noise') noise (random black and white pixels) and minor imperfections in an image. However, it also leads to loss of finer details, so the choice of [kernel](../../../Glossary.md#kernel) size is important.
+Box blur is particularly effective in reducing [salt-and-pepper](https://en.wikipedia.org/wiki/Salt-and-pepper_noise 'wikipedia link on salt and pepper noise') noise (random black and white pixels) and minor imperfections in an image. However, it also leads to loss of finer details, so the choice of [kernel](../../Glossary.md#kernel) size is important.

 <BlurDemo />

@@ -37,7 +37,7 @@ Here's how blur filter is implemented in ImageJS:

 _Select a Kernel Size_: The first step is to choose the size of the kernel or window that will be used for the blurring operation. The kernel is typically a square matrix with odd dimensions, such as 3x3, 5x5, 7x7, etc. The larger the kernel, the more intense the blurring effect.

-_Iterate through Pixels_: For each pixel in the image, the algorithm applies [convolution](../../../Glossary.md#convolution).
+_Iterate through Pixels_: For each pixel in the image, the algorithm applies [convolution](../../Glossary.md#convolution).

 _Calculate Average Color_: The algorithm calculates the average color value of all the pixels within the kernel.
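
For illustration, a minimal box-blur sketch in TypeScript following the steps above, assuming a single-channel image stored as a flat row-major array (a sketch, not the ImageJS implementation):

```ts
// Minimal box-blur sketch: replace each pixel with the mean of its neighbors
// inside a (2r+1) x (2r+1) window.
function boxBlur(
  src: Uint8ClampedArray, // grayscale pixels, row-major
  width: number,
  height: number,
  radius = 1,
): Uint8ClampedArray {
  const out = new Uint8ClampedArray(src.length);
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      let sum = 0;
      let count = 0;
      for (let dy = -radius; dy <= radius; dy++) {
        for (let dx = -radius; dx <= radius; dx++) {
          const nx = x + dx;
          const ny = y + dy;
          // Skip out-of-bounds neighbors; a real implementation applies a border policy instead.
          if (nx < 0 || ny < 0 || nx >= width || ny >= height) continue;
          sum += src[ny * width + nx];
          count++;
        }
      }
      out[y * width + x] = Math.round(sum / count);
    }
  }
  return out;
}
```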

docs/Features/Filters/Derivative.md

Lines changed: 1 addition & 1 deletion
@@ -67,7 +67,7 @@ $KernelY = \begin{bmatrix}
 \end{bmatrix}$

 :::info
-As was mentioned, derivative filter is a type of gradient filter. Therefore using the same kernels with gradient filter will provide the same image output.Derivative filter simplifies some kernel's application.
+As was mentioned, derivative filter is a type of gradient filter. Therefore using the same kernels with gradient filter will provide the same image output. Derivative filter simplifies some kernel's application.

 _Applying Sobel kernel using gradient filter_

docs/Features/Filters/Filters.md

Lines changed: 5 additions & 3 deletions
@@ -18,13 +18,13 @@ sidebar_position: 0

 - [Derivative](./Derivative.md 'internal link on derivative')

-- [Median](./median.md 'internal link on median')
+- [Median](./Median.md 'internal link on median')

 - [hypotenuse](./hypotenuse.md 'internal link on hypotenuse')

 - [level](./level.md 'internal link on level')

-- [pixelate](./level.md 'internal link on pixelate')
+- [pixelate](./Pixelate.md 'internal link on pixelate')

 ### Methods that can be applied on Masks only

@@ -34,4 +34,6 @@ sidebar_position: 0

 ### Methods that can be applied on Images or Masks

-- [invert](./invert.md 'internal link on invert')
+- [invert](./Invert.md 'internal link on invert')
+
+- [subtract](./subtract.md 'internal link on subtract')

docs/Features/Filters/Gradient.md

Lines changed: 9 additions & 9 deletions
@@ -10,10 +10,16 @@ import GradientDemo from './gradient.demo.tsx'
 This method only works with images.
 :::

-Gradient filter or specifically[ a gradient-based edge detection filter](https://en.wikipedia.org/wiki/Graduated_neutral-density_filter 'Wikipedia link on gradient filter'), is an image processing technique used to highlight edges and boundaries within an image by emphasizing areas of rapid intensity change. The gradient filter operates by calculating the rate of change of pixel intensities across the image. When there's a rapid transition from one intensity level to another, [the convolution operation](../../../Glossary.md#convolution 'glossary link on convolution') captures this change as a high gradient magnitude value, indicating the presence of an edge. It's a fundamental step in various computer vision and image analysis tasks, such as edge detection, object recognition, and image segmentation.
+Gradient filter or specifically[ a gradient-based edge detection filter](https://en.wikipedia.org/wiki/Graduated_neutral-density_filter 'Wikipedia link on gradient filter'), is an image processing technique used to highlight edges and boundaries within an image by emphasizing areas of rapid intensity change. The gradient filter operates by calculating the rate of change of pixel intensities across the image. When there's a rapid transition from one intensity level to another, [the convolution operation](../../Glossary.md#convolution 'glossary link on convolution') captures this change as a high gradient magnitude value, indicating the presence of an edge. It's a fundamental step in various computer vision and image analysis tasks, such as edge detection, object recognition, and image segmentation.

 <GradientDemo />

+The gradient filter enhances edges by detecting abrupt changes in pixel intensities.
+
+:::caution
+Keep in mind that gradient filters can be sensitive to noise and might result in false edges or emphasize noise. Smoothing the image (e.g., using Gaussian blur) before applying the gradient filter can help mitigate this issue.
+:::
+
 ### Parameters and default values

 - `options`

@@ -30,22 +36,16 @@ Gradient filter or specifically[ a gradient-based edge detection filter](https:/

 **\*** - if applying filter is necessary in only one of directions, then a user can pass one kernel instead of two. However, if none were passed on, function will throw an error.

-The gradient filter enhances edges by detecting abrupt changes in pixel intensities.
-
-:::caution
-Keep in mind that gradient filters can be sensitive to noise and might result in false edges or emphasize noise. Smoothing the image (e.g., using Gaussian blur) before applying the gradient filter can help mitigate this issue.
-:::
-
 <details>
 <summary><b>Implementation</b></summary>

 Here's how gradient filter is implemented in ImageJS:

 _Grayscale Conversion_: Before applying a gradient filter, the color image is converted into [grayscale](grayscale.md 'link to grayscale filter'). This simplifies the processing by reducing the image to a single channel representing pixel intensities.

-_Kernel Operators_: Gradient filter consists of small convolution [kernels](../../../Glossary.md#kernel 'glossary link on kernel'). Normally, one for detecting horizontal changes and another for vertical changes, however user might indicate only one kernel to check only one of directions. These kernels are usually 3x3 matrices of numerical weights.
+_Kernel Operators_: Gradient filter consists of small convolution [kernels](../../Glossary.md#kernel 'glossary link on kernel'). Normally, one for detecting horizontal changes and another for vertical changes, however user might indicate only one kernel to check only one of directions. These kernels are usually 3x3 matrices of numerical weights.

-_Convolution Operation_: The gradient filter is applied through a [convolution](../../../Glossary.md#convolution 'glossary link on convolution') operation, where the filter kernel slides over the grayscale image. At each position, the convolution operation involves element-wise multiplication of the filter kernel with the corresponding pixels in the image, followed by summing up the results. This sum represents the rate of intensity change (gradient) at that location in the image.
+_Convolution Operation_: The gradient filter is applied through a [convolution](../../Glossary.md#convolution 'glossary link on convolution') operation, where the filter kernel slides over the grayscale image. At each position, the convolution operation involves element-wise multiplication of the filter kernel with the corresponding pixels in the image, followed by summing up the results. This sum represents the rate of intensity change (gradient) at that location in the image.

 _Gradient Magnitude and Direction_: For each pixel, the gradient magnitude is calculated by combining the results of the horizontal and vertical convolutions. The corresponding values from each convolution are put in square and summed, then put in square root.
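
A minimal sketch of the convolution and magnitude steps described above, at a single interior pixel of a grayscale image (illustration only, not ImageJS code; the kernels here are plain 3x3 number matrices):

```ts
type Kernel = number[][]; // 3x3 matrix of weights

// Convolve a 3x3 kernel with the neighborhood around (x, y).
// Assumes (x, y) is an interior pixel; borders need a dedicated border policy.
function convolveAt(pixels: number[][], x: number, y: number, kernel: Kernel): number {
  let sum = 0;
  for (let ky = 0; ky < 3; ky++) {
    for (let kx = 0; kx < 3; kx++) {
      // Element-wise multiply the kernel with the 3x3 neighborhood and sum the results.
      sum += kernel[ky][kx] * pixels[y + ky - 1][x + kx - 1];
    }
  }
  return sum;
}

// Combine horizontal and vertical responses into a gradient magnitude:
// square each directional result, sum them, then take the square root.
function gradientMagnitudeAt(
  pixels: number[][],
  x: number,
  y: number,
  kernelX: Kernel,
  kernelY: Kernel,
): number {
  const gx = convolveAt(pixels, x, y, kernelX);
  const gy = convolveAt(pixels, x, y, kernelY);
  return Math.sqrt(gx * gx + gy * gy);
}
```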

docs/Features/Filters/Invert.md

Lines changed: 3 additions & 3 deletions
@@ -22,14 +22,14 @@ import InvertDemo from './invert.demo.tsx'

 Here's how invert filter is implemented in ImageJS:

-_Pixel Transformation_: For each pixel in the image, the inversion filter transforms its color [intensity](../../../Glossary.md#intensity 'glossary link on intensity') value. The new intensity value is calculated using the formula:
+_Pixel Transformation_: For each pixel in the image, the inversion filter transforms its color [intensity](../../Glossary.md#intensity 'glossary link on intensity') value. The new intensity value is calculated using the formula:

 $$New Intensity = Max Intensity - Original Intensity$$

-Where "_Max Intensity_" is the maximum possible intensity value for the color channel.
+Where $$Max Intensity$$ is the maximum possible intensity value for the color channel.

 :::warning
-ImageJS uses components to calculate each pixel value and leaves alpha channel unchanged. For more information about channels and components visit [this link](../../../Tutorials%20and%20concepts/Concepts/Channel%20vs%20component.md).
+ImageJS uses components to calculate each pixel value and leaves alpha channel unchanged. For more information about channels and components visit [this link](../../Tutorials%20and%20concepts/Concepts/Channel%20vs%20component.md).
 :::

 </details>
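
The pixel transformation above amounts to one subtraction per component; a minimal sketch assuming 8-bit components (not the ImageJS implementation):

```ts
// newIntensity = maxIntensity - originalIntensity (the alpha channel is left untouched).
function invertComponent(value: number, maxIntensity = 255): number {
  return maxIntensity - value;
}

// Example: invertComponent(40) === 215
```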

docs/Features/Filters/Median.md

Lines changed: 5 additions & 5 deletions
@@ -10,6 +10,10 @@ This method only works with images.

 <MedianDemo />

+The key advantage of using the median filter, especially for noise reduction, is that it is less sensitive to extreme values or outliers compared to other filters like the [mean filter](https://en.wikipedia.org/wiki/Geometric_mean_filter 'wikipedia link on mean filter'). Since noise often appears as isolated bright or dark pixels that deviate significantly from their neighbors, the median filter effectively ignores these outliers and replaces them with more representative values from the local neighborhood.
+
+However, the median filter also has limitations. It can blur sharp edges and thin lines in the image, as it doesn't consider the spatial relationship between pixels beyond their intensity values. This means that while it's great for removing noise, it might not be suitable for all types of image enhancement tasks.
+
 ### Parameters and default values

 - `options`

@@ -22,16 +26,12 @@ This method only works with images.
 | [`borderType`](https://image-js.github.io/image-js-typescript/interfaces/MedianFilterOptions.html#borderType) | no | `reflect101` |
 | [`borderValue`](https://image-js.github.io/image-js-typescript/interfaces/MedianFilterOptions.html#borderValue) | no | `0` |

-The key advantage of using the median filter, especially for noise reduction, is that it is less sensitive to extreme values or outliers compared to other filters like the [mean filter](https://en.wikipedia.org/wiki/Geometric_mean_filter 'wikipedia link on mean filter'). Since noise often appears as isolated bright or dark pixels that deviate significantly from their neighbors, the median filter effectively ignores these outliers and replaces them with more representative values from the local neighborhood.
-
-However, the median filter also has limitations. It can blur sharp edges and thin lines in the image, as it doesn't consider the spatial relationship between pixels beyond their intensity values. This means that while it's great for removing noise, it might not be suitable for all types of image enhancement tasks.
-
 <details>
 <summary><b>Implementation</b></summary>

 Here's how median filter is implemented in ImageJS:

-_Window or Kernel Selection_: The first step is to choose a small window or [kernel](../../../Glossary.md#kernel 'glossary link to kernel'). This window will move over the entire image, pixel by pixel.
+_Window or Kernel Selection_: The first step is to choose a small window or [kernel](../../Glossary.md#kernel 'glossary link to kernel'). This window will move over the entire image, pixel by pixel.

 _Pixel Neighborhood_: As the window moves over the image, for each pixel location, the filter collects the pixel values within the window's neighborhood. The neighborhood consists of the pixels that are currently covered by the window/kernel.
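
A minimal sketch of the median step applied to one window's worth of values, assuming the neighborhood has already been collected into a plain array (illustration only, not the ImageJS implementation):

```ts
// Sort the neighborhood values and keep the central one.
function medianOfWindow(values: number[]): number {
  const sorted = [...values].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  // For an even count, take the average of the two central values.
  return sorted.length % 2 === 1 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2;
}

// Example: a bright noise spike (255) among dark neighbors is discarded.
// medianOfWindow([12, 10, 255, 11, 9, 13, 10, 12, 11]) === 11
```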

docs/Features/Filters/gaussianBlur.md

Lines changed: 3 additions & 3 deletions
@@ -12,6 +12,8 @@ This method only works with images.

 [Gaussian blur](https://en.wikipedia.org/wiki/Gaussian_blur 'Wikipedia link on gaussian blur') is a widely used image processing technique that smooths an image by reducing high-frequency noise and fine details while preserving the overall structure and larger features. It's named after the [Gaussian function](https://en.wikipedia.org/wiki/Gaussian_function 'wikipedia link on Gaussian function'), which is a mathematical function that represents a bell-shaped curve. Gaussian blur is often applied to images before other processing steps like edge detection to improve their quality and reliability.

+The key idea behind Gaussian blur is that it simulates a diffusion process, where each pixel's value is influenced by the values of its neighbors. Because the weights are determined by the Gaussian function, pixels that are closer to the central pixel have a larger impact on the smoothed value, while pixels that are farther away contribute less.
+
 <GaussianBlurDemo />

 ### Parameters and default values

@@ -42,8 +44,6 @@ With Gaussian blur there are two ways of passing options: through sigma and thro
 | [`sizeX`](https://image-js.github.io/image-js-typescript/interfaces/GaussianBlurXYOptions.html#sizeX) | no | `2 * Math.ceil(2 * sigmaX) + 1` |
 | [`sizeX`](https://image-js.github.io/image-js-typescript/interfaces/GaussianBlurXYOptions.html#sizeY) | no | `2 * Math.ceil(2 * sigmaY) + 1` |

-The key idea behind Gaussian blur is that it simulates a diffusion process, where each pixel's value is influenced by the values of its neighbors. Because the weights are determined by the Gaussian function, pixels that are closer to the central pixel have a larger impact on the smoothed value, while pixels that are farther away contribute less.
-
 The size of the Gaussian kernel and the standard deviation parameter (which controls the spread of the Gaussian curve) influence the degree of smoothing. A larger kernel or a higher standard deviation will produce more pronounced smoothing, but might also result in a loss of fine details.

 <details>

@@ -53,7 +53,7 @@ The size of the Gaussian kernel and the standard deviation parameter (which cont

 Here's how Gaussian blur is implemented in ImageJS:

-_Kernel Definition_: The core concept of Gaussian blur involves [convolving](../../../Glossary.md#convolution 'glossary link on convolution') the image with a Gaussian [kernel](../../../Glossary.md#kernel 'glossary link on kernel'), also known as a Gaussian filter or mask. This kernel's values are arranged in a way that creates a symmetric, bell-shaped pattern around the center of the kernel to approximate Gaussian function.
+_Kernel Definition_: The core concept of Gaussian blur involves [convolving](../../Glossary.md#convolution 'glossary link on convolution') the image with a Gaussian [kernel](../../Glossary.md#kernel 'glossary link on kernel'), also known as a Gaussian filter or mask. This kernel's values are arranged in a way that creates a symmetric, bell-shaped pattern around the center of the kernel to approximate Gaussian function.

 _Convolution Operation_: The Gaussian kernel is applied to the image using a convolution operation. This involves placing the kernel's center over each pixel in the image and performing element-wise multiplication of the kernel's values with the corresponding pixel values in the neighborhood. The results of these multiplications are summed up to compute the new value for the central pixel.
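
For illustration, a sketch of a normalized 1D Gaussian kernel using the default size rule from the options table (`2 * Math.ceil(2 * sigma) + 1`); a separable blur would convolve the image with this kernel along rows and then columns (a sketch, not the ImageJS implementation):

```ts
// Build a normalized 1D Gaussian kernel for a given standard deviation.
function gaussianKernel1D(sigma: number): number[] {
  const size = 2 * Math.ceil(2 * sigma) + 1;
  const half = Math.floor(size / 2);
  const kernel: number[] = [];
  let sum = 0;
  for (let i = -half; i <= half; i++) {
    // Bell-shaped weight, largest at the center and decreasing with distance.
    const weight = Math.exp(-(i * i) / (2 * sigma * sigma));
    kernel.push(weight);
    sum += weight;
  }
  // Normalize so the weights sum to 1 and overall brightness is preserved.
  return kernel.map((w) => w / sum);
}
```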

docs/Features/Filters/hypotenuse.md

Lines changed: 5 additions & 5 deletions
@@ -10,11 +10,7 @@ $$
 NewValue = \sqrt{Value1^2+Value2^2}
 $$

-Where $$Value1$$ is a value of the pixel in the first image and $$Value2$$ is the value in the second one. The goal is to identify which points in one image correspond to points in another image, which is essential for various computer vision and image processing applications. Calculating hypotenuse value between two pixels is necessary for image aligning, feature matching.
-
-:::caution
-Images must be compatible by size, bit depth, number of channels and number of alpha channels. However, for the resulting image the bit depth and number of channels depends on the input options.
-:::
+Where $$Value1$$ is a value of the pixel in the first image and $$Value2$$ is the value in the second one. The goal is to identify which points in one image correspond to points in another image, which is essential for various computer vision and image processing applications. Calculating hypotenuse value between two pixels is also necessary for image aligning and feature matching.

 ### Parameters and default values

@@ -28,3 +24,7 @@ Images must be compatible by size, bit depth, number of channels and number of a
 | ------------------------------------------------------------------------------------------------------- | -------- | ---------------- |
 | [`bitDepth`](https://image-js.github.io/image-js-typescript/interfaces/HypotenuseOptions.html#bitDepth) | no | `image.bitDepth` |
 | [`channels`](https://image-js.github.io/image-js-typescript/interfaces/HypotenuseOptions.html#channels) | no | - |
+
+:::caution
+Images must be compatible by size, bit depth, number of channels and number of alpha channels. However, for the resulting image the bit depth and number of channels depends on the input options.
+:::
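
A minimal sketch of the formula above applied pixel-wise to two compatible single-channel images, with results clamped to an 8-bit range (illustration only; the actual bit depth of the result follows the options table above):

```ts
// NewValue = sqrt(Value1^2 + Value2^2) for every pair of corresponding pixels.
function hypotenuse(a: Uint8Array, b: Uint8Array): Uint8Array {
  if (a.length !== b.length) {
    throw new Error('images must be compatible in size and channels');
  }
  const out = new Uint8Array(a.length);
  for (let i = 0; i < a.length; i++) {
    // Cap at the maximum 8-bit value so the result stays in range.
    out[i] = Math.min(255, Math.round(Math.hypot(a[i], b[i])));
  }
  return out;
}
```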

docs/Features/Filters/level.md

Lines changed: 4 additions & 4 deletions
@@ -33,17 +33,17 @@ This process can make details in both dark and bright regions of the image more

 Here's how level filter is implemented in ImageJS:

-_Input border values selection_: The first step is to choose the range of values where the filter must be applied.
+_Input border values selection_: The first step is to choose the range of values that the filter must redistribute.

 _Output border values selection_: Then the range of output values must be chosen. It is necessary to understand in what output limits should lie pixels that belong to the input values set.

-_Calculation of the values_: After getting input and output values each pixel's gets compared with it and a ratio is calculated by using formula:
+_Calculation of the values_: After getting input and output values each pixel is compared with input values and a ratio is calculated by using formula:

 $$
-(value - inputMin)/(inputMax - inputMin)
+\dfrac{value - inputMin}{inputMax - inputMin}
 $$

-where $$value$$ is a value of a pixel which is within the input borders. Otherwise it is equal to maximum input value.
+where $$value$$ is a value of a pixel which is within the input borders. If value is outside of input limits it is equal to maximum input value.
 From there the formula is reciprocated to compute new output value.

 :::caution
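
A minimal sketch of the mapping described above; here values outside the input limits are clamped to the nearest input border, which is an assumption and may differ from the exact ImageJS behavior:

```ts
// Compute the ratio of a value within the input range and rescale it to the output range.
function levelValue(
  value: number,
  inputMin: number,
  inputMax: number,
  outputMin: number,
  outputMax: number,
): number {
  // Assumption: out-of-range values are clamped to the input borders.
  const clamped = Math.min(Math.max(value, inputMin), inputMax);
  const ratio = (clamped - inputMin) / (inputMax - inputMin);
  // Rescale the ratio to the output range.
  return outputMin + ratio * (outputMax - outputMin);
}

// Example: levelValue(128, 64, 192, 0, 255) === 127.5
```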
