docs/src/tutorials/brief.md: 2 additions & 2 deletions

````diff
@@ -16,13 +16,13 @@ BRIEF is a very simple feature descriptor and does not provide scale or rotation

 ## Example

-Let us take a look at a simple example where the BRIEF descriptor is used to match two images where one has been translated by `(100, 200)` pixels. We will use the `lena_gray` image from the [TestImages](https://github.com/timholy/TestImages.jl) package for this example.
+Let us take a look at a simple example where the BRIEF descriptor is used to match two images where one has been translated by `(100, 200)` pixels. We will use the `lena_gray` image from the [TestImages](https://github.com/JuliaImages/TestImages.jl) package for this example.

 Now, let us create the two images we will match using BRIEF.

 ```@example 1
-using ImageFeatures, TestImages, Images, ImageDraw, CoordinateTransformations
+using ImageFeatures, TestImages, ImageCore, ImageDraw, CoordinateTransformations
````
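The tutorial being edited here hinges on one idea: a BRIEF descriptor is a bit string built from pairwise intensity comparisons, matched by Hamming distance. A minimal sketch of that idea follows — in Python rather than the tutorial's Julia, and with illustrative names (`sample_pairs`, `brief_descriptor`, `hamming`) that are not part of the ImageFeatures.jl API; the real implementation also smooths the image before sampling.

```python
import random

def sample_pairs(n, radius, seed=0):
    """Fixed random point pairs inside a patch; the same pairs must be
    reused at every keypoint so descriptors are comparable."""
    rng = random.Random(seed)
    pt = lambda: (rng.randint(-radius, radius), rng.randint(-radius, radius))
    return [(pt(), pt()) for _ in range(n)]

def brief_descriptor(img, kp, pairs):
    """One bit per pair: is the first sampled point darker than the second?"""
    y, x = kp
    return [1 if img[y + dy1][x + dx1] < img[y + dy2][x + dx2] else 0
            for (dy1, dx1), (dy2, dx2) in pairs]

def hamming(a, b):
    """Descriptors are compared by Hamming distance (number of differing bits)."""
    return sum(u != v for u, v in zip(a, b))
```

Because each bit depends only on intensities relative to the keypoint, a purely translated image yields an identical descriptor at the translated keypoint — which is why the tutorial's `(100, 200)`-pixel translation example matches cleanly, and also why BRIEF offers no rotation invariance.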
docs/src/tutorials/brisk.md: 2 additions & 2 deletions

````diff
@@ -13,12 +13,12 @@ The descriptor is built using intensity comparisons. For each short pair if the

 ## Example

-Let us take a look at a simple example where the BRISK descriptor is used to match two images where one has been translated by `(50, 40)` pixels and then rotated by an angle of 75 degrees. We will use the `lighthouse` image from the [TestImages](https://github.com/timholy/TestImages.jl) package for this example.
+Let us take a look at a simple example where the BRISK descriptor is used to match two images where one has been translated by `(50, 40)` pixels and then rotated by an angle of 75 degrees. We will use the `lighthouse` image from the [TestImages](https://github.com/JuliaImages/TestImages.jl) package for this example.

 First, let us create the two images we will match using BRISK.

 ```@example 4
-using ImageFeatures, TestImages, Images, ImageDraw, CoordinateTransformations, Rotations
+using ImageFeatures, TestImages, ImageCore, ImageDraw, CoordinateTransformations, Rotations
````
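Unlike BRIEF above, this tutorial rotates the image by 75 degrees, which works because BRISK first estimates a characteristic orientation from its long-distance sampling pairs and rotates the sampling pattern accordingly. A simplified sketch of that orientation estimate follows — in Python, with an illustrative function name, and averaging a per-pair gradient estimate rather than reproducing the exact weighting from the BRISK paper.

```python
import math

def patch_orientation(img, kp, long_pairs):
    """Estimate a characteristic direction by accumulating the local
    intensity gradient measured along each long-distance pair."""
    y, x = kp
    gx = gy = 0.0
    for (dy1, dx1), (dy2, dx2) in long_pairs:
        d = math.hypot(dy2 - dy1, dx2 - dx1)
        if d == 0:
            continue
        g = (img[y + dy2][x + dx2] - img[y + dy1][x + dx1]) / d
        gy += g * (dy2 - dy1) / d   # gradient component along rows
        gx += g * (dx2 - dx1) / d   # gradient component along columns
    return math.atan2(gy, gx)
```

Rotating the sampling pattern by the negative of this angle before the intensity comparisons makes the resulting bits approximately independent of the patch's rotation, which is what lets the matched keypoints survive the 75-degree rotation in the example.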
docs/src/tutorials/freak.md: 2 additions & 2 deletions

````diff
@@ -9,12 +9,12 @@ The descriptor is built using intensity comparisons of a predetermined set of 51

 ## Example

-Let us take a look at a simple example where the FREAK descriptor is used to match two images where one has been translated by `(50, 40)` pixels and then rotated by an angle of 75 degrees. We will use the `lighthouse` image from the [TestImages](https://github.com/timholy/TestImages.jl) package for this example.
+Let us take a look at a simple example where the FREAK descriptor is used to match two images where one has been translated by `(50, 40)` pixels and then rotated by an angle of 75 degrees. We will use the `lighthouse` image from the [TestImages](https://github.com/JuliaImages/TestImages.jl) package for this example.

 First, let us create the two images we will match using FREAK.

 ```@example 3
-using ImageFeatures, TestImages, Images, ImageDraw, CoordinateTransformations, Rotations
+using ImageFeatures, TestImages, ImageCore, ImageDraw, CoordinateTransformations, Rotations
````
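All three tutorials touched by this change end the same way: two sets of binary descriptors are matched by a nearest-neighbour search under Hamming distance, keeping only matches below a distance threshold. A minimal brute-force sketch of that step follows — in Python, with an illustrative name; it is a simplified stand-in, not the ImageFeatures.jl matching API.

```python
def match_descriptors(desc1, desc2, threshold=0.1):
    """For each descriptor in desc1, find the nearest descriptor in desc2
    by Hamming distance; keep the pair if the normalized distance
    (fraction of differing bits) is below threshold."""
    matches = []
    for i, d1 in enumerate(desc1):
        dists = [sum(a != b for a, b in zip(d1, d2)) for d2 in desc2]
        j = min(range(len(dists)), key=dists.__getitem__)
        if dists[j] / len(d1) < threshold:
            matches.append((i, j))
    return matches
```

The threshold is what separates the three tutorials' clean match plots from noise: pairs of keypoints whose best match still differs in many bits are simply discarded.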
docs/src/tutorials/object_detection.md: 5 additions & 5 deletions

````diff
@@ -11,16 +11,16 @@ representation which is invariant to local geometric and photometric changes (i.
 Download the script to get the training data [here](https://drive.google.com/file/d/11G_9zh9N-0veQ2EL5WDGsnxRpihsqLX5/view?usp=sharing). Download tutorial.zip, decompress it and run get_data.bash. (Change the variable `path_to_tutorial` in preprocess.jl and path to julia executable in get_data.bash). This script will download the required datasets. We will start by loading the data and computing HOG features of all the images.
 n_pos = length(readdir(pos_examples)) # number of positive training examples
 n_neg = length(readdir(neg_examples)) # number of negative training examples
-n = n_pos + n_neg # number of training examples
-data = Array{Float64}(undef, 3780, n) # Array to store HOG descriptor of each image. Each image in our training data has size 128x64 and so has a 3780 length
+n = n_pos + n_neg # number of training examples
+data = Array{Float64}(undef, 3780, n) # Array to store HOG descriptor of each image. Each image in our training data has size 128x64 and so has a 3780 length
 labels = Vector{Int}(undef, n) # Vector to store label (1=human, 0=not human) of each image.

 for (i, file) in enumerate([readdir(pos_examples); readdir(neg_examples)])
@@ -31,7 +31,7 @@ for (i, file) in enumerate([readdir(pos_examples); readdir(neg_examples)])
 end
 ```

-Basically we now have an encoded version of images in our training data. This encoding captures useful information but discards extraneous information
+Basically we now have an encoded version of images in our training data. This encoding captures useful information but discards extraneous information
 (illumination changes, pose variations etc). We will train a linear SVM on this data.

 ```julia
@@ -94,7 +94,7 @@ end

-You can see that classifier gave low score to not-human class (i.e. high score to human class) at positions corresponding to humans in the original image.
+You can see that the classifier gave a low score to the not-human class (i.e. a high score to the human class) at positions corresponding to humans in the original image.
 Below we threshold the image and suppress non-minimal values to get the human locations. We then plot the bounding boxes using `ImageDraw`.
````
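The comment in the diff above asserts that each 128x64 training image yields a HOG descriptor of length 3780. That number follows from the standard Dalal–Triggs parameters (8x8-pixel cells, 2x2-cell blocks with a one-cell stride, 9 orientation bins), as this small Python sketch derives — the function name is illustrative, not a library API.

```python
def hog_length(img_h, img_w, cell=8, block=2, nbins=9):
    """Length of a HOG descriptor with a block stride of one cell."""
    cells_y, cells_x = img_h // cell, img_w // cell       # 16 x 8 cells for 128x64
    blocks_y = cells_y - block + 1                        # 15 block positions down
    blocks_x = cells_x - block + 1                        # 7 block positions across
    return blocks_y * blocks_x * block * block * nbins    # 15 * 7 * 4 * 9 = 3780
```

Keeping this arithmetic in mind is useful when adapting the tutorial: any change to the window size or cell/block layout changes the first dimension of `data` accordingly.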