Commit e48d148
Introduce doxygen documentation for the code implemented with OpenCV.
Improve previous doc.
1 parent d9bbc4b commit e48d148

20 files changed: +439 −57 lines

README.md

Lines changed: 1 addition & 1 deletion
@@ -8,7 +8,7 @@ For most of the presented approaches presented in the submitted paper:

E. Marchand, H. Uchiyama and F. Spindler. Camera localization for augmented reality: a hands-on survey.

-we provide the code of short examples. This should allow developers to easily bridge the gap between theoretical aspects and practice. These examples have been written using <a href="http://team.inria.fr/lagadic/visp">ViSP library</a> developed at Inria.
+we provide the code of short examples. This should allow developers to easily bridge the gap between theoretical aspects and practice. These examples have been written using either <a href="http://opencv.org">OpenCV</a> or <a href="http://visp.inria.fr">ViSP</a> developed at Inria.


The full documentation of this project is available from <http://team.inria.fr/lagadic/camera_localization>.

doc/mainpage.doc.cmake

Lines changed: 11 additions & 15 deletions
@@ -16,22 +16,18 @@ E. Marchand, H. Uchiyama and F. Spindler. Pose estimation for augmented reality:
a hands-on survey. submitted
\endcode

-a brief but almost self contented introduction to the most important approaches dedicated to camera localization along with a survey of the extension that have been proposed in the recent years. We also try to link these methodological concepts to the main libraries and SDK available on the market.
+a brief but almost self contented introduction to the most important approaches dedicated to camera localization along with a survey of the extension that have been proposed in the recent years. We also try to link these methodological concepts to the main libraries and SDK available on the market.



The aim of this paper is then to provide researchers and practitioners with an almost comprehensive and consolidate
introduction to effective tools to facilitate research in augmented reality. It is also dedicated to academics involved in teaching augmented reality at the undergraduate and graduate level.

-For most of the presented approaches, we also provide links to code of short examples. This should allow readers to easily bridge the gap between theoretical aspects and practice. These examples have been written using <a href="http://opencv.org">OpenCV</a> but also <a href="http://visp.inria.fr">ViSP</a> developed at Inria.
+For most of the presented approaches, we also provide links to code of short examples. This should allow readers to easily bridge the gap between theoretical aspects and practice. These examples have been written using <a href="http://opencv.org">OpenCV</a> but also <a href="http://visp.inria.fr">ViSP</a> developed at Inria.
This page contains the documentation of these documented source code proposed as a supplementary material of the paper.

-
We hope this article and source code will be accessible and interesting to experts and students alike.

-
-
-
\section install_sec Installation

\subsection prereq_subsec Prerequisities

@@ -54,12 +50,12 @@ Once ViSP is installed, download the lastest source code release from github <ht

Unzip the archive:
\code
-$ unzip camera_localization-1.0.0.zip
+$ unzip camera_localization-2.0.0.zip
\endcode

or extract the code from tarball:
\code
-$ tar xvzf camera_localization-1.0.0.tar.gz
+$ tar xvzf camera_localization-2.0.0.tar.gz
\endcode



@@ -82,27 +78,27 @@ $ make doc

In this section we give base algorithm for camera localization.

-- <b>Pose from Direct Linear Transform method</b> \ref tutorial-pose-dlt-visp "(ViSP)"<br>In this first tutorial a simple solution known as Direct Linear Transform (DLT) \cite HZ01 \cite Sut74 based on the resolution of a linear system is considered to estimate the pose of the camera from at least 6 non coplanar points.
+- <b>Pose from Direct Linear Transform method</b> using \ref tutorial-pose-dlt-opencv "OpenCV" or using \ref tutorial-pose-dlt-visp "ViSP"<br>In this first tutorial a simple solution known as Direct Linear Transform (DLT) \cite HZ01 \cite Sut74 based on the resolution of a linear system is considered to estimate the pose of the camera from at least 6 non coplanar points.

-- <b>Pose from Dementhon's POSIT method</b> \ref tutorial-pose-dementhon-visp "(ViSP)"<br>In this second tutorial we give Dementhon's POSIT method \cite DD95 \cite ODD96 used to estimate the pose based on the resolution of a linear system introducing additional constraints on the rotation matrix. The pose is estimated from at least 4 non coplanar points.
+- <b>Pose from Dementhon's POSIT method</b> using \ref tutorial-pose-dementhon-opencv "OpenCV" or using \ref tutorial-pose-dementhon-visp "ViSP"<br>In this second tutorial we give Dementhon's POSIT method \cite DD95 \cite ODD96 used to estimate the pose based on the resolution of a linear system introducing additional constraints on the rotation matrix. The pose is estimated from at least 4 non coplanar points.

-- <b>Pose from homography estimation</b> \ref tutorial-pose-dlt-planar-visp "(ViSP)" <br>In this tutorial we explain how to decompose the homography to estimate the pose from at least 4 coplanar points.
+- <b>Pose from homography estimation</b> using \ref tutorial-pose-dlt-planar-opencv "OpenCV" or using \ref tutorial-pose-dlt-planar-visp "ViSP" <br>In this tutorial we explain how to decompose the homography to estimate the pose from at least 4 coplanar points.

-- <b>Pose from a non-linear minimization method</b> \ref tutorial-pose-gauss-newton-visp "(ViSP)" <br>In this other tutorial we give a non-linear minimization method to estimate the pose from at least 4 points. This method requires an initialization of the pose to estimate. Depending on the points planarity, this initialization could be performed using one of the previous pose algorithms.
+- <b>Pose from a non-linear minimization method</b> using \ref tutorial-pose-gauss-newton-opencv "OpenCV" or using \ref tutorial-pose-gauss-newton-visp "ViSP" <br>In this other tutorial we give a non-linear minimization method to estimate the pose from at least 4 points. This method requires an initialization of the pose to estimate. Depending on the points planarity, this initialization could be performed using one of the previous pose algorithms.

\subsection pose_mbt_sec Extension to markerless model-based tracking

-- <b>Pose from markerless model-based tracking</b> \ref tutorial-pose-mbt-visp "(ViSP)" <br>This tutorial focuses on markerless model-based tracking that allows to estimate the pose of the camera.
+- <b>Pose from markerless model-based tracking</b> using \ref tutorial-pose-mbt-visp "ViSP" <br>This tutorial focuses on markerless model-based tracking that allows to estimate the pose of the camera.

\section motion_estimation_sec Pose estimation relying on an image model

\subsection homography_from_point_sec Homography estimation

-- <b>Homography estimation</b> \ref tutorial-homography-visp "(ViSP)" <br>In this tutorial we describe the estimation of an homography using Direct Linear Transform (DLT) algorithm. At least 4 coplanar points are requested to achieve the estimation.
+- <b>Homography estimation</b> using \ref tutorial-homography-opencv "OpenCV" or using \ref tutorial-homography-visp "ViSP" <br>In this tutorial we describe the estimation of an homography using Direct Linear Transform (DLT) algorithm. At least 4 coplanar points are requested to achieve the estimation.

\subsection homography_from_template_tracker_sec Direct motion estimation through template matching

-- <b>Direct motion estimation through template matching</b> \ref tutorial-template-matching-visp "(ViSP)" <br>In this other tutorial we propose a direct motion estimation through template matching approach. Here the reference template should be planar.
+- <b>Direct motion estimation through template matching</b> using \ref tutorial-template-matching-visp "ViSP" <br>In this other tutorial we propose a direct motion estimation through template matching approach. Here the reference template should be planar.
doc/tutorial-homography-opencv.doc

Lines changed: 66 additions & 0 deletions
@@ -0,0 +1,66 @@ (new file)
/**

\page tutorial-homography-opencv Homography estimation
\tableofcontents

\section intro_homography_cv Introduction

The estimation of an homography from coplanar points can be easily and precisely achieved using a Direct Linear Transform algorithm \cite HZ01 \cite Sut74 based on the resolution of a linear system.

\section homography_code_cv Source code

The following source code that uses <a href="http://opencv.org">OpenCV</a> is also available in \ref homography-dlt-opencv.cpp file. It allows to estimate the homography between matched coplanar points. At least 4 points are required.

\include homography-dlt-opencv.cpp

\section homography_explained_cv Source code explained

First of all we include OpenCV headers that are requested to manipulate vectors and matrices.

\snippet homography-dlt-opencv.cpp Include

Then we introduce the function that does the homography estimation.
\snippet homography-dlt-opencv.cpp Estimation function

From a vector of planar points \f${\bf x_1} = (x_1, y_1, 1)^T\f$ in image \f$I_1\f$ and a vector of matched points \f${\bf x_2} = (x_2, y_2, 1)^T\f$ in image \f$I_2\f$ it allows to estimate the homography \f$\bf {^2}H_1\f$:

\f[\bf x_2 = {^2}H_1 x_1\f]

The implementation of the Direct Linear Transform algorithm to estimate \f$\bf {^2}H_1\f$ is done next. First, for each point we update the values of matrix A using equation (33). Then we solve the system \f${\bf Ah}=0\f$ using a Singular Value Decomposition of \f$\bf A\f$. Finally, we determine the smallest eigen value that allows to identify the eigen vector that corresponds to the solution \f$\bf h\f$.

\snippet homography-dlt-opencv.cpp DLT

The resulting eigen vector \f$\bf h\f$ that corresponds to the minimal eigen value of matrix \f$\bf A\f$ is then used to update the homography \f$\bf {^2}H_1\f$.

\snippet homography-dlt-opencv.cpp Update homography matrix

Finally we define the main function in which we will initialize the input data before calling the previous function.

\snippet homography-dlt-opencv.cpp Main function

First in the main we create the data structures that will contain the 3D points coordinates \e wX in the world frame, their coordinates in the camera frame 1 \e c1X and 2 \e c2X and their coordinates in the image plane \e x1 and \e x2 obtained after perspective projection. Note here that at least 4 coplanar points are requested to estimate the 8 parameters of the homography.

\snippet homography-dlt-opencv.cpp Create data structures

For our simulation we then initialize the input data from a ground truth pose that corresponds to the pose of the camera in frame 1 with respect to the object frame; in \e c1tw_truth for the translation vector and in \e c1Rw_truth for the rotation matrix.
For each point, we compute their coordinates in the camera frame 1 \e c1X = (c1X, c1Y, c1Z). These values are then used to compute their coordinates in the image plane \e x1 = (x1, y1) using perspective projection.

Thanks to the ground truth transformation between camera frame 2 and camera frame 1 set in \e c2tc1 for the translation vector and in \e c2Rc1 for the rotation matrix, we compute also the coordinates of the points in camera frame 2 \e c2X = (c2X, c2Y, c2Z) and their corresponding coordinates \e x2 = (x2, y2) in the image plane.
\snippet homography-dlt-opencv.cpp Simulation

From here we have initialized \f${\bf x_1} = (x1,y1,1)^T\f$ and \f${\bf x_2} = (x2,y2,1)^T\f$. We are now ready to call the function that does the homography estimation.

\snippet homography-dlt-opencv.cpp Call function

\section homography_result_cv Resulting homography estimation

If you run the previous code, it will produce the following result, which shows that the estimated homography is consistent with the ground truth transformation used to generate the input data:

\code
2H1 (computed with DLT):
[0.5425233873981674, -0.04785624324415742, 0.03308292557420141;
0.0476448024215215, 0.5427592708789931, 0.005830349194436123;
-0.02550335176952741, -0.005978041062955012, 0.6361649706821216]
\endcode
*/
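The DLT scheme explained in this tutorial (two rows of \f$\bf A\f$ per correspondence, then solve \f${\bf Ah}=0\f$ via SVD and keep the singular vector with the smallest singular value) can be sketched compactly. The following is a minimal NumPy sketch, not the OpenCV C++ code documented above; the function name and the synthetic point set are illustrative:

```python
import numpy as np

def homography_dlt(x1, x2):
    """Estimate 2H1 such that x2 ~ 2H1 * x1 from >= 4 matched coplanar points.

    x1, x2: (N, 2) arrays of normalized image-plane coordinates in images 1 and 2.
    """
    A = []
    for (u1, v1), (u2, v2) in zip(x1, x2):
        # Two rows of A per correspondence, derived from x2 ~ H x1.
        A.append([u1, v1, 1, 0, 0, 0, -u2 * u1, -u2 * v1, -u2])
        A.append([0, 0, 0, u1, v1, 1, -v2 * u1, -v2 * v1, -v2])
    # h is the right singular vector of A with the smallest singular value
    # (the numerically robust equivalent of the smallest eigen vector of A^T A).
    _, _, vt = np.linalg.svd(np.asarray(A))
    return vt[-1].reshape(3, 3)

# Synthetic check: project 4 coplanar points with a known homography,
# then recover it (up to scale) from the matches.
H_true = np.array([[0.9, 0.05, 0.02], [-0.04, 1.1, 0.01], [0.03, -0.02, 1.0]])
x1 = np.array([[-0.1, -0.1], [0.1, -0.1], [0.1, 0.1], [-0.1, 0.1]])
x2h = (H_true @ np.c_[x1, np.ones(4)].T).T
x2 = x2h[:, :2] / x2h[:, 2:]
H = homography_dlt(x1, x2)
H /= H[2, 2]  # fix the scale (and sign) ambiguity of the homogeneous solution
print(np.allclose(H, H_true / H_true[2, 2], atol=1e-8))
```

Normalizing by the bottom-right entry makes the comparison meaningful, since an homography is only defined up to a non-zero scale factor.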

doc/tutorial-homography-visp.doc

Lines changed: 6 additions & 6 deletions
@@ -9,13 +9,13 @@ The estimation of an homography from coplanar points can be easily and precisely

\section homography_code Source code

-The following source code also available in homography-dlt-visp.cpp allows to estimate the homography between matched coplanar points. At least 4 points are required.
+The following source code that uses <a href="http://visp.inria.fr">ViSP</a> is also available in \ref homography-dlt-visp.cpp file. It allows to estimate the homography between matched coplanar points. At least 4 points are required.

\include homography-dlt-visp.cpp

\section homography_explained Source code explained

-First of all we inlude the header of the files that are requested to manipulate vectors and matrices.
+First of all we include ViSP headers that are requested to manipulate vectors and matrices.

\snippet homography-dlt-visp.cpp Include

@@ -35,18 +35,18 @@ eigen value of matrix \f$\bf A\f$ is used to update the homography \f$\bf {^2}H_

\snippet homography-dlt-visp.cpp Update homography matrix

-Finally we define the main() function in which we will initialize the input data before calling the previous function.
+Finally we define the main function in which we will initialize the input data before calling the previous function.

\snippet homography-dlt-visp.cpp Main function

-First in the main() we create the data structures that will contain the 3D points coordinates \e wX in the world frame, their coordinates in the camera frame 1 \e c1X and 2 \e c2X and their coordinates in the image plane \e x1 and \e x2 obtained after perspective projection. Note here that at least 4 coplanar points are requested to estimate the 8 parameters of the homography.
+First in the main we create the data structures that will contain the 3D points coordinates \e wX in the world frame, their coordinates in the camera frame 1 \e c1X and 2 \e c2X and their coordinates in the image plane \e x1 and \e x2 obtained after perspective projection. Note here that at least 4 coplanar points are requested to estimate the 8 parameters of the homography.

\snippet homography-dlt-visp.cpp Create data structures

For our simulation we then initialize the input data from a ground truth pose \e c1Tw_truth that corresponds to the pose of the camera in frame 1 with respect to the object frame.
-For each point, we compute their coordinates in the camera frame 1 (c1X, c1Y, c1Z). These values are then used to compute their coordinates in the image plane (x1, y1) using perspective projection.
+For each point, we compute their coordinates in the camera frame 1 \e c1X = (c1X, c1Y, c1Z). These values are then used to compute their coordinates in the image plane \e x1 = (x1, y1) using perspective projection.

-Thanks to the ground truth transformation \e c2Tc1 between camera frame 2 and camera frame 1, we compute also the coordinates of the points in camera frame 2 (c2X, c2Y, c2Z) and their corresponding coordinates (x2, y2) in the image plane.
+Thanks to the ground truth transformation \e c2Tc1 between camera frame 2 and camera frame 1, we compute also the coordinates of the points in camera frame 2 \e c2X = (c2X, c2Y, c2Z) and their corresponding coordinates \e x2 = (x2, y2) in the image plane.
\snippet homography-dlt-visp.cpp Simulation

From here we have initialized \f${\bf x_1} = (x1,y1,1)^T\f$ and \f${\bf x_2} = (x2,y2,1)^T\f$. We are now ready to call the function that does the homography estimation.
Lines changed: 73 additions & 0 deletions
@@ -0,0 +1,73 @@ (new file)
/**

\page tutorial-pose-dementhon-opencv Pose from Dementhon's POSIT method
\tableofcontents

\section intro_dementhon_cv Introduction

An alternative and very elegant solution to pose estimation from points has been proposed in \cite DD95 \cite ODD96. This algorithm is called POSIT.

\section dementhon_code_cv Source code

The following source code that uses <a href="http://opencv.org">OpenCV</a> is also available in \ref pose-dementhon-opencv.cpp file. It allows to compute the pose of the camera from points.

\include pose-dementhon-opencv.cpp

\section dementon_explained_cv Source code explained

First of all we include OpenCV headers that are requested to manipulate vectors and matrices.

\snippet pose-dementhon-opencv.cpp Include

Then we introduce the function that does the pose estimation. It takes as input parameters \f${^w}{\bf X} = (X,Y,Z,1)^T\f$ the 3D coordinates of the points in the world frame and \f${\bf x} = (x,y,1)^T\f$ their normalized coordinates in the image plane. It returns the estimated pose in \e ctw for the translation vector and in \e cRw for the rotation matrix.

\snippet pose-dementhon-opencv.cpp Estimation function

The implementation of the POSIT algorithm is done next.
\snippet pose-dementhon-opencv.cpp POSIT

After a minimal number of iterations, all the parameters are estimated and can be used to update the value of the homogeneous transformation, first in \e ctw for the translation, then in \e cRw for the rotation matrix.

\snippet pose-dementhon-opencv.cpp Update homogeneous matrix

Finally we define the main function in which we will initialize the input data before calling the previous function and computing the pose using Dementhon POSIT algorithm.

\snippet pose-dementhon-opencv.cpp Main function

First in the main we create the data structures that will contain the 3D points coordinates \e wX in the world frame, and their coordinates in the image plane \e x obtained after perspective projection. Note here that at least 4 non coplanar points are requested to estimate the pose.

\snippet pose-dementhon-opencv.cpp Create data structures

For our simulation we then initialize the input data from a ground truth pose with the translation in \e ctw_truth and the rotation matrix in \e cRw_truth.
For each point we set in \e wX[i] the 3D coordinates in the world frame (wX, wY, wZ, 1) and compute in \e cX their 3D coordinates (cX, cY, cZ, 1) in the camera frame. Then in \e x[i] we update their coordinates (x, y) in the image plane, obtained by perspective projection.

\snippet pose-dementhon-opencv.cpp Simulation

From here we have initialized \f${^w}{\bf X} = (X,Y,Z,1)^T\f$ and \f${\bf x} = (x,y,1)^T\f$. We are now ready to call the function that does the pose estimation.

\snippet pose-dementhon-opencv.cpp Call function

\section dementhon_result_cv Resulting pose estimation

If you run the previous code, it will produce the following result, which shows that the estimated pose is very close to the ground truth one used to generate the input data:

\code
ctw (ground truth):
[-0.1;
0.1;
0.5]
ctw (computed with DLT):
[-0.1070274014258891;
0.1741233539654255;
0.5236967119016803]
cRw (ground truth):
[0.7072945483755065, -0.7061704379962989, 0.03252282795827704;
0.7061704379962989, 0.7036809008245869, -0.07846338199958876;
0.03252282795827704, 0.07846338199958876, 0.9963863524490802]
cRw (computed with DLT):
[0.6172726698299887, -0.7813181606576134, 0.09228425059326956;
0.6906596219977137, 0.4249461799313228, -0.5851581245302429;
0.4179788297943932, 0.4249391234325819, 0.8019325685199985]
\endcode

*/
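The simulation step described in this tutorial (express each world point in the camera frame through the ground-truth pose, then apply perspective projection) reduces to a few lines. Below is a minimal NumPy sketch of that input-data generation, not the documented OpenCV code; the point set and the identity rotation are chosen purely for illustration, and only the translation echoes the ground truth shown above:

```python
import numpy as np

def project(wX, cRw, ctw):
    """Normalized image coordinates of world points under pose (cRw, ctw).

    wX: (N, 3) points in the world frame; cRw: 3x3 rotation; ctw: (3,) translation.
    """
    cX = (cRw @ wX.T).T + ctw     # cX = cRw * wX + ctw, coordinates in the camera frame
    return cX[:, :2] / cX[:, 2:]  # perspective projection: x = cX/cZ, y = cY/cZ

# Ground-truth pose in the spirit of the tutorial's simulation.
ctw_truth = np.array([-0.1, 0.1, 0.5])
cRw_truth = np.eye(3)             # identity rotation, for illustration only
wX = np.array([[-0.1, -0.1, 0.0],
               [ 0.1, -0.1, 0.2],
               [ 0.1,  0.1, 0.0],
               [-0.1,  0.1, 0.2]])  # 4 non coplanar points, as the pose requires
x = project(wX, cRw_truth, ctw_truth)
print(x)
```

These \e x values, together with \e wX, are exactly the kind of input a pose estimator such as POSIT consumes; the recovered pose can then be checked against (\e cRw_truth, \e ctw_truth) as done in the result section above.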
