README.md (+1 −1)
@@ -8,7 +8,7 @@ For most of the approaches presented in the submitted paper:
E. Marchand, H. Uchiyama and F. Spindler. Camera localization for augmented reality: a hands-on survey.
- we provide the code of short examples. This should allow developers to easily bridge the gap between theoretical aspects and practice. These examples have been written using the <a href="http://team.inria.fr/lagadic/visp">ViSP library</a> developed at Inria.
+ we provide the code of short examples. This should allow developers to easily bridge the gap between theoretical aspects and practice. These examples have been written using either <a href="http://opencv.org">OpenCV</a> or <a href="http://visp.inria.fr">ViSP</a> developed at Inria.
The full documentation of this project is available from <http://team.inria.fr/lagadic/camera_localization>.
doc/mainpage.doc.cmake (+11 −15)
@@ -16,22 +16,18 @@ E. Marchand, H. Uchiyama and F. Spindler. Pose estimation for augmented reality:
a hands-on survey. submitted
\endcode
- a brief but almost self contented introduction to the most important approaches dedicated to camera localization along with a survey of the extension that have been proposed in the recent years. We also try to link these methodological concepts to the main libraries and SDK available on the market.
+ a brief but almost self-contained introduction to the most important approaches dedicated to camera localization, along with a survey of the extensions that have been proposed in recent years. We also try to link these methodological concepts to the main libraries and SDKs available on the market.
The aim of this paper is then to provide researchers and practitioners with an almost comprehensive and consolidated introduction to effective tools to facilitate research in augmented reality. It is also dedicated to academics involved in teaching augmented reality at the undergraduate and graduate level.
- For most of the presented approaches, we also provide links to code of short examples. This should allow readers to easily bridge the gap between theoretical aspects and practice. These examples have been written using <a href="http://opencv.org">OpenCV</a> but also <a href="http://visp.inria.fr">ViSP</a> developed at Inria.
+ For most of the presented approaches, we also provide links to the code of short examples. This should allow readers to easily bridge the gap between theoretical aspects and practice. These examples have been written using <a href="http://opencv.org">OpenCV</a> but also <a href="http://visp.inria.fr">ViSP</a> developed at Inria.
This page contains the documentation of the source code proposed as supplementary material of the paper.
We hope this article and source code will be accessible and interesting to experts and students alike.
\section install_sec Installation
\subsection prereq_subsec Prerequisites
@@ -54,12 +50,12 @@ Once ViSP is installed, download the latest source code release from GitHub <ht
Unzip the archive:
\code
- $ unzip camera_localization-1.0.0.zip
+ $ unzip camera_localization-2.0.0.zip
\endcode
or extract the code from the tarball:
\code
- $ tar xvzf camera_localization-1.0.0.tar.gz
+ $ tar xvzf camera_localization-2.0.0.tar.gz
\endcode
@@ -82,27 +78,27 @@ $ make doc
In this section we give the base algorithms for camera localization.
- - <b>Pose from Direct Linear Transform method</b> \ref tutorial-pose-dlt-visp "(ViSP)"<br>In this first tutorial a simple solution known as Direct Linear Transform (DLT) \cite HZ01 \cite Sut74 based on the resolution of a linear system is considered to estimate the pose of the camera from at least 6 non coplanar points.
+ - <b>Pose from Direct Linear Transform method</b> using \ref tutorial-pose-dlt-opencv "OpenCV" or using \ref tutorial-pose-dlt-visp "ViSP"<br>In this first tutorial, a simple solution known as the Direct Linear Transform (DLT) \cite HZ01 \cite Sut74, based on the resolution of a linear system, is considered to estimate the pose of the camera from at least 6 non-coplanar points.
- - <b>Pose from Dementhon's POSIT method</b> \ref tutorial-pose-dementhon-visp "(ViSP)"<br>In this second tutorial we give Dementhon's POSIT method \cite DD95 \cite ODD96 used to estimate the pose based on the resolution of a linear system introducing additional constraints on the rotation matrix. The pose is estimated from at least 4 non coplanar points.
+ - <b>Pose from Dementhon's POSIT method</b> using \ref tutorial-pose-dementhon-opencv "OpenCV" or using \ref tutorial-pose-dementhon-visp "ViSP"<br>In this second tutorial we give Dementhon's POSIT method \cite DD95 \cite ODD96, which estimates the pose through the resolution of a linear system while introducing additional constraints on the rotation matrix. The pose is estimated from at least 4 non-coplanar points.
- - <b>Pose from homography estimation</b> \ref tutorial-pose-dlt-planar-visp "(ViSP)" <br>In this tutorial we explain how to decompose the homography to estimate the pose from at least 4 coplanar points.
+ - <b>Pose from homography estimation</b> using \ref tutorial-pose-dlt-planar-opencv "OpenCV" or using \ref tutorial-pose-dlt-planar-visp "ViSP" <br>In this tutorial we explain how to decompose the homography to estimate the pose from at least 4 coplanar points.
- - <b>Pose from a non-linear minimization method</b> \ref tutorial-pose-gauss-newton-visp "(ViSP)" <br>In this tutorial we give a non-linear minimization method to estimate the pose from at least 4 points. This method requires an initialization of the pose to estimate. Depending on the points' planarity, this initialization could be performed using one of the previous pose algorithms.
+ - <b>Pose from a non-linear minimization method</b> using \ref tutorial-pose-gauss-newton-opencv "OpenCV" or using \ref tutorial-pose-gauss-newton-visp "ViSP" <br>In this tutorial we give a non-linear minimization method to estimate the pose from at least 4 points. This method requires an initialization of the pose to estimate. Depending on the points' planarity, this initialization could be performed using one of the previous pose algorithms.
\subsection pose_mbt_sec Extension to markerless model-based tracking
- - <b>Pose from markerless model-based tracking</b> \ref tutorial-pose-mbt-visp "(ViSP)" <br>This tutorial focuses on markerless model-based tracking that allows estimating the pose of the camera.
+ - <b>Pose from markerless model-based tracking</b> using \ref tutorial-pose-mbt-visp "ViSP" <br>This tutorial focuses on markerless model-based tracking that allows estimating the pose of the camera.
\section motion_estimation_sec Pose estimation relying on an image model
- - <b>Homography estimation</b> \ref tutorial-homography-visp "(ViSP)" <br>In this tutorial we describe the estimation of a homography using the Direct Linear Transform (DLT) algorithm. At least 4 coplanar points are required to achieve the estimation.
+ - <b>Homography estimation</b> using \ref tutorial-homography-opencv "OpenCV" or using \ref tutorial-homography-visp "ViSP" <br>In this tutorial we describe the estimation of a homography using the Direct Linear Transform (DLT) algorithm. At least 4 coplanar points are required to achieve the estimation.
\subsection homography_from_template_tracker_sec Direct motion estimation through template matching
- - <b>Direct motion estimation through template matching</b> \ref tutorial-template-matching-visp "(ViSP)" <br>In this tutorial we propose a direct motion estimation approach through template matching. Here the reference template should be planar.
+ - <b>Direct motion estimation through template matching</b> using \ref tutorial-template-matching-visp "ViSP" <br>In this tutorial we propose a direct motion estimation approach through template matching. Here the reference template should be planar.
The estimation of a homography from coplanar points can be easily and precisely achieved using a Direct Linear Transform algorithm \cite HZ01 \cite Sut74 based on the resolution of a linear system.
\section homography_code_cv Source code
The following source code that uses <a href="http://opencv.org">OpenCV</a> is also available in the \ref homography-dlt-opencv.cpp file. It allows estimating the homography between matched coplanar points. At least 4 points are required.
First of all we include the OpenCV headers that are required to manipulate vectors and matrices.
\snippet homography-dlt-opencv.cpp Include
Then we introduce the function that does the homography estimation.
\snippet homography-dlt-opencv.cpp Estimation function
From a vector of planar points \f${\bf x_1} = (x_1, y_1, 1)^T\f$ in image \f$I_1\f$ and a vector of matched points \f${\bf x_2} = (x_2, y_2, 1)^T\f$ in image \f$I_2\f$, it allows estimating the homography \f$\bf {^2}H_1\f$:
\f[\bf x_2 = {^2}H_1 x_1\f]
The implementation of the Direct Linear Transform algorithm to estimate \f$\bf {^2}H_1\f$ is done next. First, for each point we update the values of matrix \f$\bf A\f$ using equation (33). Then we solve the system \f${\bf Ah}=0\f$ using a Singular Value Decomposition of \f$\bf A\f$. Finally, we determine the smallest eigenvalue, which allows identifying the eigenvector that corresponds to the solution \f$\bf h\f$.
\snippet homography-dlt-opencv.cpp DLT
The resulting eigenvector \f$\bf h\f$ that corresponds to the minimal eigenvalue of matrix \f$\bf A\f$ is then used to update the homography \f$\bf {^2}H_1\f$.
Finally we define the main function in which we will initialize the input data before calling the previous function.
\snippet homography-dlt-opencv.cpp Main function
First, in the main function we create the data structures that will contain the 3D point coordinates \e wX in the world frame, their coordinates in camera frames 1 and 2, \e c1X and \e c2X, and their coordinates in the image plane, \e x1 and \e x2, obtained after perspective projection. Note here that at least 4 coplanar points are required to estimate the 8 parameters of the homography.
\snippet homography-dlt-opencv.cpp Create data structures
For our simulation we then initialize the input data from a ground truth pose that corresponds to the pose of camera frame 1 with respect to the object frame, with the translation vector in \e c1tw_truth and the rotation matrix in \e c1Rw_truth.
For each point, we compute its coordinates in camera frame 1, \e c1X = (c1X, c1Y, c1Z). These values are then used to compute its coordinates in the image plane, \e x1 = (x1, y1), using perspective projection.
Thanks to the ground truth transformation between camera frame 2 and camera frame 1, given by the translation vector \e c2tc1 and the rotation matrix \e c2Rc1, we also compute the coordinates of the points in camera frame 2, \e c2X = (c2X, c2Y, c2Z), and their corresponding coordinates \e x2 = (x2, y2) in the image plane.
\snippet homography-dlt-opencv.cpp Simulation
From here we have initialized \f${\bf x_1} = (x1,y1,1)^T\f$ and \f${\bf x_2} = (x2,y2,1)^T\f$. We are now ready to call the function that does the homography estimation.
If you run the previous code, it will produce the following result, which shows that the estimated homography is equal to the ground truth one used to generate the input data:
doc/tutorial-homography-visp.doc (+6 −6)
@@ -9,13 +9,13 @@ The estimation of a homography from coplanar points can be easily and precisely
\section homography_code Source code
- The following source code also available in homography-dlt-visp.cpp allows to estimate the homography between matched coplanar points. At least 4 points are required.
+ The following source code that uses <a href="http://visp.inria.fr">ViSP</a> is also available in the \ref homography-dlt-visp.cpp file. It allows estimating the homography between matched coplanar points. At least 4 points are required.
- Finally we define the main() function in which we will initialize the input data before calling the previous function.
+ Finally we define the main function in which we will initialize the input data before calling the previous function.
\snippet homography-dlt-visp.cpp Main function
- First in the main() we create the data structures that will contain the 3D points coordinates \e wX in the world frame, their coordinates in the camera frame 1 \e c1X and 2 \e c2X and their coordinates in the image plane \e x1 and \e x2 obtained after perspective projection. Note here that at least 4 coplanar points are requested to estimate the 8 parameters of the homography.
+ First, in the main function we create the data structures that will contain the 3D point coordinates \e wX in the world frame, their coordinates in camera frames 1 and 2, \e c1X and \e c2X, and their coordinates in the image plane, \e x1 and \e x2, obtained after perspective projection. Note here that at least 4 coplanar points are required to estimate the 8 parameters of the homography.
\snippet homography-dlt-visp.cpp Create data structures
For our simulation we then initialize the input data from a ground truth pose \e c1Tw_truth that corresponds to the pose of the camera in frame 1 with respect to the object frame.
- For each point, we compute their coordinates in the camera frame 1 (c1X, c1Y, c1Z). These values are then used to compute their coordinates in the image plane (x1, y1) using perspective projection.
+ For each point, we compute its coordinates in camera frame 1, \e c1X = (c1X, c1Y, c1Z). These values are then used to compute its coordinates in the image plane, \e x1 = (x1, y1), using perspective projection.
- Thanks to the ground truth transformation \e c2Tc1 between camera frame 2 and camera frame 1, we compute also the coordinates of the points in camera frame 2 (c2X, c2Y, c2Z) and their corresponding coordinates (x2, y2) in the image plane.
+ Thanks to the ground truth transformation \e c2Tc1 between camera frame 2 and camera frame 1, we also compute the coordinates of the points in camera frame 2, \e c2X = (c2X, c2Y, c2Z), and their corresponding coordinates \e x2 = (x2, y2) in the image plane.
\snippet homography-dlt-visp.cpp Simulation
From here we have initialized \f${\bf x_1} = (x1,y1,1)^T\f$ and \f${\bf x_2} = (x2,y2,1)^T\f$. We are now ready to call the function that does the homography estimation.
\page tutorial-pose-dementhon-opencv Pose from Dementhon's POSIT method
\tableofcontents
\section intro_dementhon_cv Introduction
An alternative and very elegant solution to pose estimation from points has been proposed in \cite DD95 \cite ODD96. This algorithm is called POSIT.
\section dementhon_code_cv Source code
The following source code that uses <a href="http://opencv.org">OpenCV</a> is also available in the \ref pose-dementhon-opencv.cpp file. It allows computing the pose of the camera from points.
First of all we include the OpenCV headers that are required to manipulate vectors and matrices.
\snippet pose-dementhon-opencv.cpp Include
Then we introduce the function that does the pose estimation. It takes as input parameters \f${^w}{\bf X} = (X,Y,Z,1)^T\f$ the 3D coordinates of the points in the world frame and \f${\bf x} = (x,y,1)^T\f$ their normalized coordinates in the image plane. It returns the estimated pose in \e ctw for the translation vector and in \e cRw for the rotation matrix.
\snippet pose-dementhon-opencv.cpp Estimation function
The implementation of the POSIT algorithm is done next.
\snippet pose-dementhon-opencv.cpp POSIT
After a minimal number of iterations, all the parameters are estimated and can be used to update the value of the homogeneous transformation, first in \e ctw for the translation, then in \e cRw for the rotation matrix.
Finally we define the main function in which we will initialize the input data before calling the previous function and computing the pose using Dementhon's POSIT algorithm.
\snippet pose-dementhon-opencv.cpp Main function
First, in the main function we create the data structures that will contain the 3D point coordinates \e wX in the world frame and their coordinates in the image plane \e x obtained after perspective projection. Note here that at least 4 non-coplanar points are required to estimate the pose.
\snippet pose-dementhon-opencv.cpp Create data structures
For our simulation we then initialize the input data from a ground truth pose with the translation in \e ctw_truth and the rotation matrix in \e cRw_truth.
For each point we set in \e wX[i] the 3D coordinates in the world frame (wX, wY, wZ, 1) and compute in \e cX their 3D coordinates (cX, cY, cZ, 1) in the camera frame. Then in \e x[i] we update their coordinates (x, y) in the image plane, obtained by perspective projection.
\snippet pose-dementhon-opencv.cpp Simulation
From here we have initialized \f${^w}{\bf X} = (X,Y,Z,1)^T\f$ and \f${\bf x} = (x,y,1)^T\f$. We are now ready to call the function that does the pose estimation.
If you run the previous code, it will produce the following result, which shows that the estimated pose is very close to the ground truth one used to generate the input data: