Camera
A camera defines a device that can contain a list of photos. Each photo shares the projection and calibration parameters of its parent camera.
Because the projection parameters are shared between the photos of a camera, photos of different focal length (different angle of view), orientation or crop shouldn't be mixed. If your photos differ in any of these parameters, put them into their own camera object.
You can still mix photos taken in landscape and portrait orientation, as long as auto-rotation is disabled.
- Name: The name of the camera.
- Horizontal angle of view: The angle between the unprojected directions of the leftmost and rightmost pixels. It can be measured with a pen, paper and a flat surface. It's also fine to guess this value and let the optimizer find a better fit once the geometry has been estimated well enough. If this value is unlocked while the estimated geometry is still too far from the real geometry, the optimizer will find the trivial solution for the photo parameters: all photo positions drift as far away from the geometry as possible while the angle of view converges to zero.
- Accuracy: The accuracy of mapped points, in pixels. This defines how strongly the point-mapping residuals are weighted in the optimization. Lower this value the more confident you are in the accuracy of your camera, your distortion model parameters and your angle of view.
- K1, K2, K3, K4: Radial distortion coefficients (dimensionless). They can be determined by the optimizer, but shouldn't be.
- P1, P2, P3, P4: Tangential distortion coefficients (dimensionless). They can be determined by the optimizer, but shouldn't be.
- B1, B2: Affinity and non-orthogonality coefficients, in pixels. They can be determined by the optimizer, but shouldn't be.
- Distortion center offset: Offset of the distortion center from the image center (principal point offset). It can be determined by the optimizer, but shouldn't be.
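The pen-and-paper measurement mentioned above amounts to simple trigonometry: if a camera at distance D from a flat surface sees exactly a width W of that surface across the full image, half of W and D form a right triangle whose angle at the camera is half the angle of view. A minimal sketch (the function name and example numbers are illustrative, not part of the software):

```python
import math

def horizontal_aov_deg(visible_width: float, distance: float) -> float:
    """Horizontal angle of view from a flat-surface measurement.

    visible_width: width of the surface spanning the full image (any unit)
    distance: perpendicular distance from the lens to the surface (same unit)
    """
    # Half the visible width and the distance form a right triangle
    # whose angle at the camera is half the angle of view.
    return math.degrees(2.0 * math.atan(visible_width / (2.0 * distance)))

# Example: 1.2 m of wall fills the frame from 1.0 m away.
print(round(horizontal_aov_deg(1.2, 1.0), 1))  # about 61.9 degrees
```

Even a rough measurement gives a better starting point than a blind guess, and the optimizer can refine it later.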

- Disable any auto rotation or lens distortion correction in your camera.
- Disable any optical image stabilization, and use a tripod if possible. Make sure the camera doesn't move while capturing a photo. Even if the resulting image has no motion blur, there may be rolling shutter distortion that prevents accurate geometry reconstruction.
- Don't crop your images (digital zoom) and don't downscale photos. If you want to scale your photos, modify all photos of a camera in exactly the same way.
- For rooms: Use a wide angle lens/camera to capture as many points as possible.
The software uses a pinhole camera model similar to what OpenCV uses.
The lens distortion model is similar to the Brown-Conrady model, but it transforms from undistorted to distorted coordinates.
The camera coordinate system's Z-direction points along the view direction of the camera; the X- and Y-directions align with the right and down directions of the photo.
- $\begin{bmatrix} R|t \end{bmatrix}$ is the joint rotation-translation matrix of the camera (photo).
- $X_\text{w}$, $Y_\text{w}$ and $Z_\text{w}$ define a point in world space.
- $X_\text{c}$, $Y_\text{c}$ and $Z_\text{c}$ define a point in camera (photo) space.
- $x'$ and $y'$ are the normalized camera coordinates.
- $x''$ and $y''$ are the distorted normalized coordinates.
- $f$ is the focal length in pixels.
- $\alpha_\text{aov}$ is the angle of view in radians.
- $w$ and $h$ are the image size in pixels.
- $c_\text{x}$ and $c_\text{y}$ are the principal point offset.
- $u$ and $v$ are the image coordinates, with their origin at the top left of the photo/image.
Transformation of world coordinates into camera (photo) coordinates:

$$\begin{bmatrix} X_\text{c} \\ Y_\text{c} \\ Z_\text{c} \end{bmatrix} = \begin{bmatrix} R|t \end{bmatrix} \begin{bmatrix} X_\text{w} \\ Y_\text{w} \\ Z_\text{w} \\ 1 \end{bmatrix}, \qquad x' = \frac{X_\text{c}}{Z_\text{c}}, \quad y' = \frac{Y_\text{c}}{Z_\text{c}}$$

Distortion transformation:

$$\begin{aligned} x'' &= x' \left(1 + K_1 r^2 + K_2 r^4 + K_3 r^6 + K_4 r^8\right) + t_x \\ y'' &= y' \left(1 + K_1 r^2 + K_2 r^4 + K_3 r^6 + K_4 r^8\right) + t_y \end{aligned}$$

with

$$r^2 = x'^2 + y'^2$$

and

$$\begin{aligned} t_x &= \left(P_1 \left(r^2 + 2 x'^2\right) + 2 P_2 x' y'\right) \left(1 + P_3 r^2 + P_4 r^4\right) \\ t_y &= \left(P_2 \left(r^2 + 2 y'^2\right) + 2 P_1 x' y'\right) \left(1 + P_3 r^2 + P_4 r^4\right) \end{aligned}$$

Final transformation into the image coordinate system with the last distortion transformation:

$$\begin{aligned} u &= \frac{w}{2} + c_\text{x} + x'' f + x'' B_1 + y'' B_2 \\ v &= \frac{h}{2} + c_\text{y} + y'' f \end{aligned}$$

Conversion between focal length and angle of view:

$$f = \frac{w}{2 \tan\left(\alpha_\text{aov} / 2\right)}$$
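The projection and distortion model described above can be sketched in a few lines of Python. This is an illustrative reimplementation, not code from the software itself; parameter names follow the symbols in the list above, and the exact grouping of the distortion terms is an assumption where the document doesn't spell it out.

```python
import math

def project(point_w, R, t, f, w, h, cx, cy,
            K=(0, 0, 0, 0), P=(0, 0, 0, 0), B=(0, 0)):
    """Project a world-space point to image coordinates (u, v)."""
    # World -> camera: [Xc, Yc, Zc]^T = R * Xw + t
    Xc, Yc, Zc = (sum(R[i][j] * point_w[j] for j in range(3)) + t[i]
                  for i in range(3))
    # Normalized camera coordinates (Z points along the view direction).
    x, y = Xc / Zc, Yc / Zc
    r2 = x * x + y * y
    # Radial factor: 1 + K1 r^2 + K2 r^4 + K3 r^6 + K4 r^8 (Horner form).
    radial = 1 + r2 * (K[0] + r2 * (K[1] + r2 * (K[2] + r2 * K[3])))
    # Tangential terms, scaled by (1 + P3 r^2 + P4 r^4).
    tang = 1 + r2 * (P[2] + r2 * P[3])
    xd = x * radial + (P[0] * (r2 + 2 * x * x) + 2 * P[1] * x * y) * tang
    yd = y * radial + (P[1] * (r2 + 2 * y * y) + 2 * P[0] * x * y) * tang
    # Image coordinates, including affinity/non-orthogonality B1, B2.
    u = w / 2 + cx + xd * f + xd * B[0] + yd * B[1]
    v = h / 2 + cy + yd * f
    return u, v

def focal_from_aov(aov_rad, w):
    """Focal length in pixels from the horizontal angle of view."""
    return w / (2 * math.tan(aov_rad / 2))
```

As a sanity check: with an identity rotation, zero translation and all distortion coefficients at zero, a point on the optical axis lands at the image center, and a 90-degree angle of view on a 4000-pixel-wide image gives a focal length of 2000 pixels.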