This repository was archived by the owner on Jul 16, 2024. It is now read-only.

Commit 590aafd

Add CalibDB instructions (#261)
* Add CalibDB instructions
* Change links to our fork of glowo
* Update calibration.rst
1 parent f49c477 commit 590aafd

7 files changed: 21 additions, 8 deletions

source/docs/getting-started/installation/sw_install/gloworm.rst
Lines changed: 1 addition & 1 deletion

@@ -52,7 +52,7 @@ To turn the LED lights off or on you need to modify the ``ledMode`` network table

 Support Links
 -------------

-* `Website/Documentation <http://web.archive.org/web/20220525051935/https://gloworm.vision/>`__
+* `Website/Documentation <https://photonvision.github.io/gloworm-docs/docs/quickstart/#finding-gloworm>`__ (Note: Gloworm is no longer in production)

 * `Image <https://github.com/gloworm-vision/pi-img-updator/releases>`__

source/docs/getting-started/installation/sw_install/limelight.rst
Lines changed: 1 addition & 1 deletion

@@ -20,5 +20,5 @@ Download the hardwareConfig.json file for the version of your Limelight:

 :ref:`Import the hardwareConfig.json file <docs/hardware/config:Importing and Exporting Settings>`. Again, this is **REQUIRED** or target measurements will be incorrect, and LEDs will not work.

-After installation you should be able to `locate the camera <https://web.archive.org/web/20220525051734/https://gloworm.vision//docs/quickstart/#finding-gloworm>`_ at: ``http://photonvision.local:5800/`` (not ``gloworm.local``, as previously)
+After installation you should be able to `locate the camera <https://photonvision.github.io/gloworm-docs/docs/quickstart/#finding-gloworm>`_ at: ``http://photonvision.local:5800/`` (not ``gloworm.local``, as previously)

source/docs/getting-started/installation/sw_install/romi.rst
Lines changed: 1 addition & 1 deletion

@@ -16,6 +16,6 @@ Next, from the SSH terminal, run ``sudo nano /home/pi/runCamera`` then arrow down

 .. image:: images/nano.png

-After it reboots, you should be able to `locate the PhotonVision UI <https://web.archive.org/web/20220525051734/https://gloworm.vision//docs/quickstart/#finding-gloworm>`_ at: ``http://10.0.0.2:5800/``.
+After it reboots, you should be able to `locate the PhotonVision UI <https://photonvision.github.io/gloworm-docs/docs/quickstart/#finding-gloworm>`_ at: ``http://10.0.0.2:5800/``.

 .. warning:: In order for settings, logs, etc. to be saved / take effect, ensure that PhotonVision is in writable mode.

source/docs/getting-started/pipeline-tuning/about-pipelines.rst
Lines changed: 1 addition & 1 deletion

@@ -24,7 +24,7 @@ AprilTag

 This pipeline type is based on detecting AprilTag fiducial markers. More information about AprilTags can be found in the WPILib documentation. While being more performance intensive than the reflective and colored shape pipeline, it has the benefit of providing easy to use 3D pose information which allows localization.

-.. note:: In order to get 3D Pose data about AprilTags, you are required to :ref:`calibrate your camera<docs/getting-started/pipeline-tuning/calibration:Calibration Steps>`.
+.. note:: In order to get 3D Pose data about AprilTags, you are required to :ref:`calibrate your camera<docs/getting-started/pipeline-tuning/calibration:Calibrating Your Camera>`.

 Note About Multiple Cameras and Pipelines
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

source/docs/getting-started/pipeline-tuning/calibration.rst
Lines changed: 15 additions & 2 deletions

@@ -3,12 +3,13 @@ Calibrating Your Camera

 .. important:: In order to detect AprilTags and use 3D mode, your camera must be calibrated at the desired resolution! Inaccurate calibration will lead to poor performance.

-To calibrate a camera, images of a chessboard (or grid of dots) are taken. By comparing where the grid corners (or dots) should be in object space (for example, a dot once every inch in an 8x6 grid) with where they appear in the camera image, we can find a least-squares estimate for intrinsic camera properties like focal lengths, center point, and distortion coefficients. For more on camera calibration, please review the `OpenCV documentation <https://docs.opencv.org/4.x/dc/dbb/tutorial_py_calibration.html>`_.
+To calibrate a camera, images of a chessboard (or grid of dots, or other target) are taken. By comparing where the grid corners (or dots) should be in object space (for example, a dot once every inch in an 8x6 grid) with where they appear in the camera image, we can find a least-squares estimate for intrinsic camera properties like focal lengths, center point, and distortion coefficients. For more on camera calibration, please review the `OpenCV documentation <https://docs.opencv.org/4.x/dc/dbb/tutorial_py_calibration.html>`_.

 .. warning:: While any resolution can be calibrated, resolutions lower than 960x720 are often too low to provide accurate results. Additionally, high resolutions may be too performance intensive for a coprocessor like a Raspberry Pi to handle (solutions to this are being looked into). Thus, we recommend 960x720 when using 3D mode.

 .. note:: The calibration data collected during calibration is specific to each physical camera, as well as each individual resolution.
+
 Calibration Tips
 ----------------
 Accurate camera calibration is required in order to get accurate pose measurements when using AprilTags and 3D mode. The tips below should help ensure success:

@@ -36,7 +37,19 @@ Accurate camera calibration is required in order to get accurate pose measurements

 Following the ideas above should help in getting an accurate calibration.

 Calibration Steps
------------------
+=================
+
+Your camera can be calibrated using either the utility built into PhotonVision, which performs all the calculations on your coprocessor, or using a website such as `calibdb <https://calibdb.net/>`_, which uses a USB webcam connected to your laptop. The integrated calibration utility is currently the only one that works with ribbon-cable CSI cameras or Limelights, but for USB webcams, calibdb is the preferred option.
+
+Calibrating using calibdb
+-------------------------
+
+Calibdb uses a modified chessboard/ArUco marker combination target called a `ChArUco target <https://docs.opencv.org/3.4/df/d4a/tutorial_charuco_detection.html>`_. The website currently supports only the Chrome browser.
+
+Download and print out (or display on a monitor) the calibration target by clicking "Show Pattern". Click "Calibrate" and align your camera with the ghost overlay of the calibration board. The website automatically calculates the next position and displays it for you. When complete, download the calibration (do **not** use the OpenCV format). Reconnect your camera to your coprocessor and navigate to the camera tab of the PhotonVision web interface. Ensure the correct camera is selected, and click the "Import from CalibDB" button. Your calibration data will be automatically saved and applied!
+
+Calibrating using PhotonVision
+------------------------------

 1. Navigate to the calibration section in the UI.
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
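The overview in this file says calibration recovers intrinsic properties (focal lengths, center point, distortion coefficients). As a minimal sketch of what those numbers mean geometrically (with made-up values, not code from PhotonVision, and ignoring distortion):

```python
# Minimal pinhole-camera model: the intrinsics that calibration recovers
# (focal lengths fx, fy and principal point cx, cy) map a 3D point in
# camera coordinates to a pixel location. All numbers are hypothetical
# and illustrative only; real pipelines also model lens distortion.

def project(point_3d, fx, fy, cx, cy):
    """Project a 3D point (x, y, z), in meters, to pixel coordinates (u, v)."""
    x, y, z = point_3d
    u = fx * x / z + cx
    v = fy * y / z + cy
    return u, v

# A point 1 m in front of the camera, offset 0.25 m and 0.125 m:
print(project((0.25, 0.125, 1.0), fx=600.0, fy=600.0, cx=320.0, cy=240.0))
# (470.0, 315.0)
```

Calibration runs this mapping in reverse: given many observed corner pixels and their known board positions, it solves for the fx, fy, cx, cy (and distortion terms) that best explain them in a least-squares sense.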

source/docs/getting-started/pipeline-tuning/reflectiveAndShape/3D.rst
Lines changed: 1 addition & 1 deletion

@@ -1,7 +1,7 @@
 3D Tuning
 =========

-In 3D mode, the SolvePNP algorithm is used to compute the position and rotation of the target relative to the robot. This requires your :ref:`camera to be calibrated <docs/getting-started/pipeline-tuning/calibration:Calibration Steps>` which can be done through the cameras tab.
+In 3D mode, the SolvePNP algorithm is used to compute the position and rotation of the target relative to the robot. This requires your :ref:`camera to be calibrated <docs/getting-started/pipeline-tuning/calibration:Calibrating Your Camera>` which can be done through the cameras tab.

 The target model dropdown is used to select the target model used to compute target position. This should match the target your camera will be tracking.
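The 3D.rst paragraph above explains that SolvePNP needs a calibrated camera. As a simplified, hypothetical illustration of why (the real pipeline solves the full perspective-n-point problem over all target corners, not this shortcut), the calibrated focal length is what turns an apparent pixel size into a range:

```python
# Illustrative only: for a target facing the camera head-on, range
# follows from similar triangles using the calibrated focal length:
#     z = fx * real_width / width_in_pixels
# Without an accurate fx from calibration, this (and full SolvePnP)
# produces wrong distances. All numbers below are hypothetical.

def range_to_target(fx_pixels, real_width_m, width_px):
    """Estimate distance to a fronto-parallel target of known width."""
    return fx_pixels * real_width_m / width_px

# A 0.25 m wide target spanning 120 px, with a calibrated fx of 600 px:
print(range_to_target(600.0, 0.25, 120.0))  # 1.25
```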

source/index.rst
Lines changed: 1 addition & 1 deletion

@@ -1,7 +1,7 @@
 .. image:: assets/PhotonVision-Header-onWhite.png
    :alt: PhotonVision

-Welcome to the official documentation of PhotonVision! PhotonVision is the free, fast, and easy-to-use vision processing solution for the *FIRST*\ Robotics Competition. PhotonVision is designed to get vision working on your robot *quickly*, without the significant cost of other similar solutions. PhotonVision supports a variety of COTS hardware, including the Raspberry Pi 3 and 4, the `Gloworm smart camera <https://web.archive.org/web/20220525051734/https://gloworm.vision//>`_, and the `SnakeEyes Pi hat <https://www.playingwithfusion.com/productview.php?pdid=133>`_.
+Welcome to the official documentation of PhotonVision! PhotonVision is the free, fast, and easy-to-use vision processing solution for the *FIRST*\ Robotics Competition. PhotonVision is designed to get vision working on your robot *quickly*, without the significant cost of other similar solutions. PhotonVision supports a variety of COTS hardware, including the Raspberry Pi 3 and 4, the `Gloworm smart camera <https://photonvision.github.io/gloworm-docs/docs/quickstart/#finding-gloworm>`_, and the `SnakeEyes Pi hat <https://www.playingwithfusion.com/productview.php?pdid=133>`_.

 Content
 -------
