This repository was archived by the owner on Jul 16, 2024. It is now read-only.
Update docs and Java for pose estimator refactoring (#262)
* Update docs and Java for pose estimator refactoring
* Fix types in example
* Fix headers
* Fix header and use RLI
* Add note about one camera per instance
source/docs/examples/apriltag.rst (1 addition, 1 deletion)
@@ -12,7 +12,7 @@ Knowledge and Equipment Needed

 - An open space with properly mounted 16h5 AprilTags
 - PhotonVision running on your laptop or a coprocessor

-This example will show how to use AprilTags for full field robot localization using ``RobotPoseEstimator``, ``AprilTagFieldLayout``, and the WPILib Pose Estimation Classes.
+This example will show how to use AprilTags for full field robot localization using ``PhotonPoseEstimator``, ``AprilTagFieldLayout``, and the WPILib Pose Estimation Classes.

 All PhotonVision specific code is in ``PhotonCameraWrapper.java`` and the relevant pose estimation parts are in ``DriveTrain.java``.
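The localization chain this example relies on (a tag's known pose on the field, the camera's measurement of the tag, and the camera's mounting transform on the robot) can be sketched with plain 2D rigid transforms. This is a minimal, hypothetical illustration, not PhotonLib code: `Transform2d` here is a hand-rolled stand-in for WPILib's geometry classes, and all the numbers are made up.

```java
public class TagLocalizationDemo {
    // Minimal 2D rigid transform (translation plus heading in radians); a
    // hypothetical stand-in for WPILib's Pose3d/Transform3d types.
    record Transform2d(double x, double y, double theta) {
        // Compose this transform with another (apply this, then other).
        Transform2d then(Transform2d o) {
            double c = Math.cos(theta), s = Math.sin(theta);
            return new Transform2d(
                    x() + c * o.x() - s * o.y(),
                    y() + s * o.x() + c * o.y(),
                    theta() + o.theta());
        }

        // Invert the transform, so a.then(a.inverse()) is the identity.
        Transform2d inverse() {
            double c = Math.cos(theta), s = Math.sin(theta);
            return new Transform2d(
                    -(c * x() + s * y()),
                    -(-s * x() + c * y()),
                    -theta());
        }
    }

    public static void main(String[] args) {
        // Known from the field layout: where the tag sits on the field.
        Transform2d fieldToTag = new Transform2d(5.0, 3.0, Math.PI);
        // Measured by the camera: the tag is 2 m straight ahead, facing back.
        Transform2d cameraToTag = new Transform2d(2.0, 0.0, Math.PI);
        // Known from the robot design: the camera is 0.5 m ahead of center.
        Transform2d robotToCamera = new Transform2d(0.5, 0.0, 0.0);

        // field->robot = field->tag * (camera->tag)^-1 * (robot->camera)^-1
        Transform2d fieldToRobot =
                fieldToTag.then(cameraToTag.inverse()).then(robotToCamera.inverse());
        System.out.printf("%.2f %.2f%n", fieldToRobot.x(), fieldToRobot.y());
    }
}
```

With these made-up numbers the robot lands at roughly (2.50, 3.00) on the field; the real classes do the same chaining in 3D.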
source/docs/integration/aprilTagStrategies.rst (1 addition, 1 deletion)
@@ -39,6 +39,6 @@ The nature of how AprilTags will be laid out makes it very likely that you will

 * A camera seeing one target, and picking a pose most similar to one provided externally (i.e., from the previous loop's odometry)
 * A camera seeing one target, and picking the pose with the lowest ambiguity.

-PhotonVision supports all of these different strategies via our ``RobotPoseEstimator`` class (coming soon) that allows you to select one of the strategies above and get the relevant pose estimation.
+PhotonVision supports all of these different strategies via our ``PhotonPoseEstimator`` class, which allows you to select one of the strategies above and get the relevant pose estimation.

 All of these strategies are valid approaches, and we recommend doing independent testing in order to see which one works best for your team / current game.
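The lowest-ambiguity strategy listed above reduces to a simple selection: among the candidate poses the solver produces for a single tag, keep the one whose ambiguity score is smallest. A minimal plain-Java sketch of that idea, with `PoseCandidate` and all values being hypothetical stand-ins rather than PhotonLib types:

```java
import java.util.List;

public class LowestAmbiguityDemo {
    // Hypothetical stand-in for one candidate solution from a single tag:
    // a field-relative X/Y position plus the solver's ambiguity score
    // (lower means the solution is more trustworthy).
    record PoseCandidate(double x, double y, double ambiguity) {}

    // Keep the candidate with the smallest ambiguity score, mirroring the
    // idea behind the lowest-ambiguity strategy described above.
    static PoseCandidate pickLowestAmbiguity(List<PoseCandidate> candidates) {
        PoseCandidate best = null;
        for (PoseCandidate c : candidates) {
            if (best == null || c.ambiguity() < best.ambiguity()) {
                best = c;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        var candidates = List.of(
                new PoseCandidate(1.0, 2.0, 0.40),   // flipped solution, high ambiguity
                new PoseCandidate(1.1, 2.1, 0.05));  // confident solution
        PoseCandidate best = pickLowestAmbiguity(candidates);
        System.out.println(best.x() + "," + best.y());
    }
}
```

The externally-seeded strategy works the same way, except "best" is the candidate closest to a reference pose (for example, last loop's odometry) instead of the one with the lowest score.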
source/docs/programming/photonlib/robot-pose-estimator.rst (16 additions, 30 deletions)
@@ -1,9 +1,9 @@
-AprilTags and RobotPoseEstimator
-================================
+AprilTags and PhotonPoseEstimator
+=================================

 .. note:: For more information on methods to get AprilTag data, look :ref:`here <docs/programming/photonlib/getting-target-data:Getting AprilTag Data From A Target>`.

-PhotonLib includes a ``RobotPoseEstimator`` class, which allows you to combine the pose data from all tags in view in order to get a field relative pose.
+PhotonLib includes a ``PhotonPoseEstimator`` class, which allows you to combine the pose data from all tags in view in order to get a field relative pose. The ``PhotonPoseEstimator`` class works with one camera per object instance, but more than one instance may be created.

 Creating an ``AprilTagFieldLayout``
 -----------------------------------
@@ -29,9 +29,9 @@ The API documentation can be found here: `Java <https://github.wpilib.org/all

-The RobotPoseEstimator has a constructor that takes an ``AprilTagFieldLayout`` (see above), ``PoseStrategy``, and ``ArrayList<Pair<PhotonCamera, Transform3d>>``. ``PoseStrategy`` has five possible values:
+Creating a ``PhotonPoseEstimator``
+----------------------------------
+The PhotonPoseEstimator has a constructor that takes an ``AprilTagFieldLayout`` (see above), ``PoseStrategy``, ``PhotonCamera``, and ``Transform3d``. ``PoseStrategy`` has five possible values:

 * LOWEST_AMBIGUITY

   * Choose the Pose with the lowest ambiguity.
@@ -51,12 +51,8 @@ The RobotPoseEstimator has a constructor that takes an ``AprilTagFieldLayout`` (

 cam = new PhotonCamera("testCamera");
 Transform3d robotToCam = new Transform3d(new Translation3d(0.5, 0.0, 0.5), new Rotation3d(0, 0, 0)); // Cam mounted facing forward, half a meter forward of center, half a meter up from center.

-// ... Add other cameras here
-
-// Assemble the list of cameras & mount locations
-var camList = new ArrayList<Pair<PhotonCamera, Transform3d>>();

-Calling ``update()`` on your ``RobotPoseEstimator`` will return a ``Pair<Pose3d, Double>``, which includes a ``Pose3d`` of the latest estimated pose (using the selected strategy) along with a ``Double`` of the latency in milliseconds.
+Using a ``PhotonPoseEstimator``
+-------------------------------
+Calling ``update()`` on your ``PhotonPoseEstimator`` will return an ``EstimatedRobotPose``, which includes a ``Pose3d`` of the latest estimated pose (using the selected strategy) along with a ``double`` of the timestamp when the robot pose was estimated. You should be updating your `drivetrain pose estimator <https://docs.wpilib.org/en/latest/docs/software/advanced-controls/state-space/state-space-pose-estimators.html>`_ with the result from the ``PhotonPoseEstimator`` every loop using ``addVisionMeasurement()``. See our `code example <https://github.com/PhotonVision/photonvision/tree/master/photonlib-java-examples/apriltagExample>`_ for more.
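The update-then-fuse loop the new text describes can be sketched without PhotonLib at all. In this hypothetical illustration, `EstimatedPose` stands in for ``EstimatedRobotPose`` (pose plus capture timestamp), and `DrivetrainEstimator` stands in for a WPILib drivetrain pose estimator; neither is the real API.

```java
import java.util.Optional;

public class VisionFusionDemo {
    // Hypothetical stand-in for EstimatedRobotPose: an estimated
    // field-relative pose plus the capture timestamp in seconds.
    record EstimatedPose(double x, double y, double timestampSeconds) {}

    // Hypothetical stand-in for a WPILib drivetrain pose estimator:
    // it simply remembers the last vision measurement handed to it.
    static class DrivetrainEstimator {
        double x, y;

        void addVisionMeasurement(double x, double y, double timestampSeconds) {
            this.x = x;
            this.y = y;
        }
    }

    // One iteration of the periodic loop: if the vision pipeline produced a
    // fresh estimate this cycle, fold it into the drivetrain estimator along
    // with its timestamp so the filter can apply it at the right moment.
    static void updateOdometry(Optional<EstimatedPose> visionResult, DrivetrainEstimator est) {
        visionResult.ifPresent(
                p -> est.addVisionMeasurement(p.x(), p.y(), p.timestampSeconds()));
    }

    public static void main(String[] args) {
        var est = new DrivetrainEstimator();
        updateOdometry(Optional.of(new EstimatedPose(2.5, 3.0, 12.34)), est);
        System.out.println(est.x + "," + est.y);
    }
}
```

The `Optional` models loops where no tag was visible: on those cycles nothing is fed to the estimator, and odometry alone carries the pose forward.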
@@ -116,8 +102,8 @@ Calling ``update()`` on your ``RobotPoseEstimator`` will return a ``Pair<Pose3d,

-You should be updating your `drivetrain pose estimator <https://docs.wpilib.org/en/latest/docs/software/advanced-controls/state-space/state-space-pose-estimators.html>`_ with the result from the ``RobotPoseEstimator`` every loop using ``addVisionMeasurement()``. See our :ref:`code example <docs/examples/apriltag:knowledge and equipment needed>` for more.