
Commit df8271f

mdurrani808 and mcm001 authored

updated photonlib and robot pose estimator docs (#240)

* updated photonlib and robot pose estimator docs
* update broken link
* Update robot-pose-estimator.rst
* address changes
* linter
* Update robot-pose-estimator.rst

Co-authored-by: Matt <[email protected]>

1 parent b864c30 commit df8271f

File tree

4 files changed: +173 −5 lines changed

source/docs/programming/nt-api.rst

Lines changed: 1 addition & 1 deletion

@@ -9,7 +9,7 @@ About

   API
   ^^^

- More advanced users may want to create their own NetworkTables entries to retrieve data instead of using :ref:`PhotonLib <docs/programming/photonlib/index:PhotonLib: Robot Code Interface>`. However, it is recommended for most users to use PhotonLib as it simplifies the user code experience.
+ .. warning:: NetworkTables is not a supported setup/viable option when using PhotonVision, as we only send one target at a time (this is problematic when using AprilTags, which will return data from multiple tags at once). We recommend using PhotonLib.

   The tables below contain the name of the key for each entry that PhotonVision sends over the network and a short description of the key. The entries should be extracted from a subtable with your camera's nickname (visible in the PhotonVision UI) under the main ``photonvision`` table.
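For illustration only, here is a minimal Java sketch of reading one of those raw entries directly from NetworkTables. The camera nickname ``YOUR_CAMERA_NAME`` is a placeholder, and ``hasTarget`` is one of the published keys; as the warning above notes, PhotonLib remains the recommended path.

.. code-block:: java

   import edu.wpi.first.networktables.NetworkTable;
   import edu.wpi.first.networktables.NetworkTableInstance;

   // Grab the subtable for a camera nicknamed "YOUR_CAMERA_NAME" under the main photonvision table.
   NetworkTable cameraTable = NetworkTableInstance.getDefault()
       .getTable("photonvision")
       .getSubTable("YOUR_CAMERA_NAME");

   // Read one published entry (hasTarget), defaulting to false if it is absent.
   boolean hasTarget = cameraTable.getEntry("hasTarget").getBoolean(false);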

source/docs/programming/photonlib/index.rst

Lines changed: 1 addition & 0 deletions

@@ -7,6 +7,7 @@ PhotonLib: Robot Code Interface

      adding-vendordep
      getting-target-data
      using-target-data
+     robot-pose-estimator
      driver-mode-pipeline-index
      controlling-led
      simulation
source/docs/programming/photonlib/robot-pose-estimator.rst (new file)

Lines changed: 127 additions & 0 deletions

AprilTags and RobotPoseEstimator
================================

.. note:: For more information on the methods used to get AprilTag data, look :ref:`here <docs/programming/photonlib/getting-target-data:Getting AprilTag Data From A Target>`.

PhotonLib includes a ``RobotPoseEstimator`` class, which allows you to combine the pose data from all tags in view in order to get one final pose using different strategies.

Creating an ``AprilTagFieldLayout``
-----------------------------------
``AprilTagFieldLayout`` is used to represent a layout of AprilTags within a space (field, shop at home, classroom, etc.). WPILib provides a JSON that describes the layout of AprilTags on the field, which you can then load into an ``AprilTagFieldLayout``. You can also specify a custom layout.

The API documentation can be found here: `Java <https://github.wpilib.org/allwpilib/docs/beta/java/edu/wpi/first/apriltag/AprilTagFieldLayout.html>`_ and `C++ <https://github.wpilib.org/allwpilib/docs/beta/cpp/classfrc_1_1_april_tag_field_layout.html>`_.

.. tab-set-code::

   .. code-block:: java

      // The parameter for loadFromResource() will be different depending on the game.
      // loadFromResource() throws IOException; handle or rethrow it in real robot code.
      AprilTagFieldLayout aprilTagFieldLayout = AprilTagFieldLayout.loadFromResource(AprilTagFields.k2022RapidReact.m_resourceFile);

   .. code-block:: c++

      // Two example tags in our layout -- ID 0 at (3, 3, 3) and ID 1 at (5, 5, 5),
      // both with zero rotation.
      std::vector<frc::AprilTag> tags = {
          {0, frc::Pose3d(units::meter_t(3), units::meter_t(3), units::meter_t(3),
                          frc::Rotation3d())},
          {1, frc::Pose3d(units::meter_t(5), units::meter_t(5), units::meter_t(5),
                          frc::Rotation3d())}};
      std::shared_ptr<frc::AprilTagFieldLayout> aprilTags =
          std::make_shared<frc::AprilTagFieldLayout>(tags, 54_ft, 27_ft);
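As a rough Java counterpart to the custom C++ layout above, here is a minimal sketch. The two tag poses and the 54 ft x 27 ft field size are placeholder values mirroring the C++ example, not a real field layout.

.. code-block:: java

   import java.util.List;
   import edu.wpi.first.apriltag.AprilTag;
   import edu.wpi.first.apriltag.AprilTagFieldLayout;
   import edu.wpi.first.math.geometry.Pose3d;
   import edu.wpi.first.math.geometry.Rotation3d;
   import edu.wpi.first.math.util.Units;

   // Two example tags -- ID 0 and ID 1 at placeholder positions with zero rotation.
   List<AprilTag> tags = List.of(
       new AprilTag(0, new Pose3d(3.0, 3.0, 3.0, new Rotation3d())),
       new AprilTag(1, new Pose3d(5.0, 5.0, 5.0, new Rotation3d())));

   // Field dimensions are (length, width) in meters.
   AprilTagFieldLayout customLayout =
       new AprilTagFieldLayout(tags, Units.feetToMeters(54), Units.feetToMeters(27));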
Creating a ``RobotPoseEstimator``
---------------------------------
The ``RobotPoseEstimator`` has a constructor that takes an ``AprilTagFieldLayout`` (see above), a ``PoseStrategy``, and an ``ArrayList<Pair<PhotonCamera, Transform3d>>``. ``PoseStrategy`` has five possible values:

* LOWEST_AMBIGUITY
   * Choose the Pose with the lowest ambiguity.
* CLOSEST_TO_CAMERA_HEIGHT
   * Choose the Pose which is closest to the camera height.
* CLOSEST_TO_REFERENCE_POSE
   * Choose the Pose which is closest to the reference pose set with ``setReferencePose()``.
* CLOSEST_TO_LAST_POSE
   * Choose the Pose which is closest to the last pose calculated.
* AVERAGE_BEST_TARGETS
   * Choose the Pose which is the average of all the poses from each tag.

.. tab-set-code::

   .. code-block:: java

      // Forward Camera
      PhotonCamera cam = new PhotonCamera("testCamera");
      // Cam mounted facing forward, half a meter forward of center, half a meter up from center.
      Transform3d robotToCam = new Transform3d(new Translation3d(0.5, 0.0, 0.5), new Rotation3d(0, 0, 0));

      // ... Add other cameras here

      // Assemble the list of cameras & mount locations
      var camList = new ArrayList<Pair<PhotonCamera, Transform3d>>();
      camList.add(new Pair<PhotonCamera, Transform3d>(cam, robotToCam));

      RobotPoseEstimator robotPoseEstimator = new RobotPoseEstimator(aprilTagFieldLayout, PoseStrategy.CLOSEST_TO_REFERENCE_POSE, camList);

   .. code-block:: c++

      // Forward Camera
      std::shared_ptr<photonlib::PhotonCamera> cameraOne =
          std::make_shared<photonlib::PhotonCamera>("testCamera");
      // Camera is mounted facing forward, half a meter forward of center, half a
      // meter up from center.
      frc::Transform3d robotToCam =
          frc::Transform3d(frc::Translation3d(0.5_m, 0_m, 0.5_m),
                           frc::Rotation3d(0_rad, 0_rad, 0_rad));

      // ... Add other cameras here

      // Assemble the list of cameras & mount locations
      std::vector<
          std::pair<std::shared_ptr<photonlib::PhotonCamera>, frc::Transform3d>>
          cameras;
      cameras.push_back(std::make_pair(cameraOne, robotToCam));

      photonlib::RobotPoseEstimator estimator(
          aprilTags, photonlib::CLOSEST_TO_REFERENCE_POSE, cameras);
Using a ``RobotPoseEstimator``
------------------------------
Calling ``update()`` on your ``RobotPoseEstimator`` will return a ``Pair<Pose3d, Double>``, which includes a ``Pose3d`` of the latest estimated pose (using the selected strategy) along with a ``Double`` of the latency in milliseconds. You should be updating your `drivetrain pose estimator <https://docs.wpilib.org/en/latest/docs/software/advanced-controls/state-space/state-space-pose-estimators.html>`_ with the result from the ``RobotPoseEstimator`` every loop using ``addVisionMeasurement()``. See our `code example <https://www.google.com/>`_ for more.

.. tab-set-code::

   .. code-block:: java

      public Pair<Pose2d, Double> getEstimatedGlobalPose(Pose3d prevEstimatedRobotPose) {
          robotPoseEstimator.setReferencePose(prevEstimatedRobotPose);
          var currentTime = Timer.getFPGATimestamp();
          var result = robotPoseEstimator.update();
          if (result.getFirst() != null) {
              // update() reports latency in milliseconds; convert to seconds before
              // subtracting from the FPGA timestamp to get the measurement timestamp.
              return new Pair<Pose2d, Double>(result.getFirst().toPose2d(), currentTime - result.getSecond() / 1000.0);
          } else {
              return new Pair<Pose2d, Double>(null, 0.0);
          }
      }

   .. code-block:: c++

      std::pair<frc::Pose2d, units::millisecond_t> getEstimatedGlobalPose(
          frc::Pose3d prevEstimatedRobotPose) {
        robotPoseEstimator.SetReferencePose(prevEstimatedRobotPose);
        units::millisecond_t currentTime = frc::Timer::GetFPGATimestamp();
        auto result = robotPoseEstimator.Update();
        if (result.second) {
          return std::make_pair<>(result.first.ToPose2d(),
                                  currentTime - result.second);
        } else {
          return std::make_pair(frc::Pose2d(), 0_ms);
        }
      }
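For context, a minimal Java sketch of feeding that result into ``addVisionMeasurement()`` every loop. The ``poseEstimator`` (a WPILib ``SwerveDrivePoseEstimator``) and the call site in the drivetrain's ``periodic()`` are assumptions for illustration; only ``getEstimatedGlobalPose()`` comes from the example above.

.. code-block:: java

   // Called from the drivetrain's periodic() method.
   var visionResult = getEstimatedGlobalPose(new Pose3d(poseEstimator.getEstimatedPosition()));
   if (visionResult.getFirst() != null) {
       // Second element is the timestamp computed for this vision measurement.
       poseEstimator.addVisionMeasurement(visionResult.getFirst(), visionResult.getSecond());
   }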
Additional ``RobotPoseEstimator`` Methods
-----------------------------------------

``setReferencePose(Pose3d referencePose)``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Updates the stored reference pose when using the CLOSEST_TO_REFERENCE_POSE strategy.

``setLastPose(Pose3d lastPose)``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Updates the stored last pose. Useful for setting the initial estimate when using the CLOSEST_TO_LAST_POSE strategy.
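As an illustrative Java sketch of how these setters are typically used (the ``robotPoseEstimator`` and drivetrain ``poseEstimator`` objects come from the earlier examples; ``startingAutoPose`` is a hypothetical ``Pose2d`` for the robot's known starting position):

.. code-block:: java

   // Keep the CLOSEST_TO_REFERENCE_POSE strategy anchored to the drivetrain's current estimate (call each loop).
   robotPoseEstimator.setReferencePose(new Pose3d(poseEstimator.getEstimatedPosition()));

   // Seed the CLOSEST_TO_LAST_POSE strategy once on startup with a known starting pose.
   robotPoseEstimator.setLastPose(new Pose3d(startingAutoPose));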

source/docs/programming/photonlib/using-target-data.rst

Lines changed: 44 additions & 4 deletions

@@ -1,5 +1,5 @@
- Using Target Data to Get Position
- =================================
+ Using Target Data
+ =================

   A ``PhotonUtils`` class with helpful common calculations is included within ``PhotonLib`` to aid teams in using target data in order to get positional information on the field. This class contains two methods, ``calculateDistanceToTargetMeters()``/``CalculateDistanceToTarget()`` and ``estimateTargetTranslation2d()``/``EstimateTargetTranslation()`` (Java and C++ respectively).

@@ -20,6 +20,19 @@ If your camera is at a fixed height on your robot and the height of the target i

   .. note:: The C++ version of PhotonLib uses the Units library. For more information, see `here <https://docs.wpilib.org/en/stable/docs/software/basic-programming/cpp-units.html>`_.

+ Calculating Distance Between Two Poses
+ --------------------------------------
+ ``getDistanceToPose(Pose2d robotPose, Pose2d targetPose)`` allows you to calculate the distance between two poses. This is useful when using AprilTags, given that there may not be an AprilTag directly on the target.
+
+ .. tab-set-code::
+
+    .. code-block:: java
+
+       double distanceToTarget = PhotonUtils.getDistanceToPose(robotPose, targetPose);
+
+    .. code-block:: c++
+
+       //TODO
+
   Estimating Camera Translation to Target
   ---------------------------------------
   You can get a `translation <https://docs.wpilib.org/en/latest/docs/software/advanced-controls/geometry/pose.html#translation>`_ to the target based on the distance to the target (calculated above) and angle to the target (yaw).
@@ -39,8 +52,9 @@ You can get a `translation <https://docs.wpilib.org/en/latest/docs/software/adva

   .. note:: We are negating the yaw from the camera from CV (computer vision) conventions to standard mathematical conventions. In standard mathematical conventions, as you turn counter-clockwise, angles become more positive.

- Estimating Field Relative Pose
- ------------------------------
+ Estimating Field Relative Pose (Traditional)
+ --------------------------------------------
+
   You can get your robot's ``Pose2d`` on the field using various camera data, target yaw, gyro angle, target pose, and camera position. This method estimates the target's relative position using ``estimateCameraToTargetTranslation`` (which uses pitch and yaw to estimate range and heading), and the robot's gyro to estimate the rotation of the target.

   .. tab-set-code::
@@ -55,3 +69,29 @@ You can get your robot's ``Pose2D`` on the field using various camera data, targ

         // Calculate robot's field relative pose
         frc::Pose2d robotPose = photonlib::EstimateFieldToRobot(
           kCameraHeight, kTargetHeight, kCameraPitch, kTargetPitch, frc::Rotation2d(units::degree_t(-target.GetYaw())), gyro.GetRotation2d(), targetPose, cameraToRobot);
+
+ Estimating Field Relative Pose with AprilTags
+ ---------------------------------------------
+ ``estimateFieldToRobotAprilTag(Transform3d cameraToTarget, Pose3d fieldRelativeTagPose, Transform3d cameraToRobot)`` returns your robot's ``Pose3d`` on the field using the pose of the AprilTag relative to the camera, the pose of the AprilTag relative to the field, and the transform from the camera to the origin of the robot.
+
+ .. tab-set-code::
+
+    .. code-block:: java
+
+       // Calculate robot's field relative pose. getTagPose() returns an Optional, so unwrap it.
+       Pose3d robotPose = PhotonUtils.estimateFieldToRobotAprilTag(target.getBestCameraToTarget(), aprilTagFieldLayout.getTagPose(target.getFiducialId()).get(), cameraToRobot);
+
+    .. code-block:: c++
+
+       //TODO
+
+ Getting the Yaw To a Pose
+ -------------------------
+ ``getYawToPose(Pose2d robotPose, Pose2d targetPose)`` returns the ``Rotation2d`` between your robot and a target. This is useful when turning towards an arbitrary target on the field (ex. the center of the hub in 2022).
+
+ .. tab-set-code::
+
+    .. code-block:: java
+
+       Rotation2d targetYaw = PhotonUtils.getYawToPose(robotPose, targetPose);
+
+    .. code-block:: c++
+
+       //TODO
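As a combined illustration of the two pose-based helpers added above, here is a minimal Java sketch. The ``hubPose`` name and its coordinates are placeholder values, not something defined by PhotonLib, and ``robotPose`` is assumed to come from your drivetrain pose estimator.

.. code-block:: java

   // Hypothetical field-relative pose of the 2022 hub center (placeholder coordinates, meters).
   Pose2d hubPose = new Pose2d(8.23, 4.11, new Rotation2d());

   // Range and bearing to the hub from the robot's current pose estimate; no vision target needs to be in view.
   double rangeToHub = PhotonUtils.getDistanceToPose(robotPose, hubPose);
   Rotation2d yawToHub = PhotonUtils.getYawToPose(robotPose, hubPose);

   // Publish for debugging.
   SmartDashboard.putNumber("RangeToHub", rangeToHub);
   SmartDashboard.putNumber("YawToHubDeg", yawToHub.getDegrees());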
