Commit 4f90d72: Merge branch 'main' into self_inspect
2 parents 6a704c3 + b6d1d29; 25 files changed (+539 / -34 lines)

Lines changed: 217 additions & 0 deletions (new file)
AprilTag Challenges in DECODE presented by RTX
==============================================

What are AprilTags?
-------------------

Developed at the `University of Michigan
<https://april.eecs.umich.edu/software/apriltag>`_, AprilTags are similar to a
2D barcode or a simplified QR Code. Each tag contains a numeric **ID code** and
can be used for **location and orientation**.

In *FIRST* Tech Challenge during the DECODE presented by RTX season, AprilTags
are used in three different ways:

1. On the OBELISK, the AprilTags are used to identify one of three MOTIFS that
   are randomized each MATCH.
2. On the GOALS, AprilTags can be used to target the GOAL so that teams can
   launch ARTIFACTS accurately into the correct GOAL.
3. On the GOALS, AprilTags can be used as a visual odometry system, using the
   information that AprilTags provide to calculate the position of the ROBOT
   on the FIELD (through a process called localization). See the
   :doc:`AprilTag Localization <../../vision_portal/apriltag_localization/apriltag-localization>`
   page for more information.
.. figure:: images/decode-apriltags.png
   :width: 50%
   :align: center
   :alt: Image showing the DECODE field and AprilTag locations

   AprilTag IDs and Locations on the DECODE field.
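Once a tag is detected, an OpMode typically branches on the detection's numeric ID. As a rough sketch (this helper is not part of the FTC SDK), the function below maps DECODE tag IDs to their roles; the specific ID-to-MOTIF assignments shown here are illustrative assumptions, so always confirm the actual IDs against the official Game Manual.

```java
// Hypothetical helper (not from the SDK samples): maps a detected AprilTag ID
// to its assumed role on the DECODE field. The ID assignments below are
// illustrative assumptions -- verify them against the official Game Manual.
public class DecodeTagGuide {
    public static String describeTag(int id) {
        switch (id) {
            case 20: return "BLUE GOAL";
            case 21: return "OBELISK MOTIF (GPP)";
            case 22: return "OBELISK MOTIF (PGP)";
            case 23: return "OBELISK MOTIF (PPG)";
            case 24: return "RED GOAL";
            default: return "UNKNOWN TAG";
        }
    }
}
```

In a real OpMode the ID would come from each ``AprilTagDetection`` in the list returned by the AprilTag processor.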
AprilTags with Difficult Environmental Lighting
-----------------------------------------------

One of the challenges teams will face this season is ensuring that their
cameras can see the AprilTags correctly. AprilTag detection relies on strong
contrast between the white and black portions of the tag - if the lighting in
the environment doesn't provide enough contrast, the AprilTag algorithm may
not detect the tag. Fortunately, virtually every webcam offers settings that
can help correct for environmental issues.

An excellent example situation came up in a warehouse. The DECODE field was
set up in the warehouse, and a camera was run with default settings using the
``ConceptAprilTagEasy`` sample. When viewing the camera stream preview, the
AprilTag on the OBELISK was completely washed out by sunlight striking the
OBELISK directly on a sunny day, making the AprilTag impossible to see. A
different camera at a slightly different angle took another picture of the
same scene; in it the AprilTag can be seen, but there is far too much direct
light reflecting off the tag for it to be recognized. This scenario is very
similar to a gymnasium hosting an event, where on a sunny day light can
interfere with a camera's ability to view an AprilTag. What can be done?
.. only:: html

   .. grid:: 1 2 2 3
      :gutter: 2

      .. grid-item-card::
         :class-header: sd-bg-dark font-weight-bold sd-text-white
         :class-body: sd-text-left body

         Image #1 - Example
         ^^^

         .. figure:: images/1-decode-washed-out-obelisk.*
            :align: center
            :width: 95%
            :alt: Image of DECODE field with obelisk AprilTag unable to be seen

         +++

         Washed Out AprilTag on OBELISK

      .. grid-item-card::
         :class-header: sd-bg-dark font-weight-bold sd-text-white
         :class-body: sd-text-left body

         Image #2 - Alternate View
         ^^^

         .. figure:: images/2-decode-washed-out-obelisk.*
            :align: center
            :width: 85%
            :alt: Image of DECODE field from another perspective

         +++

         Alternate View of OBELISK

      .. grid-item-card::
         :class-header: sd-bg-dark font-weight-bold sd-text-white
         :class-body: sd-text-left body

         Image #3 - Alternate View
         ^^^

         .. figure:: images/5-decode-warehouse-lighting.*
            :align: center
            :width: 85%
            :alt: Image showing light coming in from windows of warehouse

         +++

         Sunlight Entering Warehouse

.. only:: latex

   .. list-table:: Different Views of Challenging Scenario
      :class: borderless

      * - .. image:: images/1-decode-washed-out-obelisk.*
        - .. image:: images/2-decode-washed-out-obelisk.*
        - .. image:: images/5-decode-warehouse-lighting.*

The best way to counter this environmental lighting is to use the webcam
settings within the SDK to adjust both the Gain and the Exposure settings at
the same time. By simultaneously minimizing the exposure (lessening the amount
of time light is allowed to strike the sensor each image frame) and maximizing
the gain (amplifying the signal from the sensor), the resulting image will be
darker than normal, but elements of high contrast, like AprilTags, will be
accentuated, allowing them to be recognized. You can experiment with this
using the ``ConceptAprilTagOptimizeExposure`` sample.
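The selection logic itself is simple: take the shortest exposure and the highest gain the camera reports it supports. Here is a minimal, SDK-free sketch of that choice; in a real OpMode the chosen numbers would be applied through the SDK's camera controls (``ExposureControl`` and ``GainControl``), as demonstrated in the ``ConceptAprilTagOptimizeExposure`` sample. The method and class names below are illustrative, not SDK APIs.

```java
// Sketch of the "minimum exposure, maximum gain" choice used to accentuate
// high-contrast features like AprilTags. The range arguments stand in for
// the min/max values a webcam reports; applying the result on the robot is
// done via the SDK's camera controls, omitted here so the logic stands alone.
public class LowLightSettings {
    public final long exposureMs; // shutter time per frame, in milliseconds
    public final int gain;        // sensor signal amplification

    public LowLightSettings(long exposureMs, int gain) {
        this.exposureMs = exposureMs;
        this.gain = gain;
    }

    public static LowLightSettings forHighContrast(
            long minExposureMs, long maxExposureMs, int minGain, int maxGain) {
        // Shortest supported exposure, clamped to at least 1 ms so the
        // sensor still collects some light each frame.
        long exposure = Math.min(Math.max(minExposureMs, 1), maxExposureMs);
        // Highest supported gain amplifies what little light was captured.
        return new LowLightSettings(exposure, Math.max(minGain, maxGain));
    }
}
```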
Sure enough, by minimizing the Exposure and maximizing the Gain of the webcam,
the resulting images could be used to recognize the problematic AprilTags. For
more examples, the ``RobotAutoDriveToAprilTag...`` sample OpModes also use
this technique of adjusting the camera's exposure and gain settings to ensure
the AprilTags are readable under most conditions.

.. tip::
   One big advantage of this technique (minimizing exposure while maximizing
   gain) is that it is ALSO very popular for reducing motion blur when reading
   AprilTags while moving - so it has more than one benefit!

Here are examples of the images once the exposure and gain are set
appropriately: one image has AprilTag processing enabled to show that the
AprilTag is being detected properly, and the other has processing disabled so
that we can see the raw image being returned by the webcam.
.. only:: html

   .. grid:: 1 2 2 2
      :gutter: 2

      .. grid-item-card::
         :class-header: sd-bg-dark font-weight-bold sd-text-white
         :class-body: sd-text-left body

         Image #4 - Processed Image
         ^^^

         .. figure:: images/3-decode-recognized-obelisk.*
            :align: center
            :width: 95%
            :alt: Image of DECODE field with obelisk AprilTag being processed

         +++

         Processed Image showing Detections

      .. grid-item-card::
         :class-header: sd-bg-dark font-weight-bold sd-text-white
         :class-body: sd-text-left body

         Image #5 - Raw Image
         ^^^

         .. figure:: images/4-decode-recognized-obelisk-raw.*
            :align: center
            :width: 95%
            :alt: Raw image of DECODE field without AprilTag processing

         +++

         Image without AprilTag processing

.. only:: latex

   .. list-table:: Resulting Images
      :class: borderless

      * - .. image:: images/3-decode-recognized-obelisk.*
        - .. image:: images/4-decode-recognized-obelisk-raw.*

Reading Multiple AprilTags on the OBELISK
-----------------------------------------

The OBELISK is an equilateral triangular prism (we know, real obelisks have 4
sides) positioned with one of its rectangular faces centered on the GOAL side
of the FIELD, just outside the FIELD perimeter. When ROBOTS are set up on the
FIELD contacting their ALLIANCE's GOAL, it is a very real possibility that the
ROBOT's camera will see and process multiple AprilTags.

.. warning::
   It might seem logical to read both AprilTags and use those two tags to
   determine (and verify) which AprilTag is actually being seen. However,
   there is no defined order for AprilTags on an OBELISK, so this is not
   reliable.

.. figure:: images/6-decode-obelisk-tags.*
   :align: center
   :width: 75%
   :alt: Image showing OBELISK with more than one AprilTag visible

   View of AprilTags on OBELISK from BLUE GOAL

A reliable way to determine which AprilTag is truly showing on the FIELD is to
move the ROBOT into a position where the AprilTag on the front face of the
OBELISK is the only tag that can be viewed.
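That advice can be encoded directly: accept a MOTIF reading only when exactly one OBELISK tag is in view. The sketch below is hypothetical and works on a plain list of detected IDs rather than SDK detection objects; the 21-23 OBELISK ID range is an assumption for illustration, so confirm the real IDs in the Game Manual.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical filter: trust an OBELISK reading only when a single OBELISK
// tag is visible. The 21-23 ID range is an assumption for illustration.
public class ObeliskMotifFilter {
    /** Returns the lone OBELISK tag ID, or -1 if zero or multiple are visible. */
    public static int soleObeliskTag(List<Integer> detectedIds) {
        List<Integer> obeliskTags = new ArrayList<>();
        for (int id : detectedIds) {
            if (id >= 21 && id <= 23) obeliskTags.add(id);
        }
        return obeliskTags.size() == 1 ? obeliskTags.get(0) : -1;
    }
}
```

An OpMode loop could keep polling until this returns a valid ID, then latch that MOTIF for the rest of the MATCH.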
Good Luck this season!

docs/source/color_processing/color-locator-challenge/color-locator-challenge.rst

Lines changed: 2 additions & 9 deletions

@@ -555,16 +555,9 @@ example, that can overlap if desired.
 
 Using two processors
 
-This ends the tutorial's 3 pages on ColorLocator:
 
-* :doc:`Discover <../color-locator-discover/color-locator-discover>`,
-* :doc:`Explore <../color-locator-explore/color-locator-explore>`,
-* **Challenge**
-
-The final page of this tutorial provides optional info on :doc:`Color Spaces
-<../color-spaces/color-spaces>`.
-
-Best of luck as you apply these tools to your Autonomous and TeleOp OpModes!
+The next ColorLocator page called :doc:`Color Locator (Round Blobs) <../color-locator-round-blobs/color-locator-round-blobs>`
+covers detection of round objects.
 
 ============

docs/source/color_processing/color-locator-discover/color-locator-discover.rst

Lines changed: 24 additions & 14 deletions

@@ -75,15 +75,17 @@ Java section below:
 
 2. Click ``Create New OpMode``\ , enter a new name such as
    "ColorLocator_Monica_v01", and select the Sample OpMode
-   ``ConceptVisionColorLocator``.
+   ``ConceptVisionColorLocator_Rectangle``.
 
-3. At the top of the Blocks screen, you can change the type from "TeleOp"
+3. Near the beginning of the OpMode code, change `ARTIFACT_PURPLE` to `BLUE`.
+
+4. At the top of the Blocks screen, you can change the type from "TeleOp"
    to "Autonomous", since this Sample OpMode does not use gamepads.
 
-4. If using the built-in camera of an RC phone, drag out the relevant
+5. If using the built-in camera of an RC phone, drag out the relevant
    Block from the left-side ``VisionPortal.Builder`` toolbox.
 
-5. Save the OpMode, time to try it!
+6. Save the OpMode, time to try it!
 
 .. tab-item:: Java
    :sync: java
@@ -92,15 +94,17 @@ Java section below:
 
 2. In the ``teamcode`` folder, add/create a new OpMode with a name such
    as "ColorLocator_Javier_v01.java", and select the Sample OpMode
-   ``ConceptVisionColorLocator.java``.
+   ``ConceptVisionColorLocator_Rectangle.java``.
+
+3. Near the beginning of the OpMode code, change `ARTIFACT_PURPLE` to `BLUE`.
 
-3. At about Line 63, you can change ``@TeleOp`` to ``@Autonomous``\ ,
+4. At about Line 63, you can change ``@TeleOp`` to ``@Autonomous``\ ,
    since this Sample OpMode does not use gamepads.
 
-4. If using the built-in camera of an RC phone, follow the OpMode
+5. If using the built-in camera of an RC phone, follow the OpMode
    comments to specify that camera.
 
-5. Click "Build", time to try it!
+6. Click "Build", time to try it!
 
 Running the Sample OpMode
 +++++++++++++++++++++++++
@@ -197,12 +201,16 @@ In this example, the Region of Interest (ROI) contains only one Blob of the
 default target color BLUE. You could probably move your camera to achieve the
 same result - with the help of previews.
 
-The **first column** shows the **Area**, in pixels, of the Blob (contour, not
+The **first column** shows the (X, Y) position of the **Center** of the boxFit
+rectangle. With the origin at the full image's top left corner, X increases to
+the right and Y increases downward.
+
+The **second column** shows the **Area**, in pixels, of the Blob (contour, not
 boxFit). By default, the Sample OpMode uses a **filter** to show Blobs between
 50 and 20,000 pixels. Also by default, the Sample uses a **sort** tool to
 display multiple Blobs in descending order of Area (largest is first).
 
-The **second column** shows the **Density** of the Blob contour. From the
+The **third column** shows the **Density** of the Blob contour. From the
 Sample comments:
 
 ..
@@ -212,7 +220,7 @@ Sample comments:
    contour. The density is the ratio of Contour-area to Convex Hull-area.*
 
 
-The **third column** shows the **Aspect Ratio of the boxFit**, the best-fit
+The **fourth column** shows the **Aspect Ratio of the boxFit**, the best-fit
 rectangle around the contour:
 
 ..
@@ -226,9 +234,8 @@ rectangle around the contour:
 **tilted** at some angle, namely not horizontal. This will be discussed
 more in a later page.
 
-The **fourth column** shows the (X, Y) position of the **Center** of the boxFit
-rectangle. With the origin at the full image's top left corner, X increases to
-the right and Y increases downward.
+The fifth and sixth columns are described in a later page called
+:doc:`Color Locator (Round Blobs) <../color-locator-round-blobs/color-locator-round-blobs>`.
 
 Blob Formation
 --------------
@@ -326,6 +333,9 @@ After that, the following page called :doc:`Challenge
 <../color-locator-challenge/color-locator-challenge>` shows how to **access
 more OpenCV features** not covered in the Sample OpMode.
 
+Then a page called :doc:`Color Locator (Round Blobs)
+<../color-locator-round-blobs/color-locator-round-blobs>` covers detection of round objects.
+
 ============
 
 *Questions, comments and corrections to [email protected]*
