Models/model_library/AutoSpeed/README.md
21 lines changed: 21 additions & 0 deletions
@@ -6,6 +6,27 @@ Maintaining the vehicle speed and keeping a safe distance from the vehicle in front i
determine the closest in-path object, the AutoSpeed network is used. This network is inspired by the YOLOv11 architecture, with the C3k2 block replaced by a new ASC block to improve CIPO (closest in-path object) detection.
The AutoSpeed model detects all foreground objects and classifies them into three categories, depending on each object's position with respect to the predicted future driving path of the ego-car:

- objects directly within the future driving path of the ego-car
- objects cutting into or out of the future driving path of the ego-car
- objects outside of the future driving path of the ego-car
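As a rough sketch of this three-way classification, one could test each bounding box's bottom-centre point against a polygon representing the predicted path in image space. The function names, the polygon representation, and the cut-in/cut-out heuristic (a change of in-path status between frames) are all illustrative assumptions, not the actual AutoSpeed logic:

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting point-in-polygon test; polygon is a list of (x, y) vertices."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            # x-coordinate where the edge crosses the horizontal line at y
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside


def classify_object(bbox, path_polygon, prev_in_path=None):
    """Classify a detection relative to the predicted ego path (hypothetical).

    bbox is (x_min, y_min, x_max, y_max) in image coordinates; we use the
    bottom-centre point, as in the distance-measurement step. An object whose
    in-path status changed since the previous frame is treated as cutting
    into or out of the path.
    """
    cx = (bbox[0] + bbox[2]) / 2.0
    cy = bbox[3]
    in_path = point_in_polygon(cx, cy, path_polygon)
    if prev_in_path is not None and prev_in_path != in_path:
        return "cut-in" if in_path else "cut-out"
    return "in-path" if in_path else "out-of-path"
```

For example, with a trapezoidal path polygon, a box whose bottom edge sits inside the trapezoid would be labelled "in-path", and one that was in-path in the previous frame but is no longer would be labelled "cut-out".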
We use inverse-perspective mapping (IPM) to convert image pixels to world coordinates: we take the bottom-centre point of the in-path object's bounding box and measure its real-world distance through the IPM. A separate Kalman filter is then used to track this distance and infer the object's speed.
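A minimal sketch of this distance-and-speed pipeline, assuming the IPM is expressed as a planar homography from the image plane to the ground plane; the homography values, class names, and noise parameters below are illustrative placeholders, not the calibrated values used by AutoSpeed:

```python
import numpy as np


def pixel_to_ground(h_matrix, u, v):
    """Map an image pixel (u, v) to ground-plane (x, y) metres via an IPM homography."""
    p = h_matrix @ np.array([u, v, 1.0])
    return p[0] / p[2], p[1] / p[2]


class DistanceKalman:
    """Constant-velocity Kalman filter over the state [distance, speed]."""

    def __init__(self, dt, q=1.0, r=0.5):
        self.x = None                                  # state: [distance, speed]
        self.P = np.eye(2) * 10.0                      # state covariance
        self.F = np.array([[1.0, dt], [0.0, 1.0]])     # constant-velocity transition
        self.H = np.array([[1.0, 0.0]])                # only distance is observed
        self.Q = np.eye(2) * q                         # process noise (placeholder)
        self.R = np.array([[r]])                       # measurement noise (placeholder)

    def update(self, z):
        if self.x is None:
            self.x = np.array([z, 0.0])                # initialise from first measurement
            return self.x
        # predict
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # correct with the new distance measurement z
        y = z - (self.H @ self.x)[0]
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K[:, 0] * y
        self.P = (np.eye(2) - K @ self.H) @ self.P
        return self.x
```

Each frame, the bottom-centre pixel of the CIPO's box is mapped through `pixel_to_ground`, and the resulting range feeds `DistanceKalman.update`, whose state gives both a smoothed distance and a relative-speed estimate.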
You can find more information about how we calculate the inverse-perspective mapping for the Waymo Open Dataset here: https://github.com/autowarefoundation/autoware_vision_pilot/tree/main/VisionPilot/Middleware_Recipes/Calibration
### You can see our object tracking implementation here: