Object Detection
Context: As of this year, URC Rules specify two new waypoints during the Autonomy mission that require the Rover to detect and navigate toward two objects placed on the ground.
Each object will have a GNSS coordinate within its vicinity (<10 m), and autonomous detection of the objects will be required. The first object will be an orange rubber mallet. The second will be a standard 1 L wide-mouthed plastic water bottle of unspecified color/markings (approximately 21.5 cm tall by 9 cm in diameter).
Currently, the perception system only supports detection of ARTags, so we must experiment with and implement a detection system for these new objects. One approach is to use a learning-based instance segmentation model such as YOLO to extract the mallet and water bottle from the ZED camera feed.
**Interface** (Subject to change)
Node: detect_objects
Subscribes: sensor_msgs/Image
Publishes: Object.msg
  string object_type
  float32 detection_confidence
  float32 image_x
  float32 image_y
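For reference, a populated Object message might look like the following. This sketch uses a plain Python dataclass as a stand-in for the generated ROS message class (the field names mirror the Object.msg definition above; the example values are illustrative):

```python
from dataclasses import dataclass

@dataclass
class Object:
    # Stand-in for the Object.msg interface above, not the generated ROS class.
    object_type: str
    detection_confidence: float
    image_x: float
    image_y: float

# Example: a mallet detected near the center of a 1280x720 frame.
msg = Object(object_type="mallet",
             detection_confidence=0.87,
             image_x=642.0,
             image_y=388.5)
```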
Rough Steps:
- Collect data and train an instance segmentation model (such as YOLO) to detect the objects
- Create a subscriber for the Image topic
- Write a function that takes an Image and passes it into the model to detect the objects
- Create a publisher for the Object topic
- Write a function that publishes the detected Objects from the Image message to the Object topic
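The model-output-to-Object step above can be sketched as a small post-processing function. The detection format `(class_id, confidence, cx, cy)`, the class-id mapping, and the confidence cutoff are all assumptions for illustration; the real YOLO output structure will differ:

```python
# Assumed training label order; the actual mapping depends on the dataset config.
CLASS_NAMES = {0: "mallet", 1: "water_bottle"}
CONF_THRESHOLD = 0.5  # assumed cutoff; tune on validation data

def detections_to_objects(raw_detections):
    """Filter low-confidence hits and map class ids to object types.

    raw_detections: iterable of (class_id, confidence, cx, cy) tuples,
    where (cx, cy) is the detection's center in pixel coordinates.
    Returns a list of dicts shaped like the Object.msg fields.
    """
    objects = []
    for class_id, conf, cx, cy in raw_detections:
        if conf < CONF_THRESHOLD or class_id not in CLASS_NAMES:
            continue
        objects.append({
            "object_type": CLASS_NAMES[class_id],
            "detection_confidence": float(conf),
            "image_x": float(cx),
            "image_y": float(cy),
        })
    return objects
```

In the node, the publisher callback would run the model on each incoming Image, pass the raw detections through a function like this, and publish one Object per surviving detection.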
We also need a node to put the detected object(s) into the TF tree so the rover can navigate toward them. This involves translating each detection from image_x, image_y pixel space to an SE3 pose relative to the rover.
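As a sketch of that pixel-to-pose translation, the standard pinhole camera model back-projects a pixel plus a depth reading (the ZED provides per-pixel depth) into a 3D point in the camera frame. The intrinsics below (fx, fy, cx, cy) are illustrative values, not real ZED calibration; building the full SE3 pose additionally requires the camera-to-rover transform from the TF tree:

```python
import numpy as np

def pixel_to_camera_point(image_x, image_y, depth, fx, fy, cx, cy):
    """Back-project a pixel with known depth into the camera optical frame
    using the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy.
    Returns a 3D point (X, Y, Z) in meters."""
    x = (image_x - cx) * depth / fx
    y = (image_y - cy) * depth / fy
    return np.array([x, y, depth])

# Illustrative intrinsics, not real ZED calibration values.
point = pixel_to_camera_point(640.0, 360.0, 2.0,
                              fx=700.0, fy=700.0, cx=640.0, cy=360.0)
# A pixel at the principal point projects straight ahead: [0, 0, depth].
```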