Workshop 10 ‐ Object detection and counting

gcielniak edited this page Dec 11, 2025 · 3 revisions

Object detection

Task 1.1 - Object detection in 3D

The repository contains a new node detector_3d.py which demonstrates how to detect coloured objects in images and then transform the detections from image coordinates to the camera frame and on to a global frame. The node subscribes to colour and depth images and uses the 'camera_info' topic, which provides the camera geometry (focal length, resolution, etc.). The colour detector is very similar to the basic one used in previous workshops. The additional functionality transforms the image coordinates into the camera frame and subsequently from the camera frame into a global frame. Study the file first to familiarise yourself with the basic workflow. Ask questions if anything is unclear. Add the script file to your repository project (don't forget to update your setup.py).
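The image-to-camera step above can be sketched with the standard pinhole back-projection, using the intrinsics published on the 'camera_info' topic. This is a minimal illustration, not the workshop code itself; the function name and the example intrinsic values are hypothetical.

```python
# A minimal sketch of the image-to-camera transform that detector_3d.py
# performs, assuming a pinhole model with intrinsics from 'camera_info'
# (K = [fx 0 cx; 0 fy cy; 0 0 1]). Values below are illustrative.

def pixel_to_camera(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with a depth reading (metres) into the
    camera frame, using the optical convention: x right, y down, z forward."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    z = depth
    return (x, y, z)

# example: an object centred in a 640x480 image, 2 m away
print(pixel_to_camera(320, 240, 2.0, fx=500.0, fy=500.0, cx=320.0, cy=240.0))
# -> (0.0, 0.0, 2.0)
```

The subsequent camera-to-global step is what the node delegates to the tf2 library, so it does not need to be written by hand.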

  • Now, launch the simulator with the default environment: ros2 launch limo_gazebosim limo_gazebo_diff.launch.py.
  • Insert a red coloured object in Gazebo and place it in front of the robot. You might need to manually move the robot and the object to a location with more free space around them.
  • Run the detector_3d node by issuing ros2 run rob2002_<project> detector_3d. You should see debug windows visualising the image processing pipeline with the colour and depth images. The node publishes the detected objects in 3D on the /limo/object_location topic.
  • Move the coloured object around and note its position in Gazebo (right-click on the object and inspect its pose property in the left panel of the simulator), then compare it to the output of the detector. Why is there a discrepancy between the calculated location and the values read from the simulator?

Task 1.2 - Object localisation in a map

  • Change the detector so that the global_frame now corresponds to the map frame.
  • In addition to the simulator, run the navigation node together with rviz visualisation.
  • Set up the simulation as above with the colour object in front of the robot.
  • Run the updated detector, and add the /object_location topic to rviz.
  • Note the calculated object location in relation to its real location.
  • Try the setup out with multiple objects of the same colour.
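The frame change above is handled by tf2 in the actual node (a lookup from the chosen global_frame to the camera frame), but the underlying maths is worth seeing once. The sketch below shows the same idea in 2D with hypothetical names: rotate a detection by the robot's heading, then translate by the robot's position in the global frame.

```python
import math

# A hypothetical 2D illustration of the robot-to-global transform that
# tf2 performs for the detector: rotation by the robot's yaw followed by
# translation to the robot's position in the global (e.g. 'map') frame.

def robot_to_global(x_r, y_r, robot_x, robot_y, robot_yaw):
    """Transform a point (x_r, y_r) from the robot frame into the global
    frame, given the robot's global pose (robot_x, robot_y, robot_yaw)."""
    x_g = robot_x + x_r * math.cos(robot_yaw) - y_r * math.sin(robot_yaw)
    y_g = robot_y + x_r * math.sin(robot_yaw) + y_r * math.cos(robot_yaw)
    return (x_g, y_g)

# robot at (1, 2) facing +90 degrees; an object detected 3 m straight ahead
print(robot_to_global(3.0, 0.0, 1.0, 2.0, math.pi / 2))
# -> approximately (1.0, 5.0)
```

In 3D the same operation uses a quaternion rotation instead of a single yaw angle, which is exactly what tf2's transform utilities take care of.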

Object counting

Task 2.1 - Object counting in 3D

The repository contains a new node counter_3d.py demonstrating how to count the detected objects in global coordinates, with a simple filter preventing double counting. The node subscribes to the object_location topic and keeps track of all detected objects. Each new detection is first checked for its distance to all objects counted so far; if it lies close to an existing object (distance below detection_threshold), it is ignored. This allows the robot to detect objects from multiple viewpoints without registering multiple counts. Add the script file to your repository project.
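The double-counting filter described above can be sketched in a few lines. The function and variable names are illustrative, not necessarily those used in counter_3d.py.

```python
import math

# A minimal sketch of the double-counting filter: a new detection is
# compared against all objects counted so far and ignored if it lies
# within detection_threshold metres of any of them.

def register(detection, counted, detection_threshold=0.5):
    """Append detection (x, y, z) to counted unless it is within
    detection_threshold metres of an already-counted object."""
    for obj in counted:
        if math.dist(detection, obj) < detection_threshold:
            return False  # close to an existing object: likely a re-detection
    counted.append(detection)
    return True

objects = []
register((1.0, 0.0, 0.2), objects)   # new object -> counted
register((1.1, 0.05, 0.2), objects)  # same object, new viewpoint -> ignored
register((3.0, 1.0, 0.2), objects)   # a different object -> counted
print(len(objects))  # -> 2
```

Because the check is a plain Euclidean distance in global coordinates, it only works once the detections have been transformed out of the camera frame, which is why the counter consumes the detector's 3D output rather than raw image detections.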

  • To see how the counter works, launch the simulator with the default environment: ros2 launch limo_gazebosim limo_gazebo_diff.launch.py.
  • Insert a number of small red objects in Gazebo and place them around the robot so that not all of them are immediately visible. You might need to move the robot and the objects manually into a location with more free space around. Remember that you can copy and paste one object to avoid editing multiple objects.
  • Run the detector node by issuing ros2 run rob2002_tutorial detector_3d. This time, the detector is not visualising the image processing steps to reduce processing delays but you can change that by setting visualisation=True if required.
  • Run the counter node by issuing ros2 run rob2002_<project> counter_3d. The node prints out a full list of all objects in the terminal.
  • You can also visualise the counted objects in rviz. Run rviz, change Fixed Frame to odom, add Robot Model with the Description Topic set to /robot_description, and then use Add/By topic to add /object_count_array. You should see arrows indicating the locations of the individual objects.

Task 2.2 - Counting and moving

The new counter allows the robot to register objects from different viewpoints.

  • Run the keyboard teleoperation node ros2 run teleop_twist_keyboard teleop_twist_keyboard and slowly move the robot around so that you also register those objects which were initially outside of the robot's field of view.
  • The counter's key parameter is detection_threshold. Set it to different values and note the counter's behaviour with different robot speeds and object sizes.
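Before experimenting on the robot, it can help to see numerically why detection_threshold matters. The sketch below (hypothetical names, synthetic data) shows the two failure modes: too small a threshold double-counts the same object seen from two viewpoints, while too large a threshold merges genuinely distinct objects.

```python
import math

# A hypothetical illustration of the effect of detection_threshold on the
# final count, using 2D detections with a little viewpoint-induced drift.

def count(detections, threshold):
    """Count detections, ignoring any that fall within threshold metres
    of an already-counted one (same rule as the counter's filter)."""
    counted = []
    for d in detections:
        if all(math.dist(d, c) >= threshold for c in counted):
            counted.append(d)
    return len(counted)

# two real objects 1 m apart, each detected twice with ~0.1-0.15 m drift
detections = [(0.0, 0.0), (0.15, 0.05), (1.0, 0.0), (1.1, -0.05)]
print(count(detections, 0.05))  # too small -> 4 (double counting)
print(count(detections, 0.5))   # reasonable -> 2 (correct)
print(count(detections, 2.0))   # too large -> 1 (objects merged)
```

Faster robot motion and larger odometry drift increase the apparent spread of re-detections, which is why the best threshold also depends on robot speed and object size.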

Real robot

Try the detector and counter on the real robot with real coloured objects. You will need to set the real_robot variable in the detector to True and adjust the ranges of the colour filters. Try it out with odometry first and then use map-based navigation.
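The colour-range adjustment mentioned above usually amounts to loosening the thresholds: reds seen by a real camera are duller and less uniform than the vivid reds rendered in Gazebo. The ranges below are illustrative starting points only, not calibrated values, and the helper mimics what cv2.inRange does per pixel.

```python
# Illustrative HSV ranges (assumption: OpenCV's H in 0-179, S and V in
# 0-255). Real-camera reds tend to need lower saturation/value bounds
# than the simulator's; tune these against your own camera images.

SIM_RED = ((0, 200, 100), (10, 255, 255))   # simulator: vivid, uniform red
REAL_RED = ((0, 120, 60), (10, 255, 255))   # real camera: duller, darker reds

def in_range(hsv, lo, hi):
    """Per-pixel equivalent of cv2.inRange: True if each channel of the
    (h, s, v) pixel lies within the corresponding [lo, hi] bound."""
    return all(l <= c <= u for c, l, u in zip(hsv, lo, hi))

pixel = (5, 150, 90)                 # a dull red as a real camera might see it
print(in_range(pixel, *SIM_RED))     # -> False: missed by the simulator range
print(in_range(pixel, *REAL_RED))    # -> True: caught after adjustment
```

Note that red also wraps around H = 179 in OpenCV's HSV space, so a robust red filter typically combines two ranges (near 0 and near 179).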
