
Workshop 4 ‐ Robot tools

Riccardo Polvara edited this page Oct 20, 2025 · 12 revisions

Preparations

Task 1: Tools

  1. RQt tools are very convenient for inspecting image topics. First, install the rqt_image_view package with sudo apt-get install ros-humble-rqt-image-view. While the cameras on the real/simulated robot are running, issue the following command to visualise the colour image (check the image topic name, as it differs slightly between the real and simulated robot):

    ros2 run rqt_image_view rqt_image_view --ros-args -r image:=/limo/depth_camera_link/image_raw

    You can also omit the remapping argument and select an image topic from the list available through the GUI.


  2. The basic tool for viewing and saving images is image_view (sudo apt-get install ros-humble-image-view), but it has some limitations when it comes to accepting topics with different QoS settings. We can still use it for the image stream originating from the simulated robot:

    ros2 run image_view image_view --ros-args -r image:=/limo/depth_camera_link/image_raw

  3. The rosbag2 package allows for convenient recording and replaying of different types of ROS topics, including images. To record a rosbag file, issue

    ros2 bag record [-o bagfilename] <topics>

    You can stop the recording with CTRL+C in the terminal where it is running. After the recording, replay the recorded file:

    ros2 bag play <filename>

    Some of the sensor topics on the real robot might require overriding QoS policies which is covered in the following article.
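As a sketch of such an override (the topic name and policy values below are illustrative; check the actual publisher QoS with ros2 topic info -v <topic> on your robot):

```shell
# Write a QoS override file for replay. The /scan topic and the
# best_effort/keep_last values are illustrative; match them to the QoS
# reported by: ros2 topic info -v /scan
cat > qos_overrides.yaml <<'EOF'
/scan:
  reliability: best_effort
  history: keep_last
  depth: 10
EOF
# Replay the bag with the overrides applied (requires a sourced ROS 2 environment):
# ros2 bag play <filename> --qos-profile-overrides-path qos_overrides.yaml
```

The override file maps each topic name to the QoS policies to use on playback, so a subscriber with a stricter or looser profile can still receive the data.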

    As part of this task, while you navigate your robot around using the teleoperation node (ros2 run teleop_twist_keyboard teleop_twist_keyboard), record a rosbag containing the topics /odom, /limo/depth_camera_link/points, and /scan. Now, open RViz with the following command

    rviz2 -d /opt/ros/lcas/install/limo_gazebosim/share/limo_gazebosim/rviz/urdf.rviz

    select base_link as the Fixed Frame if it is not already set, and add an Odometry visualisation marker by clicking on the Add button and then on the By topic tab.


    Once added, expand the drop-down menu and untick the Covariance option. Now, kill the Gazebo simulation or the Zenoh connection to your robot. You can verify that no topics are being streamed to your workstation if ros2 topic list shows nothing other than /rosout and /parameter_events. At this point, play the rosbag you previously recorded and watch the recorded topics in RViz.
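Putting the recording step together, a possible invocation looks like this (the bag name is a placeholder of mine; the point-cloud topic is the simulated robot's):

```shell
# Topics from the simulated LIMO; the real robot's camera topics differ slightly
TOPICS="/odom /limo/depth_camera_link/points /scan"
BAG=workshop4_bag
# Record into a named bag (requires a sourced ROS 2 environment; stop with CTRL+C):
# ros2 bag record -o "$BAG" $TOPICS
# Replay it later with:
# ros2 bag play "$BAG"
```

With ROS 2 sourced and the simulation running, uncomment the two ros2 bag lines to run them as-is.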

  4. (Bonus) Some scenarios might involve streaming directly from a camera or from image/video files (e.g. for testing, training and annotation without a robot). The most straightforward way is to use the image_publisher package. The node can be run as follows:

    ros2 run image_publisher image_publisher_node <input>

    where <input> can be either a video file name (e.g. test.mp4) or a camera device (e.g. /dev/video0).
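For instance (both input values below are placeholders; use a file or device that exists on your machine):

```shell
# Input can be a video file or a camera device; both values here are placeholders
INPUT=test.mp4          # or: INPUT=/dev/video0
# Start publishing frames (requires a sourced ROS 2 environment):
# ros2 run image_publisher image_publisher_node "$INPUT"
# Inspect the resulting stream with rqt_image_view as in step 1:
# ros2 run rqt_image_view rqt_image_view
```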

Task 2 - Create a local workspace for developing your code

The following instructions apply whether you are developing a solution on a PC and interfacing with the simulator, or controlling the real LIMO robot.

  1. Follow https://docs.ros.org/en/humble/Tutorials/Beginner-Client-Libraries/Creating-A-Workspace/Creating-A-Workspace.html
  • e.g. create a directory cmp9767_ws as the root of your workspace. Usually, this is located in the home directory of your system:
    mkdir -p $HOME/cmp9767_ws/src
  2. Optional (only if you feel confident enough in ROS, not essential):
  • discuss the dependencies you may need and consider them,
  • complete the package.xml with your own information.
  3. Decide on a name for your repository and create it to keep all your work in it (e.g. cmp9767_code) - you may want to follow the official instructions.

    cd $HOME/cmp9767_ws/src
    mkdir cmp9767_code
    cd cmp9767_code
  4. Create your own package(s) in the workspace and keep track of all developments there. Only add your implementation to your source code repository (what is under src/ in your workspace):

    ros2 pkg create --build-type ament_python <package_name>
  • you may want to include here the tflistener or the mover script, or even the script you wrote as part of Workshop 2. To better understand the structure of a package, please refer to the cmp9767_tutorial package that comes with the Docker image of this module.
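The repository and package steps above can be chained together as in this sketch; the package name cmp9767_solutions and the node name mover are placeholders of mine, while colcon build and sourcing the overlay follow the standard ROS 2 workspace workflow:

```shell
# Workspace layout from this task; package/node names below are placeholders
WS="$HOME/cmp9767_ws"
mkdir -p "$WS/src/cmp9767_code"
cd "$WS/src/cmp9767_code"
# Create a Python package with a starter node (requires a sourced ROS 2 environment):
# ros2 pkg create --build-type ament_python --node-name mover cmp9767_solutions
# Build from the workspace root and source the overlay so ros2 run can find it:
# cd "$WS" && colcon build --symlink-install && source install/setup.bash
# ros2 run cmp9767_solutions mover
```

Remember to re-source install/setup.bash in every new terminal (or add it to your .bashrc) so your packages stay visible to ros2 run.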

Please make sure you keep this implementation safe (i.e. commit it to GitHub) - to help you with that, look at the useful resources on git workflows if you are not familiar with them.

Task 3 - Start moving the robot autonomously

For this task, you are required to work in simulation for the time being.

  1. First, install some missing packages for autonomous navigation by opening a new terminal in VS Code and running

    sudo apt install ros-humble-nav2-*

  2. Launch the simulation in the same way you did for Task #1.

  3. Launch the navigation stack by invoking the following command

    ros2 launch limo_navigation limo_navigation.launch.py

  4. Add a new Map type marker for your local_costmap topic, and set the Color scheme to costmap. Notice how obstacles are inflated by a safety area that is not traversable by the robot. You can read more about it here.


  5. Try sending the robot a goal using the 2D Goal Pose button in the top bar of RViz. A green arrow will appear at the point where you click; when you release the mouse button, the new goal is sent to the robot.


  6. Notice how your robot never traverses the inflated area.

  7. Inspect the tf_tree (do you remember how to do it?) to see how it has changed since the previous workshops, and examine any new topics you see in the console.

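The 2D Goal Pose button in RViz publishes a geometry_msgs/PoseStamped message, so the same goal can also be sent from the command line. This is a sketch assuming a default Nav2 setup that listens on /goal_pose; the coordinates are placeholders:

```shell
# A goal in the map frame (x/y and orientation values are placeholders)
GOAL='{header: {frame_id: map}, pose: {position: {x: 1.0, y: 0.5}, orientation: {w: 1.0}}}'
# Publish it once (requires the simulation and the navigation stack running):
# ros2 topic pub --once /goal_pose geometry_msgs/msg/PoseStamped "$GOAL"
```

Sending goals this way is handy for scripting repeatable experiments instead of clicking in RViz each time.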
