# Workshop 4 ‐ Robot tools
- Have the simulation and the LIMO robot ready for comparisons.
- RQt tools are very convenient for inspecting image topics. First, install the `rqt_image_view` package by issuing `sudo apt-get install ros-humble-rqt-image-view`. Whilst the cameras on the real/simulated robot are running, issue the following command to visualise the colour image (check the image topic name, as these differ slightly between the real and simulated robot):

  ```shell
  ros2 run rqt_image_view rqt_image_view --ros-args -r image:=/limo/depth_camera_link/image_raw
  ```

  You can also skip the image topic argument and select an image topic from the list available through the GUI.

- The basic tool for viewing and saving images is `image_view` (`sudo apt-get install ros-humble-image-view`), but it has some limitations when it comes to accepting topics with different QoS settings. We can still use it for the image stream originating from the simulated robot:

  ```shell
  ros2 run image_view image_view --ros-args -r image:=/limo/depth_camera_link/image_raw
  ```

- The `rosbag2` package allows for convenient recording and replaying of different types of ROS topics, including images. To record a rosbag file, issue `ros2 bag record [-o bagfilename] <topics>`. You can stop the recording with `CTRL + C` in the terminal where you are recording. Afterwards, replay the recorded file with `ros2 bag play <filename>`. Some of the sensor topics on the real robot might require overriding QoS policies, which is covered in the following article.
- As part of this task, while you navigate your robot around using the teleoperation node (`ros2 run teleop_twist_keyboard teleop_twist_keyboard`), record a rosbag containing the topics `/odom`, `/limo/depth_camera_link/points`, and `/scan`. Now, open Rviz with the following command:

  ```shell
  rviz2 -d /opt/ros/lcas/install/limo_gazebosim/share/limo_gazebosim/rviz/urdf.rviz
  ```

  Select `base_link` as the `Fixed frame` if it is not already set, and add an `Odometry` visualisation marker by clicking on the `Add` button and then on the `Topic` tab. Once added, expand the drop-down menu and untick the `Covariance` option. Now, kill the Gazebo simulation or the Zenoh connection to your robot. You can verify that no topics are streamed to your workstation if `ros2 topic list` shows nothing other than `/rosout` and `/parameter_events`. At this point, play the rosbag you previously recorded and look at the recorded topics in Rviz.
- (Bonus) Some scenarios might involve streaming directly from cameras/images (e.g. for testing, training and annotation without a robot). The most straightforward way is to use the `image_publisher` package. The node can be run as follows: `ros2 run image_publisher image_publisher_node <input>`, where the input can be either a video file name (e.g. `test.mp4`) or a camera device (e.g. `/dev/video0`).
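A rosbag recorded as above is a directory, not a single file. The sketch below mocks up its typical layout by hand, purely for illustration (assuming the default `sqlite3` storage plugin of ROS 2 Humble; in reality `ros2 bag record` generates these files for you):

```shell
# Hand-made mock of the directory layout that `ros2 bag record -o mybag <topics>`
# produces with the default sqlite3 storage (for illustration only)
mkdir -p mybag
touch mybag/metadata.yaml    # recording summary: topic names, message counts, duration
touch mybag/mybag_0.db3      # the serialised messages, stored in an SQLite database
ls mybag
```

You can point `ros2 bag play` and `ros2 bag info` at the directory name (`mybag`); the metadata file tells them what the bag contains.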
The following instructions apply whether you are developing a solution on a PC and interfacing with the simulator, or controlling the real LIMO robot.
- Basically, follow https://docs.ros.org/en/humble/Tutorials/Beginner-Client-Libraries/Creating-A-Workspace/Creating-A-Workspace.html
- e.g. create a directory `cmp9767_ws` as the root of your workspace. Usually, this is located in the `home` directory of your system:

  ```shell
  mkdir -p $HOME/cmp9767_ws/src
  ```

- Optional (only if you feel confident enough in ROS, not essential):
  - Discuss the dependencies you may need.
  - Complete the `package.xml` with your own information.
- Decide on a name for your repository and create it to keep all your work in it (e.g. `cmp9767_code`) - you may want to follow the official instructions.

  ```shell
  cd $HOME/cmp9767_ws/src
  mkdir cmp9767_code
  cd cmp9767_code
  ```

- Create your own package(s) in the workspace and keep track of all developments there. Only add your implementation to your source code repository (what is under `src/` in your workspace):

  ```shell
  ros2 pkg create --build-type ament_python <package_name>
  ```

  - You may want to include here the `tflistener` or the `mover` script, or even the script you wrote as part of Workshop 2. To better understand the structure of a package, please refer to the `cmp9767_tutorial` package that comes with the Docker image of this module.
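Putting the workspace steps above together, the sketch below creates by hand the rough tree you end up with when one `ament_python` package sits inside your repository. The package name `my_package` and the script names are hypothetical, and `ros2 pkg create` would also generate `setup.py`, `setup.cfg` and a `resource/` entry for you; the empty files here only illustrate the layout:

```shell
# Hand-made sketch of the workspace layout after the steps above
# (package name `my_package` is hypothetical; `ros2 pkg create`
# generates the real files - these empty ones are for illustration only)
mkdir -p "$HOME/cmp9767_ws/src/cmp9767_code/my_package/my_package"
cd "$HOME/cmp9767_ws/src/cmp9767_code/my_package"
touch package.xml setup.py                         # manifest and build configuration
touch my_package/__init__.py my_package/mover.py   # your node scripts live in here
find . -type f | sort
```

Everything under `cmp9767_code` is what you commit to your repository; build artefacts (`build/`, `install/`, `log/`) land outside `src/` and should not be committed.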
Please always make sure you keep this implementation safe (i.e. commit it to GitHub) - to help you with that, look at useful resources if you don't know git workflows.
For this task, you are required to work in simulation for the time being.
- First, install some missing packages for autonomous navigation by opening a new terminal in your VS Code and running:

  ```shell
  sudo apt install ros-humble-nav2-*
  ```

- Launch the simulation in the same way you did for Task #1.
- Launch the navigation stack by invoking the following command:

  ```shell
  ros2 launch limo_navigation limo_navigation.launch.py
  ```

- Add a new `Map` type marker for your `local_costmap` topic, and set the `Color scheme` to `costmap`. Notice how obstacles got inflated by a safety area which is not traversable by the robot. You can read more about it here.
- Try to send the robot a goal by using the `2D Goal Pose` button in the top bar in Rviz. A green arrow will appear at the point where you click; when you release the mouse button, the new goal will be sent to the robot.
- Appreciate how your robot will never traverse the inflated area.
- Inspect the `tf_tree` (do you remember how to do it?) to see how it has changed since the previous workshops, and inspect any new topics you may see in the console.