CMP3103 Week 3
Make sure you keep all the code you develop in the workshops, and also note down any commands you used (create a README.md file for your notes). A good idea is to keep all of this in your own https://github.com repository (even sharing it within your group). You will need this again as you go along in this module.
In the lecture, you have been introduced to ways to conduct image processing in Python using OpenCV. In this workshop, you will learn how to
- retrieve images from ROS topics
- convert images into the OpenCV format
- perform image processing operations on the image
- (optionally) command the robot based on your image processing output
To get you off the ground quickly, all the source code shown in the lecture is available online. In particular, have a look at
- `opencv_intro.py`, which shows you how to load an image in OpenCV without ROS and access its pixels. Also look at the official OpenCV Python tutorials to gain a better understanding.
- `opencv_bridge.py`, showing you how to use CvBridge to read images from a topic.
- `color_contours.py`, to get an idea about colour slicing as introduced in the lecture. Also read about Changing Colour Spaces.
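As a rough sketch of the CvBridge pattern (not the actual lecture code), the snippet below subscribes to a camera topic, converts each frame to an OpenCV array, and displays it. The topic name `/camera/rgb/image_raw` and the node/helper names are assumptions for illustration; check `rostopic list` for the topic your robot actually publishes.

```python
import numpy as np

def describe_image(img):
    """Pure helper: summarise a BGR image (a numpy array) as a string."""
    h, w = img.shape[:2]
    return "%dx%d image, mean intensity %.1f" % (w, h, img.mean())

def main():
    # ROS imports live inside main() so the helper above stays usable
    # without a ROS installation.
    import rospy
    import cv2
    from cv_bridge import CvBridge
    from sensor_msgs.msg import Image

    bridge = CvBridge()

    def callback(msg):
        # Convert the sensor_msgs/Image into an OpenCV BGR numpy array
        img = bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
        rospy.loginfo(describe_image(img))
        cv2.imshow("camera", img)
        cv2.waitKey(1)  # give the HighGUI window a chance to refresh

    rospy.init_node('image_listener')
    # Assumed topic name -- verify with `rostopic list` on your robot
    rospy.Subscriber('/camera/rgb/image_raw', Image, callback)
    rospy.spin()

# Call main() when running this file as a ROS node.
```

Compare this sketch against the real `opencv_bridge.py`; the essential pieces are the `CvBridge` object and the `imgmsg_to_cv2` conversion in the callback.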
- Develop Python code with the following abilities:
  - Take the example code fragment `opencv_bridge.py` from the lecture and modify it so you can read from the camera of your (simulated) turtlebot.
  - Read images from your robot, display them using OpenCV methods, and try out colour slicing as presented in the lecture to segment a coloured object of your choice. When trying this in simulation, put some nicely coloured objects in front of the robot. Find suitable parameters to robustly segment that blob. You may take `color_contours.py` as a starting point for your implementation.
  - Use the output from above to publish `std_msgs/String` messages on a `Publisher` that contain information about the outcome of the operation (e.g. the mean value of the pixel intensities of the resulting image). (Hint: you'll need to create a `Publisher` of type `std_msgs/String` for this: `p = rospy.Publisher('/result_topic', String)`, and then publish to it.)
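The colour slicing and `String` publishing steps above could be sketched roughly as follows. The HSV bounds (a "green-ish" range) and the camera topic are assumptions you will need to tune for your own object and robot; `slice_colour` reimplements the behaviour of `cv2.inRange` so you can see what it computes.

```python
import numpy as np

def slice_colour(hsv, lower, upper):
    """Colour slicing: keep pixels whose HSV values fall inside
    [lower, upper] (inclusive), like cv2.inRange. Returns a uint8
    mask with 255 for selected pixels and 0 elsewhere."""
    lower = np.asarray(lower)
    upper = np.asarray(upper)
    inside = np.logical_and(hsv >= lower, hsv <= upper).all(axis=2)
    return inside.astype(np.uint8) * 255

def main():
    # ROS/OpenCV wiring kept inside main() so slice_colour() stays
    # usable without ROS. Topic names are assumptions.
    import rospy
    import cv2
    from cv_bridge import CvBridge
    from sensor_msgs.msg import Image
    from std_msgs.msg import String

    bridge = CvBridge()
    rospy.init_node('colour_slicer')
    pub = rospy.Publisher('/result_topic', String, queue_size=1)

    def callback(msg):
        img = bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
        hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
        # Example bounds for a green-ish object -- tune these!
        mask = slice_colour(hsv, (40, 50, 50), (80, 255, 255))
        # Publish the mean intensity of the segmented region, as suggested
        pub.publish(String(data="mean: %.2f" % cv2.mean(img, mask=mask)[0]))

    rospy.Subscriber('/camera/rgb/image_raw', Image, callback)
    rospy.spin()

# Call main() when running this file as a ROS node.
```

In your own code you can simply call `cv2.inRange(hsv, lower, upper)` instead of `slice_colour`; the point of the reimplementation is to make the per-pixel comparison explicit.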
- Try out the "followbot" presented in the lecture. Take the code from https://github.com/LCAS/teaching/tree/lcas_melodic/ros_book_line_follower/src, described in chapter 12 of the "Programming Robots with ROS" book, also available on Blackboard. You need to run `sudo apt update && sudo apt install ros-melodic-ros-book-line-follower` to install the required ROS package; afterwards, either start a new terminal or run the command `source /opt/ros/melodic/setup.bash` in your active terminal to ensure Gazebo finds all its resources. Then launch the course with `roslaunch ros_book_line_follower course.launch`. Finally, in a separate terminal (or VSCode), run one of the files taken from https://github.com/LCAS/teaching/tree/lcas_melodic/ros_book_line_follower/src.

To summarise the minimal requirements for this week:
- Develop Python code that subscribes to the image stream from the robot
- Publish the output of some image processing as a `std_msgs/String` on a topic named `/result_topic`
- Run the line follower and understand its code
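To help you understand the line follower's code: its core steering idea (from chapter 12 of the book) is to take the centroid of the segmented line and steer proportionally to its horizontal offset from the image centre. A minimal sketch of that idea, with a hypothetical helper name and an arbitrary gain value:

```python
import numpy as np

def steering_from_mask(mask, gain=0.005):
    """Return (angular_z, line_found) for a binary mask of the line.

    Mirrors the followbot's approach: compute the centroid of the mask
    (what cv2.moments gives as m10/m00) and steer proportionally to its
    horizontal offset from the image centre."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return 0.0, False                   # line lost: stop turning
    err = xs.mean() - mask.shape[1] / 2.0   # positive when line is to the right
    return -gain * err, True                # turn towards the line

# In the actual node, the returned angular velocity goes into a
# geometry_msgs/Twist message published on /cmd_vel.
```

When reading the book's code, look for exactly this structure: mask via `cv2.inRange`, centroid via `cv2.moments`, and a proportional controller on the error.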
- (Optional) Research the Hough Transform and see how it can be used to detect lines with OpenCV for Python. Understand the concepts of the Hough transform from your research, then also look at the circle detection code in `hough_circle.py`. Make it work with actual image data received via a topic from your (simulated/real) robot.
- (Optional) Get a real robot and make it find a colourful object. Note: you cannot easily display the OpenCV image window on the robots from Jupyter; you will have to use the VPN setup to connect your computer to the turtlebots for this to work. You can, of course, easily run Python code without any graphical windows in Jupyter.
- (Optional) The real fun starts here: develop code that makes the real robot chase a colour.
Also, browse through this collection of useful resources beyond what has been presented in the lecture: OpenCV and ROS
Copyright by Lincoln Centre for Autonomous Systems