Developer's Guide
This is the Developer's Guide. It contains the technical details of the implementation. The guide assumes that you have read the User's Guide and have advanced programming skills. If you are not familiar with ROS (Robot Operating System), please read the ROS Documentation.
The robot used for this project is a Kobuki Turtlebot 2. It can receive commands from other devices using the Robot Operating System (ROS). The robot itself is shaped like a disk (see two figures below) and on top a platform can be attached to mount the computing unit and sensors.


The computing unit used in the project is an Intel NUC with a Celeron processor. To avoid tilting of the robot when it accelerates, the robot was built as small as possible, with only one platform attached. To make this work, some 3D-printed parts had to be added. One is a base plate to attach the NUC to the robot:



Another 3D-printed part is an adapter to attach a Kinect-like camera called Astra. This adapter fits into the slot that is normally used for cable management; by reversing the orientation of the platform, it can be used to attach the Astra this way:

The NUC is powered by the battery of the Turtlebot and is connected to its 12V output. For the data connection between the robot and the NUC, a USB-B cable is used. The Astra and the RPLIDAR are connected to the NUC via USB. The RPLIDAR is attached to a so-called "charging capable" USB port, which is able to deliver the necessary current.
The initial setup was already done by Kobuki. The robot only has to be connected to the same Wi-Fi network. To facilitate the startup, a .launch file has been written, which contains the following:
TODO
It starts up the robot, then the RPLIDAR, and then creates a tf transform for the laser sensor, which is mounted about 3 cm above the ground.
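Since the actual launch file contents are still marked TODO, the following is only a rough sketch of what such a file could look like. The included launch files and the exact transform arguments are assumptions, not the project's actual configuration:

```xml
<launch>
  <!-- Bring up the Kobuki/Turtlebot base -->
  <include file="$(find turtlebot_bringup)/launch/minimal.launch" />

  <!-- Start the RPLIDAR driver -->
  <include file="$(find rplidar_ros)/launch/rplidar.launch" />

  <!-- Static transform for the laser sensor, about 3 cm above base_link
       (args: x y z yaw pitch roll parent child period_ms) -->
  <node pkg="tf" type="static_transform_publisher" name="base_to_laser"
        args="0 0 0.03 0 0 0 base_link laser 100" />
</launch>
```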
For the simulator, the stage_ros package has been used. For the robot model, a predefined Turtlebot model was used; only the laser sensor parameters were changed to match the RPLIDAR. This is done at the top of turtlebot_model.inc, in the ranger section: the field of view (fov) is changed to 360°, as is the number of samples. The map in the simulator shows the 3rd floor of the SA building at the University of Duisburg-Essen, which is the place where the real robot was tested. Localization in the simulator is done by the fake AMCL package, which provides a pseudo Monte Carlo localization.
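The adjusted ranger section might look roughly like this. Only the 360° field of view and the matching sample count come from the text; the range limits and model name are assumptions:

```
define rplidar ranger
(
  sensor(
    range [ 0.15 6.0 ]   # min/max range in meters (assumed values)
    fov 360.0            # full 360-degree field of view
    samples 360          # one sample per degree
  )
)
```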

For the localization of the real robot adaptive Monte Carlo localization is used (amcl). The corresponding ROS package can be used out of the box with the following parameters:
- use_map_topic = false to get the map over the global frame
- scan_topic = scan to set the correct topic for the laser scanner
- initial_pose_(x,y,z) = 0.0 to set the initial pose
- odom_frame_id = odom to set the odometry frame identifier
- base_frame_id = base_link to set the base frame identifier
- global_frame_id = map to set the global frame identifier
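Wired into a launch file, these parameters could be set roughly as follows. Note that the stock amcl package exposes the initial pose as initial_pose_x, initial_pose_y, and initial_pose_a (yaw), and the scan topic is usually set via a remap; treat this snippet as a sketch, not the project's actual launch file:

```xml
<node pkg="amcl" type="amcl" name="amcl">
  <remap from="scan" to="scan" />
  <param name="use_map_topic"   value="false" />
  <param name="initial_pose_x"  value="0.0" />
  <param name="initial_pose_y"  value="0.0" />
  <param name="initial_pose_a"  value="0.0" />
  <param name="odom_frame_id"   value="odom" />
  <param name="base_frame_id"   value="base_link" />
  <param name="global_frame_id" value="map" />
</node>
```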
The planner node in scripts/planner.py performs high-level task planning with a SMACH state machine.
With the smach_viewer you can view the current state of the planning node:

The state machine starts in the state WaitForZone. When there is no zone data (the deque is empty), the planner fetches the current result of the zone generator and stays in the WaitForZone state. Due to the lack of deep object deserialization, the nested data has to be deserialized explicitly. This data is not updated until the zone deque is completely empty. When there is zone data (the deque is not empty and contains deques of waypoints), the state machine changes into the GotZone state.
This state is similar to the WaitForZone state. It also processes a deque, but this deque contains Waypoints. These waypoints are the actual measurement points. If there is a waypoint in the deque, the state machine changes into the GotWaypoint state. If there is no waypoint (the deque is empty), the planner removes the current zone and changes into the WaitForZone state.
This state is responsible for the actual measurement. The state machine stays in this state until the waypoint is reached (the move_base action server returns GoalStatus SUCCEEDED), an error occurs, or the time limit is exceeded (the default is 60 s; increase this if you get a lot of false positives). Each time this state is entered, the planner sends a MoveBaseAction message to the move_base action server, which is responsible for the actual movement to the goal point. If the waypoint is not reachable, the robot will stay at its current position (probably the last successfully reached waypoint) and will not start moving towards the goal point. Currently, the only way to detect this is to use a timeout. When a waypoint is estimated as reachable but in reality is not, the robot will crash. To prevent this, you have to set all parameters appropriately (e.g. the radius of the robot).
When the waypoint is reached or the wait condition was not satisfied (error or timeout occurred), the planner removes the waypoint from the current zone. Then the state machine changes into the GotZone state.
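The cycle through the three states can be sketched with plain Python deques, without any ROS or SMACH dependency. The function name and the flat state loop are illustrative only; the real planner wraps each state in a SMACH State class and sends a MoveBaseAction where this sketch simply records the waypoint:

```python
from collections import deque

def plan(zones):
    """Drive the WaitForZone -> GotZone -> GotWaypoint cycle.

    `zones` is a deque of zones, each zone a deque of waypoints,
    mirroring the nested structure described above.
    """
    visited = []
    state = 'WaitForZone'
    while True:
        if state == 'WaitForZone':
            if not zones:
                break  # the real planner would fetch new zones from the zone generator
            state = 'GotZone'
        elif state == 'GotZone':
            if not zones[0]:
                zones.popleft()        # zone exhausted: remove it
                state = 'WaitForZone'
            else:
                state = 'GotWaypoint'
        elif state == 'GotWaypoint':
            wp = zones[0].popleft()    # the real planner sends a MoveBaseAction here
            visited.append(wp)         # ...and waits for SUCCEEDED, error, or timeout
            state = 'GotZone'
    return visited
```

Running the sketch on two zones, e.g. `plan(deque([deque([1, 2]), deque([3])]))`, visits the waypoints zone by zone in order.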
For further help see this tutorial.