Client Server Guide
This guide provides instructions for running an example MCPTAM system in Client/Server mode. In this mode, the tracking (mcptam_client) and mapping (mcptam_server) stages of the MCPTAM system run on separate machines and communicate over a network connection. A short overview of running ROS nodes across a network using the same roscore can be found on the ROS.org wiki.
This mode is ideal for heterogeneous set-ups in which the lightweight tracking process runs on board a robot with limited computing resources but requires real-time localization, while the heavier mapping back-end runs on a ground-station computer with greater computational power. The timing requirements of the mapping process are looser, so the transmission delays between client and server are acceptable.
An example set-up is used throughout this guide to illustrate the ideas. The demonstration system is configured as follows:
- Two cameras (camera1, camera2)
- Two machines (A and B)
- Mapping process will run on A
- Tracking process will run on B
- Cameras are attached to B
- IP address for A: 192.168.1.2
- IP address for B: 192.168.1.3
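Before launching anything, it helps to make each machine's address explicit to ROS. The sketch below shows one way to set up the environment on B, using the addresses above; ROS_IP is a standard ROS networking variable, not something MCPTAM-specific.

```shell
# On machine B (the tracking client): point at the roscore on A and
# advertise B's own address so nodes on A can open connections back to B.
export ROS_MASTER_URI=http://192.168.1.2:11311
export ROS_IP=192.168.1.3

# On machine A, the same idea with A's own address:
#   export ROS_MASTER_URI=http://192.168.1.2:11311
#   export ROS_IP=192.168.1.2
```

These exports must be set in every terminal (or in ~/.bashrc) before launching nodes. ROS_HOSTNAME can be used instead of ROS_IP when hostname resolution works in both directions.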
Install MCPTAM on both A and B and follow the instructions in the Quick-Start Guide to create the files:
- mcptam/calibrations/camera1.yaml
- mcptam/calibrations/camera2.yaml
- mcptam/groups/group.yaml
- mcptam/poses/poses.dat
using the camera_calibrator and pose_calibrator nodes on one of the machines. Copy the files to these locations on both A and B.
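After copying, a quick sanity check that all four files are present on each machine can save a failed launch later. A minimal sketch, assuming the package directory is given by a hypothetical MCPTAM_DIR variable (adjust the default to wherever mcptam is checked out):

```shell
# Report any of the four required files that are missing under the
# mcptam package directory (MCPTAM_DIR is an assumed placeholder).
MCPTAM_DIR="${MCPTAM_DIR:-$HOME/catkin_ws/src/mcptam}"
for f in calibrations/camera1.yaml calibrations/camera2.yaml \
         groups/group.yaml poses/poses.dat; do
    [ -f "$MCPTAM_DIR/$f" ] || echo "missing: $f"
done
```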
The roscore will reside on A. In one terminal, set the ROS_MASTER_URI environment variable and start roscore:
```
$ export ROS_MASTER_URI=http://192.168.1.2:11311
$ roscore
```
In a second terminal, start the mcptam_server node using the included mcptam_server.launch file:
```
$ export ROS_MASTER_URI=http://192.168.1.2:11311
$ roslaunch mcptam mcptam_server.launch group_name:=group
```
Open a terminal on machine B for each camera and start the camera node. Assuming that the cameras use the uvc_camera node:
```
$ export ROS_MASTER_URI=http://192.168.1.2:11311
$ roslaunch mcptam uvc_camera.launch camera_name:=camera1 device:=/dev/video0
```
and:
```
$ export ROS_MASTER_URI=http://192.168.1.2:11311
$ roslaunch mcptam uvc_camera.launch camera_name:=camera2 device:=/dev/video1
```
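If either camera node fails to start, it is worth confirming the device paths before digging into ROS itself. A minimal check using the device names from the launch commands above:

```shell
# Confirm the expected video devices exist on machine B before
# launching the camera nodes (paths match the launch commands above).
for dev in /dev/video0 /dev/video1; do
    if [ -e "$dev" ]; then echo "found: $dev"; else echo "not found: $dev"; fi
done
```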
Next, in a third terminal on B run the mcptam_client node using the included mcptam_client.launch file:
```
$ export ROS_MASTER_URI=http://192.168.1.2:11311
$ roslaunch mcptam mcptam_client.launch group_name:=group headless:=false
```
The headless parameter determines whether the client-side GUI is rendered. For applications where the client GUI should not be drawn, use headless:=true.
The mcptam_client and mcptam_server nodes should locate each other on the network, and the server side should present a GUI similar to that of mcptam. Operation is identical to mcptam: pressing [Spacebar] initializes the map and begins tracking on B. When the client determines that a new keyframe should be added to the map, it signals A and sends the data over the network, allowing mcptam_server to build the map on A. Upon convergence, map updates are sent back to mcptam_client on B to serve as the new map to track against.