ROS Wrapper for openpose https://github.com/CMU-Perceptual-Computing-Lab/openpose

## Installation notes

This ROS wrapper makes use of the [Openpose Python interface](https://github.com/CMU-Perceptual-Computing-Lab/openpose/blob/master/doc/modules/python_module.md).
Please follow the [installation manual](https://github.com/CMU-Perceptual-Computing-Lab/openpose/blob/master/doc/installation.md) and ensure that the `BUILD_PYTHON` flag is turned on when running CMake.
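A quick way to verify that the bindings were built is to check whether the `openpose` module can be found. The helper below is an illustrative sketch, not part of this package; `/usr/local/python/` is Openpose's default install prefix and may differ on your machine:

```python
import importlib.util
import sys


def openpose_python_available(python_path='/usr/local/python/'):
    """Return True when the `openpose` Python module can be located.

    `python_path` is assumed to be the Openpose install prefix; pass the
    path you configured in CMake if it differs.
    """
    if python_path not in sys.path:
        sys.path.append(python_path)
    return importlib.util.find_spec('openpose') is not None


print(openpose_python_available())
```

If this prints `False`, re-run CMake with `BUILD_PYTHON` enabled and rebuild.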

## Scripts

### detect_poses

Example for the following picture:

```bash
export MODEL_FOLDER=~/dev/openpose/models
rosrun image_recognition_openpose detect_poses $MODEL_FOLDER image `rospack find image_recognition_openpose`/doc/example.jpg
```

Output:

It also works with a webcam stream (`cam` mode); full usage:

```bash
usage: detect_poses [-h] [--pose_model POSE_MODEL]
                    [--net_input_size NET_INPUT_SIZE]
                    [--net_output_size NET_OUTPUT_SIZE]
                    [--num_scales NUM_SCALES] [--scale_gap SCALE_GAP]
                    [--num_gpu_start NUM_GPU_START]
                    [--overlay_alpha OVERLAY_ALPHA]
                    [--python_path PYTHON_PATH]
                    model_folder {image,cam} ...

Detect poses in an image

positional arguments:
  model_folder          Path where the models are stored
  {image,cam}           Mode
    image               Use image mode
    cam                 Use cam mode

optional arguments:
  -h, --help            show this help message and exit
  --pose_model POSE_MODEL
                        What pose model to use (default: BODY_25)
  --net_input_size NET_INPUT_SIZE
                        Net input size (default: -1x368)
  --net_output_size NET_OUTPUT_SIZE
                        Net output size (default: -1x-1)
  --num_scales NUM_SCALES
                        Num scales (default: 1)
  --scale_gap SCALE_GAP
                        Scale gap (default: 0.3)
  --num_gpu_start NUM_GPU_START
                        What GPU support (default: 0)
  --overlay_alpha OVERLAY_ALPHA
                        Overlay alpha for the output image (default: 0.6)
  --python_path PYTHON_PATH
                        Python path where Openpose is stored (default:
                        /usr/local/python/)
```
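The `--net_input_size` and `--net_output_size` values use Openpose's `WIDTHxHEIGHT` convention, where `-1` lets Openpose pick that dimension automatically (e.g. to preserve the aspect ratio). A small illustrative parser, not part of the package, shows the format:

```python
def parse_size(size_str):
    """Split an Openpose size string such as '-1x368' into (width, height).

    A value of -1 means the dimension is chosen automatically by Openpose.
    """
    width, height = (int(part) for part in size_str.split('x'))
    return width, height


print(parse_size('-1x368'))  # the --net_input_size default
print(parse_size('-1x-1'))   # the --net_output_size default
```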

### openpose_node

## How-to

Run the `openpose_node` from the `image_recognition_openpose` package in one terminal, e.g.:

```bash
export MODEL_FOLDER=~/dev/openpose/models
rosrun image_recognition_openpose openpose_node _model_folder:=$MODEL_FOLDER
```

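The `_model_folder:=$MODEL_FOLDER` argument uses ROS private-parameter syntax: the leading underscore places the parameter in the node's private namespace, so it resolves to `/openpose_node/model_folder` here. A short sketch of that name resolution, purely for illustration (the node name is the one used above):

```python
def resolve_private_param(arg, node_name='openpose_node'):
    """Map a rosrun remapping like '_model_folder:=/path' to the fully
    resolved parameter name, mimicking ROS private-name resolution."""
    key, _, value = arg.partition(':=')
    if not key.startswith('_'):
        raise ValueError('private parameters must start with an underscore')
    return '/{}/{}'.format(node_name, key[1:]), value


print(resolve_private_param('_model_folder:=/home/user/dev/openpose/models'))
```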
Next, start the [image_recognition_rqt](https://github.com/tue-robotics/image_recognition_rqt) test GUI:

    rosrun image_recognition_rqt test_gui

Configure the service you want to call with the gear wheel in the top-right corner of the screen. If everything is set up, draw a rectangle in the image and ask the service for detections:
