QRB ROS NN Inference

ROS 2 package for performing neural network model inference

Qualcomm Ubuntu Jazzy

👋 Overview

WARNING: Inference of TFLite models is NOT supported in version 1.1.0-jazzy.

qrb_ros_nn_inference is a ROS 2 package for performing neural network model inference, delivering 🤖 AI-based perception for robotics applications. It provides:

  • ✨ a model inference API that supports three model formats: .tflite, .so, .bin
  • 🚀 model inference acceleration on Qualcomm platforms

[architecture diagram]

qrb_ros_nn_inference is a ROS 2 package built on qrb_inference_manager, a C++ library that encapsulates the APIs of Qualcomm AI Engine Direct and the QNN Delegate for TensorFlow Lite.

qrb_ros_nn_inference receives data on a subscribed topic and, without any processing, passes the received data directly to model inference. The inference results are then published directly on another topic.
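This pass-through data flow can be sketched in plain Python, with callables standing in for ROS 2 publishers and subscriptions. `Tensor`, `run_inference`, and `PassThroughInferenceNode` are illustrative names for this sketch, not the package's real API:

```python
from dataclasses import dataclass

@dataclass
class Tensor:
    shape: list
    data: bytes = b""

def run_inference(tensors):
    # Stand-in for qrb_inference_manager: here it simply echoes the input
    return tensors

class PassThroughInferenceNode:
    def __init__(self, publish):
        # `publish` is a callback standing in for the output topic
        self.publish = publish

    def on_input(self, tensor_list):
        # No pre/post-processing: the subscribed data goes straight to
        # inference, and the result goes straight to the output topic.
        self.publish(run_inference(tensor_list))

results = []
node = PassThroughInferenceNode(results.append)
node.on_input([Tensor(shape=[1, 640, 640, 3])])
```

Any resizing, normalization, or decoding of results is the responsibility of separate pre-process and post-process nodes, as in the test setup below.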




⚓ APIs

🔹 qrb_ros_nn_inference APIs

ROS node parameters

| Parameter | Type | Default Value | Description |
| --- | --- | --- | --- |
| backend_option | string | "" | Hardware acceleration option for model inference; valid values are listed here |
| model_path | string | "" | Path of the model file |

ROS topics

| Topic Name | Message Type | Description |
| --- | --- | --- |
| qrb_inference_input_tensor | TensorList | Subscribed topic |
| qrb_inference_output_tensor | TensorList | Published topic |
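As a rough illustration of what flows over these topics, the snippet below builds a TensorList-shaped structure out of plain dicts. The field names (`tensor_list`, `name`, `data_type`, `shape`, `data`) and the data-type enum value are assumptions for this sketch, not taken from the package's interface definitions:

```python
import struct

def make_tensor(name, shape, values):
    # Pack float32 values into raw little-endian bytes, as a tensor
    # message would typically carry them
    return {
        "name": name,
        "data_type": 2,  # assumed enum value for float32
        "shape": list(shape),
        "data": struct.pack(f"<{len(values)}f", *values),
    }

# A batch-of-one 2x2 RGB input tensor, all zeros
msg = {"tensor_list": [make_tensor("input", (1, 2, 2, 3), [0.0] * 12)]}
```

The inference node forwards the tensor bytes as-is, so the publisher must serialize data in the exact layout the model expects.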

🔹 qrb_inference_manager APIs

Please see qrb_inference_manager APIs.


🎯 Supported Targets

Development Hardware:
  • Qualcomm Dragonwing™ RB3 Gen2
  • Qualcomm Dragonwing™ IQ-9075 EVK

✨ Installation

Important

PREREQUISITES: The following steps need to be run on Qualcomm Ubuntu and ROS Jazzy.
See Install Ubuntu on Qualcomm IoT Platforms and Install ROS Jazzy to set up the environment.
For Qualcomm Linux, please check out the Qualcomm Intelligent Robotics Product SDK documents.

Note: GPU hardware acceleration is not supported on Ubuntu Desktop.

Add the Qualcomm IoT PPAs for Ubuntu:

sudo add-apt-repository ppa:ubuntu-qcom-iot/qcom-noble-ppa
sudo add-apt-repository ppa:ubuntu-qcom-iot/qirp
sudo apt update

Install Debian package:

sudo apt install ros-jazzy-qrb-ros-nn-inference

🚀 Usage

  1. Install qrb_ros_nn_inference by the steps above.

  2. Prepare the pre-process node and post-process node for model inference:

  # qrb_ros_nn_inference/test includes the pre-process node and post-process node
  mkdir -p ~/ros-ws/src && cd ~/ros-ws/src && \
  git clone https://github.com/qualcomm-qrb-ros/qrb_ros_nn_inference

  3. Test qrb_ros_nn_inference with the YOLOv8 detection model:

    3.1 Follow the guides to get the yolov8_det.tflite model. To run the model on RB3 Gen2, you can use:

    python3 -m qai_hub_models.models.yolov8_det.export --target-runtime tflite --device "QCS6490 (Proxy)"

    3.2 Prepare an image for object detection:

    mkdir -p ~/ros-ws/src/qrb_ros_nn_inference/test/qrb_ros_pre_process/image/
    cp /path/to/image.jpg ~/ros-ws/src/qrb_ros_nn_inference/test/qrb_ros_pre_process/image/
    python3 ~/ros-ws/src/qrb_ros_nn_inference/test/qrb_ros_post_process/scripts/yolov8_input_pre_process.py
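A YOLOv8 input pre-process step typically resizes the image to the model's 640x640 input, scales pixels to [0, 1] float32, and dumps the raw bytes. The sketch below shows this shape of work with a nearest-neighbor resize; the 640x640 size and NHWC float32 layout are assumptions about yolov8_det.tflite, and the repo's yolov8_input_pre_process.py script is authoritative:

```python
import numpy as np

def preprocess(image, size=640):
    # Nearest-neighbor resize to size x size; real scripts often
    # letterbox to preserve aspect ratio instead
    h, w, _ = image.shape
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    resized = image[rows][:, cols]
    # Scale to [0, 1] float32 and add a batch dimension (NHWC)
    return (resized.astype(np.float32) / 255.0)[np.newaxis]

# Demo with a synthetic 480x640 RGB image in place of image.jpg
dummy = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
tensor = preprocess(dummy)
tensor.tofile("/tmp/image.raw")  # raw bytes the inference node consumes
```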

    3.3 Set the raw image path and the model path in ~/ros-ws/src/qrb_ros_nn_inference/test/qrb_ros_post_process/launch/nn_node_test.launch.py:

    pre_process_node = ComposableNode(
       package = "qrb_ros_pre_process",
       plugin = "qrb_ros::pre_process::QrbRosPreProcessNode",
       name = "pre_process_node",
       parameters=[
         {
           "image_path": os.environ['HOME'] + "/ros-ws/src/qrb_ros_nn_inference/test/qrb_ros_pre_process/image/image.raw"
         }
       ]
    )
    
    nn_inference_node = ComposableNode(
       package = "qrb_ros_nn_inference",
       plugin = "qrb_ros::nn_inference::QrbRosInferenceNode",
       name = "nn_inference_node",
       parameters=[
         {
           "backend_option": "",
           "model_path": "/path/to/model"
         }
       ]
    )

    3.4 Build the pre-process node and post-process node:

      source /opt/ros/jazzy/setup.bash && \
      cd ~/ros-ws && \
      rm ./src/qrb_ros_nn_inference/test/qrb_ros_post_process/COLCON_IGNORE && \
      rm ./src/qrb_ros_nn_inference/test/qrb_ros_pre_process/COLCON_IGNORE && \
      colcon build --packages-select qrb_ros_pre_process qrb_ros_post_process

    3.5 Execute the inference:

      cd ~/ros-ws && \
      source install/local_setup.bash && \
      ros2 launch qrb_ros_post_process nn_node_test.launch.py

    The result tensor is written to ~/ros-ws/src/qrb_ros_nn_inference/test/qrb_ros_post_process/inference_result.
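To inspect a raw result tensor, you can read it back with NumPy. The (1, 84, 8400) float32 layout below is a common YOLOv8 detection-head output shape and is only an assumption here; check the post-process node for the actual layout your exported model produces:

```python
import numpy as np

def load_result(path, shape=(1, 84, 8400)):
    # Raw dumps carry no metadata, so dtype and shape must be known up front
    data = np.fromfile(path, dtype=np.float32)
    return data.reshape(shape)

# Demo with synthetic data standing in for a real inference_result file
np.zeros((1, 84, 8400), dtype=np.float32).tofile("/tmp/result.raw")
result = load_result("/tmp/result.raw")
```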


👨‍💻 Build from Source

Install dependencies:

sudo apt install -y software-properties-common
sudo add-apt-repository ppa:ubuntu-qcom-iot/qcom-noble-ppa
sudo apt update
sudo apt install -y libtensorflow-lite-c-qcom1 libtensorflow-lite-qcom-dev libqnn-dev libqnn1

Download the source code and build with colcon:

source /opt/ros/jazzy/setup.bash && \
mkdir -p ~/ros-ws/src && \
cd ~/ros-ws/src && \
git clone https://github.com/qualcomm-qrb-ros/qrb_ros_nn_inference && \
git clone https://github.com/qualcomm-qrb-ros/qrb_ros_interfaces && \
cd ~/ros-ws/ && \
colcon build --packages-up-to qrb_ros_nn_inference

๐Ÿค Contributing

We love community contributions! Get started by reading our CONTRIBUTING.md. Feel free to create an issue for bug reports, feature requests, or any discussion.


📜 License

The project is licensed under the BSD-3-Clause License. See LICENSE for the full license text.
