# YOLOv11 C++ TensorRT

<a href="https://github.com/hamdiboukamcha/Yolo-V11-cpp-TensorRT" style="margin: 0 2px;">
  <img src="https://img.shields.io/badge/GitHub-Repo-blue?style=flat&logo=GitHub" alt="GitHub">
</a>

<a href="https://github.com/hamdiboukamcha/Yolo-V11-cpp-TensorRT/blob/main/LICENSE" style="margin: 0 2px;">
  <img src="https://img.shields.io/badge/License-MIT-lightgreen?style=flat&logo=License" alt="License">
</a>

## 📝 Overview

The **YOLOv11 C++ TensorRT Project** is a high-performance object detection solution implemented in **C++** and optimized using **NVIDIA TensorRT**. It leverages the YOLOv11 model to deliver fast, accurate object detection, using TensorRT to maximize inference efficiency and performance.

---

## ✨ Key Features

- **Model Conversion**: Convert ONNX models to TensorRT engine files for accelerated inference.
- **Video Inference**: Efficiently perform object detection on video files.
- **Image Inference**: Execute object detection on individual images.
- **High Efficiency**: Optimized for real-time object detection on NVIDIA GPUs.
- **CUDA Preprocessing**: CUDA-accelerated preprocessing for faster input handling.
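The CUDA preprocessing stage of YOLO-style detectors typically performs a letterbox resize: scale the frame to fit the network input while preserving aspect ratio, then pad the remainder. As a rough, framework-agnostic sketch of that geometry (the exact kernel in `preprocess.cu` may differ in details such as padding placement):

```python
def letterbox_params(src_w, src_h, dst_w=640, dst_h=640):
    """Compute the scale and padding of an aspect-preserving (letterbox) resize."""
    scale = min(dst_w / src_w, dst_h / src_h)
    new_w = int(round(src_w * scale))
    new_h = int(round(src_h * scale))
    pad_x = (dst_w - new_w) // 2  # left padding
    pad_y = (dst_h - new_h) // 2  # top padding
    return scale, new_w, new_h, pad_x, pad_y

# A 1920x1080 frame scaled into a 640x640 network input:
print(letterbox_params(1920, 1080))
```

For a 1080p frame this yields a scale of 1/3 with 140 pixels of padding above and below the resized image.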

---

## 📂 Project Structure

    YOLOv11-TensorRT/
    ├── CMakeLists.txt       # Build configuration for the project
    ├── include/             # Header files
    ├── src/
    │   ├── main.cpp         # Main entry point for the application
    │   ├── yolov11.cpp      # YOLOv11 implementation
    │   └── preprocess.cu    # CUDA preprocessing code
    ├── assets/              # Images and benchmarks for README
    └── build/               # Compiled binaries

## 🛠️ Setup

### Prerequisites

- **CMake** (3.18 or higher)
- **TensorRT** (8.6.1.6, for optimized inference)
- **CUDA Toolkit** (11.7, for GPU acceleration)
- **OpenCV** (4.10.0, for image and video processing)
- **NVIDIA GPU** (compute capability 7.5 or higher)

### Installation

1. Clone the repository:
   ```bash
   git clone https://github.com/hamdiboukamcha/Yolo-V11-cpp-TensorRT.git
   cd Yolo-V11-cpp-TensorRT
   ```
2. Update the TensorRT and OpenCV paths in `CMakeLists.txt`:
   ```cmake
   set(TENSORRT_PATH "F:/Program Files/TensorRT-8.6.1.6") # Adjust this to your path
   ```
3. Build the project:
   ```bash
   mkdir build
   cd build
   cmake ..
   make -j$(nproc)
   ```
## 🚀 Usage

### Convert YOLOv11 to ONNX

```python
from ultralytics import YOLO

# Load the YOLO model
model = YOLO("yolo11s.pt")

# Export the model to ONNX format
export_path = model.export(format="onnx")
```

### Convert ONNX Model to TensorRT Engine

To convert an ONNX model to a TensorRT engine file, use the following command:

    ./YOLOv11TRT convert path_to_your_model.onnx path_to_your_engine.engine

- `path_to_your_model.onnx`: Path to the ONNX model file.
- `path_to_your_engine.engine`: Path where the TensorRT engine file will be saved.

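Because engine building can take several minutes, a common pattern is to convert only when the `.engine` file is missing. A minimal sketch of that caching pattern (the `YOLOv11TRT` binary name and `convert` subcommand come from the usage above; wrapping it in a script is my own illustration):

```python
import subprocess
from pathlib import Path

def ensure_engine(onnx_path, engine_path, binary="./YOLOv11TRT"):
    """Build the TensorRT engine only if it does not already exist."""
    if not Path(engine_path).exists():
        subprocess.run([binary, "convert", onnx_path, engine_path], check=True)
    return engine_path
```

Subsequent runs then reuse the cached engine and skip the conversion entirely.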
### Run Inference on Video

To run inference on a video, use the following command:

    ./YOLOv11TRT infer_video path_to_your_video.mp4 path_to_your_engine.engine

- `path_to_your_video.mp4`: Path to the input video file.
- `path_to_your_engine.engine`: Path to the TensorRT engine file.

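Boxes predicted on the letterboxed network input must be mapped back to original frame coordinates before drawing. A minimal sketch of that inverse mapping, assuming a square input with symmetric padding (this mirrors typical YOLO postprocessing, not necessarily this repo's exact code):

```python
def unletterbox(box, scale, pad_x, pad_y):
    """Map (x1, y1, x2, y2) from letterboxed input coords back to the original frame."""
    x1, y1, x2, y2 = box
    return ((x1 - pad_x) / scale, (y1 - pad_y) / scale,
            (x2 - pad_x) / scale, (y2 - pad_y) / scale)

# A full-frame box on a 1920x1080 video letterboxed into 640x640
# (scale = 1/3, pad_x = 0, pad_y = 140):
print(unletterbox((0, 140, 640, 500), 1/3, 0, 140))
```

The scale and padding are the same values computed during preprocessing, so they are usually stored per frame alongside the inference output.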
### Run Inference on Image

To run inference on an image, use the following command:

    ./YOLOv11TRT infer_image path_to_your_image.jpg path_to_your_engine.engine

- `path_to_your_image.jpg`: Path to the input image file.
- `path_to_your_engine.engine`: Path to the TensorRT engine file.

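After confidence filtering, detectors merge overlapping candidates with non-maximum suppression. A plain-Python sketch of greedy NMS with IoU, for illustration only (the project's C++ postprocessing may differ in details such as thresholds or per-class handling):

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    if inter == 0.0:
        return 0.0
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_thr=0.45):
    """Greedy NMS: keep the highest-scoring box, drop boxes that overlap it too much."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        order = [j for j in order if iou(boxes[i], boxes[j]) < iou_thr]
    return keep
```

Here two nearly identical boxes collapse to the higher-scoring one, while a distant box survives.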
## ⚙️ Configuration

### CMake Configuration

In `CMakeLists.txt`, update the paths for TensorRT and OpenCV if they are installed in non-default locations:

    # Define the path to the TensorRT installation
    set(TENSORRT_PATH "F:/Program Files/TensorRT-8.6.1.6") # Update this to your actual path

Ensure the path points to the directory where TensorRT is installed.

### Troubleshooting

- **Cannot find nvinfer.lib**: Ensure TensorRT is correctly installed and that `nvinfer.lib` is in the specified path. Update `CMakeLists.txt` to include the correct path to the TensorRT libraries.
- **Linker errors**: Verify that all dependencies (OpenCV, CUDA, TensorRT) are correctly installed and that their paths are set correctly in `CMakeLists.txt`.
- **Runtime errors**: Ensure your system has the correct CUDA drivers and that the TensorRT runtime libraries are accessible. Add TensorRT's `bin` directory to your system `PATH`.

## 📞 Contact

For advanced inquiries, feel free to contact me on LinkedIn: <a href="https://www.linkedin.com/in/hamdi-boukamcha/" target="_blank"><img src="assets/blue-linkedin-logo.png" alt="LinkedIn" width="32" height="32"></a>

## 📜 Citation

If you use this code in your research, please cite the repository as follows:

    @misc{boukamcha2024yolov11,
      author       = {Hamdi Boukamcha},
      title        = {Yolo-V11-cpp-TensorRT},
      year         = {2024},
      publisher    = {GitHub},
      howpublished = {\url{https://github.com/hamdiboukamcha/Yolo-V11-cpp-TensorRT/}},
    }