[简体中文](README.md) | English

# Pose Estimation Inference Example

This example uses the yolo11n-pose model to demonstrate how to perform pose estimation inference using the Command Line Interface (CLI), Python, and C++.

[yolo11n-pose.pt](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11n-pose.pt), [【TestImages】COCO-Pose-part.zip](https://www.ilanzou.com/s/kBby4w1D)

Please download the `yolo11n-pose.pt` model file and the test images from the links above, save the model file to the `models` folder, and extract the test images into the `images` folder.

## Model Export

> [!IMPORTANT]
>
> If you only want to export an ONNX model (with TensorRT plugins) that can be used for inference in this project via the `trtyolo` Command Line Interface (CLI) tool provided by `tensorrt_yolo`, you can install it from [PyPI](https://pypi.org/project/tensorrt-yolo) by simply running the following command:
>
> ```bash
> pip install -U tensorrt_yolo
> ```
>
> If you want to experience the same inference speed as C++, please refer to [Install-tensorrt_yolo](../../docs/en/build_and_install.md#install-tensorrt_yolo) to build the latest version of `tensorrt_yolo` yourself.

Use the following command to export the model to ONNX format with the [EfficientIdxNMS](../../plugin/efficientIdxNMSPlugin/) plugin. For detailed `trtyolo` CLI export options, please read [Model Export](../../docs/en/model_export.md):

```bash
trtyolo export -w models/yolo11n-pose.pt -v yolo11 -o models -s
```

After running the above command, a `yolo11n-pose.onnx` file with a `batch_size` of 1 will be generated in the `models` folder. Next, use the `trtexec` tool to convert the ONNX file to a TensorRT engine (fp16):

```bash
trtexec --onnx=models/yolo11n-pose.onnx --saveEngine=models/yolo11n-pose.engine --fp16 --staticPlugins=/path/to/your/TensorRT-YOLO/lib/plugin/libcustom_plugins.so --setPluginsToSerialize=/path/to/your/TensorRT-YOLO/lib/plugin/libcustom_plugins.so
```

## Model Inference

> [!IMPORTANT]
>
> The `tensorrt_yolo` package installed from [PyPI](https://pypi.org/project/tensorrt-yolo) only provides the ability to export ONNX models (with TensorRT plugins) for this project; it does not provide inference capabilities.
> If you want to experience the same inference speed as C++, please refer to [Install-tensorrt_yolo](../../docs/en/build_and_install.md#install-tensorrt_yolo) to build the latest version of `tensorrt_yolo` yourself.

### Inference Using CLI

> [!NOTE]
> The `--cudaGraph` option, added in version 4.0, can further accelerate the inference process, but this feature only supports static models.
>
> Support for pose estimation inference was added in version 4.3; use `-m 3, --mode 3` to select pose estimation.

1. Use the `trtyolo` command-line tool from the `tensorrt_yolo` library for inference. Run the following command to view help information:

   ```bash
   trtyolo infer --help
   ```

2. Run the following command for inference:

   ```bash
   trtyolo infer -e models/yolo11n-pose.engine -m 3 -i images -o output -l labels.txt --cudaGraph
   ```

   The inference results will be saved in the `output` folder, and a visualization result will be generated.

### Inference Using Python

1. Use the `tensorrt_yolo` library to run the example script `pose.py` for inference (a minimal sketch of such a script is shown after this list).
2. Run the following command for inference:

   ```bash
   python pose.py -e models/yolo11n-pose.engine -i images -o output -l labels.txt --cudaGraph
   ```

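For orientation, the following is a minimal sketch of what a script like `pose.py` might look like. It is not the shipped example: the `tensorrt_yolo` names used here (`DeployPose`, `generate_labels_with_colors`, `visualize`) and the image paths are assumptions for illustration only; consult the bundled `pose.py` and the project documentation for the actual API and command-line handling.

```python
# Hypothetical sketch only -- the tensorrt_yolo API names below (DeployPose,
# generate_labels_with_colors, visualize) and the file paths are assumptions;
# see the bundled pose.py for the real interface.
import cv2
from tensorrt_yolo.infer import DeployPose, generate_labels_with_colors, visualize

# Load label names/colors and the serialized TensorRT engine built above.
labels = generate_labels_with_colors("labels.txt")
model = DeployPose("models/yolo11n-pose.engine")

# Run pose estimation on a single test image and save a visualization.
image = cv2.imread("images/your_test_image.jpg")  # placeholder file name
result = model.predict(image)
vis = visualize(image, result, labels)
cv2.imwrite("output/your_test_image.jpg", vis)
```
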
### Inference Using C++

1. Ensure that the project has been compiled as described in [`TensorRT-YOLO` Compilation](../../docs/en/build_and_install.md#tensorrt-yolo-compile).
2. Compile `pose.cpp` into an executable:

   ```bash
   # Compile using xmake
   xmake f -P . --tensorrt="/path/to/your/TensorRT" --deploy="/path/to/your/TensorRT-YOLO"
   xmake -P . -r

   # Compile using cmake
   mkdir -p build && cd build
   cmake -DTENSORRT_PATH="/path/to/your/TensorRT" -DDEPLOY_PATH="/path/to/your/TensorRT-YOLO" ..
   cmake --build . -j8 --config Release
   ```

   After compilation, the executable will be generated in the `bin` folder of the project root directory.

3. Run the following command for inference:

   ```bash
   cd bin
   ./pose -e ../models/yolo11n-pose.engine -i ../images -o ../output -l ../labels.txt --cudaGraph
   ```

With any of the methods above, you can successfully complete model inference.