Thanks to [cardboardcode](https://github.com/cardboardcode), we have [the documentation](https://onnx-runtime-cpp.readthedocs.io/en/latest/index.html) for this small library.
Hope both are helpful for your work.

<details>
<summary>Table of Contents</summary>
  <ol>
    <li><a href="#todo">TODO</a></li>
    <li><a href="#installation">Installation</a></li>
    <li>
      <a href="#how-to-build">How to Build</a>
      <ul>
        <li><a href="#how-to-run-with-docker">How to Run with Docker</a></li>
      </ul>
    </li>
    <li><a href="#how-to-test-apps">How to test apps</a></li>
  </ol>
</details>

## TODO

- [x] Support inference of multi-inputs, multi-outputs

---

<details>
<summary>CPU</summary>

```bash
make default

make apps
```

</details>

<details>
<summary>GPU with CUDA</summary>

```bash
make gpu_default

make gpu_apps
```
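For orientation, once a GPU build of onnxruntime is installed, your own code requests CUDA through the session options. A minimal sketch against the public onnxruntime C++ API (the model path and device id are illustrative placeholders, not files from this repo):

```cpp
#include <onnxruntime_cxx_api.h>

int main()
{
    Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "gpu-smoke-test");

    Ort::SessionOptions sessionOptions;
    OrtCUDAProviderOptions cudaOptions{};
    cudaOptions.device_id = 0;  // first visible GPU
    // Ops the CUDA provider cannot run silently fall back to the CPU provider.
    sessionOptions.AppendExecutionProvider_CUDA(cudaOptions);

    // "model.onnx" is a placeholder; any of the models under ./data works here.
    Ort::Session session(env, "model.onnx", sessionOptions);
    return 0;
}
```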

</details>

### How to Run with Docker

<details>
<summary>CPU</summary>

```bash
# build
docker build -f ./dockerfiles/ubuntu2004.dockerfile -t onnx_runtime .

# run
docker run -it --rm -v `pwd`:/workspace onnx_runtime
```

</details>

<details>
<summary>GPU with CUDA</summary>

```bash
# build
docker build -f ./dockerfiles/ubuntu2004_gpu.dockerfile -t onnx_runtime_gpu .

# run
docker run -it --rm --gpus all -v `pwd`:/workspace onnx_runtime_gpu
```

</details>

## How to test apps

---

### Image Classification With Squeezenet

---

<details>
<summary>Usage</summary>

```bash
# after make apps
./build/examples/TestImageClassification ./data/squeezenet1.1.onnx ./data/images/dog.jpg
```

the following result can be obtained

```
230 : Shetland sheepdog, Shetland sheep dog, Shetland : 0.020529
```
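The listing above is simply the top 5 of squeezenet's 1000 class confidences. A self-contained sketch of that ranking step (the `scores` vector stands in for the real network output):

```cpp
#include <algorithm>
#include <cstdio>
#include <numeric>
#include <vector>

// Print the indices and values of the 5 highest scores, best first.
void printTop5(const std::vector<float>& scores)
{
    if (scores.size() < 5) return;  // sketch assumes a full score vector
    std::vector<int> idx(scores.size());
    std::iota(idx.begin(), idx.end(), 0);
    std::partial_sort(idx.begin(), idx.begin() + 5, idx.end(),
                      [&scores](int a, int b) { return scores[a] > scores[b]; });
    for (int i = 0; i < 5; ++i) {
        std::printf("%d : %f\n", idx[i], scores[idx[i]]);
    }
}
```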

</details>

<p align="right">(<a href="#readme-top">back to top</a>)</p>

### Object Detection With Tiny-Yolov2 trained on VOC dataset (with 20 classes)

---

<p align="center" width="100%">
    <img width="30%" src="docs/images/tiny_yolov2_result.jpg">
</p>

<details>
<summary>Usage</summary>

- Download model from onnx model zoo: [HERE](https://github.com/onnx/models/tree/master/vision/object_detection_segmentation/yolov2)

- The shape of the output would be (1 x 125 x 13 x 13), in which 125 = 5 anchors x (4 box offsets + 1 objectness score + 20 class confidences)

- Test tiny-yolov2 inference apps

```bash
./build/examples/tiny_yolo_v2 [path/to/tiny_yolov2/onnx/model] ./data/images/dog.jpg
```
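The raw output tensor is laid out in NCHW order, so reading one anchor's values at one grid cell is plain index arithmetic. A sketch of that addressing, assuming the (1 x 125 x 13 x 13) shape above (illustrative only, not copied from this repo's decoder):

```cpp
#include <cmath>

constexpr int GRID = 13;        // 13x13 output grid
constexpr int NUM_ANCHORS = 5;  // tiny-yolov2 uses 5 anchor boxes
constexpr int NUM_CLASSES = 20;
constexpr int BOX_DIM = NUM_CLASSES + 5;  // 4 offsets + 1 objectness + 20 classes

inline float sigmoid(float x) { return 1.f / (1.f + std::exp(-x)); }

// Return the objectness score of anchor `a` at grid cell (gy, gx).
// `out` points at the raw (1 x 125 x 13 x 13) tensor in NCHW layout:
// channel c at cell (gy, gx) lives at out[(c * GRID + gy) * GRID + gx].
float objectness(const float* out, int a, int gy, int gx)
{
    const int channel = a * BOX_DIM + 4;  // channels 0-3: box offsets, 4: objectness
    return sigmoid(out[(channel * GRID + gy) * GRID + gx]);
}
```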

</details>

<p align="right">(<a href="#readme-top">back to top</a>)</p>

### Object Instance Segmentation With MaskRCNN trained on MS COCO Dataset (80 + 1 (background) classes)

---

<p align="center" width="100%">
    <img width="45%" align="top" src="docs/images/dogs_maskrcnn_result.jpg">
    <img width="45%" align="top" src="docs/images/indoor_maskrcnn_result.jpg">
</p>

<details>
<summary>Usage</summary>

- Download model from onnx model zoo: [HERE](https://github.com/onnx/models/tree/master/vision/object_detection_segmentation/mask-rcnn)

- As stated at the URL above, the model has four outputs: boxes (nboxes x 4), labels (nboxes), scores (nboxes), and masks (nboxes x 1 x 28 x 28)

- Test mask-rcnn inference apps

```bash
./build/examples/mask_rcnn [path/to/mask_rcnn/onnx/model] ./data/images/dogs.jpg
```
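Each 28 x 28 mask only covers its own box, so overlaying it means resizing and thresholding it onto the box region. A hedged OpenCV sketch of that step (shapes follow the model zoo description above; names are illustrative):

```cpp
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>

// maskData points at one 1x28x28 float mask from the `masks` output;
// box is the matching detection rectangle in image coordinates.
cv::Mat maskToImage(const float* maskData, const cv::Rect& box)
{
    // Wrap the raw floats without copying, then stretch to the box size.
    cv::Mat mask28(28, 28, CV_32F, const_cast<float*>(maskData));
    cv::Mat resized;
    cv::resize(mask28, resized, box.size(), 0, 0, cv::INTER_LINEAR);
    // Mask-RCNN mask values are probabilities; 0.5 is the usual cutoff.
    return resized > 0.5;  // 8-bit 0/255 binary mask, ready for overlay
}
```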

</details>

<p align="right">(<a href="#readme-top">back to top</a>)</p>

### Yolo V3 trained on MS COCO Dataset

---

<p align="center" width="100%">
    <img width="50%" src="docs/images/no_way_home_result.jpg">
</p>

<details>
<summary>Usage</summary>

- Download model from onnx model zoo: [HERE](https://github.com/onnx/models/tree/master/vision/object_detection_segmentation/yolov3)

- Test yolo-v3 inference apps

```bash
./build/examples/yolov3 [path/to/yolov3/onnx/model] ./data/images/no_way_home.jpg
```

</details>

<p align="right">(<a href="#readme-top">back to top</a>)</p>

### [Ultra-Light-Fast-Generic-Face-Detector-1MB](https://github.com/Linzaer/Ultra-Light-Fast-Generic-Face-Detector-1MB)

---

<p align="center" width="100%">
    <img width="50%" src="docs/images/endgame_result.jpg">
</p>

<details>
<summary>Usage</summary>

- App that uses an onnx model trained with the well-known lightweight [Ultra-Light-Fast-Generic-Face-Detector-1MB](https://github.com/Linzaer/Ultra-Light-Fast-Generic-Face-Detector-1MB)
- A sample weight file is provided at [./data/version-RFB-640.onnx](./data/version-RFB-640.onnx)
- Test inference apps

```bash
./build/examples/ultra_light_face_detector ./data/version-RFB-640.onnx ./data/images/endgame.jpg
```
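Face boxes from this family of detectors come out normalized to [0, 1], so they must be scaled back to pixel coordinates before drawing. A small sketch of that conversion (the (x1, y1, x2, y2) layout is assumed from the upstream project, not read from this repo's code):

```cpp
#include <opencv2/core.hpp>

// One raw detection: (x1, y1, x2, y2) normalized to [0, 1].
// Returns the rectangle in pixel coordinates for an imgW x imgH image.
cv::Rect toPixelRect(const float* box, int imgW, int imgH)
{
    const int x1 = static_cast<int>(box[0] * imgW);
    const int y1 = static_cast<int>(box[1] * imgH);
    const int x2 = static_cast<int>(box[2] * imgW);
    const int y2 = static_cast<int>(box[3] * imgH);
    return cv::Rect(cv::Point(x1, y1), cv::Point(x2, y2));
}
```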

</details>

<p align="right">(<a href="#readme-top">back to top</a>)</p>

### [YoloX: high-performance anchor-free YOLO by Megvii](https://github.com/Megvii-BaseDetection/YOLOX)

---

<p align="center" width="100%">
    <img width="50%" src="docs/images/matrix_result.jpg">
</p>

<details>
<summary>Usage</summary>

- Download onnx model trained on COCO dataset from [HERE](https://github.com/Megvii-BaseDetection/YOLOX/tree/main/demo/ONNXRuntime)

```bash
wget https://github.com/Megvii-BaseDetection/YOLOX/releases/download/0.1.1rc0/yolox_l.onnx -O ./data/yolox_l.onnx
```

- Test yolox inference apps

```bash
./build/examples/yolox ./data/yolox_l.onnx ./data/images/matrix.jpg
```
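YoloX takes a fixed-size input, and the official demo resizes with the aspect ratio preserved and pads the remainder with the value 114. A hedged OpenCV sketch of that preprocessing (the 640 x 640 target for yolox_l is an assumption here):

```cpp
#include <algorithm>
#include <opencv2/imgproc.hpp>

// Resize `img` to fit inside targetW x targetH, keeping its aspect ratio,
// and pad the rest with 114 as in the official YOLOX demo code.
cv::Mat letterbox(const cv::Mat& img, int targetW = 640, int targetH = 640)
{
    const float r = std::min(targetW / static_cast<float>(img.cols),
                             targetH / static_cast<float>(img.rows));
    const int newW = static_cast<int>(img.cols * r);
    const int newH = static_cast<int>(img.rows * r);

    cv::Mat resized;
    cv::resize(img, resized, cv::Size(newW, newH));

    cv::Mat padded(targetH, targetW, img.type(), cv::Scalar(114, 114, 114));
    resized.copyTo(padded(cv::Rect(0, 0, newW, newH)));
    return padded;  // keep r to map detections back onto the original image
}
```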

</details>

<p align="right">(<a href="#readme-top">back to top</a>)</p>

### [Semantic Segmentation Paddle Seg](https://github.com/PaddlePaddle/PaddleSeg)

---

<p align="center" width="100%">
    <img width="20%" src="docs/images/cityscapes_legend.jpg">
</p>

<p align="center" width="100%">
    <img width="45%" align="top" src="docs/images/sample_city_scapes_result.jpg">
    <img width="45%" align="top" src="docs/images/odaiba_result.jpg">
</p>

<details>
<summary>Usage</summary>

- Download PaddleSeg's bisenetv2 model trained on the cityscapes dataset, already converted to onnx, [HERE](https://drive.google.com/file/d/1e-anuWG_ppDXmoy0sQ0sgrdutCTGlk95/view?usp=sharing) and copy it to the [./data directory](./data)

- Test inference apps

```bash
./build/examples/semantic_segmentation_paddleseg_bisenetv2 ./data/bisenetv2_cityscapes.onnx ./data/images/odaiba.jpg
```
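The segmentation output boils down to one class id per pixel, and the visualizations above map each id to its legend color. A minimal sketch of that colorization, assuming the per-pixel class-id map has already been extracted (the palette is truncated to three cityscapes entries for brevity):

```cpp
#include <cstdint>
#include <opencv2/core.hpp>

// classIds: HxW class indices (CV_32S) from the network's argmax output.
// Returns a BGR image where each pixel gets its class color.
cv::Mat colorize(const cv::Mat& classIds)
{
    // First three cityscapes colors in BGR (road, sidewalk, building);
    // a real implementation would carry all 19 classes.
    static const cv::Vec3b PALETTE[] = {
        {128, 64, 128}, {232, 35, 244}, {70, 70, 70}};
    cv::Mat color(classIds.size(), CV_8UC3);
    for (int y = 0; y < classIds.rows; ++y) {
        for (int x = 0; x < classIds.cols; ++x) {
            const int id = classIds.at<int32_t>(y, x);
            color.at<cv::Vec3b>(y, x) = PALETTE[id % 3];
        }
    }
    return color;
}
```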

</details>

<p align="right">(<a href="#readme-top">back to top</a>)</p>

### [SuperPoint](https://arxiv.org/pdf/1712.07629.pdf)

---

<p align="center" width="100%">
    <img width="80%" src="docs/images/super_point_good_matches.jpg">
</p>

<details>
<summary>Usage</summary>

- Convert SuperPoint's pretrained weights to onnx format

- Test inference apps

```bash
./build/examples/super_point /path/to/super_point.onnx data/VisionCS_0a.png data/VisionCS_0b.png
```
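The "good matches" in the image above are standard descriptor matches filtered with Lowe's ratio test. A sketch of that filtering with OpenCV, assuming the SuperPoint descriptors have been packed one per cv::Mat row (an assumption about data layout, not necessarily how this repo stores them):

```cpp
#include <vector>
#include <opencv2/features2d.hpp>

// desc1/desc2: one CV_32F descriptor per row (e.g. 256-dim SuperPoint vectors).
std::vector<cv::DMatch> goodMatches(const cv::Mat& desc1, const cv::Mat& desc2)
{
    cv::BFMatcher matcher(cv::NORM_L2);
    std::vector<std::vector<cv::DMatch>> knn;
    matcher.knnMatch(desc1, desc2, knn, 2);  // two nearest neighbours per query

    std::vector<cv::DMatch> good;
    for (const auto& pair : knn) {
        // Lowe's ratio test: keep a match only if it is clearly better
        // than the runner-up.
        if (pair.size() == 2 && pair[0].distance < 0.75f * pair[1].distance) {
            good.push_back(pair[0]);
        }
    }
    return good;
}
```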

</details>

<p align="right">(<a href="#readme-top">back to top</a>)</p>