Commit 2b715d9

Bump to 0.2.0+ort1.17.1+ocv4.9.0
1 parent 01b7de9 commit 2b715d9


README.md

Lines changed: 9 additions & 74 deletions
@@ -111,7 +111,7 @@ add_executable(lite_yolov5 examples/test_lite_yolov5.cpp)
 target_link_libraries(lite_yolov5 ${lite.ai.toolkit_LIBS})
 ```
 <details>
-<summary> 🔑️ Supported Models Matrix </summary>
+<summary> 🔑️ Supported Models Matrix!Click here! </summary>
 
 ## Supported Models Matrix
 <div id="lite.ai.toolkit-Supported-Models-Matrix"></div>
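The `lite_yolov5` target linked in this hunk builds an example program along the following lines. This is only a rough sketch with placeholder model and image paths; the canonical source is the `examples/test_lite_yolov5.cpp` referenced in the hunk header.

```C++
#include "lite/lite.h"

int main()
{
  // Placeholder paths -- substitute a real YOLOv5 ONNX model and test image.
  std::string onnx_path = "yolov5s.onnx";
  std::string test_img_path = "test.jpg";
  std::string save_img_path = "test_result.jpg";

  // Default interface uses the ONNXRuntime backend.
  auto *yolov5 = new lite::cv::detection::YoloV5(onnx_path);

  std::vector<lite::types::Boxf> detected_boxes;
  cv::Mat img_bgr = cv::imread(test_img_path);
  yolov5->detect(img_bgr, detected_boxes);                   // run detection
  lite::utils::draw_boxes_inplace(img_bgr, detected_boxes);  // draw results
  cv::imwrite(save_img_path, img_bgr);

  delete yolov5;
  return 0;
}
```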
@@ -228,6 +228,9 @@ target_link_libraries(lite_yolov5 ${lite.ai.toolkit_LIBS})
 </details>
 
 
+<details>
+<summary> 🔑️ Model Zoo!Click here! </summary>
+
 ## Model Zoo.
 
 <div id="lite.ai.toolkit-Model-Zoo"></div>
@@ -253,8 +256,7 @@ target_link_libraries(lite_yolov5 ${lite.ai.toolkit_LIBS})
 docker pull qyjdefdocker/lite.ai.toolkit-tnn-hub:v0.1.22.02.02 # (217M) + YOLO5Face
 ```
 
-<details>
-<summary> 🔑️ How to download Model Zoo from Docker Hub?</summary>
+### 🔑️ How to download Model Zoo from Docker Hub?
 
 * Firstly, pull the image from docker hub.
 ```shell
@@ -290,11 +292,12 @@ target_link_libraries(lite_yolov5 ${lite.ai.toolkit_LIBS})
 cp -rf mnn/cv share/
 ```
 
-</details>
 
 ### Model Hubs
 The pretrained and converted ONNX files provide by lite.ai.toolkit are listed as follows. Also, see [Model Zoo](#lite.ai.toolkit-Model-Zoo) and [ONNX Hub](https://github.com/DefTruth/lite.ai.toolkit/tree/main/docs/hub/lite.ai.toolkit.hub.onnx.md), [MNN Hub](https://github.com/DefTruth/lite.ai.toolkit/tree/main/docs/hub/lite.ai.toolkit.hub.mnn.md), [TNN Hub](https://github.com/DefTruth/lite.ai.toolkit/tree/main/docs/hub/lite.ai.toolkit.hub.tnn.md), [NCNN Hub](https://github.com/DefTruth/lite.ai.toolkit/tree/main/docs/hub/lite.ai.toolkit.hub.ncnn.md) for more details.
 
+</details>
+
 
 ## Examples.
 
@@ -975,81 +978,13 @@ auto *segment = new lite::cv::segmentation::FaceParsingBiSeNet(onnx_path); // 50
 auto *segment = new lite::cv::segmentation::FaceParsingBiSeNetDyn(onnx_path); // Dynamic Shape Inference.
 ```
 
-## License.
+## License
 
 <div id="lite.ai.toolkit-License"></div>
 
 The code of [Lite.Ai.ToolKit](#lite.ai.toolkit-Introduction) is released under the GPL-3.0 License.
 
-
-## References.
-
-<div id="lite.ai.toolkit-References"></div>
-
-Many thanks to these following projects. All the Lite.AI.ToolKit's models are sourced from these repos.
-
-* [RobustVideoMatting](https://github.com/PeterL1n/RobustVideoMatting) (🔥🔥🔥new!!↑)
-* [nanodet](https://github.com/RangiLyu/nanodet) (🔥🔥🔥↑)
-* [YOLOX](https://github.com/Megvii-BaseDetection/YOLOX) (🔥🔥🔥new!!↑)
-* [YOLOP](https://github.com/hustvl/YOLOP) (🔥🔥new!!↑)
-* [YOLOR](https://github.com/WongKinYiu/yolor) (🔥🔥new!!↑)
-* [ScaledYOLOv4](https://github.com/WongKinYiu/ScaledYOLOv4) (🔥🔥🔥↑)
-* [insightface](https://github.com/deepinsight/insightface) (🔥🔥🔥↑)
-* [yolov5](https://github.com/ultralytics/yolov5) (🔥🔥💥↑)
-* [TFace](https://github.com/Tencent/TFace) (🔥🔥↑)
-* [YOLOv4-pytorch](https://github.com/argusswift/YOLOv4-pytorch) (🔥🔥🔥↑)
-* [Ultra-Light-Fast-Generic-Face-Detector-1MB](https://github.com/Linzaer/Ultra-Light-Fast-Generic-Face-Detector-1MB) (🔥🔥🔥↑)
-
-<details>
-<summary> Expand for More References.</summary>
-
-* [headpose-fsanet-pytorch](https://github.com/omasaht/headpose-fsanet-pytorch) (🔥↑)
-* [pfld_106_face_landmarks](https://github.com/Hsintao/pfld_106_face_landmarks) (🔥🔥↑)
-* [onnx-models](https://github.com/onnx/models) (🔥🔥🔥↑)
-* [SSR_Net_Pytorch](https://github.com/oukohou/SSR_Net_Pytorch) (🔥↑)
-* [colorization](https://github.com/richzhang/colorization) (🔥🔥🔥↑)
-* [SUB_PIXEL_CNN](https://github.com/niazwazir/SUB_PIXEL_CNN) (🔥↑)
-* [torchvision](https://github.com/pytorch/vision) (🔥🔥🔥↑)
-* [facenet-pytorch](https://github.com/timesler/facenet-pytorch) (🔥↑)
-* [face.evoLVe.PyTorch](https://github.com/ZhaoJ9014/face.evoLVe.PyTorch) (🔥🔥🔥↑)
-* [center-loss.pytorch](https://github.com/louis-she/center-loss.pytorch) (🔥🔥↑)
-* [sphereface_pytorch](https://github.com/clcarwin/sphereface_pytorch) (🔥🔥↑)
-* [DREAM](https://github.com/penincillin/DREAM) (🔥🔥↑)
-* [MobileFaceNet_Pytorch](https://github.com/Xiaoccer/MobileFaceNet_Pytorch) (🔥🔥↑)
-* [cavaface.pytorch](https://github.com/cavalleria/cavaface.pytorch) (🔥🔥↑)
-* [CurricularFace](https://github.com/HuangYG123/CurricularFace) (🔥🔥↑)
-* [face-emotion-recognition](https://github.com/HSE-asavchenko/face-emotion-recognition) (🔥↑)
-* [face_recognition.pytorch](https://github.com/grib0ed0v/face_recognition.pytorch) (🔥🔥↑)
-* [PFLD-pytorch](https://github.com/polarisZhao/PFLD-pytorch) (🔥🔥↑)
-* [pytorch_face_landmark](https://github.com/cunjian/pytorch_face_landmark) (🔥🔥↑)
-* [FaceLandmark1000](https://github.com/Single430/FaceLandmark1000) (🔥🔥↑)
-* [Pytorch_Retinaface](https://github.com/biubug6/Pytorch_Retinaface) (🔥🔥🔥↑)
-* [FaceBoxes](https://github.com/zisianw/FaceBoxes.PyTorch) (🔥🔥↑)
-
-</details>
-
-
-## Compilation Options.
-
-In addition, [MNN](https://github.com/alibaba/MNN), [NCNN](https://github.com/Tencent/ncnn) and [TNN](https://github.com/Tencent/TNN) support for some models will be added in the future, but due to operator compatibility and some other reasons, it is impossible to ensure that all models supported by [ONNXRuntime C++](https://github.com/microsoft/onnxruntime) can run through [MNN](https://github.com/alibaba/MNN), [NCNN](https://github.com/Tencent/ncnn) and [TNN](https://github.com/Tencent/TNN). So, if you want to use all the models supported by this repo and don't care about the performance gap of *1~2ms*, just let [ONNXRuntime](https://github.com/microsoft/onnxruntime) as default inference engine for this repo. However, you can follow the steps below if you want to build with [MNN](https://github.com/alibaba/MNN), [NCNN](https://github.com/Tencent/ncnn) or [TNN](https://github.com/Tencent/TNN) support.
-
-* change the `build.sh` with `DENABLE_MNN=ON`,`DENABLE_NCNN=ON` or `DENABLE_TNN=ON`, such as
-```shell
-cd build && cmake \
-  -DCMAKE_BUILD_TYPE=MinSizeRel \
-  -DINCLUDE_OPENCV=ON \ # Whether to package OpenCV into lite.ai.toolkit, default ON; otherwise, you need to setup OpenCV yourself.
-  -DENABLE_MNN=ON \ # Whether to build with MNN, default OFF, only some models are supported now.
-  -DENABLE_NCNN=OFF \ # Whether to build with NCNN, default OFF, only some models are supported now.
-  -DENABLE_TNN=OFF \ # Whether to build with TNN, default OFF, only some models are supported now.
-  .. && make -j8
-```
-* use the MNN, NCNN or TNN version interface, see [demo](https://github.com/DefTruth/lite.ai.toolkit/blob/main/examples/lite/cv/test_lite_nanodet.cpp), such as
-```C++
-auto *nanodet = new lite::mnn::cv::detection::NanoDet(mnn_path);
-auto *nanodet = new lite::tnn::cv::detection::NanoDet(proto_path, model_path);
-auto *nanodet = new lite::ncnn::cv::detection::NanoDet(param_path, bin_path);
-```
-## 10. Contribute
+## Contribute
 <div id="lite.ai.toolkit-Contribute"></div>
 
 How to add your own models and become a contributor? See [CONTRIBUTING.zh.md](https://github.com/DefTruth/lite.ai.toolkit/issues/191).
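The removed Compilation Options block lists only the alternative constructors; in practice the MNN/TNN/NCNN interfaces follow the same call pattern as the default ONNXRuntime interface. A rough sketch, with hypothetical model paths and assuming the usual `detect(img, boxes)` signature shown in the README's other detector examples:

```C++
#include "lite/lite.h"

int main()
{
  // Hypothetical model paths for illustration only.
  std::string onnx_path = "nanodet_m.onnx";  // default ONNXRuntime backend
  // std::string mnn_path = "nanodet_m.mnn"; // requires a build with -DENABLE_MNN=ON

  auto *nanodet = new lite::cv::detection::NanoDet(onnx_path);
  // With MNN support enabled, only the namespace and model path change:
  // auto *nanodet = new lite::mnn::cv::detection::NanoDet(mnn_path);

  std::vector<lite::types::Boxf> detected_boxes;
  cv::Mat img_bgr = cv::imread("test.jpg");
  nanodet->detect(img_bgr, detected_boxes);  // assumed: same detect(img, boxes) pattern as other detectors

  delete nanodet;
  return 0;
}
```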
