The pretrained and converted ONNX files provided by lite.ai.toolkit are listed as follows. Also, see [Model Zoo](#lite.ai.toolkit-Model-Zoo) and the [ONNX Hub](https://github.com/DefTruth/lite.ai.toolkit/tree/main/docs/hub/lite.ai.toolkit.hub.onnx.md), [MNN Hub](https://github.com/DefTruth/lite.ai.toolkit/tree/main/docs/hub/lite.ai.toolkit.hub.mnn.md), [TNN Hub](https://github.com/DefTruth/lite.ai.toolkit/tree/main/docs/hub/lite.ai.toolkit.hub.tnn.md) and [NCNN Hub](https://github.com/DefTruth/lite.ai.toolkit/tree/main/docs/hub/lite.ai.toolkit.hub.ncnn.md) for more details.
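For example, once a model file from one of these hubs is downloaded locally, it is simply passed to the matching class by path. A minimal sketch, assuming the toolkit's `lite::cv::detection::YoloV5` interface shown elsewhere in this README; the local file path is a placeholder:

```C++
#include "lite/lite.h"

int main() {
  // Placeholder path to an ONNX file downloaded from the ONNX Hub.
  std::string onnx_path = "../hub/onnx/cv/yolov5s.onnx";
  auto *yolov5 = new lite::cv::detection::YoloV5(onnx_path);
  // ... run detection, see the Examples section below ...
  delete yolov5;
  return 0;
}
```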
</details>
## Examples
```C++
auto *segment = new lite::cv::segmentation::FaceParsingBiSeNet(onnx_path); // 50
auto *segment = new lite::cv::segmentation::FaceParsingBiSeNetDyn(onnx_path); // Dynamic Shape Inference.
```
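A fuller usage sketch for the snippet above. The output content type and its fields are assumptions modeled on the toolkit's other segmentation demos (see `examples/lite/cv`), and the file paths are placeholders, so check them against the actual demo code:

```C++
#include "lite/lite.h"

int main() {
  std::string onnx_path = "../hub/onnx/cv/face_parsing_512x512.onnx"; // placeholder path
  auto *segment = new lite::cv::segmentation::FaceParsingBiSeNet(onnx_path);

  lite::types::FaceParsingContent content; // assumed output type, per other segmentation demos
  cv::Mat img_bgr = cv::imread("../resources/test_face.jpg"); // placeholder path
  segment->detect(img_bgr, content);

  if (content.flag) // assumed success flag on the output content
    cv::imwrite("../logs/test_face_parsing.jpg", content.merge); // assumed merged visualization

  delete segment;
  return 0;
}
```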
## License
<div id="lite.ai.toolkit-License"></div>
The code of [Lite.Ai.ToolKit](#lite.ai.toolkit-Introduction) is released under the GPL-3.0 License.
## References
<div id="lite.ai.toolkit-References"></div>
Many thanks to the following projects. All of Lite.AI.ToolKit's models are sourced from these repos.
In addition, [MNN](https://github.com/alibaba/MNN), [NCNN](https://github.com/Tencent/ncnn) and [TNN](https://github.com/Tencent/TNN) support for some models will be added in the future. However, due to operator compatibility and other constraints, not every model supported by [ONNXRuntime C++](https://github.com/microsoft/onnxruntime) is guaranteed to run through [MNN](https://github.com/alibaba/MNN), [NCNN](https://github.com/Tencent/ncnn) and [TNN](https://github.com/Tencent/TNN). So, if you want to use all the models supported by this repo and don't mind a performance gap of *1~2ms*, just keep [ONNXRuntime](https://github.com/microsoft/onnxruntime) as the default inference engine. Otherwise, follow the steps below to build with [MNN](https://github.com/alibaba/MNN), [NCNN](https://github.com/Tencent/ncnn) or [TNN](https://github.com/Tencent/TNN) support.
* change `build.sh` to set `-DENABLE_MNN=ON`, `-DENABLE_NCNN=ON` or `-DENABLE_TNN=ON`, such as
```shell
# Note: inline comments after a trailing "\" would break the line continuation,
# so the options are documented here instead:
#   -DINCLUDE_OPENCV  whether to package OpenCV into lite.ai.toolkit (default ON);
#                     otherwise, you need to set up OpenCV yourself.
#   -DENABLE_MNN      whether to build with MNN (default OFF); only some models are supported now.
#   -DENABLE_NCNN     whether to build with NCNN (default OFF); only some models are supported now.
#   -DENABLE_TNN      whether to build with TNN (default OFF); only some models are supported now.
cd build && cmake \
  -DCMAKE_BUILD_TYPE=MinSizeRel \
  -DINCLUDE_OPENCV=ON \
  -DENABLE_MNN=ON \
  -DENABLE_NCNN=OFF \
  -DENABLE_TNN=OFF \
  .. && make -j8
```
* use the MNN, NCNN or TNN version of the interface; see the [demo](https://github.com/DefTruth/lite.ai.toolkit/blob/main/examples/lite/cv/test_lite_nanodet.cpp), such as
```C++
auto *nanodet = new lite::mnn::cv::detection::NanoDet(mnn_path);
auto *nanodet = new lite::tnn::cv::detection::NanoDet(proto_path, model_path);
auto *nanodet = new lite::ncnn::cv::detection::NanoDet(param_path, bin_path);
```
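End to end, these engine-specific classes are used the same way as the default ONNXRuntime ones. A minimal sketch around the MNN variant, assuming the detection API shown elsewhere in this README (`detect` fills a `std::vector<lite::types::Boxf>`, and `lite::utils::draw_boxes_inplace` draws them); the model and image paths are placeholders:

```C++
#include "lite/lite.h"

int main() {
  std::string mnn_path = "../hub/mnn/cv/nanodet_m.mnn";   // placeholder model path
  std::string img_path = "../resources/test_nanodet.jpg"; // placeholder image path
  auto *nanodet = new lite::mnn::cv::detection::NanoDet(mnn_path);

  std::vector<lite::types::Boxf> detected_boxes;
  cv::Mat img_bgr = cv::imread(img_path);
  nanodet->detect(img_bgr, detected_boxes); // same call shape as the ONNXRuntime version

  lite::utils::draw_boxes_inplace(img_bgr, detected_boxes);
  cv::imwrite("../logs/test_nanodet.jpg", img_bgr);

  delete nanodet;
  return 0;
}
```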
## Contribute
<div id="lite.ai.toolkit-Contribute"></div>
How to add your own models and become a contributor? See [CONTRIBUTING.zh.md](https://github.com/DefTruth/lite.ai.toolkit/issues/191).