- Unlike conventional mobile inference engines, which rely on Python script tools to convert models, Lite's architecture includes a complete C++-based IR and a matching collection of Passes that support many kinds of computation-graph optimization, such as operator fusion, computation pruning, memory optimization, and quantized computation. Additional optimization strategies can be supported in a modular way simply by [adding a new Pass](https://paddle-lite.readthedocs.io/zh/latest/develop_guides/add_new_pass.html); a conceptual sketch of a Pass follows this list.
- Multi-platform support: covers Android, iOS, embedded Linux devices, Windows, macOS, and Linux hosts.
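
As a conceptual illustration of the Pass mechanism described above, here is a minimal C++ sketch of a graph-optimization Pass that detects and fuses a conv2d + batch_norm pattern. All names in it (`GraphPass`, `Graph`, `Node`, `FuseConvBnPass`) are hypothetical and do not reflect Paddle Lite's real IR interfaces; see the "adding a new Pass" guide linked above for the actual API.

```cpp
// Hypothetical sketch of a modular graph-optimization Pass.
// None of these types are Paddle Lite's real interfaces.
#include <memory>
#include <string>
#include <vector>

struct Node {
  std::string op_type;         // e.g. "conv2d", "batch_norm"
  std::vector<Node*> outputs;  // downstream nodes
};

struct Graph {
  std::vector<std::unique_ptr<Node>> nodes;
};

// Every optimization is expressed as a Pass over the IR,
// so new strategies plug in without touching the core engine.
class GraphPass {
 public:
  virtual ~GraphPass() = default;
  virtual void Apply(Graph* graph) = 0;
};

// Example Pass: fold a batch_norm into the preceding conv2d.
class FuseConvBnPass : public GraphPass {
 public:
  void Apply(Graph* graph) override {
    for (auto& node : graph->nodes) {
      if (node->op_type == "conv2d" && node->outputs.size() == 1 &&
          node->outputs[0]->op_type == "batch_norm") {
        // A real fusion would also fold the bn parameters into the
        // conv weights and relink edges; we only retag the op here.
        node->op_type = "conv2d_bn_fused";
      }
    }
  }
};
```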
For tutorials, please see [PaddleLite Document](https://paddle-lite.readthedocs.io).
## Key Features
- Multiple platform support, covering Android and iOS devices, embedded Linux, Windows, macOS, and Linux computers.
- Diverse language support, including Java, C++, and Python (a minimal C++ usage sketch follows this list).
- High performance and light weight: optimized for on-device machine learning, with reduced model and binary size, efficient inference, and low memory usage.
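
To give a feel for the C++ API, the sketch below loads an optimized model and runs inference. It follows the general shape of the `paddle::lite_api` C++ demos (`MobileConfig`, `CreatePaddlePredictor`), but the model path and input shape are assumptions, so treat it as a sketch rather than drop-in code.

```cpp
// Minimal Paddle Lite C++ inference sketch. The model file and the
// 1x3x224x224 input shape are assumptions for illustration.
#include <iostream>
#include <memory>
#include "paddle_api.h"  // shipped with the Paddle Lite C++ library

using namespace paddle::lite_api;  // NOLINT

int main() {
  // 1. Point the config at a model optimized by the opt tool.
  MobileConfig config;
  config.set_model_from_file("mobilenet_v1.nb");

  // 2. Create the predictor.
  std::shared_ptr<PaddlePredictor> predictor =
      CreatePaddlePredictor<MobileConfig>(config);

  // 3. Fill the input tensor (dummy data here).
  std::unique_ptr<Tensor> input = predictor->GetInput(0);
  input->Resize({1, 3, 224, 224});
  float* in_data = input->mutable_data<float>();
  for (int i = 0; i < 1 * 3 * 224 * 224; ++i) in_data[i] = 1.0f;

  // 4. Run inference and read the first output value.
  predictor->Run();
  std::unique_ptr<const Tensor> output = predictor->GetOutput(0);
  std::cout << "first score: " << output->data<float>()[0] << std::endl;
  return 0;
}
```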
### Light Weight
On mobile devices, the execution module can be deployed without any third-party libraries, because the execution module and the analysis module are decoupled.

With the 80 operators and 85 kernels in the dynamic libraries provided by Paddle Lite, the binary takes up only 800 KB on ARMv7 and 1.3 MB on ARMv8.

Paddle Lite enables immediate inference without extra optimization.
### High Performance

Paddle Lite provides device-optimized kernels, maximizing ARM CPU performance.

It also supports INT8 quantization with the [PaddleSlim model compression tools](https://github.com/PaddlePaddle/models/tree/v1.5/PaddleSlim), reducing model size while improving performance; a small numeric sketch of the idea follows.
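
To make the INT8 idea concrete, the following sketch shows symmetric per-tensor quantization: a scale maps the largest absolute value onto the int8 range, values are rounded to int8, and the same scale dequantizes them back. This illustrates only the basic arithmetic; PaddleSlim's actual calibration and quantization strategies are more sophisticated.

```cpp
// Symmetric per-tensor INT8 quantization, for illustration only.
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <cstdio>
#include <vector>

int main() {
  std::vector<float> weights = {0.50f, -1.20f, 0.03f, 0.98f};

  // Choose a scale that maps the largest magnitude onto [-127, 127].
  float max_abs = 0.0f;
  for (float w : weights) max_abs = std::max(max_abs, std::fabs(w));
  const float scale = max_abs / 127.0f;

  for (float w : weights) {
    const int8_t q = static_cast<int8_t>(std::lround(w / scale));  // quantize
    const float back = q * scale;                                  // dequantize
    std::printf("%+.4f -> %4d -> %+.4f\n", w, q, back);
  }
  return 0;
}
```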
On Huawei NPU and FPGA, performance is also boosted.

The latest benchmark results are available at [benchmark](https://paddlepaddle.github.io/Paddle-Lite/develop/benchmark/).
### High Compatibility

Hardware compatibility: Paddle Lite supports a wide range of hardware: ARM CPU, Mali GPU, Adreno GPU, NVIDIA GPU, Apple GPU, Huawei NPU, and FPGA. In the near future, we will also support AI chips from Cambricon and Bitmain.

Model compatibility: The ops of Paddle Lite are fully compatible with those of PaddlePaddle. The accuracy and performance of 18 models (mostly CV and OCR models) and 85 operators have been validated, and more models will be supported in the future.
Framework compatibility: In addition to models trained with PaddlePaddle, models trained with Caffe and TensorFlow can also be converted for use with Paddle Lite via [X2Paddle](https://github.com/PaddlePaddle/X2Paddle), and models in ONNX format will be supported in the future.
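
For illustration, a conversion typically looks like the commands below. The flags follow X2Paddle's documented command-line usage at the time of writing, and the model file names are placeholders; consult the X2Paddle README for the exact, current interface.

```shell
# Convert a TensorFlow frozen graph (file names are placeholders).
x2paddle --framework=tensorflow --model=tf_model.pb --save_dir=pd_model

# Convert a Caffe model (network definition plus weights).
x2paddle --framework=caffe --prototxt=deploy.prototxt \
         --weight=deploy.caffemodel --save_dir=pd_model
```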