## Features

ExecuTorch v1.0 supports running machine learning models on selected NXP chips (for now only i.MXRT700).

Among the currently supported machine learning models are:
- Convolution-based neural networks
  - Full support for MobileNetV2 and CifarNet

## Prerequisites (Hardware and Software)

In order to successfully build the ExecuTorch project and convert models for the NXP eIQ Neutron Backend, you will need a computer running Linux.

If you want to test the runtime, you'll also need:
- Hardware with NXP's [i.MXRT700](https://www.nxp.com/products/i.MX-RT700) chip or a testing board like MIMXRT700-EVK

To test converting a neural network model for inference on the NXP eIQ Neutron Backend, run:

```shell
./examples/nxp/aot_neutron_compile.sh [model (cifar10 or mobilenetv2)]
```

For a quick overview of how to convert a custom PyTorch model, take a look at our [example python script](https://github.com/pytorch/executorch/tree/release/1.0/examples/nxp/aot_neutron_compile.py).

### Partitioner API

The partitioner is defined in `NeutronPartitioner` in `backends/nxp/neutron_partitioner.py`. It has the following arguments:
* `compile_spec` - a list of key-value pairs defining the compilation, e.g. for specifying the target platform (i.MXRT700) and the Neutron Converter flavor.
* `custom_delegation_options` - custom options for specifying node delegation.
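
To make the key-value idea concrete, here is a minimal illustrative sketch. It is not the actual ExecuTorch `CompileSpec` API; the keys `"target"` and `"neutron_converter_flavor"` and their values are assumptions made up for illustration:

```python
# Illustrative only: the real NeutronPartitioner consumes ExecuTorch
# CompileSpec objects; the keys and values below are hypothetical.
compile_spec = [
    ("target", b"imxrt700"),                     # hypothetical platform key
    ("neutron_converter_flavor", b"SDK_25_06"),  # hypothetical flavor value
]

def get_spec(spec, key):
    """Return the value stored under `key` in a key-value compile spec."""
    for k, v in spec:
        if k == key:
            return v
    raise KeyError(key)

print(get_spec(compile_spec, "target"))  # b'imxrt700'
```

The partitioner reads such pairs to decide how to compile the delegated subgraphs for the selected platform.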

### Quantization

The quantization for the Neutron Backend is defined in `NeutronQuantizer` in `backends/nxp/quantizer/neutron_quantizer.py`.
The quantization follows the PT2E workflow, and INT8 quantization is supported. Operators are quantized statically; activations follow an affine quantization scheme, while weights use a symmetric per-tensor scheme.
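
To illustrate the difference between the two schemes, here is a minimal, self-contained sketch of affine versus symmetric per-tensor INT8 quantization. This is illustrative only, not the `NeutronQuantizer` implementation:

```python
# Illustrative sketch of the two INT8 per-tensor schemes described above;
# NOT the NeutronQuantizer implementation.

def affine_qparams(lo, hi):
    """Affine (asymmetric) scheme, used for activations: a scale plus a
    zero-point map the observed float range [lo, hi] onto [-128, 127]."""
    scale = (hi - lo) / 255.0
    zero_point = round(-128 - lo / scale)
    return scale, zero_point

def symmetric_qparams(lo, hi):
    """Symmetric scheme, used for weights: the zero-point is fixed at 0
    and the scale covers the larger absolute bound of the range."""
    scale = max(abs(lo), abs(hi)) / 127.0
    return scale, 0

def quantize(x, scale, zero_point):
    """Statically quantize one value to INT8 with the given parameters."""
    q = round(x / scale) + zero_point
    return max(-128, min(127, q))  # clamp to the INT8 range

s, zp = affine_qparams(0.0, 6.0)       # e.g. a ReLU6 activation range
print(quantize(0.0, s, zp))            # -> -128 (range minimum)
s, zp = symmetric_qparams(-0.5, 0.25)  # e.g. a weight tensor's range
print(quantize(0.0, s, zp))            # -> 0 (zero maps exactly to zero)
```

The symmetric scheme keeps zero exactly representable without a zero-point, which simplifies integer convolution kernels, while the affine scheme uses the full INT8 range for one-sided activation distributions.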

#### Supported operators

List of ATen operators supported by the Neutron quantizer:

To learn how to run the converted model on the NXP hardware, use one of our example projects on using the ExecuTorch runtime from the MCUXpresso IDE example projects list.
For a more fine-grained tutorial, visit [this manual page](https://mcuxpresso.nxp.com/mcuxsdk/latest/html/middleware/eiq/executorch/docs/nxp/topics/example_applications.html).