# NXP eIQ Neutron Backend

This manual page introduces how to use ExecuTorch with the NXP eIQ Neutron Backend.
NXP offers accelerated inference of machine learning models on edge devices.
To learn more about NXP's machine learning acceleration platform, please refer to [the official NXP website](https://www.nxp.com/applications/technologies/ai-and-machine-learning:MACHINE-LEARNING).

<div class="admonition tip">
For up-to-date status about running ExecuTorch on the Neutron Backend, please visit the <a href="https://github.com/pytorch/executorch/blob/main/backends/nxp/README.md">NXP eIQ Neutron Backend README</a>.
</div>

## Features

ExecuTorch v1.0 supports running machine learning models on selected NXP chips (currently only the i.MXRT700).
Currently supported machine learning models include:
- Convolution-based neural networks
- Full support for MobileNetV2 and CifarNet

## Prerequisites (Hardware and Software)

To successfully build the ExecuTorch project and convert models for the NXP eIQ Neutron Backend, you will need a computer running Windows or Linux.

If you want to test the runtime, you'll also need:
- Hardware with NXP's [i.MXRT700](https://www.nxp.com/products/i.MX-RT700) chip, or an evaluation board such as the MIMXRT700-EVK
- The [MCUXpresso IDE](https://www.nxp.com/design/design-center/software/development-software/mcuxpresso-software-and-tools-/mcuxpresso-integrated-development-environment-ide:MCUXpresso-IDE) or the [MCUXpresso Visual Studio Code extension](https://www.nxp.com/design/design-center/software/development-software/mcuxpresso-software-and-tools-/mcuxpresso-for-visual-studio-code:MCUXPRESSO-VSC)

## Using the NXP Backend

To test converting a neural network model for inference on the NXP eIQ Neutron Backend, you can use our example script:

```shell
# cd to the root of the executorch repository
./examples/nxp/aot_neutron_compile.sh [model (cifar10 or mobilenetv2)]
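# For example (assumed invocation), to compile the cifar10 example model:
./examples/nxp/aot_neutron_compile.sh cifar10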
```

For a quick overview of how to convert a custom PyTorch model, take a look at our [example Python script](https://github.com/pytorch/executorch/tree/release/1.0/examples/nxp/aot_neutron_compile.py).

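Under the hood, the script follows the standard ExecuTorch ahead-of-time flow: capture the model with `torch.export`, lower the supported operators to the Neutron delegate through a partitioner, and serialize the result to a `.pte` file. The code below is a minimal sketch of that flow with a toy convolutional model; the NXP-specific import paths, the compile-spec call, and the `"imxrt700"` target string are assumptions based on the example script, and any quantization the example script performs is omitted here, so treat this as an illustration and refer to `aot_neutron_compile.py` for the exact API.

```python
# Minimal sketch of the ahead-of-time (AOT) flow for the Neutron backend.
# NOTE: the two NXP-specific imports, their arguments and the "imxrt700"
# target string are assumptions; see examples/nxp/aot_neutron_compile.py
# for the authoritative flow, including any quantization step, which is
# omitted here for brevity.
import torch

from executorch.exir import to_edge_transform_and_lower
from executorch.backends.nxp.neutron_partitioner import NeutronPartitioner  # assumed path
from executorch.backends.nxp.nxp_backend import generate_neutron_compile_spec  # assumed path


class SmallConvNet(torch.nn.Module):
    """Toy convolutional model standing in for your own network."""

    def __init__(self):
        super().__init__()
        self.conv = torch.nn.Conv2d(3, 8, kernel_size=3)
        self.relu = torch.nn.ReLU()

    def forward(self, x):
        return self.relu(self.conv(x))


model = SmallConvNet().eval()
example_inputs = (torch.randn(1, 3, 32, 32),)

# 1. Capture the model as an exported program.
exported_program = torch.export.export(model, example_inputs)

# 2. Convert to the Edge dialect and lower supported operators to the Neutron delegate.
edge_program = to_edge_transform_and_lower(
    exported_program,
    partitioner=[NeutronPartitioner(generate_neutron_compile_spec("imxrt700"))],
)

# 3. Serialize to a .pte file that the ExecuTorch runtime on the device can load.
executorch_program = edge_program.to_executorch()
with open("model.pte", "wb") as f:
    f.write(executorch_program.buffer)
```

The resulting `.pte` file is the artifact consumed by the runtime example projects described in the Runtime Integration section below.
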
## Runtime Integration

To learn how to run the converted model on NXP hardware, use one of the ExecuTorch runtime example projects from the MCUXpresso IDE example projects list.
For a more detailed tutorial, visit [this manual page](https://mcuxpresso.nxp.com/mcuxsdk/latest/html/middleware/eiq/executorch/docs/nxp/topics/example_applications.html).