
Commit 2955ef6

alisonwh authored and roman-janik-nxp committed
Add User Guide about ExecuTorch example
Co-authored-by: Robert Kalmar <[email protected]> Co-authored-by: Jiri Ocenasek <[email protected]> Signed-off-by: Alison Wang <[email protected]>
1 parent da765e0 commit 2955ef6

File tree

12 files changed (+261, −0 lines changed)

docs/nxp/images/figure1.png (86.5 KB)
docs/nxp/images/figure2.png (151 KB)
docs/nxp/images/figure3.png (105 KB)
docs/nxp/images/figure4.png (41.1 KB)
docs/nxp/images/figure5.png (385 KB)
docs/nxp/images/figure6.png (336 KB)
docs/nxp/images/figure7.png (52.6 KB)

docs/nxp/topics/convert_model.md

Lines changed: 33 additions & 0 deletions

# PyTorch Model Conversion to ExecuTorch Format

In this guide we show how to use the ExecuTorch ahead-of-time (AoT) flow to convert a PyTorch model to the ExecuTorch format and delegate the model computation to the eIQ Neutron NPU using the eIQ Neutron Backend.

We start with an example script that converts the model. The example shows the preparation of the CifarNet model, the same model that is part of the `example_cifarnet`.

The steps are expected to be executed from the ExecuTorch root folder, in our case `mcuxsdk-middleware-executorch`.

1. After building ExecuTorch, you should have `libquantized_ops_aot_lib.so` located in the `pip-out` folder. We need this library when generating the quantized CifarNet ExecuTorch model, so the first step is to find it:
```commandline
$ find ./pip-out -name libquantized_ops_aot_lib.so
./pip-out/temp.linux-x86_64-cpython-310/cmake-out/kernels/quantized/libquantized_ops_aot_lib.so
./pip-out/lib.linux-x86_64-cpython-310/executorch/kernels/quantized/libquantized_ops_aot_lib.so
```
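On systems without `find` (for example when building on Windows), the same lookup can be done with a short Python sketch; the `pip-out` search root below matches the step above:

```python
from pathlib import Path

# Recursively search the pip-out build tree for the quantized ops library,
# mirroring: find ./pip-out -name libquantized_ops_aot_lib.so
for path in sorted(Path("./pip-out").rglob("libquantized_ops_aot_lib.so")):
    print(path)
```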
2. Configure Python to add the current directory to `PYTHONPATH` and copy the generated `program.fbs` and `scalar_type.fbs` so you can run the example. The code snippet below assumes you are using a virtual environment; adjust it to your actual setup.
```commandline
$ export PYTHONPATH=`cd ..; pwd`
$ cp venv/lib/python3.10/site-packages/executorch/exir/_serialize/program.fbs exir/_serialize/program.fbs
$ cp venv/lib/python3.10/site-packages/executorch/exir/_serialize/scalar_type.fbs exir/_serialize/scalar_type.fbs
```

3. Now run the `aot_neutron_compile.py` example with the `cifar10` model:
```commandline
$ python examples/nxp/aot_neutron_compile.py \
    --quantize --so_library ./pip-out/lib.linux-x86_64-cpython-310/executorch/kernels/quantized/libquantized_ops_aot_lib.so \
    --delegate --neutron_converter_flavor SDK_25_03 -m cifar10
```

4. This generates the `cifar10_nxp_delegate.pte` file, which can be used with the MCUXpresso SDK `cifarnet_example` project.

The generated PTE file is used in the executorch_cifarnet example application, see [example_application](example_applications.md).
docs/nxp/topics/example_applications.md

Lines changed: 110 additions & 0 deletions

# MCUXpresso SDK Example Applications

The MCUXpresso SDK provides the eIQ ExecuTorch library together with a set of example applications. The applications demonstrate the usage of the library in several use cases. For details, see [Table 1](example_applications.md#TABLE_LISTOFEXAMPLEAPP).

|Name|Description|Availability|
|----|-----------|------------|
|`executorch_lib`|This project contains the ExecuTorch Runtime Library source code and is used to build the ExecuTorch Runtime Library. The library is further used to build a full application leveraging ExecuTorch.|MIMXRT700-EVK \(no camera and display support\)|
|`executorch_cifarnet`|Example application demonstrating the use of ExecuTorch running a CifarNet classification model accelerated on the eIQ Neutron NPU. CifarNet is a small Convolutional Neural Network \(CNN\) trained on the [CIFAR-10](https://www.cs.toronto.edu/~kriz/cifar.html) [1] dataset. The model classifies the input images into 10 categories.|MIMXRT700-EVK \(no camera and display support\)|

For details on how to build and run the example applications with supported toolchains, see *Getting Started with MCUXpresso SDK User’s Guide* \(document: MCUXSDKGSUG\).
### How to build and run the `executorch_cifarnet` example

The example needs the ExecuTorch Runtime Library and the Neutron Libraries.

ExecuTorch Runtime Library:
* `middleware/eiq/executorch/lib/cm33/armgcc/libexecutorch.a`

Neutron Libraries:
* `middleware/eiq/executorch/third-party/neutron/rt700/libNeutronDriver.a`
* `middleware/eiq/executorch/third-party/neutron/rt700/libNeutronFirmware.a`

In the example, the model and the input image are already embedded into the program and ready to build and deploy to the i.MX RT700, so you can continue right to the [building and deployment](#build-deploy-and-run) section.

#### Convert the model and example input to C array

This section describes where the model and example input are located in the example application sources, and how they were generated.

The **cifar10** ExecuTorch model is stored in `boards/mimxrt700evk/eiq_examples/executorch_cifarnet/cm33_core0/model_pte.h` and was generated from the `cifar10_nxp_delegate.pte` file \(see [convert_model](convert_model.md)\).

We use the `xxd` command to get a C array containing the model data and the array size:
```commandline
$ xxd -i cifar10_nxp_delegate.pte > model_pte_data.h
```
Then we use the array data and size in `model_pte.h`.
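If `xxd` is not available, an equivalent C array can be produced with a small Python sketch. The variable naming mirrors the `xxd -i` convention of deriving the identifier from the file name; treat this as an illustration, not the exact header shipped with the SDK:

```python
from pathlib import Path

def bin_to_c_array(path: str) -> str:
    """Render a binary file as a C unsigned char array, xxd -i style."""
    data = Path(path).read_bytes()
    # xxd -i derives the identifier from the file name.
    name = Path(path).name.replace(".", "_").replace("-", "_")
    body = ",\n  ".join(
        ", ".join(f"0x{b:02x}" for b in data[i:i + 12])
        for i in range(0, len(data), 12)
    )
    return (f"unsigned char {name}[] = {{\n  {body}\n}};\n"
            f"unsigned int {name}_len = {len(data)};\n")

# Example: print(bin_to_c_array("cifar10_nxp_delegate.pte"))
```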
As the **input image** we use an image from the [CIFAR-10](https://www.cs.toronto.edu/~kriz/cifar.html) dataset [1]. After preprocessing and normalization, it is converted to bytes and located in `boards/mimxrt700evk/eiq_examples/executorch_cifarnet/cm33_core0/image_data.h`.
The preprocessing is performed as follows:
```python
import torch
import torchvision
import numpy as np

batch_size = 1

transform = torchvision.transforms.Compose([
    torchvision.transforms.ToTensor(),
    torchvision.transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])

test_set = torchvision.datasets.CIFAR10(root='./data', train=False, download=True, transform=transform)
test_loader = torch.utils.data.DataLoader(test_set, batch_size=batch_size, shuffle=False, num_workers=0)

index = 0
num_images = 10
for data in test_loader:
    images, labels = data
    for image, label in zip(images, labels):
        arr = image.numpy().astype(np.float32)
        arr.tofile("img" + str(index) + "_" + str(int(label)) + ".bin")
        index = index + 1
        if index >= num_images:
            break
    if index >= num_images:
        break
```
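The `Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))` transform maps each channel from the [0, 1] range produced by `ToTensor` to [-1, 1] via (x - 0.5) / 0.5, and each `.bin` file holds one float32 tensor of shape (3, 32, 32). A NumPy-only sketch of both (no torch required):

```python
import numpy as np

# Normalize with mean 0.5 and std 0.5: (x - 0.5) / 0.5 maps [0, 1] to [-1, 1].
x = np.array([0.0, 0.5, 1.0], dtype=np.float32)
normalized = (x - 0.5) / 0.5
print(normalized)  # [-1.  0.  1.]

# Reading one of the generated inputs back (file name from the step above):
# arr = np.fromfile("img0_3.bin", dtype=np.float32).reshape(3, 32, 32)
```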
This generates `num_images` images from the CIFAR-10 dataset as input tensors for the cifar10 model and stores them in the corresponding `.bin` files. Then we use the `xxd` command to get the C array data and size:
```commandline
$ xxd -i img0_3.bin > image_data_base.h
```
and again copy the array data and size into `image_data.h`.

Note that `img0` is an image picturing a cat, which is class number 3.
77+
#### Build, Deploy and Run
78+
1. When using ARMGCC toolchain, the example application can be built as below. After building the example application, download it to the target with JLink as shown in [Figure 3](example_applications.md#FIG_ARMGCCJLINKCOMMAND), an output message displays on the connected terminal as [Figure 4](example_applications.md#FIG_ARMGCCOUTPUT).
79+
```commandline
80+
$ boards/mimxrt700evk/eiq_examples/executorch_cifarnet/cm33_core0/armgcc$ ./build_flash_release.sh
81+
```
82+
![](../images/figure3.png "ARMGCC jlink command") ![](../images/figure4.png "ARMGCC output")
83+
84+
2. When using MCUXpresso IDE, the example applications can be imported through the SDK Import Wizard as shown in [Figure 5](example_applications.md#FIG_IDEIMPORTWIZARD).
85+
86+
![](../images/figure5.png "MCUXpresso SDK import projects wizard")
87+
88+
After building the example application and downloading it to the target, the execution stops in the *main* function. When the execution resumes, an output message displays on the connected terminal. For example, [Figure 6](example_applications.md#FIG_IDECOUTPUT) shows the output of the `executorch_cifarnet` example application.
89+
90+
![](../images/figure6.png "IDE output")
91+
92+
In case of missing probabilities in the printed output, add PRINTF_FLOAT_ENABLE=1 to the Preprocessor settings for C++ and C compiler:
93+
94+
![](../images/figure7.png "FP output enable")
95+
96+
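The per-class probabilities in the printed output correspond to a softmax over the model's 10 output logits. A minimal post-processing sketch (the logit values are made up for illustration):

```python
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    # Subtract the max for numerical stability before exponentiating.
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

# Hypothetical logits for the 10 CIFAR-10 classes.
logits = np.array([0.1, -1.2, 0.3, 4.5, 0.0, 0.2, -0.5, 1.1, 0.4, -2.0])
probs = softmax(logits)
print(probs.argmax())  # index 3 (cat) has the highest probability here
```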
### How to build the `executorch_lib` example

If you want to build a new ExecuTorch Runtime Library, follow the commands below and use the new library to replace the default Runtime Library `middleware/eiq/executorch/lib/cm33/armgcc/libexecutorch.a`.

1. When using the ARMGCC toolchain, the library can be built as follows:
```commandline
boards/mimxrt700evk/eiq_examples/executorch_lib/cm33_core0/armgcc$ ./build_release.sh
boards/mimxrt700evk/eiq_examples/executorch_lib/cm33_core0/armgcc$ cp release/libexecutorch_lib_cm33_core0.a.a ../../../../../../middleware/eiq/executorch/lib/cm33/armgcc/libexecutorch.a
```

2. When using the MCUXpresso IDE, the example applications can be imported through the SDK Import Wizard as shown in [Figure 5](example_applications.md#FIG_IDEIMPORTWIZARD) above.

After building the example application, copy the new library `mimxrt700evk_executorch_lib_cm33_core0\Debug\libmimxrt700evk_executorch_lib_cm33_core0.a` to replace the default Runtime Library `mimxrt700evk_executorch_cifarnet_cm33_core0\eiq\executorch\lib\cm33\armgcc\libexecutorch.a`.

[1] [Learning Multiple Layers of Features from Tiny Images](https://www.cs.toronto.edu/~kriz/learning-features-2009-TR.pdf), Alex Krizhevsky, 2009
docs/nxp/topics/deployment.md

Lines changed: 11 additions & 0 deletions

# Getting the MCUXpresso SDK with eIQ ExecuTorch

The eIQ ExecuTorch library is part of the eIQ machine learning software package, which is an optional middleware component of the MCUXpresso SDK. The eIQ component is integrated into the MCUXpresso SDK Builder delivery system available on [mcuxpresso.nxp.com](https://mcuxpresso.nxp.com). To include eIQ machine learning in the MCUXpresso SDK package, select the eIQ middleware component in the software component selector on the SDK Builder page when building a new package. See [Figure 1](deployment.md#FIG_BUILDERCOMPONENTSELECTOR).

![](../images/figure1.png "MCUXpresso SDK Builder software component selector")

Once the MCUXpresso SDK package is downloaded, it can be extracted on a local machine or imported into the MCUXpresso IDE. For more information on the MCUXpresso SDK folder structure, see the *Getting Started with MCUXpresso SDK User’s Guide* \(document: MCUXSDKGSUG\). The package directory structure is similar to [Figure 2](deployment.md#FIG_DIRECTORYSTRUCTURE).

![](../images/figure2.png "MCUXpresso SDK directory structure")

The *boards* directory contains example application projects for supported toolchains. For the list of supported toolchains, see the *MCUXpresso SDK Release Notes*. The *middleware* directory contains the eIQ library source code and example application source code and data.
