diff --git a/docs/source/backends-nxp.md b/docs/source/backends-nxp.md
deleted file mode 100644
index f4f7762c769..00000000000
--- a/docs/source/backends-nxp.md
+++ /dev/null
@@ -1,79 +0,0 @@
-# NXP eIQ Neutron Backend
-
-This manual page is dedicated to introduction of using the ExecuTorch with NXP eIQ Neutron Backend.
-NXP offers accelerated machine learning models inference on edge devices.
-To learn more about NXP's machine learning acceleration platform, please refer to [the official NXP website](https://www.nxp.com/applications/technologies/ai-and-machine-learning:MACHINE-LEARNING).
-
-For up-to-date status about running ExecuTorch on Neutron Backend please visit the manual page.
-
-## Features
-
-ExecuTorch v1.0 supports running machine learning models on selected NXP chips (for now only i.MXRT700).
-Among currently supported machine learning models are:
-- Convolution-based neutral networks
-- Full support for MobileNetV2 and CifarNet
-
-## Prerequisites (Hardware and Software)
-
-In order to successfully build ExecuTorch project and convert models for NXP eIQ Neutron Backend you will need a computer running Linux.
-
-If you want to test the runtime, you'll also need:
-- Hardware with NXP's [i.MXRT700](https://www.nxp.com/products/i.MX-RT700) chip or a testing board like MIMXRT700-AVK
-- [MCUXpresso IDE](https://www.nxp.com/design/design-center/software/development-software/mcuxpresso-software-and-tools-/mcuxpresso-integrated-development-environment-ide:MCUXpresso-IDE) or [MCUXpresso Visual Studio Code extension](https://www.nxp.com/design/design-center/software/development-software/mcuxpresso-software-and-tools-/mcuxpresso-for-visual-studio-code:MCUXPRESSO-VSC)
-
-## Using NXP backend
-
-To test converting a neural network model for inference on NXP eIQ Neutron Backend, you can use our example script:
-
-```shell
-# cd to the root of executorch repository
-./examples/nxp/aot_neutron_compile.sh [model (cifar10 or mobilenetv2)]
-```
-
-For a quick overview how to convert a custom PyTorch model, take a look at our [example python script](https://github.com/pytorch/executorch/tree/release/1.0/examples/nxp/aot_neutron_compile.py).
-
-### Partitioner API
-
-The partitioner is defined in `NeutronPartitioner` in `backends/nxp/neutron_partitioner.py`. It has the following
-arguments:
-* `compile_spec` - list of key-value pairs defining compilation. E.g. for specifying platform (i.MXRT700) and Neutron Converter flavor.
-* `custom_delegation_options` - custom options for specifying node delegation.
-
-### Quantization
-
-The quantization for Neutron Backend is defined in `NeutronQuantizer` in `backends/nxp/quantizer/neutron_quantizer.py`.
-The quantization follows PT2E workflow, INT8 quantization is supported. Operators are quantized statically, activations
-follow affine and weights symmetric per-tensor quantization scheme.
-
-#### Supported operators
-
-List of Aten operators supported by Neutron quantizer:
-
-`abs`, `adaptive_avg_pool2d`, `addmm`, `add.Tensor`, `avg_pool2d`, `cat`, `conv1d`, `conv2d`, `dropout`,
-`flatten.using_ints`, `hardtanh`, `hardtanh_`, `linear`, `max_pool2d`, `mean.dim`, `pad`, `permute`, `relu`, `relu_`,
-`reshape`, `view`, `softmax.int`, `sigmoid`, `tanh`, `tanh_`
-
-#### Example
-```python
-import torch
-from executorch.backends.nxp.quantizer.neutron_quantizer import NeutronQuantizer
-from torchao.quantization.pt2e.quantize_pt2e import convert_pt2e, prepare_pt2e
-
-# Prepare your model in Aten dialect
-aten_model = get_model_in_aten_dialect()
-# Prepare calibration inputs, each tuple is one example, example tuple has items for each model input
-calibration_inputs: list[tuple[torch.Tensor, ...]] = get_calibration_inputs()
-quantizer = NeutronQuantizer()
-
-m = prepare_pt2e(aten_model, quantizer)
-for data in calibration_inputs:
-    m(*data)
-m = convert_pt2e(m)
-```
-
-## Runtime Integration
-
-To learn how to run the converted model on the NXP hardware, use one of our example projects on using ExecuTorch runtime from MCUXpresso IDE example projects list.
-For more finegrained tutorial, visit [this manual page](https://mcuxpresso.nxp.com/mcuxsdk/latest/html/middleware/eiq/executorch/docs/nxp/topics/example_applications.html).
diff --git a/docs/source/backends-overview.md b/docs/source/backends-overview.md
index bfa17bc9a9c..11c3fd7dcc4 100644
--- a/docs/source/backends-overview.md
+++ b/docs/source/backends-overview.md
@@ -29,7 +29,7 @@ Backends are the bridge between your exported model and the hardware it runs on.
 | [ARM EthosU](backends-arm-ethos-u) | Embedded | NPU | ARM MCUs |
 | [ARM VGF](backends-arm-vgf) | Android | NPU | ARM platforms |
 | [OpenVINO](build-run-openvino) | Embedded | CPU/GPU/NPU | Intel SoCs |
-| [NXP](backends-nxp) | Embedded | NPU | NXP SoCs |
+| [NXP](backends/nxp/nxp-overview.md) | Embedded | NPU | NXP SoCs |
 | [Cadence](backends-cadence) | Embedded | DSP | DSP-optimized workloads |
 | [Samsung Exynos](backends-samsung-exynos) | Android | NPU | Samsung SoCs |
 
@@ -59,6 +59,6 @@ backends-mediatek
 backends-arm-ethos-u
 backends-arm-vgf
 build-run-openvino
-backends-nxp
+backends/nxp/nxp-overview
 backends-cadence
 backends-samsung-exynos
diff --git a/docs/source/backends/nxp/nxp-overview.md b/docs/source/backends/nxp/nxp-overview.md
new file mode 100644
index 00000000000..973bffe6f19
--- /dev/null
+++ b/docs/source/backends/nxp/nxp-overview.md
@@ -0,0 +1,71 @@
+# NXP eIQ Neutron Backend
+
+This manual page introduces the NXP eIQ Neutron backend.
+NXP offers accelerated machine learning model inference on edge devices.
+To learn more about NXP's machine learning acceleration platform, please refer to [the official NXP website](https://www.nxp.com/applications/technologies/ai-and-machine-learning:MACHINE-LEARNING).
+
+For the up-to-date status of running ExecuTorch on the Neutron backend, please visit the manual page.
+
+## Features
+
+ExecuTorch v1.0 supports running machine learning models on selected NXP chips (for now only i.MXRT700).
+Among the currently supported machine learning models are:
+- Convolution-based neural networks
+- Full support for MobileNetV2 and CifarNet
+
+## Target Requirements
+
+- Hardware with NXP's [i.MXRT700](https://www.nxp.com/products/i.MX-RT700) chip or an evaluation board like MIMXRT700-EVK.
+
+## Development Requirements
+
+- [MCUXpresso IDE](https://www.nxp.com/design/design-center/software/development-software/mcuxpresso-software-and-tools-/mcuxpresso-integrated-development-environment-ide:MCUXpresso-IDE) or [MCUXpresso Visual Studio Code extension](https://www.nxp.com/design/design-center/software/development-software/mcuxpresso-software-and-tools-/mcuxpresso-for-visual-studio-code:MCUXPRESSO-VSC)
+- [MCUXpresso SDK 25.06](https://mcuxpresso.nxp.com/mcuxsdk/25.06.00/html/index.html)
+- eIQ Neutron Converter for MCUXpresso SDK 25.06, which you can install from the eIQ PyPI repository:
+
+```commandline
+$ pip install --index-url https://eiq.nxp.com/repository neutron_converter_SDK_25_06
+```
+
+Instead of installing the requirements manually (except the MCUXpresso IDE and SDK), you can use the setup script:
+```commandline
+$ ./examples/nxp/setup.sh
+```
+
+## Using NXP eIQ Backend
+
+To test converting a neural network model for inference on the NXP eIQ Neutron backend, you can use our example script:
+
+```shell
+# cd to the root of executorch repository
+./examples/nxp/aot_neutron_compile.sh [model (cifar10 or mobilenetv2)]
+```
+
+For a quick overview of how to convert a custom PyTorch model, take a look at our [example python script](https://github.com/pytorch/executorch/tree/release/1.0/examples/nxp/aot_neutron_compile.py).
+
+## Runtime Integration
+
+To learn how to run the converted model on NXP hardware, use one of the ExecuTorch runtime example projects from the MCUXpresso IDE example projects list.
+For a more detailed tutorial, visit [this manual page](https://mcuxpresso.nxp.com/mcuxsdk/latest/html/middleware/eiq/executorch/docs/nxp/topics/example_applications.html).
+
+## Reference
+
+**→{doc}`nxp-partitioner` — Partitioner options.**
+
+**→{doc}`nxp-quantization` — Supported quantization schemes.**
+
+**→{doc}`tutorials/nxp-tutorials` — Tutorials.**
+
+```{toctree}
+:maxdepth: 2
+:hidden:
+:caption: NXP Backend
+
+nxp-partitioner
+nxp-quantization
+tutorials/nxp-tutorials
+```
diff --git a/docs/source/backends/nxp/nxp-partitioner.rst b/docs/source/backends/nxp/nxp-partitioner.rst
new file mode 100644
index 00000000000..d6ef1c216fd
--- /dev/null
+++ b/docs/source/backends/nxp/nxp-partitioner.rst
@@ -0,0 +1,43 @@
+===============
+Partitioner API
+===============
+
+The Neutron partitioner API allows for configuration of the model delegation to Neutron. Passing a ``NeutronPartitioner`` instance with no additional parameters will run as much of the model as possible on the Neutron backend. This is the most common use case.
+
+It has the following arguments:
+
+* ``compile_spec`` - A list of key-value pairs defining the compilation, e.g. the target platform (i.MXRT700) and the Neutron Converter flavor.
+* ``custom_delegation_options`` - Custom options for specifying node delegation.
+
+--------------------
+Compile Spec Options
+--------------------
+To generate the compile spec for the Neutron backend, use the ``generate_neutron_compile_spec`` function or call ``NeutronCompileSpecBuilder().neutron_compile_spec()`` directly.
+The following fields can be set (see the sketch after this list):
+
+* ``config`` - NXP platform defining the Neutron NPU configuration, e.g. "imxrt700".
+* ``neutron_converter_flavor`` - Flavor of the neutron-converter module to use. For example, the module named ``neutron_converter_SDK_25_06`` has the flavor ``SDK_25_06``. Set the flavor to match the MCUXpresso SDK version you will use.
+* ``extra_flags`` - Extra flags for the Neutron compiler.
+* ``operators_not_to_delegate`` - List of operators that will not be delegated.
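+
+As a rough sketch, mirroring the API used in the :doc:`nxp-quantization` example, a compile spec for the i.MXRT700 target can be built and passed to the partitioner as follows. The ``exported_program`` variable is a placeholder for a quantized ``torch.export`` export of your model:
+
+.. code-block:: python
+
+    from executorch.backends.nxp.neutron_partitioner import NeutronPartitioner
+    from executorch.backends.nxp.nxp_backend import generate_neutron_compile_spec
+    from executorch.exir import to_edge_transform_and_lower
+
+    # Build the compile spec for the i.MXRT700 Neutron NPU configuration, using the
+    # neutron-converter flavor that matches the MCUXpresso SDK 25.06 release.
+    compile_spec = generate_neutron_compile_spec(
+        "imxrt700",
+        operators_not_to_delegate=None,
+        neutron_converter_flavor="SDK_25_06",
+    )
+
+    # Delegate as much of the quantized exported program as possible to Neutron.
+    et_program = to_edge_transform_and_lower(
+        exported_program,  # placeholder: a quantized torch.export.ExportedProgram
+        partitioner=[NeutronPartitioner(compile_spec=compile_spec)],
+    ).to_executorch()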
+
+-------------------------
+Custom Delegation Options
+-------------------------
+By default, the Neutron backend is defensive, which means it does not delegate operators whose support cannot be decided statically during partitioning. As the model author, you typically have insight into the model, so you can allow opportunistic delegation in some cases. For the list of options, see
+`CustomDelegationOptions `_.
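+
+A minimal sketch of enabling such opportunistic delegation is shown below. The import path and the ``force_delegate_cat`` field are illustrative assumptions only; check the ``CustomDelegationOptions`` source referenced above for the actual module path and the available options. The ``compile_spec`` is built as in the previous example:
+
+.. code-block:: python
+
+    # Illustrative only: verify the real import path and option fields in the
+    # CustomDelegationOptions definition under backends/nxp before using this.
+    from executorch.backends.nxp.backend.custom_delegation_options import CustomDelegationOptions
+    from executorch.backends.nxp.neutron_partitioner import NeutronPartitioner
+
+    custom_options = CustomDelegationOptions(force_delegate_cat=True)  # hypothetical field
+    partitioner = NeutronPartitioner(
+        compile_spec=compile_spec,  # built as shown in the Compile Spec Options sketch
+        custom_delegation_options=custom_options,
+    )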
+
+================
+Operator Support
+================
+
+Operators are the building blocks of the ML model. See `IRs `_ for more information on the PyTorch operator set.
+
+This section lists the Edge operators supported by the Neutron backend.
+For the detailed constraints of each operator, see the conditions in the ``is_supported_*`` functions in the `Node converters `_.
+
+.. csv-table:: Operator Support
+   :file: op-support.csv
+   :header-rows: 1
+   :widths: 20 15 30 30
+   :align: center
\ No newline at end of file
diff --git a/docs/source/backends/nxp/nxp-quantization.md b/docs/source/backends/nxp/nxp-quantization.md
new file mode 100644
index 00000000000..4ec65d6a83e
--- /dev/null
+++ b/docs/source/backends/nxp/nxp-quantization.md
@@ -0,0 +1,84 @@
+# NXP eIQ Neutron Quantization
+
+The eIQ Neutron NPU requires the delegated operators to be quantized. To quantize a PyTorch model for the Neutron backend, use the `NeutronQuantizer` from `backends/nxp/quantizer/neutron_quantizer.py`.
+The `NeutronQuantizer` is configured to quantize the model with the quantization scheme supported by the eIQ Neutron NPU.
+
+### Supported Quantization Schemes
+
+The Neutron delegate supports the following quantization schemes:
+
+- Static quantization with 8-bit symmetric weights and 8-bit asymmetric activations (via the PT2E quantization flow), per-tensor granularity.
+  - The following operators are currently supported:
+    - `aten.abs.default`
+    - `aten.adaptive_avg_pool2d.default`
+    - `aten.addmm.default`
+    - `aten.add.Tensor`
+    - `aten.avg_pool2d.default`
+    - `aten.cat.default`
+    - `aten.conv1d.default`
+    - `aten.conv2d.default`
+    - `aten.dropout.default`
+    - `aten.flatten.using_ints`
+    - `aten.hardtanh.default`
+    - `aten.hardtanh_.default`
+    - `aten.linear.default`
+    - `aten.max_pool2d.default`
+    - `aten.mean.dim`
+    - `aten.pad.default`
+    - `aten.permute.default`
+    - `aten.relu.default` and `aten.relu_.default`
+    - `aten.reshape.default`
+    - `aten.view.default`
+    - `aten.softmax.int`
+    - `aten.tanh.default`, `aten.tanh_.default`
+    - `aten.sigmoid.default`
+
+### Static 8-bit Quantization Using the PT2E Flow
+
+To perform 8-bit quantization with the PT2E flow, perform the following steps prior to exporting the model to edge:
+
+1) Create an instance of the `NeutronQuantizer` class.
+2) Use `torch.export.export` to export the model to ATen Dialect.
+3) Call `prepare_pt2e` with the instance of the `NeutronQuantizer` to annotate the model with observers for quantization.
+4) As static quantization is required, run the prepared model with representative samples to calibrate the quantized tensor activation ranges.
+5) Call `convert_pt2e` to quantize the model.
+6) Export and lower the model using the standard flow.
+
+The output of `convert_pt2e` is a PyTorch model which can be exported and lowered using the normal flow. As it is a regular PyTorch model, it can also be used to evaluate the accuracy of the quantized model using standard PyTorch techniques.
+
+```python
+import torch
+import torchvision.models as models
+from torchvision.models.mobilenetv2 import MobileNet_V2_Weights
+from executorch.backends.nxp.quantizer.neutron_quantizer import NeutronQuantizer
+from executorch.backends.nxp.neutron_partitioner import NeutronPartitioner
+from executorch.backends.nxp.nxp_backend import generate_neutron_compile_spec
+from executorch.exir import to_edge_transform_and_lower
+from torchao.quantization.pt2e.quantize_pt2e import convert_pt2e, prepare_pt2e
+
+model = models.mobilenetv2.mobilenet_v2(weights=MobileNet_V2_Weights.DEFAULT).eval()
+sample_inputs = (torch.randn(1, 3, 224, 224), )
+
+quantizer = NeutronQuantizer()  # (1)
+
+training_ep = torch.export.export(model, sample_inputs).module()  # (2)
+prepared_model = prepare_pt2e(training_ep, quantizer)  # (3)
+
+for cal_sample in [torch.randn(1, 3, 224, 224)]:  # Replace with representative model inputs
+    prepared_model(cal_sample)  # (4) Calibrate
+
+quantized_model = convert_pt2e(prepared_model)  # (5)
+
+compile_spec = generate_neutron_compile_spec(
+    "imxrt700",
+    operators_not_to_delegate=None,
+    neutron_converter_flavor="SDK_25_06",
+)
+
+et_program = to_edge_transform_and_lower(  # (6)
+    torch.export.export(quantized_model, sample_inputs),
+    partitioner=[NeutronPartitioner(compile_spec=compile_spec)],
+).to_executorch()
+```
+
+See [PyTorch 2 Export Post Training Quantization](https://docs.pytorch.org/ao/main/tutorials_source/pt2e_quant_ptq.html) for more information.
diff --git a/docs/source/backends/nxp/op-support.csv b/docs/source/backends/nxp/op-support.csv
new file mode 100644
index 00000000000..0f7ec34811e
--- /dev/null
+++ b/docs/source/backends/nxp/op-support.csv
@@ -0,0 +1,19 @@
+Operator,Compute DType,Quantization,Constraints
+aten.abs.default,int8,static int8,
+aten._adaptive_avg_pool2d.default,int8,static int8,"ceil_mode=False, count_include_pad=False, divisor_override=False"
+aten.addmm.default,int8,static int8,2D tensor only
+aten.add.Tensor,int8,static int8,"alpha = 1, input tensors of same rank"
+aten.avg_pool2d.default,int8,static int8,"ceil_mode=False, count_include_pad=False, divisor_override=False"
+aten.cat.default,int8,static int8,"input_channels % 8 = 0, output_channels % 8 = 0"
+aten.clone.default,int8,static int8,
+aten.constant_pad_nd.default,int8,static int8,"H or W padding only"
+aten.convolution.default,int8,static int8,"1D or 2D convolution, constant weights, groups=1 or groups=channels_count (depthwise)"
+aten.hardtanh.default,int8,static int8,"supported ranges: <0,6>, <-1, 1>, <0,1>, <0,inf>"
+aten.max_pool2d.default,int8,static int8,"dilation=1, ceil_mode=False"
+aten.max_pool2d_with_indices.default,int8,static int8,"dilation=1, ceil_mode=False"
+aten.mean.dim,int8,static int8,"4D tensor only, dims = [-1,-2] or [-2,-1]"
+aten.mm.default,int8,static int8,2D tensor only
+aten.relu.default,int8,static int8,
+aten.tanh.default,int8,static int8,
+aten.view_copy.default,int8,static int8,
+aten.sigmoid.default,int8,static int8,
diff --git a/docs/source/backends/nxp/tutorials/nxp-basic-tutorial.md b/docs/source/backends/nxp/tutorials/nxp-basic-tutorial.md
new file mode 100644
index 00000000000..90bf58d1c3a
--- /dev/null
+++ b/docs/source/backends/nxp/tutorials/nxp-basic-tutorial.md
@@ -0,0 +1,25 @@
+# Preparing a Model for the NXP eIQ Neutron Backend
+
+This guide demonstrates how to use the ExecuTorch AoT flow to convert a PyTorch model to the ExecuTorch
+format and delegate the model computation to the eIQ Neutron NPU using the eIQ Neutron backend.
+
+## Step 1: Environment Setup
+
+This tutorial is intended to be run on a Linux machine and uses Conda or a virtual environment for Python environment management. For full setup details and system requirements, see [Getting Started with ExecuTorch](/getting-started).
+
+Create a Conda environment and install the ExecuTorch Python package.
+```bash
+conda create -y --name executorch python=3.12
+conda activate executorch
+pip install executorch
+```
+
+Run the setup.sh script to install the neutron-converter:
+```commandline
+$ ./examples/nxp/setup.sh
+```
+
+## Step 2: Model Preparation and Running the Model on Target
+
+See the example `aot_neutron_compile.py` and its [README](https://github.com/pytorch/executorch/blob/release/1.0/examples/nxp/README.md) file.
+
diff --git a/docs/source/backends/nxp/tutorials/nxp-tutorials.md b/docs/source/backends/nxp/tutorials/nxp-tutorials.md
new file mode 100644
index 00000000000..eb5b164d668
--- /dev/null
+++ b/docs/source/backends/nxp/tutorials/nxp-tutorials.md
@@ -0,0 +1,10 @@
+# NXP Tutorials
+
+**→{doc}`nxp-basic-tutorial` — Lower and run a model on the NXP eIQ Neutron backend.**
+
+```{toctree}
+:hidden:
+:maxdepth: 1
+
+nxp-basic-tutorial
+```
diff --git a/docs/source/embedded-nxp.md b/docs/source/embedded-nxp.md
index 35d8f0ab75d..65ae8daff43 100644
--- a/docs/source/embedded-nxp.md
+++ b/docs/source/embedded-nxp.md
@@ -1 +1 @@
-```{include} backends-nxp.md
+```{include} backends/nxp/nxp-overview.md