Commit d45304b

roman-janik-nxp authored and GregoryComer committed

NXP backend: Update user guide and docs Readme (pytorch#14852)

This PR updates the NXP backend READMEs in the backend and examples directories. cc @robert-kalmar @JakeStevens @digantdesai

1 parent 8c84780 · commit d45304b

2 files changed: +22 additions, −20 deletions

backends/nxp/README.md

Lines changed: 11 additions & 9 deletions
@@ -5,24 +5,26 @@ This subtree contains the ExecuTorch Backend implementation for the

The eIQ® Neutron NPU is a highly scalable accelerator core architecture providing machine learning (ML) acceleration,
able to support common and critical tasks for edge AI such as anomaly detection, speech recognition,
image classification, object detection, facial recognition, image segmentation, and generative AI use cases like
large and small language models (LLMs & SLMs) and text-to-speech (TTS).
The architecture provides power- and performance-optimized NPUs integrated with NXP's broad portfolio of
microcontrollers and applications processors.

The eIQ Neutron NPUs offer support for a wide variety of neural network types such as CNN, RNN, TCN and Transformer
networks, as well as the ability to adapt and scale to new model architectures, topologies and layer types introduced
to AI workloads. ML application development with the eIQ Neutron NPU is fully supported by the
[eIQ machine learning software development environment](https://www.nxp.com/design/design-center/software/eiq-ml-development-environment/eiq-toolkit-for-end-to-end-model-development-and-deployment:EIQ-TOOLKIT).
The eIQ AI SW Stack provides a streamlined development experience for developers and end-users of NXP products.
eIQ extensions connect broader AI ecosystems to the edge, such as the NVIDIA TAO extension, which enables developers
to bring AI models trained and fine-tuned with TAO to NXP-powered edge devices.

## Supported NXP platforms
At this moment, the following eIQ® Neutron NPU variants and NXP platforms are supported by the NXP eIQ Neutron Backend:

* **eIQ Neutron N3-64**, available on [i.MX RT700](https://www.nxp.com/products/i.MX-RT700)

In the future, the NXP eIQ Neutron Backend will be extended to support [i.MX 9 Application Processors](https://www.nxp.com/products/processors-and-microcontrollers/arm-processors/i-mx-applications-processors/i-mx-9-processors:IMX9-PROCESSORS)
with eIQ Neutron NPU, like the [i.MX 95](https://www.nxp.com/products/iMX95).

@@ -33,7 +35,7 @@ The eIQ Neutron NPU Backend should be considered as prototype quality at this mo

improvements. NXP and the ExecuTorch community are actively developing this codebase.

## Neutron Backend implementation and SW architecture
Neutron Backend uses the eIQ Neutron Converter as its ML compiler to compile the delegated subgraph to Neutron microcode.
The Neutron Converter accepts the ML model in LiteRT format; for the **eIQ Neutron N3** class, the Neutron Backend
therefore uses the LiteRT flatbuffers format as the IR between ExecuTorch and the Neutron Converter ML compiler.
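This compilation flow can be pictured as the following pseudocode sketch. The stage names here are illustrative placeholders, not the actual ExecuTorch or Neutron Converter API names:

```
# Illustrative pseudocode of the AoT delegation flow (names are placeholders)
exported  = export(model, example_inputs)    # PyTorch model -> ExecuTorch program
delegated = partition_for_neutron(exported)  # select subgraphs for the NPU
litert_ir = serialize_to_litert(delegated)   # delegated subgraph -> LiteRT flatbuffers IR
microcode = neutron_converter(litert_ir)     # LiteRT IR -> Neutron microcode
pte_file  = package(exported, microcode)     # embed microcode in the .pte payload
```

The key point is the middle two stages: the delegated subgraph is first serialized into the LiteRT flatbuffers IR and only then handed to the Neutron Converter, which produces the microcode executed by the NPU.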
3941

@@ -44,10 +46,10 @@ uses the LiteRT flatbuffers format as IR between the ExecuTorch and Neutron Conv

  `node_converters` is structured as a single module for each Edge operator.
* `backend/ir/lib` - automatically generated handlers from the LiteRT flatbuffers schema.
* `backend/ir/tflite_generator` and `backend/ir/tflite_optimizer` handle the serialization
  of the in-memory built subgraph for delegation into the LiteRT/TFLite flatbuffers
  representation. Code taken from the onnx2tflite tool.
* `edge_passes` - Various passes operating on the Edge dialect level.
* `quantizer` - Neutron Backend quantizer implementation.
* `runtime` - Neutron Backend runtime implementation, for running compiled models on device.
* `tests/` - Unit tests for the Neutron backend.
* `tests/converter/node_converter` - Operator-level unit tests.

examples/nxp/README.md

Lines changed: 11 additions & 11 deletions
@@ -4,11 +4,11 @@ format and delegate the model computation to eIQ Neutron NPU using the eIQ Neutr

## Layout
* `experimental/` - contains the CifarNet model example.
* `models` - various example models.
* `aot_neutron_compile.py` - script with the end-to-end ExecuTorch AoT Neutron Backend workflow.
* `README.md` - this file.
* `run_aot_example.sh` - utility script for aot_neutron_compile.py.
* `setup.sh` - setup script for the Neutron Converter installation.

## Setup
Please first finish the tutorial [Setting up ExecuTorch](https://pytorch.org/executorch/main/getting-started-setup).
@@ -23,24 +23,24 @@ $ ./examples/nxp/setup.sh

* MobileNetV2
2525
## PyTorch Model Delegation to Neutron Backend
26-
First we will start with an example script converting the model. This example show the CifarNet model preparation.
27-
It is the same model which is part of the `example_cifarnet` in
26+
First we will start with an example script converting the model. This example show the CifarNet model preparation.
27+
It is the same model which is part of the `example_cifarnet` in
2828
[MCUXpresso SDK](https://www.nxp.com/design/design-center/software/development-software/mcuxpresso-software-and-tools-/mcuxpresso-software-development-kit-sdk:MCUXpresso-SDK).
2929

30-
The NXP MCUXpresso software and tools offer comprehensive development solutions designed to help accelerate embedded
31-
system development of applications based on MCUs from NXP. The MCUXpresso SDK includes a flexible set of peripheral
30+
The NXP MCUXpresso software and tools offer comprehensive development solutions designed to help accelerate embedded
31+
system development of applications based on MCUs from NXP. The MCUXpresso SDK includes a flexible set of peripheral
3232
drivers designed to speed up and simplify development of embedded applications.
3333

3434
The steps are expected to be executed from the `executorch` root folder.
3535

36-
1. Run the `aot_neutron_compile.py` example with the `cifar10` model
36+
1. Run the `aot_neutron_compile.py` example with the `cifar10` model
3737
```commandline
3838
$ python -m examples.nxp.aot_neutron_compile --quantize \
39-
--delegate --neutron_converter_flavor SDK_25_06 -m cifar10
39+
--delegate --neutron_converter_flavor SDK_25_09 -m cifar10
4040
```
4141
42-
2. It will generate you `cifar10_nxp_delegate.pte` file which can be used with the MCUXpresso SDK `cifarnet_example`
42+
2. It will generate you `cifar10_nxp_delegate.pte` file which can be used with the MCUXpresso SDK `cifarnet_example`
4343
project, presented [here](https://mcuxpresso.nxp.com/mcuxsdk/latest/html/middleware/eiq/executorch/docs/nxp/topics/example_applications.html#how-to-build-and-run-executorch-cifarnet-example).
4444
This project will guide you through the process of deploying your PTE model to the device.
4545
To get the MCUXpresso SDK follow this [guide](https://mcuxpresso.nxp.com/mcuxsdk/latest/html/middleware/eiq/executorch/docs/nxp/topics/getting_mcuxpresso.html),
46-
use the MCUXpresso SDK v25.06.00.
46+
use the MCUXpresso SDK v25.09.00.
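For context, the on-device side of this workflow (which the MCUXpresso SDK example project implements for you) conceptually looks like the following pseudocode; the names are illustrative placeholders, not the real ExecuTorch runtime API:

```
# Illustrative pseudocode of the on-device flow (names are placeholders)
program = load_pte("cifar10_nxp_delegate.pte")  # model image in flash or filesystem
method  = program.load_method("forward")        # initializes the Neutron delegate
output  = method.execute(input_image)           # delegated subgraphs run on the NPU
```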
