
Commit 8a0f86c

Add README.md with Pack Overview (#41)
* Update HelloWorld example
* Add a README Overview for the pack
* Update README.md
* Update README.md
1 parent 47bbf4a commit 8a0f86c

File tree

3 files changed: 64 additions, 2 deletions
Lines changed: 59 additions & 0 deletions
@@ -0,0 +1,59 @@
# LiteRT CMSIS Pack (tensorflow::tensorflow-lite-micro)

## Overview

The LiteRT CMSIS Pack integrates LiteRT (formerly TensorFlow Lite Micro) into the CMSIS ecosystem. This enables developers to seamlessly incorporate machine learning capabilities into embedded applications on Arm Cortex-M based microcontrollers, leveraging the familiar CMSIS-Pack format for straightforward software component management.

## Using the LiteRT stack

LiteRT (short for Lite Runtime) is Google's high-performance runtime for on-device AI. You can find ready-to-run LiteRT models for a wide range of ML/AI tasks, or convert TensorFlow, PyTorch, and JAX models to the TFLite format using the AI Edge conversion and optimization tools.

Refer to the official documentation to [Get Started with LiteRT in C++](https://ai.google.dev/edge/litert/inference#run-c) or check out the ["Hello World" Reference Application](https://github.com/MDK-Packs/tensorflow-pack/tree/main/tensorflow-build/add/examples/TFLiteRT_HelloWorld) that demonstrates a complete training and integration cycle.

## Components of the Pack

The LiteRT CMSIS Pack provides the LiteRT stack, offering flexibility and optimized performance across a range of Cortex-M devices through three distinct acceleration variants:

* **Software Reference**: A baseline implementation for broad compatibility.
* **CMSIS-NN**: Leverages optimized neural network kernels from the CMSIS-NN framework for significant performance gains on all Cortex-M processors.
* **Ethos-U**: Enables hardware acceleration for compatible Arm Ethos-U Neural Processing Units (NPUs), delivering maximum performance and efficiency.

This approach ensures that developers can select the most appropriate variant for their specific hardware and performance requirements.

## Source and Versioning

The LiteRT stack originates from the official TensorFlow project ([https://github.com/tensorflow/tflite-micro](https://github.com/tensorflow/tflite-micro)). Versioning of the LiteRT CMSIS Pack is closely aligned with LiteRT releases, ensuring access to the latest advancements and optimizations. Specific version details are provided within the pack and its release notes.

## Dependencies

The LiteRT CMSIS Pack relies on the following key software components, all of which are available as individual CMSIS Packs:

* **CMSIS-DSP**: A collection of optimized digital signal processing functions, potentially utilized by machine learning models for tasks such as feature extraction.
* **CMSIS-NN**: Provides highly optimized neural network kernels for Arm Cortex-M processors, crucial for achieving high-performance machine learning inference.
* **FlatBuffers**: An efficient cross-platform serialization utility used for handling LiteRT models.
* **KissFFT**: A compact Fast Fourier Transform (FFT) component, sometimes used in audio processing or other signal processing tasks within ML applications.
* **Ruy**: A matrix multiplication component, essential for many neural network operations.
* **Ethos-U Driver (Optional)**: For systems incorporating an Arm Ethos-U NPU, this driver enables hardware-accelerated machine learning, enhancing performance and reducing power consumption.

## Examples

### Hello World Reference Application

The pack includes a "Hello World" example, which serves as a Reference Application demonstrating the fundamental integration and usage of the LiteRT stack. For more information on CMSIS Reference Applications, please see the [CMSIS-Toolbox Reference Applications documentation](https://github.com/Open-CMSIS-Pack/cmsis-toolbox/blob/main/docs/ReferenceApplications.md).

* **Purpose**: To verify the LiteRT stack setup and demonstrate basic inference capabilities on a target microcontroller.
* **Functionality**: The application loads a simple, pre-trained model (here: sine wave prediction) and performs inference. Output is directed to a serial terminal via STDIO, allowing observation of the model's predictions.
* **Training**: The model for the Hello World example can be trained using an included Jupyter Notebook, offering insight into the model creation process and a playground for model design experiments.

This Reference Application provides a clear starting point for developers integrating LiteRT into their projects.
#### API Interfaces

The LiteRT Reference Applications are hardware agnostic, but require API interfaces that are expressed using the csolution project `connections:` node. The Reference Applications in this pack consume the following API interfaces, which should be provided by the board layer that is part of the Board Support Pack (BSP).

| Consumed API Interface | Description                                           |
|------------------------|-------------------------------------------------------|
| **Hello World**        |                                                       |
| STDIN, STDOUT          | Standard I/O for user input and output via a console. |

tensorflow-build/build_r.sh

Lines changed: 3 additions & 0 deletions
@@ -83,6 +83,9 @@ python3 ./tensorflow-pack/tensorflow-build/clean_file_list.py \
 # Add ./tensorflow-pack/tensorflow-build/add/examples to /tensorflow-build/gen/build with rsync
 rsync -a ./tensorflow-pack/tensorflow-build/add/examples/ ./tensorflow-pack/tensorflow-build/gen/build/examples/

+# Add ./tensorflow-pack/tensorflow-build/add/Documentation to /tensorflow-build/gen/build with rsync
+rsync -a ./tensorflow-pack/tensorflow-build/add/Documentation/ ./tensorflow-pack/tensorflow-build/gen/build/Documentation/
+
 echo "\033[0;33m"

 # If a folder ./patches/$1 exists, call the patch.sh in this folder
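The trailing context line describes a conditional patch hook. A minimal sketch of that logic, assuming the script's first positional argument (`$1`) names the target and that `patch.sh` is invoked via `sh` (both assumptions based only on the comment, not on the rest of the script):

```shell
#!/bin/sh
# Sketch of the conditional patch hook described in build_r.sh:
# if a folder ./patches/$1 exists, run the patch.sh inside it.
# Argument handling and invocation style are assumptions.
TARGET="${1:-default}"

if [ -d "./patches/${TARGET}" ]; then
  sh "./patches/${TARGET}/patch.sh" || echo "patch.sh failed for ${TARGET}"
else
  echo "no patches for ${TARGET}"
fi
```

Guarding the step on the folder's existence keeps the build working for targets that need no source patches.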

tensorflow-build/template/cmsis_pdsc.tpl

Lines changed: 2 additions & 2 deletions
@@ -3,8 +3,8 @@
   xmlns:xs="http://www.w3.org/2001/XMLSchema-instance" xs:noNamespaceSchemaLocation="PACK.xsd">
   <vendor>tensorflow</vendor>
   <name>tensorflow-lite-micro</name>
-  <description>Deep learning framework for on-device inference.</description>
-  <!-- web download link -->
+  <description overview="Documentation/README.md">LiteRT, formerly known as TensorFlow Lite, is Google's high-performance runtime for on-device AI.</description>
+
   <url>https://github.com/MDK-Packs/tensorflow-pack/releases/download/%{RELEASE_VERSION}%/</url>
   <license>LICENSE</license>
   <repository type="git">https://github.com/MDK-Packs/tensorflow-pack.git</repository>
