# TensorFlow Lite Micro port for Ingenic MIPS32 SoCs (T31, T40, T41, etc.)
This is a port of TensorFlow Lite Micro for embedded Linux systems running on Ingenic MIPS processors, modeled after esp-tflite-micro.

## Features
- Optimized for Ingenic MIPS32r2 architecture (XBurst1)
- Builds as a static library for easy integration
- Includes signal processing library for audio/spectrogram features
- Reference kernel implementations (no hardware acceleration)
- No OS dependencies beyond standard C/C++ libraries
## Building with Buildroot

The easiest way to build is as a Buildroot external package:
- Copy the package files to your Buildroot external tree:

  ```sh
  mkdir -p /path/to/your-external/package/ingenic-tflite-micro
  cp ingenic-tflite-micro.mk Config.in /path/to/your-external/package/ingenic-tflite-micro/
  ```

- Add to your external tree's `Config.in`:

  ```
  source "$BR2_EXTERNAL_YOUR_NAME_PATH/package/ingenic-tflite-micro/Config.in"
  ```

- Add to your external tree's `external.mk` (if not already present):

  ```make
  include $(sort $(wildcard $(BR2_EXTERNAL_YOUR_NAME_PATH)/package/*/*.mk))
  ```

- Enable the package in menuconfig:

  ```sh
  make menuconfig
  # Navigate to: External options -> ingenic-tflite-micro
  ```

- Build:

  ```sh
  make ingenic-tflite-micro-rebuild
  ```
The library will be installed to `$(STAGING_DIR)/usr/lib/libtflite-micro.a`, with headers in `$(STAGING_DIR)/usr/include/`.
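A downstream Buildroot package can then depend on the library and link against the staged artifacts. A minimal sketch — the `myapp` package name and everything in it are hypothetical, not part of this repo:

```make
# package/myapp/myapp.mk -- hypothetical consumer of the library
MYAPP_DEPENDENCIES = ingenic-tflite-micro

# The staged artifacts to compile and link against:
#   $(STAGING_DIR)/usr/include/          (headers)
#   $(STAGING_DIR)/usr/lib/libtflite-micro.a
# A plain Makefile-based app would add, e.g.:
#   LDLIBS += -ltflite-micro -lstdc++ -lm
```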
## Building with CMake

Prerequisites:
- CMake 3.10+
- Ingenic SDK toolchain (mips-linux-gnu-gcc 5.4+)
Cross-compile with the Ingenic toolchain:

```sh
# Set toolchain path
export INGENIC_SDK=$HOME/github/Ingenic-SDK-T31-1.1.1-20200508

mkdir build && cd build
cmake -DCMAKE_TOOLCHAIN_FILE=../cmake/mips-linux-gnu.cmake ..
make -j$(nproc)
```

Or build natively on the host (e.g. for testing):

```sh
mkdir build && cd build
cmake ..
make -j$(nproc)
```

## Usage

Link against `libtflite-micro.a` and include the headers:
```cpp
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/micro/system_setup.h"

// Initialize the runtime
tflite::InitializeTarget();

// Load the model and create an interpreter
const tflite::Model* model = tflite::GetModel(model_data);
tflite::MicroMutableOpResolver<10> resolver;
// Add needed ops...

tflite::MicroInterpreter interpreter(
    model, resolver, tensor_arena, kTensorArenaSize);
interpreter.AllocateTensors();

// Run inference
interpreter.Invoke();
```

For wake word detection, use the signal processing library:
```cpp
#include "signal/src/rfft.h"
#include "signal/src/filter_bank.h"
#include "signal/src/log.h"
```

## Supported SoCs

- T31 series (T31L, T31N, T31X, T31A, T31AL, T31ZL, T31ZX)
- T21 series (T21L, T21N, T21X, T21Z)
- T23 series (T23N, T23ZN)
- T30 series (T30L, T30N, T30X, T30A)
- T40/T41 series (with appropriate toolchain)
## License

Apache 2.0 (same as TensorFlow)