EdgeNeuron is an Arduino-friendly wrapper around TensorFlow Lite Micro that simplifies running Tiny Machine Learning (TinyML) models on Arduino boards. It provides a beginner-friendly, Arduino-style API that avoids complex C++ constructs such as raw pointers, making it accessible to developers familiar with standard Arduino programming.
- Intuitive API: Simplified functions for deploying TensorFlow Lite Micro models on Arduino-compatible boards.
- No Raw Pointers: Designed to avoid complex C++ constructs, enabling easy integration into Arduino sketches.
- Optimized for TinyML: Tailored for experimentation and development on boards with sufficient computational power.
- Open the Arduino IDE.
- Navigate to Sketch > Include Library > Manage Libraries.
- Search for `EdgeNeuron` and click **Install**.
- Download the latest `.zip` file from this GitHub repository.
- In the Arduino IDE, navigate to Sketch > Include Library > Add .ZIP Library.
- Select the downloaded `.zip` file to install the library.
The provided examples demonstrate how to:
- Deploy pre-trained models for inference.
- Read data from sensors, preprocess it, and input it into the model.
- Retrieve and process model outputs.
- **Data Collection:** Use an Arduino sketch to collect the sensor data required for training your model.
- **Model Creation:** Define and develop a deep neural network (DNN) in a TensorFlow environment (e.g., Google Colaboratory).
- **Model Training:** Train the model on the collected dataset in TensorFlow.
- **Model Conversion:** Convert the trained model to TensorFlow Lite format and export it as a `.h` file containing a static byte array.
- **Inference Sketch Preparation:** Write an Arduino sketch that deploys the trained model for real-time inference.
- **Include Required Files:** Add `EdgeNeuron.h` and your model's header file (e.g., `model.h`) to your sketch.
- **Tensor Arena Definition:** Allocate memory for model operations by declaring a byte array (the tensor arena).
- **Model Initialization:** Use the `modelInit()` function to load the model and prepare the input/output tensors.
- **Input Data Assignment:** Populate the input tensor with data using `modelSetInput()`.
- **Run Inference:** Perform inference using `modelRunInference()`.
- **Retrieve Outputs:** Extract the results using `modelGetOutput()`.
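The steps above can be combined into a minimal sketch. This is a sketch under assumptions, not a definitive example: the `modelInit(model, arena, size)` signature, the `model` symbol name exported by `model.h`, and the arena size are all illustrative; adapt them to your generated header and board.

```cpp
#include <EdgeNeuron.h>
#include "model.h"  // static byte array exported from TensorFlow Lite

// Tensor arena: working memory for model operations.
// 8 KB is an illustrative size; tune it to your model.
constexpr int kArenaSize = 8 * 1024;
alignas(16) byte tensorArena[kArenaSize];

void setup() {
  Serial.begin(9600);
  // Load the model and prepare input/output tensors.
  if (!modelInit(model, tensorArena, kArenaSize)) {
    Serial.println("Model initialization failed!");
    while (true) {}  // halt
  }
}

void loop() {
  // 1. Populate the input tensor (single normalized reading shown).
  float reading = analogRead(A0) / 1023.0;
  modelSetInput(reading, 0);

  // 2. Run inference.
  if (!modelRunInference()) {
    Serial.println("Inference failed!");
    return;
  }

  // 3. Retrieve the result.
  float result = modelGetOutput(0);
  Serial.println(result);
  delay(1000);
}
```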
`modelInit()` initializes the TensorFlow Lite Micro environment, sets up the model, resolves operations, and allocates memory for tensors.

Returns: `true` if successful, `false` if an error occurs.
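A typical call, assuming `modelInit()` accepts the model byte array, a tensor arena, and the arena size (the exact signature may differ in your library version):

```cpp
// Arena size depends on the model; 8 KB is illustrative.
alignas(16) byte tensorArena[8 * 1024];

void setup() {
  Serial.begin(9600);
  // `model` is the static byte array from your generated model header.
  if (!modelInit(model, tensorArena, sizeof(tensorArena))) {
    Serial.println("modelInit() failed: check the model and arena size.");
  }
}
```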
`modelSetInput()` sets the value at a specific index in the input tensor.

Parameters:
- `inputValue`: the input value to set.
- `index`: the index within the input tensor.

Returns: `true` if successful, `false` if the index is invalid or an error occurs.
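For multi-value inputs, each element is written by index. The input size and feature values below are illustrative, not part of the library:

```cpp
const int kInputSize = 3;  // e.g., accelerometer x, y, z
float features[kInputSize] = {0.1, -0.4, 0.9};  // placeholder sensor data

for (int i = 0; i < kInputSize; i++) {
  if (!modelSetInput(features[i], i)) {
    Serial.println("Failed to set input: invalid index?");
  }
}
```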
`modelRunInference()` executes the inference operation using the TensorFlow Lite Micro interpreter.

Returns: `true` if successful, `false` if inference fails.
`modelGetOutput()` retrieves the output value at a specified index in the output tensor.

Parameters:
- `index`: the index within the output tensor.

Returns: the output value if successful, or `-1` if the index is invalid or an error occurs.
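For a classifier, the outputs can be scanned for the highest score. The class count here is illustrative; note that because `-1` doubles as the error sentinel, valid scores are assumed to be non-negative (e.g., softmax probabilities):

```cpp
const int kNumClasses = 3;  // illustrative class count

int best = 0;
float bestScore = modelGetOutput(0);
for (int i = 1; i < kNumClasses; i++) {
  float score = modelGetOutput(i);  // -1 would indicate an error
  if (score > bestScore) {
    bestScore = score;
    best = i;
  }
}
Serial.print("Predicted class: ");
Serial.println(best);
```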
- Support for advanced machine learning models (e.g., object detection).
- Extended compatibility with additional TensorFlow Lite features.
- Improved optimization for memory-constrained devices.
We welcome contributions to enhance EdgeNeuron!
- Submit bug reports or feature requests via the GitHub Issues page.
- Fork the repository, make changes, and submit a pull request.
This project is licensed under the MIT License. See the LICENSE file for more details.
Visit Consentium IoT Documentation for tutorials, examples, and additional resources.
For further assistance, email us at [email protected].