expts_grove_vision_ai_v2

Experiments using the Grove Vision-AI V2 and Seeed SenseCraft

Two 2-eye Skulls and Control Boxes, one with Viking hat and one with Pirate hat

References

Documentation for using Vision-AI V2

Hardware Info

The following link captures hardware information about the Seeed Studio Vision-AI V2 and XIAO ESP32-C3 used in this project.

The following link contains information about the Skull Project and its Adafruit HalloWing M4 Express.

Analog Communications Info

The following link captures hardware/software information about the analog communication from the XIAO ESP32-C3 to the HalloWing M4 Express (SAMD51).

Use of Existing Software

The Skull Project software is found here in directory mdo_m4_skull_project (forked from Adafruit).

My LEDC PWM software for ESP32 analog outputs is here, though I might simplify it for this purpose.

My Over-The-Air (OTA) Updating software is here.

My Universal Remote software, which commands over WiFi via ESP-NOW, is here.

Reset or Update Flash Memory

See https://wiki.seeedstudio.com/grove_vision_ai_v2/ and search for "Boot / Reset / Flashed Driver" or "Bootloader Recovery Tool Manual".

Installing Seeed_Arduino_SSCMA Communication Library into Arduino IDE

Be aware that there are three things related to Seeed SenseCraft Model Assistant and referred to as SSCMA:

  1. https://github.com/Seeed-Studio/Seeed_Arduino_SSCMA - Seeed_Arduino_SSCMA - a library for use in Arduino IDE to assist in communicating with AI models running on a different hardware module - included in XIAO ESP32-C3 code
  2. https://github.com/Seeed-Studio/SSCMA-Micro - SSCMA-Micro - a cross-platform machine learning inference framework - implements digital image processing, neural network inferencing, AT command interaction, and more - included in Vision-AI V2 code
  3. https://wiki.seeedstudio.com/ModelAssistant_Introduce_Overview - Seeed SenseCraft (AI) Model Assistant - an open-source project focused on embedded AI including tools and a model zoo - on the web

I will use the Seeed_Arduino_SSCMA library on the XIAO ESP32-C3 to communicate with the Grove Vision-AI V2, which runs SSCMA-Micro. I will use Seeed SenseCraft (AI) Model Assistant to load my vision models onto the Vision-AI V2.

https://github.com/Seeed-Studio/Seeed_Arduino_SSCMA says to download a ZIP file and add the library to your Arduino IDE by selecting Sketch > Include Library > Add .ZIP Library.

  • They don't explicitly say it, but they expect you to click the green Code button and choose Download ZIP.
  • Then you point the Arduino IDE at that ZIP file.

The Seeed_Arduino_SSCMA library is alternatively now available by searching in the Arduino IDE library manager:

  • Go to Tools > Manage Libraries.
  • Search for and install Seeed_Arduino_SSCMA

Look in https://github.com/Seeed-Studio/Seeed_Arduino_SSCMA/blob/main/src/Seeed_Arduino_SSCMA.h to see available options in class SSCMA.

According to ChatGPT-5, the speed difference between I2C and UART with SSCMA on the Grove Vision-AI V2 is significant.

  • I2C speed
    • Runs at standard 100 kHz or fast 400 kHz bus speeds (configurable).
    • Effective throughput: ~20–30 KB/s max at 400 kHz.
    • Suitable for commands, control, and small inference results (like classification labels or bounding box data).
    • Not suitable for image streaming or large model data transfers — it will quickly bottleneck.
  • UART speed
    • SSCMA UART defaults to 921600 baud (~ 0.92 Mbps).
    • Can sometimes be lowered (115200, 460800) if needed, but 921600 is recommended for tasks transferring vision data.
    • Effective throughput: ~90–100 KB/s real-world.
    • Good enough for sending inference results quickly and even compressed image chunks if needed.
    • Must use hardware UART (software serial usually insufficient).
    • Much faster and more reliable for continuous AI tasks compared to I²C.
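
The quoted throughput figures can be sanity-checked with simple arithmetic: I2C spends roughly 9 bit-times per data byte (8 data bits + ACK) plus addressing overhead, while UART 8N1 spends 10 bit-times per byte (start + 8 data + stop). The 0.6 I2C protocol-efficiency factor below is my own rough assumption, not a measured value:

```python
# Back-of-the-envelope check of the I2C vs UART throughput numbers above.

def i2c_bytes_per_sec(bus_hz, bits_per_byte=9, protocol_efficiency=0.6):
    """Rough effective I2C payload rate, derated for addressing/start-stop overhead."""
    return bus_hz / bits_per_byte * protocol_efficiency

def uart_bytes_per_sec(baud, bits_per_byte=10):
    """Raw UART 8N1 payload rate: start bit + 8 data bits + stop bit per byte."""
    return baud / bits_per_byte

i2c_kbs = i2c_bytes_per_sec(400_000) / 1000    # fast-mode I2C
uart_kbs = uart_bytes_per_sec(921_600) / 1000  # SSCMA default baud

print(f"I2C  @ 400 kHz : ~{i2c_kbs:.0f} KB/s")   # ~27 KB/s, in the 20-30 KB/s range
print(f"UART @ 921600  : ~{uart_kbs:.0f} KB/s")  # ~92 KB/s, in the 90-100 KB/s range
```

This agrees with the figures above: UART at 921600 baud is roughly 3-4x the effective I2C fast-mode rate, before counting I2C's extra per-transaction overhead.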

Usage of Seeed_Arduino_SSCMA Library

When called without parameters, the SSCMA::begin() function typically uses a default I2C interface, assuming the hardware is configured for it. The library is designed to work with Seeed Studio's hardware, and the no-parameter version of begin() provides a convenient way to initialize the communication interface for devices that use a standard I2C connection, such as the Grove Vision-AI V2.
For example, when using the Grove Vision-AI V2, the begin() function with no arguments:

  • Initializes the I2C communication protocol using the default Wire object.
  • Uses the default I2C address (I2C_ADDRESS) to identify and communicate with the device.

To use the XIAO ESP32-C3 hardware UART, I started it this way:

// Seeed_Arduino_SSCMA library header and the SSCMA object used below
#include <Seeed_Arduino_SSCMA.h>

SSCMA AI;

// Define pins for Seeed XIAO ESP32-C3
// GPIO20 = RX, GPIO21 = TX
#define RX_PIN 20     // D7 on XIAO ESP32-C3
#define TX_PIN 21     // D6 on XIAO ESP32-C3

  // within setup() routine

  Serial.begin(115200);   // Serial monitor
  delay(1000);

  // Start hardware UART1 for communicating with Vision AI
  Serial1.begin(921600, SERIAL_8N1, RX_PIN, TX_PIN);
  // Initialize SSCMA library with Vision AI using UART
  if (!AI.begin(&Serial1)) {
    Serial.println("Failed to initialize Vision AI V2 over UART!");
    while (1);
  }
  Serial.println("Vision AI V2 initialized successfully!");

AI Models

There are several ways to get a model into the Vision-AI V2 module; below are some of them I have found.

SenseCraft

SenseCraft - https://wiki.seeedstudio.com/sensecraft-ai/overview/

Edge Impulse

Edge Impulse - https://wiki.seeedstudio.com/edgeimpulse/

HimaxWiseEyePlus

HimaxWiseEyePlus - https://github.com/HimaxWiseEyePlus/Seeed_Grove_Vision_AI_Module_V2

Ultralytics

Ultralytics seems to concentrate on vision models.

  • It seems to allow exporting trained models to different formats for deployment on inexpensive hardware.
  • They also have a cool Ultralytics App that lets you run models on your smartphone (Android or iOS).
  • Ultralytics is especially interesting to me since I can download the training software and (slowly) train on my own equipment.
    • I am not eager to (for instance) upload photos or voice captures of my family to the internet.

Here are some entries to the Ultralytics world:

Any Model Export - example Ultralytics - then through SenseCraft

Be aware that two different things are referred to as SSCMA here:

  1. https://github.com/Seeed-Studio/Seeed_Arduino_SSCMA - a library for use in Arduino IDE to assist in communicating with AI models running on a different hardware module
  2. https://wiki.seeedstudio.com/ModelAssistant_Introduce_Overview - Seeed SenseCraft (AI) Model Assistant - an open-source project focused on embedded AI including tools and a model zoo

In the Model Export section we will use tools from the SSCMA open-source project.

NOTE: Due to the size limitation, currently both XIAO ESP32S3 and Grove Vision-AI V2 only support int8 format models.

NOTE: here is the start of the Seeed Studio docs on how to do this:

NOTE: here is a Seeed Wiki document on how to do this:

NOTE: here is a Hackster.io example of how to do this:

NOTE: this YouTube video shows an example project to detect different birds: https://www.youtube.com/watch?v=zdrtL1XDRn0&list=PLmOy82pCgLFnYoYTYNyQQvGa9bUU7hC5K

  • Shows how to export a trained model to SenseCraft
  • Shows how to download the inference picture (third param = true on inference call)
  • Lots of good practical info from an actual Vision-AI V2 project

To deploy an Ultralytics model to a Seeed Grove Vision-AI V2, we convert the model to the specific format required by the device's Himax WiseEye2 processor. The deployment process is managed by Seeed Studio's SenseCraft AI platform, which handles the flashing of the converted model onto the hardware. [1] [2] [3]

Export an Ultralytics Model

First, export a custom-trained or standard Ultralytics model (e.g., YOLOv8) to the ONNX format. This is done from the Python training environment or a notebook using the Ultralytics export function. [1] [4]

from ultralytics import YOLO

model = YOLO('yolov8n.pt')  # Load a trained model or use a custom trained model
model.export(format='onnx', opset=12) # Export the model to ONNX format

Convert ONNX to int8_vela.tflite

Use the SenseCraft Model Assistant, a Google Colab-based tool provided by Seeed Studio, to convert an ONNX model to the required int8_vela.tflite format.

  1. Open the SenseCraft Model Assistant Colab notebook linked in the Seeed Studio Wiki. [8] [9] [10] [11] [12] [13] [14]
  2. Follow the notebook's instructions to upload the exported ONNX model.
  3. The notebook will perform the necessary quantization and conversion steps, producing a file with the int8_vela.tflite extension.
  4. Download the converted model file to your computer. [1]

NOTE: read these forum entries to get Python dependencies correct:

Use SenseCraft AI to deploy the model

With the converted model, we can now use the SenseCraft AI platform to flash it to the Vision-AI V2 device.

  1. Connect the Grove Vision-AI V2 to your computer using a USB-C cable.
  2. Navigate to the SenseCraft AI Model Assistant webpage in your browser.
  3. In the web interface, click on Device Workspace, then select Grove - Vision-AI V2.
  4. Click the Connect button and select the serial port for the Vision-AI V2 device from the pop-up window.
  5. After connecting, select the option to Upload Custom AI Model.
  6. You will be prompted to provide the model name, your model file, and the list of labels used for your dataset.
  7. Click Send Model to begin the upload process. The flashing can take several minutes. [1] [5] [6] [7]

Test and view results

Once the upload is complete, the SenseCraft web interface should automatically display the camera's live feed with your model's inference results overlaid on the video. [7]

References


Experiment 01 - Face Following

Follow Experiment 01 in detail here:

Experiment 01 Introduction

There is a demo project that does face following with a fan:

My first usage will be to add face following to my Skull Project https://github.com/Mark-MDO47/Skull-Project. We want to make the eyes follow the face detected by the Vision-AI V2.

The Skull Project uses eyes made from Adafruit HalloWing M4 Express. These use the SAMD51 processor (ATSAMD51J19).

Because the Skull Project eyes are pretty busy just displaying the eyes, I don't want to interrupt them at random times with an I2C or UART message. Thus I plan to output the position information on two ESP32-C3 Analog channels, and the SAMD51 in the eyes can sample the information at any convenient time that doesn't interrupt its processing.
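
The encoding itself is just a linear map from pixel coordinates to PWM duty cycle. A minimal host-side sketch of the idea (the 240x240 inference frame size and 8-bit LEDC duty resolution are my illustrative assumptions, not fixed by the hardware):

```python
# Map a detected face center (in pixels) to a PWM duty value that the
# SAMD51 can read back as an analog voltage after low-pass filtering.
# Assumed for illustration: 240x240 inference frame, 8-bit duty resolution.

FRAME_W, FRAME_H = 240, 240
DUTY_MAX = 255  # 8-bit LEDC resolution

def position_to_duty(px, frame_size, duty_max=DUTY_MAX):
    """Linear map: 0..frame_size-1  ->  0..duty_max."""
    px = max(0, min(px, frame_size - 1))   # clamp out-of-range detections
    return round(px * duty_max / (frame_size - 1))

# Example: face centered in the frame lands near mid-scale on both channels
x_duty = position_to_duty(120, FRAME_W)  # -> 128
y_duty = position_to_duty(120, FRAME_H)  # -> 128
```

On the ESP32-C3 side the two duty values would be written to two LEDC channels; the SAMD51 then samples the filtered voltages with its ADC whenever its eye-display loop has time.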

I want to communicate X and Y position on two separate analog outputs from the XIAO ESP32-C3, but it only has one actual DAC output pin (D10).

I will use the ESP32 LEDC library for analog output; that way I can use the same code for both analog outputs. I need to implement filtering on the analog outputs so the SAMD51 can do reliable sensing.

I previously did LEDC analog outputs in my https://github.com/Mark-MDO47/DuelWithBanjos project. I didn't need any analog filtering with the LED outputs. Analog filtering is definitely needed for this application. More detail here:
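
For the filtering, a single RC low-pass is the usual starting point when using PWM as a poor man's DAC: pick a corner frequency well below the PWM frequency so the carrier is attenuated, but high enough that the output can still track a moving face. A sketch of the math (the 5 kHz PWM frequency and the 10 kΩ / 1 µF component values are illustrative assumptions, not measured from this project):

```python
import math

# First-order RC low-pass as a PWM-to-analog reconstruction filter.
PWM_HZ = 5_000     # assumed LEDC PWM frequency
R_OHMS = 10_000    # assumed series resistor
C_FARADS = 1e-6    # assumed capacitor to ground

def cutoff_hz(r, c):
    """-3 dB corner frequency of a first-order RC filter: 1/(2*pi*R*C)."""
    return 1.0 / (2.0 * math.pi * r * c)

def attenuation_at(f, fc):
    """First-order magnitude response |H(f)| = 1/sqrt(1 + (f/fc)^2)."""
    return 1.0 / math.sqrt(1.0 + (f / fc) ** 2)

fc = cutoff_hz(R_OHMS, C_FARADS)      # ~15.9 Hz corner
ripple = attenuation_at(PWM_HZ, fc)   # residual 5 kHz carrier amplitude

print(f"corner ~{fc:.1f} Hz, 5 kHz carrier passed at {ripple:.4f} of full scale")
```

With these values the 5 kHz carrier is knocked down to well under 1% of full scale, while face motion (a few Hz) passes nearly unattenuated; faster tracking would need a smaller RC product at the cost of more PWM ripple reaching the SAMD51 ADC.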
