15 changes: 9 additions & 6 deletions Projects/Projects/Edge-AI-On-Mobile.md
@@ -43,17 +43,20 @@ Other devices also support SME2, including both Apple and Android - [see the ful

### Project Summary

-Select a **mobile edge AI application** that benefits from large matrix operations, multi-modal fusion, or transformer-based processing enabled by SME2. Build and optimize a proof-of-concept application on a vivo X300 phone or other device supporting SME2.
+Select a **mobile edge AI application** that benefits from large matrix operations, multi-modal fusion, or transformer-based processing accelerated by SME2, with real-time speech-to-speech translation, profanity filtering or filler-word removal, and on-device Small Language Models (SLMs) as key example use cases.

Example project areas:
-- Real-time video semantic segmentation (e.g., background removal + AR compositing)
-- Live object detection + natural-language description (text summary of what the camera sees)
-- Multi-sensor fusion (camera + IMU + microphone) for gesture + voice recognition
-- On-device lightweight LLM or encoder-only transformer processing for mobile assistants
+- Real-time speech-to-speech translation
+- On-device SLM for contextual understanding, rewriting, or assistant tasks
+- Profanity detection and filtering (masking, rewriting, audio bleeping)
+- Filler-word removal / speech clean-up (e.g., removing “um”, “uh”, false starts)
+- Real-time video semantic segmentation (e.g., AR captions + background removal)
+- Live object detection with natural-language description
+- Multi-sensor fusion (camera + IMU + microphone) for gesture + voice interaction

Identify a model architecture that maps to wide matrix operations (e.g., ViT, MLP-Mixer, multi-branch CNN with large FC layers). Utilise a mobile-friendly framework (e.g., ExecuTorch, LiteRT, ONNX Runtime, MediaPipe) to leverage SME2 optimizations. Optimize quantization, memory layout, and verify that the large matrix multiplications get scheduled efficiently on the SME2-enabled CPU. Build a mobile app (Android or iOS) that executes the model and utilises it for a compelling use-case.
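The optimization step this paragraph describes (quantize the model, then confirm the wide matrix multiplications are the part being accelerated) can be sketched in plain NumPy. This is an illustrative sketch, not part of the PR: it shows symmetric per-tensor int8 quantization of one hypothetical wide fully-connected layer and the int32-accumulated integer matmul that SME2's matrix tiles are designed to speed up; the shapes and helper names are assumptions, not values from any real model.

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Symmetric per-tensor int8 quantization: x is approximated by scale * q."""
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

# Hypothetical wide FC layer, e.g. one transformer MLP projection: y = x @ W
rng = np.random.default_rng(0)
x = rng.standard_normal((1, 1024)).astype(np.float32)
W = rng.standard_normal((1024, 4096)).astype(np.float32)

qx, sx = quantize_int8(x)
qW, sW = quantize_int8(W)

# Integer matmul accumulated in int32 (the shape of work SME2 tiles perform),
# dequantized back to float with the combined scale.
y_int8 = (qx.astype(np.int32) @ qW.astype(np.int32)) * (sx * sW)
y_fp32 = x @ W

# Quantization error should stay small relative to the output magnitude.
rel_err = np.abs(y_int8 - y_fp32).max() / np.abs(y_fp32).max()
print(f"max relative error: {rel_err:.4f}")
```

In practice a framework such as ExecuTorch, LiteRT, or ONNX Runtime applies this transformation for you during post-training quantization; the sketch only makes visible what "the large matrix multiplications get scheduled efficiently" refers to.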

-Utilise the resources and learning paths below and create an exciting and challenging application. Optionally, you could also compare performance vs a reference phone without SME2.
+Utilise the resources and learning paths below and create an exciting and challenging application. As a bonus, you could compare performance vs a reference phone without SME2.
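For the suggested performance comparison against a reference phone, a small timing harness is enough to get comparable numbers. This is a hedged sketch, not from the PR: the workload is a stand-in matmul, and `benchmark` is a hypothetical helper you would point at your real model's inference call on each device.

```python
import time
import numpy as np

def benchmark(fn, warmup: int = 3, iters: int = 20) -> float:
    """Median wall-clock latency of fn() in milliseconds."""
    for _ in range(warmup):   # warm caches before timing
        fn()
    samples = []
    for _ in range(iters):
        t0 = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - t0) * 1e3)
    return float(np.median(samples))

# Stand-in workload: the kind of wide matmul a transformer FC layer performs.
x = np.random.standard_normal((1, 1024)).astype(np.float32)
W = np.random.standard_normal((1024, 4096)).astype(np.float32)

ms = benchmark(lambda: x @ W)
print(f"median latency: {ms:.3f} ms")
```

Running the same harness around the same model on an SME2 device and a non-SME2 reference phone gives a like-for-like latency comparison; the median is used because mobile thermal throttling makes means noisy.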

## Resources from Arm and our partners
