This project uses an ESP32-CAM (AI Thinker) module to run an Edge Impulse–trained AI object detection model and stream the results live through a web browser.
Camera frames are processed directly on the ESP32, objects are detected using the AI model, and bounding boxes with labels and confidence scores are drawn on the live video stream.
- ESP32-CAM (AI Thinker) support
- Object Detection with Edge Impulse
- Bounding boxes drawn on live video
- Built-in web server
- Real-time MJPEG streaming (/stream endpoint)
- ESP32-CAM (AI Thinker)
- USB-to-TTL adapter (FTDI, CP2102, etc.)
- Jumper wires
- Higher resolutions reduce FPS
- Many detections increase RAM usage
- ESP32-CAM has limited compute resources for large models
- Open Arduino IDE -> File -> Preferences
- Add the following URL to the Additional boards manager URLs field:
https://raw.githubusercontent.com/espressif/arduino-esp32/gh-pages/package_esp32_index.json
- Complete the Getting Ready steps
- Open Arduino IDE -> Library Manager
- Install the EloquentEsp32Cam library
- Go to File -> Examples -> EloquentEsp32Cam -> Collect_Images_for_EdgeImpulse
- Upload the code to the ESP32-CAM (for programming, see Programming ESP32-CAM)
- Collect images of the object to be detected (collect at least 50 images)
The ESP32-CAM does not include an integrated USB-to-serial programmer, so an external FTDI (USB-to-TTL) adapter is used to upload code. (You can also use an Arduino Uno as a USB-to-serial adapter; for more information, refer to tutorial videos on YouTube.)
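A common FTDI-to-ESP32-CAM wiring for flashing looks like the following (double-check your adapter's voltage setting before connecting; GPIO0 must be tied to GND only while uploading):

```
FTDI adapter        ESP32-CAM
------------        ---------
5V          <-->    5V
GND         <-->    GND
TX          <-->    U0R (RX)
RX          <-->    U0T (TX)
                    GPIO0 -> GND  (only during flashing; disconnect and press RESET to run)
```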
- Go to https://studio.edgeimpulse.com/
- Create new project
- Navigate to Data acquisition -> Add data -> Upload data
- Select the folder containing the collected images
- Go to label section and choose Enter label
- Enter your object name
- Click on upload data button
- Go to the Labelling Queue and label your images
- Navigate to Create Impulse
- Add an Image Processing block
- Add an Object Detection learning block
- Save the impulse
- Go to Image and set Color depth to Grayscale
- Save parameters and click Generate Features
- Go to Object Detection and set the Learning rate to 0.01
- Click Start Training
- After training is complete, go to Deployment
- Select Arduino Library as the deployment option
- Set the target to Espressif ESP-EYE (ESP32 240MHz)
- Click Build and download the generated library
- In Arduino IDE, go to Sketch → Include Library → Add .ZIP Library
- Open ObjectDetection.ino and replace `#include <Maden_suyu_detection_inferencing.h>` with your own model's header file
Update the WiFi credentials in the code before uploading it to the ESP32-CAM:

```cpp
const char *ssid = "*******";
const char *password = "*******";
```

- Upload the code, open the Serial Monitor at 115200 baud, and copy the IP address
- Open a browser and navigate to:
http://ESP32_IP_ADDRESS/
You will see the live camera stream with detected objects highlighted.
This project is licensed under the MIT License - see the LICENSE file for details.
If you need any help contact me on LinkedIn.
⭐ If you like this project, don’t forget to give it a star on GitHub!


