VisiSign is a project developed in collaboration with Visimind that uses the YOLO11 model to detect traffic signs in images.
VisiSign consists of three main components:
- 🖥️ Backend – FastAPI server with JWT authentication, WebSocket support, and real-time image processing using a pretrained YOLO11 model
- 📱 Frontend – Mobile app (React Native, WIP) for capturing and sending images
- 🧠 Model – (Optional) Data pipeline and training code for retraining YOLO11 on custom traffic sign datasets
- ✅ For users/testers: you only need the Backend, the Frontend, and a pretrained model file (e.g., VisiSign_Advanced.pt); no training is required.
- 🛠️ For developers/researchers: you can use the Model/ tools to download custom datasets, apply augmentations, train YOLO11 from scratch, and log results to MLflow (see the training sketch below).
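As a rough orientation for that workflow, here is a minimal training sketch. It assumes the ultralytics and mlflow packages and a YOLO-format data.yaml; the dataset path, epoch count, and experiment name are illustrative, not the project's actual configuration.

```python
# Minimal sketch: fine-tune a pretrained YOLO11 model on a custom traffic sign
# dataset and log the run to MLflow. Paths and hyperparameters are placeholders.
from pathlib import Path

import mlflow
from ultralytics import YOLO

DATA_YAML = "datasets/traffic_signs/data.yaml"  # assumed YOLO-format dataset config

mlflow.set_experiment("visisign-yolo11")  # hypothetical experiment name

with mlflow.start_run():
    model = YOLO("yolo11n.pt")  # start from pretrained weights
    results = model.train(data=DATA_YAML, epochs=50, imgsz=640)

    # Record the key settings and keep the best weights as a run artifact.
    mlflow.log_params({"base_model": "yolo11n.pt", "epochs": 50, "imgsz": 640})
    mlflow.log_artifact(str(Path(results.save_dir) / "weights" / "best.pt"))
```

Ultralytics also ships an MLflow integration that can be switched on through its settings, which can replace the manual logging shown here.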
- Detects traffic signs from images sent via WebSocket (see the endpoint sketch after this list)
- Real-time processing using YOLO11
- Reports user results and statistics
- Secure authentication using JWT
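To show how the image-over-WebSocket flow can fit together, here is a minimal sketch of a detection endpoint. The route name, model filename, and response shape are assumptions for illustration, and JWT authentication is omitted for brevity.

```python
# Minimal sketch of a WebSocket detection endpoint (illustrative names and paths).
import io

from fastapi import FastAPI, WebSocket, WebSocketDisconnect
from PIL import Image
from ultralytics import YOLO

app = FastAPI()
model = YOLO("VisiSign_Advanced.pt")  # pretrained traffic sign weights

@app.websocket("/ws/detect")  # hypothetical route
async def detect(websocket: WebSocket):
    await websocket.accept()
    try:
        while True:
            # Each message is expected to be a raw JPEG/PNG frame from the app.
            data = await websocket.receive_bytes()
            image = Image.open(io.BytesIO(data))
            result = model(image)[0]

            # Reply with one entry per detected sign: class name, confidence, box.
            await websocket.send_json([
                {
                    "label": result.names[int(box.cls)],
                    "confidence": float(box.conf),
                    "box_xyxy": [float(v) for v in box.xyxy[0]],
                }
                for box in result.boxes
            ])
    except WebSocketDisconnect:
        pass
```

A tester could exercise such an endpoint with any WebSocket client, for example:

```python
# Send one image and print the detections (assumes the sketch above is running locally).
import asyncio
import websockets

async def main():
    async with websockets.connect("ws://localhost:8000/ws/detect") as ws:
        await ws.send(open("stop_sign.jpg", "rb").read())
        print(await ws.recv())

asyncio.run(main())
```

In the actual server, the blocking inference call would typically be moved off the event loop (e.g., with fastapi.concurrency.run_in_threadpool) and the socket gated by the JWT check.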
- Frontend: React Native (Expo)
- Backend: Python + FastAPI + PostgreSQL
- ML: YOLO11 (Ultralytics)
- Experiment tracking: MLflow
- Docker: Fully container-ready
- Model: data pipeline and training code (optional, for retraining)
- Backend: FastAPI server and real-time detection API
- Frontend: React Native (Expo) mobile app
Documentation is available in the Docs folder
This project is licensed under the MIT License.