Real-time library seat occupancy detection using YOLOv7 object detection and tracking
SeatWatch is an AI-powered system that monitors library seat occupancy in real-time using computer vision. It detects when seats are occupied (with or without a person present) and tracks how long belongings have been left unattended, helping libraries manage space more efficiently.
- Real-time Detection: Uses YOLOv7 for accurate person and chair detection
- Object Tracking: Advanced SORT algorithm tracks individual seats over time
- Occupancy Timing: Monitors how long seats remain occupied without a person
- Automated Alerts: Flags seats held too long with "TIME EXCEEDED" warnings
- Video Processing: Supports various video formats (MP4, AVI, etc.)
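The occupancy-timing behavior described above can be sketched in a few lines of Python. This is a simplified illustration only, not the project's actual implementation; the `SeatTimer` class and its threshold value are assumptions made for the example.

```python
import time

# Hypothetical sketch of the occupancy-timing idea: a seat that is occupied
# by belongings but has no person present accumulates "unattended" time and
# is flagged once a configurable threshold is exceeded.
class SeatTimer:
    def __init__(self, threshold_s=30 * 60):
        self.threshold_s = threshold_s   # e.g. flag after 30 minutes
        self.unattended_since = None     # when the person was last seen

    def update(self, occupied, person_present, now=None):
        """Return True if the seat has been held too long without a person."""
        now = time.time() if now is None else now
        if not occupied or person_present:
            self.unattended_since = None  # reset: seat is free or person is back
            return False
        if self.unattended_since is None:
            self.unattended_since = now   # belongings just became unattended
        return now - self.unattended_since >= self.threshold_s
```

A caller would invoke `update()` once per processed frame with the current detection results and raise a "TIME EXCEEDED" alert whenever it returns `True`.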
- Python 3.8+ installed on your system
- ~3GB free disk space (for dependencies and models)
- Internet connection for downloading models
- Clone the repository

  ```bash
  git clone https://github.com/priyanshuharshbodhi1/Library-Seat-Occupancy-Detection
  cd Library-Seat-Occupancy-Detection
  ```

- Set up a virtual environment (recommended)

  ```bash
  python3 -m venv venv
  source venv/bin/activate  # On Windows: venv\Scripts\activate
  ```

- Install dependencies

  ```bash
  pip install -r requirements.txt
  ```

- Download the model weights

  ```bash
  python download_models.py
  ```

  This downloads the YOLOv7 weights (~74MB) required for detection.

- Verify the installation

  ```bash
  python -c "import cv2, torch, numpy; print('Installation successful!')"
  ```
The project now includes a full-featured REST API for video processing via HTTP requests.
```bash
# Quick start with Python script
python run_api.py

# Or with Docker
docker-compose up -d
```

Upload a video and get the results:

```python
import requests

# Upload video
response = requests.post(
    "http://localhost:8000/api/detect",
    files={"video": open("library_video.mp4", "rb")},
)
job_id = response.json()["job_id"]

# Check status
status = requests.get(f"http://localhost:8000/api/jobs/{job_id}").json()

# Download results
requests.get(f"http://localhost:8000/api/download/{job_id}")
```

Full API documentation: see API_README.md
API Features:
- RESTful HTTP endpoints
- Video upload via multipart/form-data
- JSON results with detection statistics
- Download of processed videos
- Real-time progress tracking
- Docker support
- Interactive API docs at `/docs`
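Since processing runs asynchronously, a client typically polls the job endpoint until the work finishes. The loop below is a hedged sketch: `get_status` is any callable returning the job-status dict (in practice a `requests.get(...).json()` call), and the `"done"`/`"failed"` status values are assumptions about the API's response schema.

```python
import time

# Poll a job-status callable until the job reaches a terminal state.
# `sleep` and `clock` are injectable so the loop is easy to test.
def wait_for_job(get_status, poll_s=2.0, timeout_s=600.0,
                 sleep=time.sleep, clock=time.monotonic):
    deadline = clock() + timeout_s
    while clock() < deadline:
        status = get_status()
        if status.get("status") in ("done", "failed"):
            return status
        sleep(poll_s)
    raise TimeoutError("job did not finish in time")
```

With `requests`, this could be called as `wait_for_job(lambda: requests.get(f"http://localhost:8000/api/jobs/{job_id}").json())`.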
```bash
python detect_and_track.py --source your_video.mp4
```

```bash
python detect_and_track.py \
    --weights yolov7.pt \
    --source "library_video.mp4" \
    --conf-thres 0.4 \
    --classes 0 56 \
    --name "Library_Detection_Run" \
    --view-img
```

| Argument | Description | Default | Example |
|---|---|---|---|
| `--source` | Input video file path | Required | `"video.mp4"` |
| `--weights` | Model weights file | `yolov7.pt` | `yolov7.pt` |
| `--conf-thres` | Detection confidence threshold | `0.25` | `0.4` |
| `--classes` | Specific classes to detect | All classes | `0 56` |
| `--name` | Output experiment name | Auto-generated | `"my_test"` |
| `--view-img` | Show real-time video preview | `False` | Add the flag |
- Class 0: Person (human detection)
- Class 56: Chair (furniture detection)
- Use `--classes 0 56` to detect only people and chairs for better performance
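Once people (class 0) and chairs (class 56) are detected, one plausible way to decide whether a chair is occupied by a person is bounding-box overlap. The sketch below illustrates that idea only; the project itself tracks boxes with SORT, and the `min_iou` threshold here is an assumption for the example.

```python
# Hypothetical sketch: match a person detection (class 0) to a chair
# detection (class 56) via intersection-over-union of their boxes.
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def chair_has_person(chair_box, person_boxes, min_iou=0.2):
    """A chair counts as person-occupied if some person box overlaps it enough."""
    return any(iou(chair_box, p) >= min_iou for p in person_boxes)
```

Chairs that are occupied (e.g. by belongings) but fail this person check are the ones whose unattended time the system would accumulate.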
```
Library-Seat-Occupancy-Detection/
├── detect_and_track.py    # Main detection script
├── sort.py                # SORT tracking algorithm
├── download_models.py     # Model download utility
├── requirements.txt       # Python dependencies
├── models/                # YOLOv7 model architecture
├── utils/                 # Helper functions and utilities
├── data/                  # Configuration files
├── doc/                   # Documentation and images
├── runs/                  # Output videos (auto-created)
├── yolov7.pt              # Model weights (downloaded)
└── README.md              # This file
```
Results are automatically saved to:

```
runs/detect/{experiment_name}/your_video.mp4
```

- Red boxes: people with unique tracking IDs
- Yellow boxes: chairs and furniture
- Timing info: duration of seat occupancy
- Alerts: "TIME EXCEEDED" for seats held too long
Python Environment Issues

ModuleNotFoundError: No module named 'cv2'

```bash
# Ensure the virtual environment is activated
source venv/bin/activate

# Reinstall OpenCV
pip install opencv-python
```

Virtual environment not working

```bash
# Recreate the environment
rm -rf venv
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
```

Model and File Issues

Model weights not found

```bash
# Run the download script
python download_models.py

# Or download manually
wget https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7.pt
```

Video file issues

```bash
# Supported formats: MP4, AVI, MOV, MKV
# Ensure the video file path is correct
ls -la your_video.mp4
```

Performance Issues
Slow processing
- Use smaller videos for testing (< 1GB)
- Increase `--conf-thres` to 0.4 or higher
- Close other applications
- Use a GPU if available

High memory usage
- Process shorter video clips
- Reduce the video resolution
- Monitor memory with `htop` or Task Manager
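For the "reduce the video resolution" tip, a small helper can compute a downscaled frame size that preserves the aspect ratio. This is an illustrative sketch, not part of the project; the actual per-frame resizing would be done with something like OpenCV's `cv2.resize`, which is not shown here.

```python
def scaled_size(width, height, max_side=640):
    """Shrink (width, height) so the longer side is at most max_side,
    preserving aspect ratio. Never upscales."""
    longest = max(width, height)
    if longest <= max_side:
        return width, height          # already small enough
    scale = max_side / longest
    return round(width * scale), round(height * scale)
```

For example, `scaled_size(1920, 1080)` yields `(640, 360)`, roughly a 9x reduction in pixels per frame and hence in per-frame memory.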
```bash
# Test with sample data
python detect_and_track.py --source data/sample_video.mp4 --view-img
```

To contribute:

- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
For detailed technical documentation and research background, see the references at the end of this README.
| Component | Requirement |
|---|---|
| OS | Windows 10+, macOS 10.14+, Ubuntu 18.04+ |
| Python | 3.8 - 3.11 |
| RAM | 8GB minimum, 16GB recommended |
| Storage | 3GB free space |
| GPU | Optional (CUDA-compatible for faster processing) |
- Author: Asuman Sare ERGUT
- Email: asumansaree@gmail.com
- Issues: GitHub Issues
- YOLOv7 Object Tracking
- Aralikatti, A. et al. (2020) J. Phys.: Conf. Ser. 1706 012149
- Redmon, J., Divvala, S., Girshick, R. and Farhadi, A. (2016) "You Only Look Once: Unified, Real-Time Object Detection", IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
- COCO 2017 Dataset
This project is licensed under the MIT License - see the LICENSE file for details.
Made with ❤️ for smarter library management



