🪑 SeatWatch: Advanced Library Seat Occupancy Detection

Real-time library seat occupancy detection using YOLOv7 object detection and tracking


🎯 About The Project

SeatWatch is an AI-powered system that monitors library seat occupancy in real-time using computer vision. It detects when seats are occupied (with or without a person present) and tracks how long belongings have been left unattended, helping libraries manage space more efficiently.

🌟 Key Features

  • Real-time Detection: Uses YOLOv7 for accurate person and chair detection
  • Object Tracking: Advanced SORT algorithm tracks individual seats over time
  • Occupancy Timing: Monitors how long seats remain occupied without a person
  • Automated Alerts: Flags seats held too long with "TIME EXCEEDED" warnings
  • Video Processing: Supports various video formats (MP4, AVI, etc.)
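
The occupancy-timing behaviour described above can be sketched roughly as follows. This is a simplified illustration under stated assumptions, not the project's actual implementation: it assumes axis-aligned bounding boxes and uses a plain IoU overlap test to decide whether a tracked chair currently has a person on it, starting a timer when it does not. The real script's thresholds and data structures may differ.

```python
import time

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

class SeatTimer:
    """Track how long each chair has been occupied without a person nearby."""

    def __init__(self, limit_s=300, overlap=0.2):
        self.limit_s = limit_s        # seconds before a seat is flagged
        self.overlap = overlap        # IoU needed to count as "person on seat"
        self.unattended_since = {}    # chair track ID -> timestamp

    def update(self, chairs, persons, now=None):
        """chairs: {track_id: box}; persons: list of boxes.

        Returns track IDs to flag with a "TIME EXCEEDED" warning.
        """
        now = time.time() if now is None else now
        flagged = []
        for cid, cbox in chairs.items():
            has_person = any(iou(cbox, p) >= self.overlap for p in persons)
            if has_person:
                # Person is back on the seat: reset its timer.
                self.unattended_since.pop(cid, None)
            else:
                start = self.unattended_since.setdefault(cid, now)
                if now - start >= self.limit_s:
                    flagged.append(cid)
        return flagged
```

Timestamps are injected via the `now` parameter so the logic can be exercised deterministically without waiting in real time.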

Project Overview

πŸ—οΈ System Architecture



🚀 Quick Start

Prerequisites

  • Python 3.8+ installed on your system
  • ~3GB free disk space (for dependencies and models)
  • Internet connection for downloading models

Installation

  1. Clone the repository

    git clone https://github.com/priyanshuharshbodhi1/Library-Seat-Occupancy-Detection
    cd Library-Seat-Occupancy-Detection
  2. Set up virtual environment (recommended)

    python3 -m venv venv
    source venv/bin/activate  # On Windows: venv\Scripts\activate
  3. Install dependencies

    pip install -r requirements.txt
  4. Download model weights

    python download_models.py

    This downloads the YOLOv7 weights (~74MB) required for detection.

  5. Verify installation

    python -c "import cv2, torch, numpy; print('✅ Installation successful!')"

💻 Usage

Option 1: REST API (Recommended for Production)

The project now includes a full-featured REST API for video processing via HTTP requests.

# Quick start with Python script
python run_api.py

# Or with Docker
docker-compose up -d

Upload video and get results:

import requests

# Upload video
response = requests.post(
    "http://localhost:8000/api/detect",
    files={"video": open("library_video.mp4", "rb")}
)
job_id = response.json()["job_id"]

# Check status
status = requests.get(f"http://localhost:8000/api/jobs/{job_id}").json()

# Download results and save the processed video to disk
result = requests.get(f"http://localhost:8000/api/download/{job_id}")
with open("processed_video.mp4", "wb") as f:
    f.write(result.content)

📖 Full API Documentation: See API_README.md

API Features:

  • 🌐 RESTful HTTP endpoints
  • 📤 Video upload via multipart/form-data
  • 📊 JSON results with detection statistics
  • 🎥 Download processed videos
  • 📈 Real-time progress tracking
  • 🐳 Docker support
  • 📝 Interactive API docs at /docs

Option 2: Command Line (Direct Processing)

Basic Usage

python detect_and_track.py --source your_video.mp4

Advanced Usage

python detect_and_track.py \
    --weights yolov7.pt \
    --source "library_video.mp4" \
    --conf-thres 0.4 \
    --classes 0 56 \
    --name "Library_Detection_Run" \
    --view-img

Command Line Arguments

Argument      Description                      Default         Example
--source      Input video file path            Required        "video.mp4"
--weights     Model weights file               yolov7.pt       yolov7.pt
--conf-thres  Detection confidence threshold   0.25            0.4
--classes     Specific classes to detect       All classes     0 56
--name        Output experiment name           Auto-generated  "my_test"
--view-img    Show real-time video preview     False           Add flag

Detected Classes

  • Class 0: Person (human detection) 👀
  • Class 56: Chair (furniture detection) 🪑
  • Use --classes 0 56 to detect only people and chairs for better performance
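
To illustrate why restricting classes helps, the snippet below prunes raw detections down to the two classes of interest before any tracking work is done. The `(class_id, confidence, box)` tuple layout is an assumption made for this sketch, not the script's actual data format.

```python
PERSON, CHAIR = 0, 56  # COCO class IDs matching --classes 0 56

def filter_detections(detections, keep=(PERSON, CHAIR), conf_thres=0.25):
    """Keep only detections of the wanted classes above the confidence threshold.

    detections: iterable of (class_id, confidence, (x1, y1, x2, y2)) tuples.
    """
    return [d for d in detections if d[0] in keep and d[1] >= conf_thres]
```

Everything that survives the filter still has to be tracked and timed, so discarding the other 78 COCO classes up front reduces per-frame work.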

πŸ“ Project Structure

Library-Seat-Occupancy-Detection/
├── 📄 detect_and_track.py      # Main detection script
├── 📄 sort.py                  # SORT tracking algorithm
├── 📄 download_models.py       # Model download utility
├── 📄 requirements.txt         # Python dependencies
├── 📂 models/                  # YOLOv7 model architecture
├── 📂 utils/                   # Helper functions and utilities
├── 📂 data/                    # Configuration files
├── 📂 doc/                     # Documentation and images
├── 📂 runs/                    # Output videos (auto-created)
├── 🧠 yolov7.pt                # Model weights (downloaded)
└── 📄 README.md                # This file

📊 Sample Output

Output Location

Results are automatically saved to:

runs/detect/{experiment_name}/your_video.mp4
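
Because the experiment name can be auto-generated, it is handy to locate the most recent run directory programmatically. The helper below is an illustrative convenience, not part of the repository:

```python
from pathlib import Path

def latest_run(root="runs/detect"):
    """Return the most recently modified experiment directory under root, or None."""
    runs = [p for p in Path(root).glob("*") if p.is_dir()]
    return max(runs, key=lambda p: p.stat().st_mtime, default=None)
```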

Visual Features

  • 🔴 Red boxes: People with unique tracking IDs
  • 🟡 Yellow boxes: Chairs and furniture
  • ⏰ Timing info: Duration of seat occupancy
  • ⚠️ Alerts: "TIME EXCEEDED" for seats held too long

Sample Detection Output


πŸ› οΈ Troubleshooting

🐍 Python Environment Issues

ModuleNotFoundError: No module named 'cv2'

# Ensure virtual environment is activated
source venv/bin/activate

# Reinstall OpenCV
pip install opencv-python

Virtual environment not working

# Recreate environment
rm -rf venv
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt

📦 Model and File Issues

Model weights not found

# Run the download script
python download_models.py

# Or download manually
wget https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7.pt

Video file issues

# Check supported formats: MP4, AVI, MOV, MKV
# Ensure video file path is correct
ls -la your_video.mp4

⚡ Performance Issues

Slow processing

  • Use smaller videos for testing (< 1GB)
  • Increase --conf-thres to 0.4 or higher
  • Close other applications
  • Consider using GPU if available

High memory usage

  • Process shorter video clips
  • Reduce video resolution
  • Monitor with htop or Task Manager

🔧 Development

Running Tests

# Test with sample data
python detect_and_track.py --source data/sample_video.mp4 --view-img

Contributing

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'Add amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

📖 Technical Details

For detailed technical documentation and research background, see:


💾 System Requirements

Component  Requirement
OS         Windows 10+, macOS 10.14+, Ubuntu 18.04+
Python     3.8 - 3.11
RAM        8GB minimum, 16GB recommended
Storage    3GB free space
GPU        Optional (CUDA-compatible for faster processing)
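
To confirm whether the optional GPU will actually be used, a quick check like this works whether or not PyTorch is even installed (the helper name is ours, for illustration):

```python
import importlib.util

def pick_device() -> str:
    """Return 'cuda' if a CUDA-capable PyTorch install is present, else 'cpu'."""
    if importlib.util.find_spec("torch") is None:
        return "cpu"  # PyTorch not installed at all
    import torch
    return "cuda" if torch.cuda.is_available() else "cpu"

print(f"Processing will run on: {pick_device()}")
```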

📞 Contact & Support


📜 References

  • YOLOv7 Object Tracking
  • Aralikatti A et al. 2020 J. Phys.: Conf. Ser. 1706 012149
  • Redmon J, Divvala S, Girshick R and Farhadi A 2016 You Only Look Once: Unified, Real-Time Object Detection, IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
  • COCO 2017 Dataset

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.


Made with ❤️ for smarter library management
