- Python 3.8+ installed
- ~3GB free disk space
- Internet connection for model downloads
```bash
# Navigate to the project directory
cd /home/priyanshu/repos/Library-Seat-Occupancy-Detection

# Create virtual environment (one-time setup)
python3 -m venv venv

# Activate virtual environment (do this every time)
source venv/bin/activate

# Install all required packages (~3GB download)
pip install -r requirements.txt
```

⏳ This takes 10-15 minutes depending on internet speed.
```bash
# Check if key packages are installed
python -c "import cv2, torch, numpy; print('✅ All packages installed successfully!')"
```

```bash
# Basic usage
python detect_and_track.py --source "library-demo-video.mp4"
```
```bash
# Advanced usage with custom settings
python detect_and_track.py \
    --weights yolov7.pt \
    --source "library-demo-video.mp4" \
    --conf-thres 0.4 \
    --classes 0 56 \
    --name "Library_Seat_Detection_Test"
```

Output video will be saved to:
```bash
ls runs/detect/*/
```

| Argument | Description | Default |
|---|---|---|
| `--source` | Input video file | Required |
| `--weights` | Model weights file | `yolov7.pt` |
| `--conf-thres` | Confidence threshold | 0.25 |
| `--classes` | Detect specific classes | All classes |
| `--name` | Output folder name | `object_tracking` |
| `--view-img` | Display real-time preview | False |
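These flags compose into a single command line. As a quick illustration, the call can also be assembled programmatically; the file names and experiment name below are just the example values used above:

```python
import shlex

# Illustrative settings; adjust paths and names for your own setup.
args = {
    "--weights": "yolov7.pt",
    "--source": "library-demo-video.mp4",
    "--conf-thres": "0.4",
    "--name": "Library_Seat_Detection_Test",
}

cmd = ["python", "detect_and_track.py"]
for flag, value in args.items():
    cmd += [flag, value]

# --classes takes multiple values (person and chair)
cmd += ["--classes", "0", "56"]

# shlex.join (Python 3.8+) renders the list as a shell-safe string
print(shlex.join(cmd))
```

This is just a convenience sketch; running the command directly in the shell, as shown above, works exactly the same.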
- Class 0: Person (human detection)
- Class 56: Chair (furniture detection)
- Use `--classes 0 56` to detect only people and chairs
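The class IDs follow the 80-class COCO ordering that YOLOv7 is trained on. As a rough sketch of what `--classes 0 56` does, here is the filtering idea applied to some made-up detections (the tuples are illustrative, not real model output):

```python
# Minimal sketch of class filtering, as done by --classes 0 56.
COCO_NAMES = {0: "person", 56: "chair"}  # the two classes this project uses
ALLOWED = {0, 56}

# Fake (class_id, confidence) pairs standing in for model output;
# class 39 (a non-target class) should be dropped.
detections = [(0, 0.91), (56, 0.80), (39, 0.55), (0, 0.33)]

kept = [(cid, conf) for cid, conf in detections if cid in ALLOWED]
labels = [COCO_NAMES[cid] for cid, _ in kept]
print(labels)  # only class IDs 0 and 56 survive the filter
```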
- Input: `library-demo-video.mp4`
- Output: `runs/detect/{experiment_name}/library-demo-video.mp4`
- Features:
  - Red boxes around people with tracking IDs
  - Yellow boxes around chairs
  - Occupancy time tracking
  - "TIME EXCEEDED" alerts for seats held too long
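The occupancy alert can be understood with a toy timer. This is an illustration of the idea only; the class name, the 30-minute limit, and the API below are invented, not the actual internals of `detect_and_track.py`:

```python
# Toy occupancy timer illustrating the "TIME EXCEEDED" idea.
TIME_LIMIT_S = 30 * 60  # hypothetical 30-minute limit, in seconds

class SeatTimer:
    def __init__(self):
        self.first_seen = {}  # track_id -> timestamp of first detection

    def update(self, track_id, now):
        """Record a sighting; return elapsed occupancy in seconds."""
        self.first_seen.setdefault(track_id, now)
        return now - self.first_seen[track_id]

    def exceeded(self, track_id, now):
        """True once a tracked person has held a seat past the limit."""
        return self.update(track_id, now) > TIME_LIMIT_S

timer = SeatTimer()
timer.update(7, now=0)             # person with track ID 7 first seen at t=0
print(timer.exceeded(7, now=600))  # 10 minutes in -> False
print(timer.exceeded(7, now=2400)) # 40 minutes in -> True
```

In the real pipeline the track IDs come from the SORT tracker and timestamps from the video frame rate, but the thresholding logic follows the same shape.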
```bash
# Make sure virtual environment is activated
source venv/bin/activate

# Verify installation completed
pip list | grep opencv
```

First run will auto-download `yolov7.pt` (~74MB).
```bash
# Or download manually:
wget https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7.pt
```

```bash
# Recreate virtual environment
rm -rf venv
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
```

```
Library-Seat-Occupancy-Detection/
├── detect_and_track.py       # Main detection script
├── sort.py                   # Object tracking algorithm
├── models/                   # YOLOv7 architecture
├── utils/                    # Helper functions
├── data/                     # Configuration files
├── requirements.txt          # Python dependencies
├── venv/                     # Virtual environment (created)
├── library-demo-video.mp4    # Your test video
└── runs/detect/              # Output videos (created)
```
```bash
# 1. Navigate and activate
cd /home/priyanshu/repos/Library-Seat-Occupancy-Detection
source venv/bin/activate

# 2. Run detection
python detect_and_track.py --source "your_video.mp4"

# 3. Check results
ls runs/detect/*/

# 4. Deactivate when done
deactivate
```

- Project files: ~7MB
- Virtual environment: ~2.2GB
- YOLOv7 weights: ~74MB
- Total: ~2.3GB
- Use `--conf-thres 0.4` to reduce false positives (fewer, higher-confidence detections)
- Add `--view-img` to see real-time detection
- Use smaller videos for testing (under 1GB)
- Close other applications for better performance