ashishrai12/Object-Detection-YOLOP
YOLOP: You Only Look Once for Panoptic Driving Perception

YOLOP (You Only Look Once for Panoptic driving Perception) is a real-time, multi-task neural network for autonomous driving that performs traffic object detection, drivable area segmentation, and lane detection simultaneously.



Quick Start

1. Installation

This codebase was developed with Python 3.7, PyTorch 1.7+, and torchvision 0.8+:

conda install pytorch==1.7.0 torchvision==0.8.0 cudatoolkit=10.2 -c pytorch
pip install -r requirements.txt

2. Data Preparation

Download the BDD100K dataset and its annotations.

Organize your dataset as follows and update the paths in ./lib/config/default.py (specifically _C.DATASET.DATAROOT and related fields):

├─dataset root
│ ├─images
│ │ ├─train
│ │ ├─val
│ ├─det_annotations
│ │ ├─train
│ │ ├─val
│ ├─da_seg_annotations
│ │ ├─train
│ ├─ll_seg_annotations
│ │ ├─train
│ │ ├─val

3. Training

Train the model with default configuration:

python tools/train.py

For multi-GPU training:

python -m torch.distributed.launch --nproc_per_node=N tools/train.py
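The legacy `torch.distributed.launch` helper starts `N` worker processes and invokes each one with a `--local_rank` flag, so any script launched this way must accept it. The stub below sketches that contract; `train.py`'s actual argument handling may differ, and newer `torchrun` passes the rank via the `LOCAL_RANK` environment variable instead.

```python
import argparse

def parse_local_rank(argv=None) -> int:
    """Parse the --local_rank flag that torch.distributed.launch appends
    to each worker's command line (rank of the process on its node)."""
    parser = argparse.ArgumentParser()
    parser.add_argument("--local_rank", type=int, default=0,
                        help="rank of this process on the local node")
    # parse_known_args lets the script's other flags pass through untouched.
    args, _ = parser.parse_known_args(argv)
    return args.local_rank

print(parse_local_rank(["--local_rank", "2"]))  # → 2
```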

4. Inference / Demo

Run inference on images or videos using the standard demo script:

# Run on a folder of images (ensure the path exists or is created)
python tools/demo.py --source inference/images --weights weights/End-to-end.pth

# Run on webcam (default 0)
python tools/demo.py --source 0
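A common convention behind a dual-purpose `--source` flag like this is to treat a purely numeric argument as a webcam index and anything else as a file or directory path. The helper below illustrates that convention; `resolve_source` is a hypothetical name, and `demo.py`'s actual parsing may differ.

```python
def resolve_source(source: str):
    """Interpret a --source argument: a purely numeric string selects a
    webcam index, anything else is treated as a file or directory path."""
    if source.isnumeric():
        return ("webcam", int(source))
    return ("path", source)

print(resolve_source("0"))                 # → ('webcam', 0)
print(resolve_source("inference/images"))  # → ('path', 'inference/images')
```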

Side-by-Side Comparison Demo

For a richer visualization that shows the original video and the processed perception result side by side, use the src/demo_side_by_side.py script:

python src/demo_side_by_side.py --source path/to/video.mp4 --weights weights/End-to-end.pth
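The core operation behind a side-by-side view is stacking two equal-size frames horizontally. This is a minimal sketch of that idea using NumPy arrays in OpenCV's H x W x 3 layout, not the script's actual implementation:

```python
import numpy as np

def side_by_side(original: np.ndarray, processed: np.ndarray) -> np.ndarray:
    """Stack two H x W x 3 frames horizontally into one H x 2W x 3 frame."""
    if original.shape != processed.shape:
        raise ValueError("frames must have identical shapes")
    return np.hstack([original, processed])

# Two dummy 480x640 frames stand in for a raw video frame and the
# perception visualization of the same frame.
raw = np.zeros((480, 640, 3), dtype=np.uint8)
vis = np.full((480, 640, 3), 255, dtype=np.uint8)
combined = side_by_side(raw, vis)
print(combined.shape)  # → (480, 1280, 3)
```

In the real script, each stacked frame would then be written back out with a video writer at twice the source width.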

5. Evaluation

Evaluate the model on the validation set:

python tools/test.py --weights weights/End-to-end.pth

6. Running Tests

This repository includes unit tests to verify the core utility functions:

python -m unittest discover tests
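For context, `unittest discover` picks up any `test_*.py` module under tests/. The case below is a purely illustrative example of the shape such a module takes; the class name and the toy resize check are invented, not taken from this repo's test suite:

```python
import unittest

class LetterboxShapeTest(unittest.TestCase):
    """Illustrative test case in the style unittest discover expects."""

    def test_aspect_ratio_preserved(self):
        # Toy stand-in for an image-resize utility: fit 720x1280 into 640
        # while preserving aspect ratio.
        h, w = 720, 1280
        scale = 640 / max(h, w)
        new_h, new_w = round(h * scale), round(w * scale)
        self.assertEqual((new_h, new_w), (360, 640))

# Run the case directly (discover would do this automatically).
suite = unittest.defaultTestLoader.loadTestsFromTestCase(LetterboxShapeTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```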

Project Structure

├─lib/                # Core library
│ ├─config           # Configuration files (Update default.py for your paths)
│ ├─core             # Core training/eval capabilities
│ ├─dataset          # Dataset loaders (BDD100K)
│ ├─models           # YOLOP model definition
│ ├─utils            # Utilities (logging, plotting, etc.)
├─tools/              # Main execution scripts
│ ├─demo.py          # Standard inference script
│ ├─test.py          # Evaluation script
│ ├─train.py         # Training script
├─src/                # Additional tools
│ ├─demo_side_by_side.py  # Side-by-side visualization demo
├─tests/              # Unit tests for the codebase
├─weights/            # Pre-trained weights (.pth and .onnx formats)

About

YOLOP is designed for autonomous driving perception, combining three crucial tasks into a single unified model:

- Traffic Object Detection: identifies and localizes vehicles, pedestrians, and other traffic participants.
- Drivable Area Segmentation: determines which areas of the road are safe for driving.
- Lane Detection: detects lane lines on the road surface.
