This notebook contains code for detecting car-phone violations, tailored to the Rwandan context with local datasets.

Car-Phone Violations — Driver Distraction Detection (YOLOv8)

This repository contains a YOLOv8-based object detection workflow for identifying distracted driving behaviors (e.g., phone use while driving). It includes the trained model artifact (driver_model.pt), a training/validation notebook, and dataset integration via Roboflow.

Contents

  • driver_model.pt: Trained YOLOv8 model (detect)
  • train_yolov8_object_detection_on_custom_dataset.ipynb: End-to-end setup, training, validation, inference
  • datasets/: Roboflow-exported dataset structure (train/valid/test)

Quick Start (Windows)

  1. Create and activate a virtual environment:
    python -m venv .venv
    .venv\Scripts\activate
  2. Install core packages:
    pip install ultralytics==8.2.103 supervision
  3. Install Roboflow. If you encounter Windows file lock issues (WinError 5), either:
    • Preferred:
      pip install requests-toolbelt
      pip install roboflow --no-deps
    • Or standard (may update OpenCV dependencies):
      pip install roboflow
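After step 3, a quick sanity check confirms the packages import cleanly (a minimal sketch; roboflow will report as missing if you skipped that step):

```python
import importlib

# Verify the Quick Start packages import and report their versions
# (ultralytics is pinned to 8.2.103 above).
for pkg in ("ultralytics", "supervision", "roboflow"):
    try:
        mod = importlib.import_module(pkg)
        print(pkg, getattr(mod, "__version__", "unknown"))
    except ImportError as exc:
        print(pkg, "NOT INSTALLED:", exc)
```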

Dataset

The dataset is downloaded via Roboflow in the notebook and placed under datasets/<Project-Version>/. It follows the YOLOv8 format with images/ and labels/ for each split (train/, valid/, test/).

To re-download programmatically in Python (with your API key and workspace/project/version):

from roboflow import Roboflow
rf = Roboflow(api_key="<YOUR_API_KEY>")
project = rf.workspace("<workspace>").project("<project>")
version = project.version(<version_number>)
dataset = version.download("yolov8")
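Before training, it can help to verify that the export produced the expected split layout (a sketch; the `datasets/<Project-Version>` path is a placeholder for your actual export folder):

```python
import os

def check_yolov8_layout(dataset_root):
    """Return the split subdirectories missing from a YOLOv8-format export."""
    missing = []
    # YOLOv8 format: each split holds paired images/ and labels/ directories.
    for split in ("train", "valid", "test"):
        for sub in ("images", "labels"):
            path = os.path.join(dataset_root, split, sub)
            if not os.path.isdir(path):
                missing.append(path)
    return missing

# Placeholder path -- substitute your actual <Project-Version> folder.
print(check_yolov8_layout("datasets/<Project-Version>"))
```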

Training Procedure

Training was conducted with YOLOv8 using the following typical configuration (see the notebook for exact commands and any changes):

  • Base model: yolov8s.pt
  • Task: detect
  • Epochs: 50
  • Image size: 800
  • Plots: True

CLI example (inside the notebook or terminal):

# from the repository root
# point data= at the Roboflow-exported data.yaml (dataset.location in the notebook)
yolo task=detect mode=train model=yolov8s.pt data=path/to/data.yaml epochs=50 imgsz=800 plots=True

After training, rename the best checkpoint (saved under runs/detect/train*/weights/best.pt by default) to driver_model.pt.

Validation and Confusion Matrix

You can validate the trained model and generate a confusion matrix using either the CLI or Python API.

  • CLI validation of the provided model:

    yolo task=detect mode=val model=./driver_model.pt data=./datasets/<Project-Version>/data.yaml plots=True

    This writes plots (including confusion_matrix.png) under runs/detect/val*/.

  • Python API validation:

    from ultralytics import YOLO
    model = YOLO("./driver_model.pt")
    metrics = model.val(data="./datasets/<Project-Version>/data.yaml", plots=True)
    # confusion_matrix.png will be saved under runs/detect/val*/

To export the confusion matrix for publication, copy the generated file to the repository root and optionally upscale/convert to PDF:

import os, shutil

ROOT = os.getcwd()
runs_root = os.path.join(ROOT, "runs")

# Locate the most recently written confusion_matrix.png under runs/
conf_paths = []
for r, d, f in os.walk(runs_root):
    for name in f:
        if name == "confusion_matrix.png":
            conf_paths.append(os.path.join(r, name))
if not conf_paths:
    raise FileNotFoundError("No confusion_matrix.png found under runs/; run validation first.")
conf_paths.sort(key=os.path.getmtime)
conf = conf_paths[-1]

export = os.path.join(ROOT, "driver_model_confusion_matrix.png")
shutil.copyfile(conf, export)
print("Exported:", export)

# optional: high-resolution PNG and PDF
from PIL import Image
img = Image.open(export).convert("RGB")
new_w = 2400
new_h = int(img.height * new_w / img.width)
img_hi = img.resize((new_w, new_h), Image.Resampling.LANCZOS)
img_hi.save(os.path.join(ROOT, "driver_model_confusion_matrix_hi.png"), format="PNG", optimize=True)
img_hi.save(os.path.join(ROOT, "driver_model_confusion_matrix.pdf"), format="PDF")

Inference

Run predictions with the trained model on images or folders:

  • CLI:

    yolo task=detect mode=predict model=./driver_model.pt conf=0.25 source=./datasets/<Project-Version>/test/images save=True
  • Python API:

    from ultralytics import YOLO
    model = YOLO("./driver_model.pt")
    results = model.predict(source="./datasets/<Project-Version>/test/images", conf=0.25, save=True)
    # Result images (with boxes) will be in runs/detect/predict*

Confidence Score Explained

YOLOv8 reports a confidence score conf per detection. Conceptually, it combines objectness and class probability:

  • Objectness: probability that an object exists in the predicted box.
  • Class probability: probability of a specific class given an object exists.

The effective confidence is commonly interpreted as:

  • $conf = p(\text{object}) \times p(\text{class}\mid \text{object})$

Where $p(\text{object})$ is derived from the objectness head and $p(\text{class}\mid \text{object})$ from the classification head. The conf threshold (default ~0.25) filters low-confidence detections.
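A quick numeric check of the formula with hypothetical head outputs: if objectness is 0.9 and the class probability is 0.8, the reported confidence is 0.72, which passes the default 0.25 threshold.

```python
# Hypothetical head outputs -- illustrative values only.
p_object = 0.9               # objectness: probability the box contains an object
p_class_given_object = 0.8   # class probability given an object exists

conf = p_object * p_class_given_object
print(f"{conf:.2f}")         # 0.72
print(conf >= 0.25)          # True: the detection is kept at the default threshold
```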

Adjust threshold examples:

# CLI
yolo task=detect mode=predict model=./driver_model.pt conf=0.5 source=path/to/images
# Python
model.predict(source="path/to/images", conf=0.5)

Reproducibility

  • Fix seeds where possible (see Ultralytics docs: seed argument).
  • Document data splits; ensure validation/test sets remain untouched during training.
  • Record training hyperparameters and versions (Ultralytics, Torch, Python).

For questions or collaboration, please reach out via the maintainer's research lab site.
