This project demonstrates object detection in a smart retail setting using YOLOv8, with the goal of detecting key retail items such as chairs, tables, and sofas. The project also includes a foundation for multi-modal analytics.
```
smart_retail/
├─ notebooks/
│  └─ Smart_Retail_Object_Detection.ipynb
├─ src/
│  ├─ detect.py
│  └─ multimodal.py
├─ data/
│  ├─ images/train
│  ├─ images/val
│  ├─ labels/train
│  └─ labels/val
├─ demo/
│  └─ sample_videos/
├─ results/        # Example predictions and plots
├─ README.md
└─ .gitignore
```
Note: The full dataset is large; it can be downloaded from Google Drive via the link in the 'Dataset' section below.
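The training command shown below expects a dataset YAML describing this layout. A minimal sketch of what `data/my_dataset.yaml` might look like; the class names come from the items listed above, but their order here is an assumption, so adjust paths and indices to match your label files:

```yaml
# data/my_dataset.yaml -- minimal sketch, not the repo's actual file
path: /content/drive/MyDrive/smart-retail-object-detection/data  # dataset root
train: images/train   # relative to 'path'
val: images/val

# Class indices must match those used in the YOLO label files;
# the order below is an assumption.
names:
  0: chair
  1: table
  2: sofa
```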
- Clone the repository:

```bash
git clone https://github.com/scouring/smart-retail-object-detection.git
cd smart-retail-object-detection
```
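- Install the `ultralytics` package if it is not already available (the snippets below import it, and it is not preinstalled on Colab):

```bash
pip install ultralytics
```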
- If using Google Colab, mount your Google Drive:

```python
from google.colab import drive
drive.mount('/content/drive')
```

The project uses a pretrained YOLOv8n model (for CPU/GPU efficiency) and fine-tunes it on a custom dataset:
```python
from ultralytics import YOLO

# Start from the lightweight pretrained YOLOv8n checkpoint
model = YOLO("yolov8n.pt")

# Fine-tune on the custom retail dataset
model.train(
    data='/content/drive/MyDrive/smart-retail-object-detection/data/my_dataset.yaml',
    epochs=50,
    imgsz=640,
    batch=16,
    project="/content/drive/MyDrive/smart-retail-object-detection/runs",
    name="train_yolov8"
)
```

After training, predictions can be made on the validation images:
```python
# Run inference on the validation images and save the annotated outputs
results = model.predict(
    source="/content/drive/MyDrive/smart-retail-object-detection/data/images/val",
    conf=0.25,
    save=True,
    project="/content/drive/MyDrive/smart-retail-object-detection/runs",
    name="predictions"
)
```
Example predictions (one image per class, saved under `results/`): Chair, Sofa, Table.

Training plots: confusion matrix, F1 curve, and loss curves, with final Precision = 0.99 and Recall = 0.995.
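These plots can be regenerated with Ultralytics' built-in validation. A minimal sketch, assuming the weights path produced by the training run above:

```python
from ultralytics import YOLO

# Load the fine-tuned weights written by the training run above
model = YOLO("/content/drive/MyDrive/smart-retail-object-detection/runs/train_yolov8/weights/best.pt")

# Evaluate on the val split from the dataset YAML; this also writes the
# confusion matrix, F1 curve, and PR plots to the run directory
metrics = model.val(data='/content/drive/MyDrive/smart-retail-object-detection/data/my_dataset.yaml')
print(metrics.box.map50)  # mAP at IoU 0.5
```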
The dataset used for this project is stored in Google Drive.
Link: Download here
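The clips under `demo/sample_videos/` can be run through the same fine-tuned model. A minimal sketch; the video file name is a placeholder, and the weights path assumes the training run above:

```python
from ultralytics import YOLO

# Reuse the fine-tuned weights from the training run above
model = YOLO("/content/drive/MyDrive/smart-retail-object-detection/runs/train_yolov8/weights/best.pt")

# Annotate a demo clip; save=True writes the rendered video to the run directory
model.predict(
    source="demo/sample_videos/example.mp4",  # placeholder file name, relative to the repo root
    conf=0.25,
    save=True,
)
```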