This project implements an apple detection system using two different object detection models: YOLO (You Only Look Once) and Faster R-CNN. The models are trained on a custom dataset and used for detecting apples in images and video streams. The system is designed to process images, apply non-maximum suppression (NMS) for filtering detections, and save annotated results.
- Apple detection using YOLOv8 and Faster R-CNN
- Training with a custom dataset
- Evaluation metrics: mAP@0.5, mAP@0.5:0.95, Precision, Recall
- Non-Maximum Suppression (NMS) for filtering overlapping detections
- Batch processing of test images
- Saves annotated images with detection results
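Both the mAP metrics and the NMS filtering step rest on intersection-over-union (IoU) between bounding boxes. A minimal sketch, assuming boxes in `(x1, y1, x2, y2)` corner format (the function name and format are illustrative, not taken from this repository):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes in (x1, y1, x2, y2) form."""
    # Corners of the intersection rectangle
    xa, ya = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    xb, yb = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap
    inter = max(0.0, xb - xa) * max(0.0, yb - ya)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

mAP@0.5 counts a detection as correct when its IoU with a ground-truth box exceeds 0.5; mAP@0.5:0.95 averages that over IoU thresholds from 0.5 to 0.95.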
The dataset is defined in `apple.yaml`, which contains the paths to the training and validation image sets. The dataset includes:
- Bounding box annotations for apples in each image.
- RGB images collected from orchards.
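For reference, an `apple.yaml` in the Ultralytics format might look like the sketch below; the paths and directory layout here are placeholders, not this project's actual structure:

```yaml
# Illustrative dataset config for Ultralytics YOLO (paths are placeholders)
path: /path/to/apple_dataset   # dataset root directory
train: images/train            # training images, relative to path
val: images/val                # validation images, relative to path
names:
  0: apple                     # single-class detection
```

Labels are expected alongside the images in YOLO text format, one `class x_center y_center width height` line per box, with coordinates normalized to the image size.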
Ensure you have the following dependencies installed:
```shell
pip install ultralytics opencv-python numpy torch torchvision
```
```python
from ultralytics import YOLO

model = YOLO("/path/to/yolov8s.pt")  # start from a pretrained checkpoint
model.train(data="/path/to/apple.yaml", epochs=100, batch=32, imgsz=640)
```
See https://github.com/nicolaihaeni/MinneApple/tree/master for more information on training Faster R-CNN and acquiring the data.
The detection pipeline applies NMS to filter redundant detections and then annotates the detected apples in images.
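The repository's exact pipeline is not reproduced here, but the greedy NMS step it describes can be sketched as follows, assuming boxes as `(x1, y1, x2, y2)` arrays with per-box confidence scores; the function name and the 0.5 IoU threshold are illustrative:

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy NMS: keep boxes in descending score order, dropping any box
    whose IoU with an already-kept box exceeds iou_thresh. Returns kept indices."""
    boxes = np.asarray(boxes, dtype=float)
    scores = np.asarray(scores, dtype=float)
    order = scores.argsort()[::-1]  # indices sorted by score, highest first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        if order.size == 1:
            break
        rest = order[1:]
        # IoU between the kept box i and all remaining candidates
        xx1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        yy1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        xx2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        yy2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        order = rest[iou <= iou_thresh]  # suppress heavy overlaps
    return keep
```

In practice `torchvision.ops.nms` or the NMS built into Ultralytics' `predict` call can do this filtering; the surviving boxes are then drawn onto the image (e.g. with `cv2.rectangle`) before saving.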
- Fine-tuning YOLO and Faster R-CNN with additional orchard datasets
- Implementing real-time apple detection on video feeds
- Deploying on Jetson Nano/AGX Orin for edge computing
This project is developed for advanced apple detection applications in agricultural automation.