This project integrates the powerful YOLOv9 object detection algorithm with DeepSORT for real-time multi-object tracking within the CARLA Simulator, a leading platform for autonomous vehicle research. The solution is designed to detect and track objects in dynamic environments, enabling advanced perception and trajectory planning.
- Real-time object detection and tracking using YOLOv9 and DeepSORT.
- Seamless integration with the CARLA Simulator.
- Highly adaptable for autonomous driving research and applications.
- YOLOv9: Advanced object detection.
- DeepSORT: Robust real-time tracking.
- CARLA Simulator: Autonomous driving research simulation.
- OpenCV: Image processing.
- PyTorch: Deep learning framework.
- NumPy: Numerical computations.
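DeepSORT associates each frame's YOLOv9 detections with existing tracks by combining Kalman-filter motion prediction and appearance embeddings. As a simplified, motion- and appearance-free illustration of just the association step (the function names below are illustrative, not DeepSORT's actual API), a greedy IoU matcher can be sketched as:

```python
import numpy as np

def iou(box_a, box_b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def greedy_match(tracks, detections, iou_threshold=0.3):
    """Greedily assign each detection to the best-overlapping unused track.

    Returns (matches, unmatched) where matches maps detection index ->
    track index, and unmatched lists detection indices left over
    (candidates for spawning new tracks).
    """
    matches, unmatched, used = {}, [], set()
    for d_idx, det in enumerate(detections):
        scores = [(iou(trk, det), t_idx)
                  for t_idx, trk in enumerate(tracks) if t_idx not in used]
        best_iou, best_idx = max(scores, default=(0.0, None))
        if best_iou >= iou_threshold:
            matches[d_idx] = best_idx
            used.add(best_idx)
        else:
            unmatched.append(d_idx)
    return matches, unmatched
```

The real DeepSORT additionally gates matches by Mahalanobis distance from the Kalman prediction and by cosine distance between appearance embeddings; this sketch only conveys the assignment idea.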
Demo video: `carla.mp4`
- Python 3.7 or higher.
- CARLA Simulator 0.9.11.
Download and Set Up CARLA Simulator:
- Download CARLA 0.9.11 from the CARLA releases page.
- Extract the ZIP file and follow the official CARLA documentation for installation.

```bash
cd CARLA_0.9.11
```
Clone This Repository:

```bash
git clone https://github.com/ROBERT-ADDO-ASANTE-DARKO/YOLOv9-DeepSORT-realtime-object-tracking-CARLA.git
cd YOLOv9-DeepSORT-realtime-object-tracking-CARLA
```
Install Dependencies: Install all required Python libraries:

```bash
pip install -r requirements.txt
```
Copy the trajectory planning script:

```bash
cp trajectory_planning.py CARLA_0.9.11/PythonAPI/examples/
```
Launch the CARLA Simulator:

```bash
cd CARLA_0.9.11
# Windows
./CarlaUE4.exe
# or, forcing DirectX 11
./CarlaUE4 -dx11
```
Run the trajectory planning script: Open a new terminal, navigate to the CARLA PythonAPI examples directory, and execute the script:

```bash
cd CARLA_0.9.11/PythonAPI/examples
python trajectory_planning.py
```
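The trajectory planning script itself is not reproduced here, but one building block such a script typically needs is resampling a recorded path of (x, y) positions into evenly spaced waypoints. A minimal NumPy sketch (the function name and signature are illustrative assumptions, not taken from `trajectory_planning.py`):

```python
import numpy as np

def resample_trajectory(points, n_samples):
    """Resample a polyline of (x, y) points to n_samples points
    evenly spaced along its arc length (linear interpolation)."""
    pts = np.asarray(points, dtype=float)
    # Cumulative arc length at each input point.
    seg_lengths = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg_lengths)])
    # Evenly spaced arc-length targets, interpolated per coordinate.
    target = np.linspace(0.0, s[-1], n_samples)
    x = np.interp(target, s, pts[:, 0])
    y = np.interp(target, s, pts[:, 1])
    return np.stack([x, y], axis=1)
```

Evenly spaced waypoints make downstream steps such as speed profiling or steering-target lookup simpler than working with raw, unevenly sampled positions.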
Clone the YOLOv9 Repository:

```bash
git clone https://github.com/WongKinYiu/yolov9.git
cd yolov9
```
Prepare the Recorded Video:
- Copy the video recorded in CARLA to the YOLOv9 directory.
Add the Tracking Script:
- Copy `detect_dual_tracking.py` to the YOLOv9 directory.
Run the Tracking Script: Execute the following command to detect and track objects in the video:

```bash
python detect_dual_tracking.py --weights 'yolov9-c.pt' --source '/yolov9/video.mp4' --device 'cpu'
```
This project outputs:
- A video with detected and tracked objects.
- Trajectories and positional data for further analysis.
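If the positional data is exported as simple (frame, track_id, x, y) rows (an assumed format for illustration; adapt to the script's actual output), it can be grouped into per-ID trajectories for analysis:

```python
from collections import defaultdict

def group_by_track(rows):
    """Group (frame, track_id, x, y) rows into per-ID trajectories,
    each sorted by frame number."""
    tracks = defaultdict(list)
    for frame, track_id, x, y in rows:
        tracks[track_id].append((frame, x, y))
    # Sorting by the leading frame element orders each trajectory in time.
    return {tid: sorted(pts) for tid, pts in tracks.items()}
```

From here, per-object speed or heading can be estimated by differencing consecutive positions within each trajectory.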
Contributions are welcome! If you'd like to improve this project:
- Fork the repository.
- Create a new branch (`git checkout -b feature/your-feature`).
- Commit your changes (`git commit -m 'Add some feature'`).
- Push to the branch (`git push origin feature/your-feature`).
- Open a pull request.
This project is licensed under the MIT License.