Research Project: Development of a resource-constrained autonomous navigation system combining F1-style control, social robotics (WALL-E style), and generative design.
This project addresses the trade-off between speed and passenger comfort in modern service robots. We propose "Mini Cyber-Rickshaw", a micro-scale (1:28) autonomous architecture that leverages End-to-End Deep Reinforcement Learning to achieve high-speed navigation while maintaining social compliance. The system is designed to operate on low-cost edge devices (Raspberry Pi Zero 2 W) using a Sim-to-Real transfer pipeline.
Key Research Question: Can we deploy F1-grade racing algorithms on a $15 computer to serve smart city tourism?
- Algorithm: Proximal Policy Optimization (PPO) with MobileNetV3 backbone.
- Behavior: Optimized for Apex Clipping (cutting corners) and dynamic velocity control based on track curvature.
- Reward Function: Custom multi-objective function balancing Velocity, Cross-Track Error, and Jerk (smoothness).
- Active Vision: Pan-Tilt camera mechanism (2-DOF) simulating saccadic eye movements to expand Field of View (FoV) at intersections.
- Affective Locomotion: State-machine based behaviors expressing internal states (e.g., Curiosity at landmarks, Fear at obstacles).
- Design: Voronoi lattice structure generated via Topology Optimization (Fusion 360).
- Performance: 40% weight reduction compared to stock chassis, maintaining structural integrity for high-speed impacts.
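The multi-objective reward described above can be sketched as a weighted sum; the weights `w_v`, `w_cte`, `w_j` below are hypothetical placeholders, not the tuned values used in the project:

```python
def reward(velocity, cross_track_error, jerk,
           w_v=1.0, w_cte=2.0, w_j=0.5):
    """Multi-objective reward sketch: reward forward speed, penalize
    lateral deviation from the racing line and penalize jerk (comfort).
    Weights are illustrative, not the project's tuned values."""
    return w_v * velocity - w_cte * abs(cross_track_error) - w_j * abs(jerk)

# A fast but jerky, off-line step scores below a slower, smooth one:
r_fast = reward(velocity=3.0, cross_track_error=0.4, jerk=2.0)    # 1.2
r_smooth = reward(velocity=2.0, cross_track_error=0.1, jerk=0.2)  # 1.7
```

With these weights, the smooth step outscores the fast-but-jerky one, which is the trade-off the function is meant to encode.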
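The affective locomotion states can be modeled as a small state machine; the transition rule and locomotion styles below are a hypothetical sketch (state names and priorities are assumptions, not the project's actual tables):

```python
from enum import Enum, auto

class Mood(Enum):
    """Internal affective states expressed through locomotion."""
    CRUISE = auto()
    CURIOSITY = auto()
    FEAR = auto()

def next_mood(landmark_near, obstacle_near):
    """Hypothetical transition rule: Fear takes priority over Curiosity."""
    if obstacle_near:
        return Mood.FEAR
    if landmark_near:
        return Mood.CURIOSITY
    return Mood.CRUISE

# Each mood maps to a locomotion style (speed scaling, head behaviour);
# values are illustrative.
STYLE = {
    Mood.CRUISE:    {"speed_scale": 1.0, "head": "forward"},
    Mood.CURIOSITY: {"speed_scale": 0.5, "head": "scan_landmark"},
    Mood.FEAR:      {"speed_scale": 0.2, "head": "retract"},
}
```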
| Component | Specification |
|---|---|
| Platform | WLToys K989 (1:28 Scale RC Car) |
| Compute | Raspberry Pi Zero 2 W (Quad-core ARM Cortex-A53) |
| Sensors | Pi Camera V2 (160° FoV) |
| Actuation | PCA9685 PWM Driver + SG90 Servos (Head) |
| Simulation | Unity 3D / Donkey Car Simulator |
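On the real robot the PCA9685 would be driven through a library such as `adafruit-servokit`; the helper below only sketches the underlying angle-to-PWM arithmetic for the SG90 head servos, assuming a typical 500–2400 µs pulse range and 50 Hz refresh (check your servos' datasheet):

```python
def angle_to_pwm_counts(angle, freq_hz=50, min_us=500, max_us=2400):
    """Map a servo angle (0-180 deg) to a 12-bit PCA9685 on-count.
    Pulse range of ~500-2400 us is a common SG90 assumption."""
    if not 0 <= angle <= 180:
        raise ValueError("angle out of range")
    pulse_us = min_us + (max_us - min_us) * angle / 180
    period_us = 1_000_000 / freq_hz            # 20,000 us at 50 Hz
    return round(pulse_us / period_us * 4096)  # PCA9685 is 12-bit

# Centre the pan servo of the head mechanism (channel wiring is assumed)
counts = angle_to_pwm_counts(90)
```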
- Anaconda or Miniconda
- Python 3.9+
- NVIDIA GPU (Recommended for training)
```bash
# Clone the repository
git clone https://github.com/minhquang0407/mini-cyber-rickshaw.git
cd mini-cyber-rickshaw

# Create Conda environment
conda create -n donkey python=3.9 -y
conda activate donkey

# Install dependencies
pip install -e .[pc]
pip install -r requirements.txt
```

Ensure `myconfig.py` is configured with `DONKEY_GYM = True`.
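A minimal `myconfig.py` fragment might look like the following; the option names follow the standard Donkey Car configuration, but verify them against the `myconfig.py` generated by your installed version, and the values shown are illustrative:

```python
# myconfig.py (fragment) -- values are illustrative, adjust to your setup
DONKEY_GYM = True                                   # drive in the simulator
DONKEY_SIM_PATH = "remote"                          # or local path to the sim binary
DONKEY_GYM_ENV_NAME = "donkey-generated-track-v0"   # gym-donkeycar environment
```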
```bash
python manage.py drive
```

Access the web controller at: http://localhost:8887
We utilize a Behavioral Cloning approach augmented with DRL:
- Data Collection: Drive manually in the Simulator to generate ~10,000 samples.
- Training:

  ```bash
  donkey train --tub ./data --model ./models/mypilot.h5
  ```
- Deploy: Transfer the `.h5` model to the Raspberry Pi Zero 2 W.
- Inference:

  ```bash
  python manage.py drive --model ./models/mypilot.h5
  ```
```
mini-cyber-rickshaw/
├── data/          # Raw training data (Git ignored)
├── docs/          # Research papers and diagrams
├── models/        # Trained neural networks
├── src/           # Custom source code (RL agents, vision)
├── manage.py      # Main entry point
└── myconfig.py    # Configuration file
```
(Placeholder results; to be replaced with measured data and charts.)
- Lap Time: Reduced by 15% compared to PID controller.
- Inference Speed: 22 FPS on Raspberry Pi Zero 2 W.
- Smoothness: Jerk metric reduced by 30% using the proposed Reward Function.
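One way the jerk (smoothness) metric can be computed from logged trajectories is by finite differences; the sampling rate and trajectories below are illustrative, not the project's evaluation data:

```python
import numpy as np

def mean_abs_jerk(positions, dt):
    """Estimate mean |jerk| (third derivative of position) from a
    uniformly sampled 1-D trajectory via finite differences."""
    velocity = np.diff(positions) / dt
    acceleration = np.diff(velocity) / dt
    jerk = np.diff(acceleration) / dt
    return float(np.mean(np.abs(jerk)))

dt = 0.01
t = np.arange(0.0, 1.0, dt)
smooth = 0.5 * t**2                      # constant acceleration: ~zero jerk
jerky = smooth + 0.01 * np.sin(50 * t)   # superimposed oscillation adds jerk
```

Lower values indicate smoother, more comfortable motion, which is what the 30% reduction refers to.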
- NGUYEN MINH QUANG
- Researcher
- University of Science - VNU
- Email: minhquang04072005@gmail.com
Special thanks to the Donkey Car Community and Stable Baselines3 team for the open-source tools.