This project provides a comparative evaluation of three temporal deep learning models for vehicle steering angle prediction using video data. The models implemented include:
- Neural Circuit Policies (NCP) using Liquid Time-Constant (LTC) networks
- Spatio-Temporal LSTM Network (ConvLSTM-based)
- Temporal Residual Network with 3D Convolutions
Each model is implemented in its own directory with associated training and inference scripts, along with model-specific configuration files and best-performing weights.
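As a rough illustration of the first model above, the sketch below wires a small per-frame CNN into a Liquid Time-Constant cell via the `ncps` package. The layer sizes, neuron count, and AutoNCP wiring are assumptions made for the example only, not the exact architecture implemented in conv_ncp/.

```python
# Illustrative only: a per-frame CNN feeding an LTC cell wired as an NCP.
# The layer sizes and the AutoNCP wiring below are assumptions, not the
# exact architecture used in conv_ncp/.
import torch
import torch.nn as nn
from ncps.torch import LTC
from ncps.wirings import AutoNCP


class ConvNCP(nn.Module):
    def __init__(self, n_neurons: int = 19, n_outputs: int = 1):
        super().__init__()
        # Small CNN applied to each frame independently.
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Liquid Time-Constant RNN with an automatically generated NCP wiring.
        self.rnn = LTC(48, AutoNCP(n_neurons, n_outputs), batch_first=True)

    def forward(self, x):
        # x: (batch, time, channels, height, width)
        b, t = x.shape[:2]
        feats = self.features(x.flatten(0, 1)).view(b, t, -1)
        out, _ = self.rnn(feats)   # (batch, time, n_outputs)
        return out[:, -1, 0]       # steering angle for the last frame
```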
Note: all models run on CUDA devices. On an MPS device, the Conv3D operations used by the Spatio-Temporal LSTM and the Temporal Residual Network are not implemented, so those two models should be run on the CPU instead.
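A minimal sketch of how the device could be picked with that constraint in mind (assuming PyTorch; the actual scripts may handle device selection differently):

```python
# Illustrative device selection, assuming PyTorch. Conv3D is unsupported on
# MPS, so the ConvLSTM and 3D ConvNet fall back to CPU there, while the NCP
# model can still use MPS.
import torch

def pick_device(uses_conv3d: bool) -> torch.device:
    if torch.cuda.is_available():
        return torch.device("cuda")
    if torch.backends.mps.is_available() and not uses_conv3d:
        return torch.device("mps")
    return torch.device("cpu")
```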
The dataset used can be found here: https://github.com/SullyChen/driving-datasets
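For reference, a hedged sketch of how the image/angle pairs might be read, assuming the dataset's data.txt lists an image filename followed by a steering angle on each line and that the archive has been extracted into the data/ directory described below; the project's own dataset.py may parse it differently:

```python
# Illustrative only: reads image/angle pairs from data/data.txt, assuming each
# line looks like "<image_name> <angle>" (some releases of the dataset append
# ",<timestamp>" after the angle). The repo's dataset.py may differ.
from pathlib import Path

def read_labels(data_dir: str = "data"):
    pairs = []
    for line in Path(data_dir, "data.txt").read_text().splitlines():
        if not line.strip():
            continue
        name, angle = line.split()[:2]
        pairs.append((str(Path(data_dir, name)), float(angle.split(",")[0])))
    return pairs
```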
project_src/
├── 3d_convnet/ # Temporal residual network with 3D convolutions
│ ├── train.py
│ └── inference.py
├── conv_lstm/ # Spatio-temporal LSTM network
│ ├── train.py
│ └── inference.py
├── conv_ncp/ # Neural Circuit Policy (NCP) with LTC
│ ├── train.py
│ └── inference.py
├── best_model_weights/ # Stores best weights for all models
├── kaggle_notebooks/ # Jupyter notebooks used for experimentation
├── full_environment.yml # Conda environment specification
├── 3d_convnet_config.json
├── conv_lstm_config.json
├── conv_ncp_config.json
└── README.md
Outside project_src/
├── checkpoints/ # [USER CREATED] Directory to save checkpoints
├── predictions/ # [USER CREATED] Directory to save prediction outputs
└── data/ # [USER CREATED] Input data directory
Ensure you have Miniconda installed.
To create and activate the conda environment:
conda env create -f full_environment.yml
conda activate steering_env
Before running any model, make sure these directories exist in the root of the project (outside the project_src directory):
mkdir -p checkpoints predictions data
Train each model by executing the following commands from outside the project_src directory:
# Train Temporal Residual Network (3D ConvNet)
python -m project_src.3d_convnet.train
# Train Spatio-Temporal LSTM
python -m project_src.conv_lstm.train
# Train NCP (Liquid Time-Constant)
python -m project_src.conv_ncp.train
Each model reads its hyperparameters from its own *_config.json file in the project_src/ directory.
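As a sketch of how a training script might consume such a file (the key names used here are assumptions, not the real schema of the config files):

```python
# Hypothetical sketch of reading a model config; the key names are assumptions,
# not the actual schema of the *_config.json files.
import json

with open("project_src/conv_ncp_config.json") as f:
    cfg = json.load(f)

batch_size = cfg.get("batch_size", 32)          # assumed key
learning_rate = cfg.get("learning_rate", 1e-4)  # assumed key
```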
Perform inference with the best model weights:
# Inference with Temporal Residual Network
python -m project_src.3d_convnet.inference
# Inference with Spatio-Temporal LSTM
python -m project_src.conv_lstm.inference
# Inference with NCP Model
python -m project_src.conv_ncp.inference
The dataset.py and model.py modules can be invoked in the same way via python -m.
- Trained model weights are stored in best_model_weights/
- Output predictions are saved to predictions/
- Training checkpoints are saved to checkpoints/
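A minimal, assumed pattern for writing checkpoints and reloading the best weights with standard PyTorch calls; the filenames below are placeholders, and the actual scripts may store additional state such as the optimizer and epoch:

```python
# Assumed checkpointing pattern using standard PyTorch calls; filenames are
# placeholders, and the repo's train.py/inference.py may save more state.
import torch

def save_checkpoint(model, path="checkpoints/last.pt"):
    torch.save(model.state_dict(), path)

def load_best(model, path="project_src/best_model_weights/model.pt"):
    model.load_state_dict(torch.load(path, map_location="cpu"))
    model.eval()
    return model
```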
Hyperparameters and training settings can be edited in the following files:
- 3d_convnet_config.json
- conv_lstm_config.json
- conv_ncp_config.json