Embodied AI faces significant challenges in generalizing across environments, embodiments, and tasks. AIRSoul does not attempt to overcome these challenges through zero-shot generalization or parametric memory. Instead, it addresses them with gradient-free, black-box learning: Multi-Paradigm In-Context Learning. AIRSoul aims to be a general-purpose learning machine with the following characteristics:
- Scalable ICL Incentivization: Incentivizing ICL in a training-efficient manner.
- Multi-Paradigm ICL: The capability to tackle novel tasks by integrating reinforcement learning, imitation learning, self-supervised learning, and other learning methods within a unified model.
- Long-Context ICL: The ability to learn highly complex novel tasks that require a substantial number of steps with minimal effort.
- In-Context Continual Learning (ICCL): The potential for continual learning and even lifelong learning within context.
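The core idea behind gradient-free in-context learning can be illustrated with a minimal sketch. All names below (`FrozenSequenceModel`, `in_context_episode`) are hypothetical stand-ins, not AIRSoul's actual API: a frozen model interacts with an environment, and its growing context of (observation, action, reward) tuples is the only "memory" — no parameter updates occur.

```python
class FrozenSequenceModel:
    """Stand-in for a pretrained sequence model; its weights are never updated."""

    def __init__(self, n_actions=2):
        self.n_actions = n_actions

    def act(self, context, observation):
        # A real model would condition on the full context; this toy policy
        # repeats the most recent rewarded action, otherwise explores.
        for obs, action, reward in reversed(context):
            if reward > 0:
                return action
        return len(context) % self.n_actions


def in_context_episode(model, env_step, initial_obs, horizon):
    """Run one episode; 'learning' happens only through context growth."""
    context = []  # in-context memory: grows while the model stays frozen
    obs = initial_obs
    total_reward = 0.0
    for _ in range(horizon):
        action = model.act(context, obs)
        next_obs, reward = env_step(obs, action)
        context.append((obs, action, reward))
        total_reward += reward
        obs = next_obs
    return total_reward, context
```

In a toy bandit where only action 1 is rewarded, the frozen model discovers and then repeats the rewarded action purely from its context, which is the mechanism ICL-based approaches rely on at much larger scale.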
The repository is organized as follows:

- projects: implementations of model training and validation for different benchmarks and projects.
  - MetaLM: foundation model for Xenoverse MetaLM
  - MazeWorld: foundation model for Xenoverse MazeWorld
  - OmniRL: foundation model for Xenoverse AnyMDP
- data: scripts to generate mega-datasets for training.
- airsoul: the building blocks and utilities for the different models.
  - modules: basic building blocks
  - utils: utilities for building networks, training, and evaluation
  - models: higher-level models built from the basic blocks
  - dataloader: dataloaders for the different tasks
To install AIRSoul, run:

```bash
pip install airsoul
```

Then check the data directory to generate datasets for training and evaluation.
To start training, you need to modify the configuration file. The config file needs to contain four major parts:
- log_config: configuration of the log directory, including the paths to save checkpoints and tensorboard logs
- model_config: configuration of the model structure and hyperparameters
- train_config: configuration of the training process, including learning rate, batch size, etc.
- test_config: configuration of the evaluation process, including the dataset to be evaluated
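A config file covering the four parts might look like the following sketch. The field names inside each section are illustrative assumptions; consult the config files shipped under projects/ for the actual keys.

```yaml
# Hypothetical config.yaml sketch -- section names match the four parts above,
# but the keys inside each section are placeholders, not the repo's real schema.
log_config:
  output: ./checkpoints/run_001    # checkpoints and tensorboard logs
model_config:
  hidden_size: 512
  num_layers: 12
train_config:
  learning_rate: 1.0e-4
  batch_size: 32
test_config:
  dataset_path: ./data/eval_set
```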
To train a model, run:

```bash
cd ./projects/PROJECT_NAME/
python train.py config.yaml
```

You can also override entries in the config file with command-line arguments via `--configs`:

```bash
python train.py config.yaml --configs key1=value1 key2=value2 ...
python validate.py config.yaml --configs key1=value1 key2=value2 ...
```

The repo is under active development. Feel free to submit a pull request.
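Conceptually, `key=value` overrides like those above can be merged into a nested config by splitting keys on dots. The helper below is a hypothetical sketch of that idea, not the repo's actual implementation:

```python
def apply_overrides(config, overrides):
    """Apply 'key=value' strings to a nested config dict.

    Dots in the key address nested sections, e.g. 'train_config.batch_size=64'.
    Values are kept as strings here; a real implementation would also parse
    ints, floats, and booleans.
    """
    for item in overrides:
        key, _, value = item.partition("=")
        node = config
        parts = key.split(".")
        for part in parts[:-1]:
            node = node.setdefault(part, {})  # create sections as needed
        node[parts[-1]] = value
    return config
```

For example, `apply_overrides({"train_config": {"batch_size": 32}}, ["train_config.batch_size=64"])` replaces the batch size in place while leaving other sections untouched.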