Enhancing Generalization Over Memorization: Recent research indicates that the generalization ability of learning agents depends primarily on the diversity of their training environments. The real world, however, imposes significant limits on this diversity: its physical laws are fixed, and the variety of available environments, tasks, and embodiments is finite. These limits are a serious bottleneck on the path to artificial general intelligence (AGI). Xenoverse is a collection of extremely diverse worlds generated procedurally from completely random parameters. We propose that AGI should be trained and adapted not within a single universe, but within Xenoverse.
Avoid Overfitting Specific Benchmarks: Xenoverse can be used for both Meta-Training and Open-World Evaluation. Existing benchmarks are typically closed-set and tend to be overfitted soon after their introduction. In contrast, Xenoverse provides open-world benchmarks that are theoretically impossible to overfit, making them effective tools for evaluating generalization capabilities.
- AnyMDP: Procedurally generated, unlimited, general-purpose (Partially Observable) Markov Decision Processes (MDPs) in discrete spaces.
- LinDS: Procedurally generated, unlimited Linear Time-Invariant (LTI) control tasks.
- AnyHVAC: Procedurally generated random rooms and equipment for Heating, Ventilation, and Air Conditioning (HVAC) control.
- MetaLanguage: A pseudo-language generated by randomized neural networks, used for benchmarking in-context language learning (ICLL).
- MazeWorld: Procedurally generated immersive mazes featuring diverse maze structures and textures.
- MetaControl: Randomized environments for classic control tasks and locomotion studies.
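Each of the environments above draws every task instance from fresh random parameters, so there is no single fixed task to memorize. The minimal sketch below illustrates that idea for the simplest (tabular MDP) case using only the Python standard library. It is not the Xenoverse API; the function names `sample_mdp` and `rollout` are invented here for illustration.

```python
import random

def sample_mdp(num_states, num_actions, rng):
    """Sample a tabular MDP with random transition probabilities and rewards."""
    transitions = {}
    rewards = {}
    for s in range(num_states):
        for a in range(num_actions):
            # Random categorical distribution over next states.
            weights = [rng.random() for _ in range(num_states)]
            total = sum(weights)
            transitions[(s, a)] = [w / total for w in weights]
            rewards[(s, a)] = rng.uniform(-1.0, 1.0)
    return transitions, rewards

def rollout(transitions, rewards, num_states, num_actions, steps, rng):
    """Run a uniformly random policy and return the cumulative reward."""
    s, total = 0, 0.0
    for _ in range(steps):
        a = rng.randrange(num_actions)
        total += rewards[(s, a)]
        s = rng.choices(range(num_states), weights=transitions[(s, a)])[0]
    return total

rng = random.Random(0)
# Every call to sample_mdp yields a brand-new task, so an agent
# meta-trained across many such tasks cannot overfit any one of them.
transitions, rewards = sample_mdp(num_states=8, num_actions=3, rng=rng)
ret = rollout(transitions, rewards, num_states=8, num_actions=3,
              steps=100, rng=rng)
```

The same principle scales up in Xenoverse: what is randomized here is a small transition table, while AnyMDP, LinDS, and the other suites randomize much richer world parameters.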
```bash
pip install xenoverse
```

Please refer to the following technical reports / papers:
@article{wang2024benchmarking,
title={Benchmarking General Purpose In-Context Learning},
author={Wang, Fan and Lin, Chuan and Cao, Yang and Kang, Yu},
journal={arXiv preprint arXiv:2405.17234},
year={2024}
}
@inproceedings{wang2025towards,
title={Towards Large-Scale In-Context Reinforcement Learning by Meta-Training in Randomized Worlds},
author={Wang, Fan and Shao, Pengtao and Zhang, Yiming and Yu, Bo and Liu, Shaoshan and Ding, Ning and Cao, Yang and Kang, Yu and Wang, Haifeng},
booktitle={The Thirty-ninth Annual Conference on Neural Information Processing Systems},
year={2025},
url={https://openreview.net/forum?id=b6ASJBXtgP}
}
@article{fan2025putting,
title={Putting the smarts into robot bodies},
author={Wang, Fan and Liu, Shaoshan},
journal={Communications of the ACM},
volume={68},
number={3},
pages={6--8},
year={2025},
publisher={ACM New York, NY, USA}
}