Framework for experimenting with action model learning approaches and evaluating the learned domain models.
```shell
pip install amlgym
```
```python
from amlgym.algorithms import get_algorithm

agent = get_algorithm('OffLAM')
model = agent.learn('path/to/domain.pddl', ['path/to/trace0', 'path/to/trace1'])
print(model)
```
Tutorials and API documentation are available on Read the Docs.
AMLGym provides seamless integration with state-of-the-art algorithms for offline learning of classical planning domains from an input set of trajectories, in the following settings:
- full observability: SAM [1].
- partial observability: OffLAM [2].
- full and noisy observability: NOLAM [3], ROSAME [4].
PRs with new or existing state-of-the-art algorithms are welcome:
- Add the algorithm's PyPI package to `requirements.txt`
- Create a Python class in `algorithms` which inherits from `AlgorithmAdapter` and implements the `learn` method
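The steps above can be sketched as follows. The `AlgorithmAdapter` base class defined here is a local stand-in illustrating the assumed interface (in AMLGym you would import the real class from the `algorithms` package), and `MyLearner` is a hypothetical adapter, not a shipped algorithm:

```python
# Sketch of contributing a new algorithm adapter, under the assumption
# that AlgorithmAdapter exposes an abstract `learn` method. The base
# class below is a stand-in for illustration only.
from abc import ABC, abstractmethod
from typing import List


class AlgorithmAdapter(ABC):
    """Stand-in for AMLGym's adapter base class (assumed interface)."""

    @abstractmethod
    def learn(self, domain_path: str, trace_paths: List[str]) -> str:
        """Return a learned PDDL domain model as a string."""


class MyLearner(AlgorithmAdapter):
    """Hypothetical adapter wrapping a new learning algorithm."""

    def learn(self, domain_path: str, trace_paths: List[str]) -> str:
        # A real adapter would parse the traces and induce action models;
        # here we return a trivially empty STRIPS domain for illustration.
        return "(define (domain learned) (:requirements :strips))"


learner = MyLearner()
print(learner.learn('path/to/domain.pddl', ['path/to/trace0']))
```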
AMLGym can evaluate a PDDL model by means of several metrics:
- Syntactic similarity
- Problem solving
- Predicted applicability and predicted effects
See the benchmark package for details.
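To make the syntactic-similarity idea concrete, here is a minimal sketch (not AMLGym's implementation) of comparing a learned action schema against a ground-truth one by the F1 score of their precondition literal sets; the literals shown are illustrative:

```python
# Hedged illustration of a syntactic-similarity metric: F1 between the
# literal sets of a learned action and its ground-truth counterpart.
def f1(reference: set, learned: set) -> float:
    """F1 score between two sets of PDDL literals."""
    if not reference and not learned:
        return 1.0  # both empty: perfect agreement
    tp = len(reference & learned)  # literals recovered correctly
    precision = tp / len(learned) if learned else 0.0
    recall = tp / len(reference) if reference else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)


ref_pre = {"(at ?x ?l)", "(empty ?x)"}      # ground-truth preconditions
learned_pre = {"(at ?x ?l)"}                # learned preconditions
print(round(f1(ref_pre, learned_pre), 3))   # → 0.667
```

The same comparison can be repeated per action over preconditions, add effects, and delete effects, then averaged across the domain.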
This project is licensed under the MIT License - see the LICENSE file for details.
Not yet available