Developed in association with NeureonMindFlux Research Lab
This framework enables agents to dynamically model and adapt their confidence, fatigue, and behavioral mode via dedicated neural sub-models. It supports:
- Single-agent & multi-agent execution
- Meta-learner driven analysis & adaptation
- Full scientific metrics pipeline & visualization
- Modular architecture for extensibility & reproducibility
The framework is fully modular, extensible, and aligned with scientific reproducibility standards.
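To make the notion of explicit internal state concrete: each agent carries a small record of confidence, fatigue, and behavioral mode, and dedicated neural sub-models read and update that record. The sketch below is illustrative only; the class names, field names, thresholds, and the use of PyTorch are assumptions made for this example, not the framework's actual API.

```python
from dataclasses import dataclass

import torch
import torch.nn as nn


@dataclass
class InternalState:
    """Illustrative internal state record (not the framework's real data model)."""
    confidence: float = 0.5   # belief that the current policy is working
    fatigue: float = 0.0      # accumulated cost of recent activity
    mode: str = "explore"     # behavioral mode, e.g. "explore" or "exploit"


class ConfidenceHead(nn.Module):
    """Tiny neural sub-model mapping a window of recent rewards to a confidence estimate."""

    def __init__(self, window: int = 8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(window, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid()
        )

    def forward(self, recent_rewards: torch.Tensor) -> torch.Tensor:
        return self.net(recent_rewards)


if __name__ == "__main__":
    state = InternalState()
    head = ConfidenceHead()
    rewards = torch.randn(1, 8)              # dummy reward window
    state.confidence = head(rewards).item()  # sub-model writes into the explicit state
    state.mode = "exploit" if state.confidence > 0.7 else "explore"
    print(state)
```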
SELF_MODEL_AGENTS/
├── docs/
│   ├── meta_learner_memory/
│   ├── meta_learner_reports/
│   └── meta_learner_system/
├── outputs/
│   ├── logs/
│   ├── metrics/
│   ├── models/
│   ├── scientific_metrics/
│   ├── self_model_logs/
│   ├── self_model_weights/
│   └── visualizations/
├── scripts/
│   ├── run_gridworld_experiment.py
│   ├── run_multi_agent_experiment.py
│   ├── visualize_multi_agent.py
│   └── visualize_self_model.py
├── self_model_agents/
│   ├── policy/
│   ├── self_model/
│   ├── utils/
│   └── agent.py
├── gui_main.py
├── requirements.txt
├── setup.py
├── LICENSE
└── README.md
- Meta-Learner System (`meta_learner_system/`):
  - Meta-cognitive layer monitoring agent dynamics.
  - Predictive models of confidence, fatigue, mode switching.
  - Scientific metrics & visualizations.
- Self-Model Agents (`self_model_agents/`):
  - SelfModel components (Simple / Advanced).
  - Policy modules with varying meta-cognitive adaptation.
  - Agent-environment interaction loop (see the sketch after this list).
- Experiment Runners (`scripts/`):
  - Single-agent & multi-agent pipelines.
  - Visualization tools.
- Outputs (`outputs/`):
  - Logs & scientific reports.
  - Publication-ready visualizations.
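The interaction loop ties these pieces together: the policy consults the self-model, the environment returns a reward, and the self-model is updated from that outcome. Below is a deliberately simplified, self-contained sketch of that loop on a toy bandit problem; the class names and hand-written update rules are illustrative assumptions and do not mirror the framework's real SelfModel, Policy, or environment interfaces.

```python
import random


class SimpleSelfModel:
    """Toy self-model: tracks confidence and fatigue from observed rewards."""

    def __init__(self):
        self.confidence, self.fatigue = 0.5, 0.0

    def update(self, reward: float) -> None:
        # Exponential moving average of "did the last action pay off?"
        self.confidence = 0.9 * self.confidence + 0.1 * (1.0 if reward > 0 else 0.0)
        self.fatigue = min(1.0, self.fatigue + 0.005)


class AdaptivePolicy:
    """Epsilon-greedy policy whose exploration rate is driven by the self-model."""

    def act(self, q_values, self_model: SimpleSelfModel) -> int:
        epsilon = min(0.95, max(0.05, 1.0 - self_model.confidence + self_model.fatigue))
        if random.random() < epsilon:
            return random.randrange(len(q_values))
        return max(range(len(q_values)), key=q_values.__getitem__)


def run_episode(steps: int = 200, n_arms: int = 4):
    """Agent-environment loop on a toy bandit, mirroring the structure above."""
    arm_means = [random.uniform(-1, 1) for _ in range(n_arms)]
    q_values, counts = [0.0] * n_arms, [0] * n_arms
    self_model, policy = SimpleSelfModel(), AdaptivePolicy()
    for _ in range(steps):
        a = policy.act(q_values, self_model)
        reward = random.gauss(arm_means[a], 0.1)
        counts[a] += 1
        q_values[a] += (reward - q_values[a]) / counts[a]  # incremental mean
        self_model.update(reward)
    return self_model, q_values


if __name__ == "__main__":
    model, q = run_episode()
    print(f"confidence={model.confidence:.2f} fatigue={model.fatigue:.2f}")
```

The point of the sketch is only the shape of the interaction: the policy's exploration rate is driven by the explicitly represented confidence and fatigue rather than by a fixed schedule.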
- Meta-learner driven adaptive agents
- Modular SelfModel & Policy design
- Multi-agent execution & coordination
- Reproducible scientific metrics
- Visualization dashboards
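As a rough picture of the first feature above ("meta-learner driven adaptive agents"): a separate component watches the agents' logged internal-state traces and proposes adaptations such as mode switches. The stand-in below uses a rolling average plus a threshold instead of a learned predictor, and its class and method names are invented for this example; it is not the framework's actual meta-learner.

```python
from collections import deque
from statistics import mean


class ToyMetaLearner:
    """Watches a confidence trace and recommends a behavioral mode.

    A stand-in for the learned meta-cognitive layer: here the "prediction"
    is just a rolling average compared against a threshold.
    """

    def __init__(self, window: int = 20, threshold: float = 0.6):
        self.trace = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, confidence: float) -> None:
        self.trace.append(confidence)

    def recommend_mode(self) -> str:
        if not self.trace:
            return "explore"
        return "exploit" if mean(self.trace) >= self.threshold else "explore"


if __name__ == "__main__":
    meta = ToyMetaLearner()
    for c in [0.4, 0.5, 0.65, 0.7, 0.8]:
        meta.observe(c)
    print(meta.recommend_mode())  # -> "exploit"
```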
git clone https://github.com/yourusername/self_model_agents.git
cd self_model_agents
pip install -r requirements.txt
python scripts/run_gridworld_experiment.py
python scripts/run_multi_agent_experiment.py
python scripts/visualize_self_model.py
python scripts/visualize_multi_agent.py
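The scripts can also be driven from Python, which is convenient for batch runs or CI. The wrapper below only assumes the script paths shown in the project layout; it does not rely on any internal API.

```python
import subprocess
import sys
from pathlib import Path

SCRIPTS = Path("scripts")


def run(script: str, *args: str) -> None:
    """Run one of the experiment/visualization scripts with the current interpreter."""
    cmd = [sys.executable, str(SCRIPTS / script), *args]
    print("running:", " ".join(cmd))
    subprocess.run(cmd, check=True)  # raise if the script exits non-zero


if __name__ == "__main__":
    run("run_gridworld_experiment.py")
    run("run_multi_agent_experiment.py")
    run("visualize_self_model.py")
    run("visualize_multi_agent.py")
```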
We welcome contributions!
- Fork the repository
- Create your branch (`git checkout -b feature/your-feature`)
- Commit your changes (`git commit -am 'Add new feature'`)
- Push to the branch (`git push origin feature/your-feature`)
- Open a Pull Request
Please follow the existing coding style and include tests for new functionality.
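For example, a contribution that adds or changes a self-model component could ship with a small pytest such as the one below. The names here are placeholders, and the stand-in class exists only to keep the example self-contained; adapt the test to the real module you modify.

```python
# tests/test_internal_state_bounds.py -- hypothetical example test
import pytest

# In a real test, import the component under test from the package instead
# of defining this stand-in.


class SimpleSelfModel:  # stand-in so the example runs on its own
    def __init__(self):
        self.fatigue = 0.0

    def update(self, reward: float) -> None:
        self.fatigue = min(1.0, self.fatigue + 0.005)


@pytest.mark.parametrize("n_updates", [1, 10, 1000])
def test_fatigue_stays_in_unit_interval(n_updates):
    model = SimpleSelfModel()
    for _ in range(n_updates):
        model.update(reward=1.0)
    assert 0.0 <= model.fatigue <= 1.0
```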
This project is licensed under the Apache 2.0 License — see the LICENSE file for details.
This framework was developed as part of the research presented in the following preprint. If you use or reference this project in your research or software, please cite it:
Mozo, H. E. (2025, June 27). A Modular Software Framework for Neural-Augmented Self-Modeling Agents with Explicit Internal State Representation. TechRxiv. https://doi.org/10.36227/techrxiv.175100030.06187560/v1
H. E. Mozo, "A Modular Software Framework for Neural-Augmented Self-Modeling Agents with Explicit Internal State Representation," TechRxiv, June 27, 2025. [Online]. Available: https://doi.org/10.36227/techrxiv.175100030.06187560/v1
@misc{mozo2025modular,
  author    = {Hector E. Mozo},
  title     = {A Modular Software Framework for Neural-Augmented Self-Modeling Agents with Explicit Internal State Representation},
  year      = {2025},
  month     = {June},
  publisher = {TechRxiv},
  doi       = {10.36227/techrxiv.175100030.06187560.v1},
  url       = {https://doi.org/10.36227/techrxiv.175100030.06187560/v1}
}
---