The agents in this repository are designed for the Network Security Game environment. They are intended for navigation and problem solving in this adversarial network-security environment, where they play the role of attackers or defenders.
Agents need their own set of libraries, which are installed separately from the AIDojo environment.
To run an agent you need to install:
- the AIDojoCoordinator library
- the libraries needed by your agent

We recommend using a virtual environment for the installation:

```bash
python -m venv aidojo-agents
```

To activate the venv, run:

```bash
source aidojo-agents/bin/activate
```
Be sure you are in the directory of this NetSecGameAgents repository.
Agents require components of NetSecGame to run properly, so make sure it is installed first. The code for NetSecGame is assumed to be in the parent directory:

```bash
python -m pip install -e ..
```
To install the required packages for each agent, you can run:

```bash
python -m pip install -e .[<name-of-the-agent>]
```

For example:

```bash
python -m pip install -e ".[tui,llm]"
```
For a complete list of agents and their dependencies, see the pyproject.toml file.
To run an agent, use:

```bash
python3 -m <path-to-the-agent>
```

For example, to run the random attacker:

```bash
python3 -m agents.attackers.random.random_agent
```
All future agents should extend BaseAgent, a minimal implementation of an agent capable of interacting with the environment. The base agent also provides logging via the Python `logging` module; the logger can be accessed through the `logger` property.
For creating an instance of a BaseAgent, three parameters have to be used:
- `host: str` - URL where the game server runs
- `port: int` - port number where the game server runs
- `role: str` - intended role of the agent. Options are `Attacker`, `Defender`, `Human`
When extending the BaseAgent, these args should be passed to the constructor by calling:

```python
super().__init__(host, port, role)
```
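As an illustration, a minimal subclass might look like the sketch below. The `BaseAgent` stub is a stand-in defined here only so the example is self-contained; the real class and its attribute names come from this repository and may differ:

```python
# Stand-in for the repository's BaseAgent, defined here only so the
# sketch runs on its own; the real class lives in this repository.
class BaseAgent:
    def __init__(self, host: str, port: int, role: str):
        self.host = host
        self.port = port
        self.role = role


class MyAgent(BaseAgent):
    """A toy agent extending BaseAgent, passing host, port, and role
    to the parent constructor as described above."""

    def __init__(self, host: str, port: int, role: str):
        super().__init__(host, port, role)


agent = MyAgent("127.0.0.1", 9000, "Attacker")
print(agent.role)  # -> Attacker
```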
There are 4 important methods to be used for interaction with the environment:
- `register()`: Should be used ONCE at the beginning of the interaction to register the agent in the game. Uses the class name and the `role` specified in the initialization for the registration. Returns an `Observation` which contains the status of the registration and the initial `GameState` if the registration was successful.
- `make_step(action: Action)`: Used for sending an `Action` object to be used as the next step of the agent. Returns an `Observation` with the new state of the environment after the action was applied.
- `request_game_reset()`: Used to RESET the state of the environment to its initial position (e.g. at the end of an episode). Returns an `Observation` with the state of the environment.
- `terminate_connection()`: Should be used ONCE at the end of the interaction to properly disconnect the agent from the game server.
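A typical interaction loop built on these four methods might look like the following sketch. The `BaseAgent`, `Observation`, and `choose_action` definitions below are simplified stand-ins so the example runs on its own; the real types come from this repository and the AIDojoCoordinator:

```python
import random

# Simplified stand-ins for the repository's types, used only to make
# the call sequence runnable in isolation.
class Observation:
    def __init__(self, state, end=False):
        self.state = state
        self.end = end


class BaseAgent:
    def __init__(self, host, port, role):
        self.host, self.port, self.role = host, port, role

    def register(self):
        return Observation(state="initial")

    def make_step(self, action):
        # Pretend each step has a 50% chance of ending the episode.
        return Observation(state="next", end=random.random() < 0.5)

    def request_game_reset(self):
        return Observation(state="initial")

    def terminate_connection(self):
        pass


def choose_action(state):
    return "some-action"  # hypothetical policy, not part of the API


agent = BaseAgent("127.0.0.1", 9000, "Attacker")
observation = agent.register()                # ONCE, at the start
for episode in range(3):
    while not observation.end:
        action = choose_action(observation.state)
        observation = agent.make_step(action)
    observation = agent.request_game_reset()  # reset between episodes
agent.terminate_connection()                  # ONCE, at the end
```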
Examples of agents extending the BaseAgent can be found in:
- RandomAgent
- InteractiveAgent
- Q-learningAgent (Documentation here)
There are three types of roles an agent can play in NetSecEnv:
- Attacker
- Defender
- Benign
Agents of each type are stored in the corresponding directory within this repository:
```
├── agents
│   ├── attackers
│   │   ├── concepts_q_learning
│   │   ├── double_q_learning
│   │   ├── gnn_reinforce
│   │   ├── interactive_tui
│   │   ├── ...
│   ├── defenders
│   │   ├── random
│   │   ├── probabilistic
│   ├── benign
│   │   ├── benign_random
```
Utility functions in `agent_utils.py` can be used by any agent to evaluate a `GameState`, generate a set of valid `Action`s in a `GameState`, etc. Additionally, there are several files with utility functions that can be used by any agent:
- `agent_utils.py`: formatting of `GameState` and generation of valid actions
- `graph_agent_utils.py`: `GameState` -> graph conversion
- `llm_utils.py`: utility functions for LLM-based agents
| Agent | NetSecGame branch | Tag | Status |
|---|---|---|---|
| BaseAgent | main | HEAD | ✅ |
| Random Attacker | main | HEAD | ✅ |
| InteractiveAgent | main | HEAD | ✅ |
| Q-learning | main | HEAD | ✅ |
| LLM | main | realease_out_of_the_cage | ✅ |
| LLM_QA | main | realease_out_of_the_cage | ✅ |
| GNN_REINFORCE | main | realease_out_of_the_cage | ✅ |
| Random Defender | main | | 👷🏼♀️ |
| Probabilistic Defender | main | | 👷🏼♀️ |
Every agent by default exports the experiment details to a local mlflow directory.
If you want to see the local mlflow data, run:

```bash
pip install mlflow
mlflow ui -p 5001
```

If you want to export the local mlflow data to a remote mlflow instance, you can use our util:

```bash
python utils/export_import_mlflow_exp.py --experiment_id 783457873620024898 --run_id 5f2e4a205b7745259a4ddedc12d71a74 --remote_mlflow_url http://127.0.0.1:8000 --mlruns_dir ./mlruns
```

This code was developed at the Stratosphere Laboratory at the Czech Technical University in Prague as part of the AIDojo Project.