@jvm123 jvm123 commented Jun 16, 2025

Prompt & response logging capability was added to controller.py, evaluator.py, and database.py.
The resulting data is saved to programs/<id>.json.

The visualization UI was improved for easy prompt & response viewing.

Internals

  • database.py:
    • A new log_prompt() method stores prompts and their optional responses in memory.
    • The save_program() method saves all prompts & responses for a program to the ["prompts"] key in the program's JSON file.
  • evaluator.py:
    • _llm_evaluate() stores its generated prompts and the LLMEnsemble responses
  • controller.py:
    • run() stores prompts and the LLMEnsemble responses
  • config.py provides a Database: log_prompts setting
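The pieces above can be sketched together as follows. Only the method names log_prompt() and save_program(), the ["prompts"] key, and the log_prompts setting come from this PR description; the class name, in-memory layout, and everything else in the snippet are assumptions for illustration.

```python
import json
from pathlib import Path


class ProgramDatabase:
    """Sketch of the prompt-logging additions (layout is an assumption)."""

    def __init__(self, log_prompts=True, programs_dir="programs"):
        self.log_prompts = log_prompts      # mirrors the database.log_prompts setting
        self.programs_dir = Path(programs_dir)
        self._prompts = {}                  # program_id -> list of prompt entries

    def log_prompt(self, program_id, prompt, response=None):
        """Store a prompt and its optional response in memory."""
        if not self.log_prompts:
            return
        self._prompts.setdefault(program_id, []).append(
            {"prompt": prompt, "response": response}
        )

    def save_program(self, program_id, program_data):
        """Write the program plus all logged prompts to programs/<id>.json."""
        record = dict(program_data)
        record["prompts"] = self._prompts.get(program_id, [])
        self.programs_dir.mkdir(parents=True, exist_ok=True)
        (self.programs_dir / f"{program_id}.json").write_text(
            json.dumps(record, indent=2)
        )
```

With this shape, controller.run() and evaluator._llm_evaluate() only need to call log_prompt() as they go; the prompts are flushed to disk whenever the program itself is saved.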

Usage

config.yaml now offers a setting to turn prompt logging on and off; it defaults to on, so nothing has to be done to make use of this feature.

database:
    log_prompts: true

Other included features

  • Visualizer UI
    • Show list item number graphically
    • When user pans away and the graph appears to be empty, show a "Recenter" button

Other included fixes

  • Fixed $ make help
  • Visualizer UI:
    • Performance graph did not automatically refresh when new data were available
    • Graceful handling of duplicate program IDs in the UI

Note: This PR includes PR #68.

@codelion codelion merged commit 3d783a2 into algorithmicsuperintelligence:main Jun 19, 2025
3 checks passed
0x0f0f0f pushed a commit to 0x0f0f0f/openevolve that referenced this pull request Jul 7, 2025
…rompt-export

Feature: Prompt & response saving