Conversation

@codelion
Member

No description provided.

codelion added 8 commits July 30, 2025 01:41
Renamed the example directory from 'llm_prompt_optimazation' to 'llm_prompt_optimization' for correct spelling and consistency. Updated all example files, added HuggingFace dataset support, new configuration files, and improved documentation. The new example supports prompt evolution for any HuggingFace dataset with custom templates and cascading evaluation. Removed the old example files and replaced them with the new structure.
Updated the evaluator to automatically match prompt files with their corresponding dataset configuration using a naming convention. Added emotion classification benchmark files (`emotion_prompt.txt`, `emotion_prompt_dataset.yaml`) and a wrapper script (`run_evolution.sh`) for easier execution. Deprecated and removed old example files, and improved documentation in the README to reflect the new workflow and dataset handling.
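The naming convention described above can be sketched as a small path helper. This is an illustrative sketch, not the evaluator's actual code: the helper name `find_dataset_config` and the error handling are assumptions; only the pairing of `emotion_prompt.txt` with `emotion_prompt_dataset.yaml` comes from the commit.

```python
from pathlib import Path

def find_dataset_config(prompt_path: str) -> Path:
    """Pair a prompt file with its dataset config by naming convention:
    `<name>_prompt.txt` maps to `<name>_prompt_dataset.yaml` in the
    same directory. (Helper name and error handling are hypothetical.)"""
    p = Path(prompt_path)
    config = p.with_name(f"{p.stem}_dataset.yaml")
    if not config.exists():
        raise FileNotFoundError(
            f"No dataset config for {p.name}: expected {config.name}"
        )
    return config
```

With this convention, adding a new benchmark only requires dropping two consistently named files into the examples directory.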
Introduces GSM8K prompt and dataset configuration files for grade school math problem evaluation. Updates evaluator.py to support GSM8K answer extraction and adjusts evaluation logic for numeric answers. Modifies config.yaml for new optimal parameters and documents GSM8K support in the README.
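GSM8K reference answers end with a `#### <number>` marker, so numeric answer extraction can be sketched roughly as below. This is a hedged sketch, not the evaluator's exact logic: the fallback to the last number in free-form model output is an assumption.

```python
import re

def extract_gsm8k_answer(text: str):
    """Extract a numeric answer from GSM8K-style text. Prefers the
    '#### <number>' marker used by GSM8K references; falls back to the
    last number in the text (fallback behavior is an assumption)."""
    match = re.search(r"####\s*([-+]?[\d,]*\.?\d+)", text)
    if match is not None:
        raw = match.group(1)
    else:
        numbers = re.findall(r"[-+]?[\d,]*\.?\d+", text)
        if not numbers:
            return None  # no numeric answer found
        raw = numbers[-1]
    return float(raw.replace(",", ""))
```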
Adds a check to avoid re-migrating already migrated programs in the ProgramDatabase migration logic. This prevents exponential duplication of identical programs, conserves computational resources, and maintains diversity in the MAP-Elites + Island hybrid architecture. Also updates config.yaml to adjust LLM temperature and selection ratios for improved optimization.
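The guard against re-migration can be sketched as follows. The function and variable names here are hypothetical; the real `ProgramDatabase` migration logic is more involved, but the core idea from the commit is the same: track IDs that have already migrated and skip them.

```python
def select_migrants(candidate_ids, migrated_ids):
    """Sketch of the re-migration guard: program IDs already recorded
    in `migrated_ids` are skipped, so the same program is not copied
    again on every migration round (which would otherwise duplicate
    identical programs exponentially). Names are hypothetical."""
    migrants = []
    for program_id in candidate_ids:
        if program_id in migrated_ids:
            continue  # already migrated once; skip
        migrants.append(program_id)
        migrated_ids.add(program_id)
    return migrants
```

Skipping already-migrated programs also preserves diversity: islands receive each elite once instead of being flooded with copies.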
When initializing an empty island, a new copy of the best program is now created with a unique ID, rather than reusing the same program instance. This prevents a program from being assigned to multiple islands and ensures correct lineage tracking. Additional tests were added to verify correct migration behavior, unique program assignment per island, and proper handling of empty island initialization.
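The empty-island fix can be sketched like this. The dict-based program representation and the `parent_id` field are illustrative assumptions, not the real `ProgramDatabase` API; the point from the commit is that the seed is a fresh copy with its own ID, never the shared instance.

```python
import copy
import uuid

def init_empty_island(island, best_program):
    """Seed an empty island with a deep copy of the best program under
    a fresh unique ID, so no program instance is shared across islands
    and lineage stays traceable. (Structure is illustrative.)"""
    seed = copy.deepcopy(best_program)
    seed["id"] = str(uuid.uuid4())          # fresh ID for this island
    seed["parent_id"] = best_program["id"]  # lineage back to the source
    island.append(seed)
    return seed
```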
Introduces a `calculate_prompt_features` function in the evaluator to bin prompts by length and reasoning strategy, returning these as features for MAP-Elites optimization. Updates config.yaml to specify these features and their binning. Evaluator now returns these features alongside the combined score in both evaluation stages.
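A minimal sketch of such a feature function is below. The bin edges and the keyword lists for detecting a reasoning strategy are assumptions for illustration; the commit specifies only that prompts are binned by length and reasoning strategy.

```python
def calculate_prompt_features(prompt: str):
    """Bin a prompt into two MAP-Elites features: a length bin and a
    reasoning-strategy bin. Bin edges and marker phrases here are
    illustrative, not the evaluator's exact values."""
    # Length feature: short / medium / long.
    n = len(prompt)
    if n < 200:
        length_bin = 0
    elif n < 1000:
        length_bin = 1
    else:
        length_bin = 2

    # Strategy feature, keyed on marker phrases in the prompt.
    lowered = prompt.lower()
    if "step by step" in lowered or "step-by-step" in lowered:
        strategy_bin = 2  # chain-of-thought style
    elif "example" in lowered:
        strategy_bin = 1  # few-shot style
    else:
        strategy_bin = 0  # direct instruction
    return length_bin, strategy_bin
```

Returning these features alongside the score lets MAP-Elites maintain elites in each (length, strategy) cell rather than converging on a single prompt style.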
codelion added 5 commits July 30, 2025 22:38
Updated the evaluator to prioritize 'combined_score' when checking thresholds for consistency with evolution, falling back to averaging metrics if not present. Increased evaluator timeout and cascade threshold in the config, and switched to using max_tokens from config in prompt evaluation. Also updated the LLM model name in the config.
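The threshold check described above can be sketched as a small helper. The function name and the treatment of non-numeric metric values are assumptions; the prioritization of `combined_score` with a mean-of-metrics fallback is from the commit.

```python
def passes_cascade_threshold(metrics: dict, threshold: float) -> bool:
    """Check a cascade threshold against 'combined_score' when present
    (matching what evolution optimizes); otherwise fall back to the
    mean of the numeric metrics. (Helper name is hypothetical.)"""
    if "combined_score" in metrics:
        score = metrics["combined_score"]
    else:
        numeric = [v for v in metrics.values() if isinstance(v, (int, float))]
        score = sum(numeric) / len(numeric) if numeric else 0.0
    return score >= threshold
```

Using the same quantity for cascade gating and for evolution avoids programs passing early stages on a metric average that disagrees with the score actually being optimized.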
@codelion codelion merged commit bc66c5b into main Jul 31, 2025
3 checks passed
@codelion codelion deleted the fix-llm-optimization branch July 31, 2025 04:09
wangcheng0825 pushed a commit to wangcheng0825/openevolve that referenced this pull request Sep 15, 2025